Sep 12 23:53:02.251379 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083]
Sep 12 23:53:02.251430 kernel: Linux version 6.6.106-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Fri Sep 12 22:36:20 -00 2025
Sep 12 23:53:02.251459 kernel: KASLR disabled due to lack of seed
Sep 12 23:53:02.251476 kernel: efi: EFI v2.7 by EDK II
Sep 12 23:53:02.251493 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7affea98 MEMRESERVE=0x7852ee18
Sep 12 23:53:02.251509 kernel: ACPI: Early table checksum verification disabled
Sep 12 23:53:02.251582 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON)
Sep 12 23:53:02.251602 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001 01000013)
Sep 12 23:53:02.251620 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001)
Sep 12 23:53:02.251637 kernel: ACPI: DSDT 0x0000000078640000 00159D (v02 AMAZON AMZNDSDT 00000001 INTL 20160527)
Sep 12 23:53:02.251662 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Sep 12 23:53:02.251679 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001)
Sep 12 23:53:02.251695 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001)
Sep 12 23:53:02.251711 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001)
Sep 12 23:53:02.251730 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Sep 12 23:53:02.251752 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001)
Sep 12 23:53:02.251770 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001)
Sep 12 23:53:02.251787 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200
Sep 12 23:53:02.251804 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200')
Sep 12 23:53:02.251821 kernel: printk: bootconsole [uart0] enabled
Sep 12 23:53:02.251837 kernel: NUMA: Failed to initialise from firmware
Sep 12 23:53:02.251855 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff]
Sep 12 23:53:02.251871 kernel: NUMA: NODE_DATA [mem 0x4b583f800-0x4b5844fff]
Sep 12 23:53:02.251888 kernel: Zone ranges:
Sep 12 23:53:02.251905 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff]
Sep 12 23:53:02.251922 kernel: DMA32 empty
Sep 12 23:53:02.251944 kernel: Normal [mem 0x0000000100000000-0x00000004b5ffffff]
Sep 12 23:53:02.251962 kernel: Movable zone start for each node
Sep 12 23:53:02.251978 kernel: Early memory node ranges
Sep 12 23:53:02.251995 kernel: node 0: [mem 0x0000000040000000-0x000000007862ffff]
Sep 12 23:53:02.252011 kernel: node 0: [mem 0x0000000078630000-0x000000007863ffff]
Sep 12 23:53:02.252028 kernel: node 0: [mem 0x0000000078640000-0x00000000786effff]
Sep 12 23:53:02.252044 kernel: node 0: [mem 0x00000000786f0000-0x000000007872ffff]
Sep 12 23:53:02.252061 kernel: node 0: [mem 0x0000000078730000-0x000000007bbfffff]
Sep 12 23:53:02.252078 kernel: node 0: [mem 0x000000007bc00000-0x000000007bfdffff]
Sep 12 23:53:02.252094 kernel: node 0: [mem 0x000000007bfe0000-0x000000007fffffff]
Sep 12 23:53:02.252111 kernel: node 0: [mem 0x0000000400000000-0x00000004b5ffffff]
Sep 12 23:53:02.252127 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000004b5ffffff]
Sep 12 23:53:02.252148 kernel: On node 0, zone Normal: 8192 pages in unavailable ranges
Sep 12 23:53:02.252166 kernel: psci: probing for conduit method from ACPI.
Sep 12 23:53:02.252190 kernel: psci: PSCIv1.0 detected in firmware.
Sep 12 23:53:02.252208 kernel: psci: Using standard PSCI v0.2 function IDs
Sep 12 23:53:02.252226 kernel: psci: Trusted OS migration not required
Sep 12 23:53:02.252248 kernel: psci: SMC Calling Convention v1.1
Sep 12 23:53:02.252267 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000001)
Sep 12 23:53:02.252285 kernel: percpu: Embedded 31 pages/cpu s86632 r8192 d32152 u126976
Sep 12 23:53:02.252303 kernel: pcpu-alloc: s86632 r8192 d32152 u126976 alloc=31*4096
Sep 12 23:53:02.256122 kernel: pcpu-alloc: [0] 0 [0] 1
Sep 12 23:53:02.256213 kernel: Detected PIPT I-cache on CPU0
Sep 12 23:53:02.256250 kernel: CPU features: detected: GIC system register CPU interface
Sep 12 23:53:02.256271 kernel: CPU features: detected: Spectre-v2
Sep 12 23:53:02.256291 kernel: CPU features: detected: Spectre-v3a
Sep 12 23:53:02.256310 kernel: CPU features: detected: Spectre-BHB
Sep 12 23:53:02.258543 kernel: CPU features: detected: ARM erratum 1742098
Sep 12 23:53:02.258588 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923
Sep 12 23:53:02.258607 kernel: alternatives: applying boot alternatives
Sep 12 23:53:02.258627 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=e1b46f3c9e154636c32f6cde6e746a00a6b37ca7432cb4e16d172c05f584a8c9
Sep 12 23:53:02.258647 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 12 23:53:02.258666 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Sep 12 23:53:02.258684 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Sep 12 23:53:02.258702 kernel: Fallback order for Node 0: 0
Sep 12 23:53:02.258720 kernel: Built 1 zonelists, mobility grouping on. Total pages: 991872
Sep 12 23:53:02.258737 kernel: Policy zone: Normal
Sep 12 23:53:02.258754 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 12 23:53:02.258772 kernel: software IO TLB: area num 2.
Sep 12 23:53:02.258795 kernel: software IO TLB: mapped [mem 0x000000007c000000-0x0000000080000000] (64MB)
Sep 12 23:53:02.258814 kernel: Memory: 3820024K/4030464K available (10304K kernel code, 2186K rwdata, 8108K rodata, 39488K init, 897K bss, 210440K reserved, 0K cma-reserved)
Sep 12 23:53:02.258832 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Sep 12 23:53:02.258849 kernel: rcu: Preemptible hierarchical RCU implementation.
Sep 12 23:53:02.258868 kernel: rcu: RCU event tracing is enabled.
Sep 12 23:53:02.258887 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Sep 12 23:53:02.258905 kernel: Trampoline variant of Tasks RCU enabled.
Sep 12 23:53:02.258923 kernel: Tracing variant of Tasks RCU enabled.
Sep 12 23:53:02.258942 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 12 23:53:02.258960 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Sep 12 23:53:02.258979 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Sep 12 23:53:02.259002 kernel: GICv3: 96 SPIs implemented
Sep 12 23:53:02.259020 kernel: GICv3: 0 Extended SPIs implemented
Sep 12 23:53:02.259038 kernel: Root IRQ handler: gic_handle_irq
Sep 12 23:53:02.259056 kernel: GICv3: GICv3 features: 16 PPIs
Sep 12 23:53:02.259073 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000
Sep 12 23:53:02.259091 kernel: ITS [mem 0x10080000-0x1009ffff]
Sep 12 23:53:02.259109 kernel: ITS@0x0000000010080000: allocated 8192 Devices @4000b0000 (indirect, esz 8, psz 64K, shr 1)
Sep 12 23:53:02.259127 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @4000c0000 (flat, esz 8, psz 64K, shr 1)
Sep 12 23:53:02.259144 kernel: GICv3: using LPI property table @0x00000004000d0000
Sep 12 23:53:02.259162 kernel: ITS: Using hypervisor restricted LPI range [128]
Sep 12 23:53:02.259180 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000004000e0000
Sep 12 23:53:02.259198 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Sep 12 23:53:02.259221 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt).
Sep 12 23:53:02.259239 kernel: clocksource: arch_sys_counter: mask: 0x1ffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns
Sep 12 23:53:02.259257 kernel: sched_clock: 57 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns
Sep 12 23:53:02.259276 kernel: Console: colour dummy device 80x25
Sep 12 23:53:02.259294 kernel: printk: console [tty1] enabled
Sep 12 23:53:02.259312 kernel: ACPI: Core revision 20230628
Sep 12 23:53:02.259354 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 166.66 BogoMIPS (lpj=83333)
Sep 12 23:53:02.259376 kernel: pid_max: default: 32768 minimum: 301
Sep 12 23:53:02.259394 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Sep 12 23:53:02.259422 kernel: landlock: Up and running.
Sep 12 23:53:02.259441 kernel: SELinux: Initializing.
Sep 12 23:53:02.259459 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 12 23:53:02.259478 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 12 23:53:02.259497 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Sep 12 23:53:02.259515 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Sep 12 23:53:02.259555 kernel: rcu: Hierarchical SRCU implementation.
Sep 12 23:53:02.259574 kernel: rcu: Max phase no-delay instances is 400.
Sep 12 23:53:02.259593 kernel: Platform MSI: ITS@0x10080000 domain created
Sep 12 23:53:02.259617 kernel: PCI/MSI: ITS@0x10080000 domain created
Sep 12 23:53:02.259637 kernel: Remapping and enabling EFI services.
Sep 12 23:53:02.259655 kernel: smp: Bringing up secondary CPUs ...
Sep 12 23:53:02.259674 kernel: Detected PIPT I-cache on CPU1
Sep 12 23:53:02.259692 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000
Sep 12 23:53:02.259711 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000004000f0000
Sep 12 23:53:02.259730 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083]
Sep 12 23:53:02.259749 kernel: smp: Brought up 1 node, 2 CPUs
Sep 12 23:53:02.259768 kernel: SMP: Total of 2 processors activated.
Sep 12 23:53:02.259811 kernel: CPU features: detected: 32-bit EL0 Support
Sep 12 23:53:02.259857 kernel: CPU features: detected: 32-bit EL1 Support
Sep 12 23:53:02.259897 kernel: CPU features: detected: CRC32 instructions
Sep 12 23:53:02.259955 kernel: CPU: All CPU(s) started at EL1
Sep 12 23:53:02.259984 kernel: alternatives: applying system-wide alternatives
Sep 12 23:53:02.260003 kernel: devtmpfs: initialized
Sep 12 23:53:02.260023 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 12 23:53:02.260042 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Sep 12 23:53:02.260062 kernel: pinctrl core: initialized pinctrl subsystem
Sep 12 23:53:02.260082 kernel: SMBIOS 3.0.0 present.
Sep 12 23:53:02.260106 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018
Sep 12 23:53:02.260125 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 12 23:53:02.260144 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Sep 12 23:53:02.260163 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Sep 12 23:53:02.260182 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Sep 12 23:53:02.260201 kernel: audit: initializing netlink subsys (disabled)
Sep 12 23:53:02.260220 kernel: audit: type=2000 audit(0.295:1): state=initialized audit_enabled=0 res=1
Sep 12 23:53:02.260244 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 12 23:53:02.260264 kernel: cpuidle: using governor menu
Sep 12 23:53:02.260292 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Sep 12 23:53:02.260312 kernel: ASID allocator initialised with 65536 entries
Sep 12 23:53:02.267434 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 12 23:53:02.267498 kernel: Serial: AMBA PL011 UART driver
Sep 12 23:53:02.267534 kernel: Modules: 17472 pages in range for non-PLT usage
Sep 12 23:53:02.267560 kernel: Modules: 508992 pages in range for PLT usage
Sep 12 23:53:02.267580 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Sep 12 23:53:02.267612 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Sep 12 23:53:02.267632 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Sep 12 23:53:02.267650 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Sep 12 23:53:02.267670 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Sep 12 23:53:02.267689 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Sep 12 23:53:02.267708 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Sep 12 23:53:02.267726 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Sep 12 23:53:02.267745 kernel: ACPI: Added _OSI(Module Device)
Sep 12 23:53:02.267763 kernel: ACPI: Added _OSI(Processor Device)
Sep 12 23:53:02.267788 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 12 23:53:02.267807 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Sep 12 23:53:02.267826 kernel: ACPI: Interpreter enabled
Sep 12 23:53:02.267844 kernel: ACPI: Using GIC for interrupt routing
Sep 12 23:53:02.267863 kernel: ACPI: MCFG table detected, 1 entries
Sep 12 23:53:02.267882 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-0f])
Sep 12 23:53:02.268205 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Sep 12 23:53:02.268459 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Sep 12 23:53:02.268683 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Sep 12 23:53:02.268894 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x20ffffff] reserved by PNP0C02:00
Sep 12 23:53:02.269105 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x20ffffff] for [bus 00-0f]
Sep 12 23:53:02.269131 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io 0x0000-0xffff window]
Sep 12 23:53:02.269150 kernel: acpiphp: Slot [1] registered
Sep 12 23:53:02.269169 kernel: acpiphp: Slot [2] registered
Sep 12 23:53:02.269187 kernel: acpiphp: Slot [3] registered
Sep 12 23:53:02.269206 kernel: acpiphp: Slot [4] registered
Sep 12 23:53:02.269231 kernel: acpiphp: Slot [5] registered
Sep 12 23:53:02.269250 kernel: acpiphp: Slot [6] registered
Sep 12 23:53:02.269269 kernel: acpiphp: Slot [7] registered
Sep 12 23:53:02.269288 kernel: acpiphp: Slot [8] registered
Sep 12 23:53:02.269306 kernel: acpiphp: Slot [9] registered
Sep 12 23:53:02.269796 kernel: acpiphp: Slot [10] registered
Sep 12 23:53:02.269827 kernel: acpiphp: Slot [11] registered
Sep 12 23:53:02.269847 kernel: acpiphp: Slot [12] registered
Sep 12 23:53:02.269866 kernel: acpiphp: Slot [13] registered
Sep 12 23:53:02.269885 kernel: acpiphp: Slot [14] registered
Sep 12 23:53:02.269913 kernel: acpiphp: Slot [15] registered
Sep 12 23:53:02.269933 kernel: acpiphp: Slot [16] registered
Sep 12 23:53:02.269952 kernel: acpiphp: Slot [17] registered
Sep 12 23:53:02.269971 kernel: acpiphp: Slot [18] registered
Sep 12 23:53:02.269990 kernel: acpiphp: Slot [19] registered
Sep 12 23:53:02.270009 kernel: acpiphp: Slot [20] registered
Sep 12 23:53:02.270028 kernel: acpiphp: Slot [21] registered
Sep 12 23:53:02.270047 kernel: acpiphp: Slot [22] registered
Sep 12 23:53:02.270065 kernel: acpiphp: Slot [23] registered
Sep 12 23:53:02.270089 kernel: acpiphp: Slot [24] registered
Sep 12 23:53:02.270109 kernel: acpiphp: Slot [25] registered
Sep 12 23:53:02.270128 kernel: acpiphp: Slot [26] registered
Sep 12 23:53:02.270147 kernel: acpiphp: Slot [27] registered
Sep 12 23:53:02.270165 kernel: acpiphp: Slot [28] registered
Sep 12 23:53:02.270184 kernel: acpiphp: Slot [29] registered
Sep 12 23:53:02.270204 kernel: acpiphp: Slot [30] registered
Sep 12 23:53:02.270223 kernel: acpiphp: Slot [31] registered
Sep 12 23:53:02.270243 kernel: PCI host bridge to bus 0000:00
Sep 12 23:53:02.270589 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window]
Sep 12 23:53:02.270834 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Sep 12 23:53:02.271055 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window]
Sep 12 23:53:02.271260 kernel: pci_bus 0000:00: root bus resource [bus 00-0f]
Sep 12 23:53:02.273766 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000
Sep 12 23:53:02.274044 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003
Sep 12 23:53:02.274280 kernel: pci 0000:00:01.0: reg 0x10: [mem 0x80118000-0x80118fff]
Sep 12 23:53:02.274592 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
Sep 12 23:53:02.274840 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80114000-0x80117fff]
Sep 12 23:53:02.275079 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold
Sep 12 23:53:02.280171 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
Sep 12 23:53:02.280548 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80110000-0x80113fff]
Sep 12 23:53:02.280791 kernel: pci 0000:00:05.0: reg 0x18: [mem 0x80000000-0x800fffff pref]
Sep 12 23:53:02.281032 kernel: pci 0000:00:05.0: reg 0x20: [mem 0x80100000-0x8010ffff]
Sep 12 23:53:02.281250 kernel: pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold
Sep 12 23:53:02.281510 kernel: pci 0000:00:05.0: BAR 2: assigned [mem 0x80000000-0x800fffff pref]
Sep 12 23:53:02.281741 kernel: pci 0000:00:05.0: BAR 4: assigned [mem 0x80100000-0x8010ffff]
Sep 12 23:53:02.281969 kernel: pci 0000:00:04.0: BAR 0: assigned [mem 0x80110000-0x80113fff]
Sep 12 23:53:02.282184 kernel: pci 0000:00:05.0: BAR 0: assigned [mem 0x80114000-0x80117fff]
Sep 12 23:53:02.284508 kernel: pci 0000:00:01.0: BAR 0: assigned [mem 0x80118000-0x80118fff]
Sep 12 23:53:02.284748 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window]
Sep 12 23:53:02.284951 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Sep 12 23:53:02.285171 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window]
Sep 12 23:53:02.285203 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Sep 12 23:53:02.285226 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Sep 12 23:53:02.285247 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Sep 12 23:53:02.285268 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Sep 12 23:53:02.285287 kernel: iommu: Default domain type: Translated
Sep 12 23:53:02.287280 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Sep 12 23:53:02.287387 kernel: efivars: Registered efivars operations
Sep 12 23:53:02.287414 kernel: vgaarb: loaded
Sep 12 23:53:02.287439 kernel: clocksource: Switched to clocksource arch_sys_counter
Sep 12 23:53:02.287460 kernel: VFS: Disk quotas dquot_6.6.0
Sep 12 23:53:02.287480 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 12 23:53:02.287499 kernel: pnp: PnP ACPI init
Sep 12 23:53:02.287831 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved
Sep 12 23:53:02.287869 kernel: pnp: PnP ACPI: found 1 devices
Sep 12 23:53:02.287902 kernel: NET: Registered PF_INET protocol family
Sep 12 23:53:02.287922 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Sep 12 23:53:02.287942 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Sep 12 23:53:02.287963 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 12 23:53:02.287983 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Sep 12 23:53:02.288002 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Sep 12 23:53:02.288021 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Sep 12 23:53:02.288040 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 12 23:53:02.288059 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 12 23:53:02.288086 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 12 23:53:02.288106 kernel: PCI: CLS 0 bytes, default 64
Sep 12 23:53:02.288125 kernel: kvm [1]: HYP mode not available
Sep 12 23:53:02.288144 kernel: Initialise system trusted keyrings
Sep 12 23:53:02.288164 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Sep 12 23:53:02.288182 kernel: Key type asymmetric registered
Sep 12 23:53:02.288201 kernel: Asymmetric key parser 'x509' registered
Sep 12 23:53:02.288220 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Sep 12 23:53:02.288239 kernel: io scheduler mq-deadline registered
Sep 12 23:53:02.288263 kernel: io scheduler kyber registered
Sep 12 23:53:02.288282 kernel: io scheduler bfq registered
Sep 12 23:53:02.288682 kernel: pl061_gpio ARMH0061:00: PL061 GPIO chip registered
Sep 12 23:53:02.288716 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Sep 12 23:53:02.288736 kernel: ACPI: button: Power Button [PWRB]
Sep 12 23:53:02.288755 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input1
Sep 12 23:53:02.288774 kernel: ACPI: button: Sleep Button [SLPB]
Sep 12 23:53:02.288793 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 12 23:53:02.288822 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37
Sep 12 23:53:02.289044 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012)
Sep 12 23:53:02.289071 kernel: printk: console [ttyS0] disabled
Sep 12 23:53:02.289091 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A
Sep 12 23:53:02.289110 kernel: printk: console [ttyS0] enabled
Sep 12 23:53:02.289129 kernel: printk: bootconsole [uart0] disabled
Sep 12 23:53:02.289148 kernel: thunder_xcv, ver 1.0
Sep 12 23:53:02.289167 kernel: thunder_bgx, ver 1.0
Sep 12 23:53:02.289186 kernel: nicpf, ver 1.0
Sep 12 23:53:02.289211 kernel: nicvf, ver 1.0
Sep 12 23:53:02.289465 kernel: rtc-efi rtc-efi.0: registered as rtc0
Sep 12 23:53:02.289671 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-09-12T23:53:01 UTC (1757721181)
Sep 12 23:53:02.289698 kernel: hid: raw HID events driver (C) Jiri Kosina
Sep 12 23:53:02.289718 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 counters available
Sep 12 23:53:02.289737 kernel: watchdog: Delayed init of the lockup detector failed: -19
Sep 12 23:53:02.289756 kernel: watchdog: Hard watchdog permanently disabled
Sep 12 23:53:02.289775 kernel: NET: Registered PF_INET6 protocol family
Sep 12 23:53:02.289802 kernel: Segment Routing with IPv6
Sep 12 23:53:02.289822 kernel: In-situ OAM (IOAM) with IPv6
Sep 12 23:53:02.289841 kernel: NET: Registered PF_PACKET protocol family
Sep 12 23:53:02.289860 kernel: Key type dns_resolver registered
Sep 12 23:53:02.289880 kernel: registered taskstats version 1
Sep 12 23:53:02.289900 kernel: Loading compiled-in X.509 certificates
Sep 12 23:53:02.289920 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.106-flatcar: 036ad4721a31543be5c000f2896b40d1e5515c6e'
Sep 12 23:53:02.289940 kernel: Key type .fscrypt registered
Sep 12 23:53:02.289959 kernel: Key type fscrypt-provisioning registered
Sep 12 23:53:02.289985 kernel: ima: No TPM chip found, activating TPM-bypass!
Sep 12 23:53:02.290004 kernel: ima: Allocated hash algorithm: sha1
Sep 12 23:53:02.290023 kernel: ima: No architecture policies found
Sep 12 23:53:02.290042 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Sep 12 23:53:02.290061 kernel: clk: Disabling unused clocks
Sep 12 23:53:02.290079 kernel: Freeing unused kernel memory: 39488K
Sep 12 23:53:02.290098 kernel: Run /init as init process
Sep 12 23:53:02.290116 kernel: with arguments:
Sep 12 23:53:02.290135 kernel: /init
Sep 12 23:53:02.290153 kernel: with environment:
Sep 12 23:53:02.290177 kernel: HOME=/
Sep 12 23:53:02.290196 kernel: TERM=linux
Sep 12 23:53:02.290214 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Sep 12 23:53:02.290238 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Sep 12 23:53:02.290264 systemd[1]: Detected virtualization amazon.
Sep 12 23:53:02.290285 systemd[1]: Detected architecture arm64.
Sep 12 23:53:02.290306 systemd[1]: Running in initrd.
Sep 12 23:53:02.292810 systemd[1]: No hostname configured, using default hostname.
Sep 12 23:53:02.292844 systemd[1]: Hostname set to .
Sep 12 23:53:02.292866 systemd[1]: Initializing machine ID from VM UUID.
Sep 12 23:53:02.292887 systemd[1]: Queued start job for default target initrd.target.
Sep 12 23:53:02.292908 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 12 23:53:02.292929 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 12 23:53:02.292952 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Sep 12 23:53:02.292974 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 12 23:53:02.293009 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Sep 12 23:53:02.293032 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Sep 12 23:53:02.293057 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Sep 12 23:53:02.293078 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Sep 12 23:53:02.293099 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 12 23:53:02.293345 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 12 23:53:02.293374 systemd[1]: Reached target paths.target - Path Units.
Sep 12 23:53:02.293404 systemd[1]: Reached target slices.target - Slice Units.
Sep 12 23:53:02.293425 systemd[1]: Reached target swap.target - Swaps.
Sep 12 23:53:02.293446 systemd[1]: Reached target timers.target - Timer Units.
Sep 12 23:53:02.293467 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Sep 12 23:53:02.293540 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 12 23:53:02.293566 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Sep 12 23:53:02.293587 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Sep 12 23:53:02.293608 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 12 23:53:02.293634 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 12 23:53:02.293656 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 12 23:53:02.293677 systemd[1]: Reached target sockets.target - Socket Units.
Sep 12 23:53:02.293698 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Sep 12 23:53:02.293718 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 12 23:53:02.293739 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Sep 12 23:53:02.293760 systemd[1]: Starting systemd-fsck-usr.service...
Sep 12 23:53:02.293780 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 12 23:53:02.293801 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 12 23:53:02.293826 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 12 23:53:02.293847 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Sep 12 23:53:02.296274 systemd-journald[250]: Collecting audit messages is disabled.
Sep 12 23:53:02.296370 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 12 23:53:02.296408 systemd[1]: Finished systemd-fsck-usr.service.
Sep 12 23:53:02.296431 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 12 23:53:02.296453 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Sep 12 23:53:02.296474 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 12 23:53:02.296500 systemd-journald[250]: Journal started
Sep 12 23:53:02.296539 systemd-journald[250]: Runtime Journal (/run/log/journal/ec25038928d70bd341b700b06966426d) is 8.0M, max 75.3M, 67.3M free.
Sep 12 23:53:02.260405 systemd-modules-load[251]: Inserted module 'overlay'
Sep 12 23:53:02.308755 kernel: Bridge firewalling registered
Sep 12 23:53:02.308798 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 12 23:53:02.304342 systemd-modules-load[251]: Inserted module 'br_netfilter'
Sep 12 23:53:02.319409 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 12 23:53:02.322660 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 12 23:53:02.338690 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 12 23:53:02.349971 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 12 23:53:02.356799 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 12 23:53:02.364713 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 12 23:53:02.408734 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 12 23:53:02.416360 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 12 23:53:02.426479 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 12 23:53:02.439708 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 12 23:53:02.453751 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Sep 12 23:53:02.465730 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 12 23:53:02.500099 dracut-cmdline[288]: dracut-dracut-053
Sep 12 23:53:02.514025 dracut-cmdline[288]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=e1b46f3c9e154636c32f6cde6e746a00a6b37ca7432cb4e16d172c05f584a8c9
Sep 12 23:53:02.569712 systemd-resolved[289]: Positive Trust Anchors:
Sep 12 23:53:02.569742 systemd-resolved[289]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 12 23:53:02.569804 systemd-resolved[289]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 12 23:53:02.719363 kernel: SCSI subsystem initialized
Sep 12 23:53:02.729350 kernel: Loading iSCSI transport class v2.0-870.
Sep 12 23:53:02.741366 kernel: iscsi: registered transport (tcp)
Sep 12 23:53:02.765561 kernel: iscsi: registered transport (qla4xxx)
Sep 12 23:53:02.765647 kernel: QLogic iSCSI HBA Driver
Sep 12 23:53:02.833370 kernel: random: crng init done
Sep 12 23:53:02.834033 systemd-resolved[289]: Defaulting to hostname 'linux'.
Sep 12 23:53:02.838190 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 12 23:53:02.843571 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 12 23:53:02.873963 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Sep 12 23:53:02.888738 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Sep 12 23:53:02.930633 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Sep 12 23:53:02.930711 kernel: device-mapper: uevent: version 1.0.3
Sep 12 23:53:02.930738 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Sep 12 23:53:02.998384 kernel: raid6: neonx8 gen() 6752 MB/s
Sep 12 23:53:03.015356 kernel: raid6: neonx4 gen() 6541 MB/s
Sep 12 23:53:03.032355 kernel: raid6: neonx2 gen() 5462 MB/s
Sep 12 23:53:03.049355 kernel: raid6: neonx1 gen() 3956 MB/s
Sep 12 23:53:03.066355 kernel: raid6: int64x8 gen() 3804 MB/s
Sep 12 23:53:03.083355 kernel: raid6: int64x4 gen() 3725 MB/s
Sep 12 23:53:03.100355 kernel: raid6: int64x2 gen() 3606 MB/s
Sep 12 23:53:03.118337 kernel: raid6: int64x1 gen() 2780 MB/s
Sep 12 23:53:03.118381 kernel: raid6: using algorithm neonx8 gen() 6752 MB/s
Sep 12 23:53:03.137361 kernel: raid6: .... xor() 4880 MB/s, rmw enabled
Sep 12 23:53:03.137396 kernel: raid6: using neon recovery algorithm
Sep 12 23:53:03.146227 kernel: xor: measuring software checksum speed
Sep 12 23:53:03.146279 kernel: 8regs : 10970 MB/sec
Sep 12 23:53:03.148706 kernel: 32regs : 11054 MB/sec
Sep 12 23:53:03.148739 kernel: arm64_neon : 9300 MB/sec
Sep 12 23:53:03.148763 kernel: xor: using function: 32regs (11054 MB/sec)
Sep 12 23:53:03.233382 kernel: Btrfs loaded, zoned=no, fsverity=no
Sep 12 23:53:03.252766 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Sep 12 23:53:03.263769 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 12 23:53:03.302974 systemd-udevd[471]: Using default interface naming scheme 'v255'.
Sep 12 23:53:03.312135 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 12 23:53:03.327000 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Sep 12 23:53:03.370760 dracut-pre-trigger[478]: rd.md=0: removing MD RAID activation
Sep 12 23:53:03.440441 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 12 23:53:03.456602 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 12 23:53:03.576629 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 12 23:53:03.592955 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Sep 12 23:53:03.637635 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Sep 12 23:53:03.648700 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 12 23:53:03.651877 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 12 23:53:03.654646 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 12 23:53:03.671091 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Sep 12 23:53:03.730430 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Sep 12 23:53:03.804694 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Sep 12 23:53:03.804766 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012)
Sep 12 23:53:03.812933 kernel: ena 0000:00:05.0: ENA device version: 0.10
Sep 12 23:53:03.813344 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
Sep 12 23:53:03.818202 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 12 23:53:03.821594 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 12 23:53:03.832733 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 12 23:53:03.835203 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 12 23:53:03.844392 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80114000, mac addr 06:c6:23:22:eb:21
Sep 12 23:53:03.839626 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 12 23:53:03.844240 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Sep 12 23:53:03.858367 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35
Sep 12 23:53:03.860096 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 12 23:53:03.867593 kernel: nvme nvme0: pci function 0000:00:04.0
Sep 12 23:53:03.869445 (udev-worker)[517]: Network interface NamePolicy= disabled on kernel command line.
Sep 12 23:53:03.894369 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Sep 12 23:53:03.896757 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 12 23:53:03.908551 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Sep 12 23:53:03.908634 kernel: GPT:9289727 != 16777215
Sep 12 23:53:03.908664 kernel: GPT:Alternate GPT header not at the end of the disk.
Sep 12 23:53:03.908698 kernel: GPT:9289727 != 16777215
Sep 12 23:53:03.908725 kernel: GPT: Use GNU Parted to correct GPT errors.
Sep 12 23:53:03.909922 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Sep 12 23:53:03.912492 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 12 23:53:03.947802 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 12 23:53:04.014444 kernel: BTRFS: device fsid 29bc4da8-c689-46a2-a16a-b7bbc722db77 devid 1 transid 37 /dev/nvme0n1p3 scanned by (udev-worker) (520)
Sep 12 23:53:04.049289 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/nvme0n1p6 scanned by (udev-worker) (528)
Sep 12 23:53:04.111296 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM.
Sep 12 23:53:04.168713 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT.
Sep 12 23:53:04.183289 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A.
Sep 12 23:53:04.183564 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A.
Sep 12 23:53:04.197801 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Sep 12 23:53:04.224785 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Sep 12 23:53:04.240501 disk-uuid[663]: Primary Header is updated.
Sep 12 23:53:04.240501 disk-uuid[663]: Secondary Entries is updated.
Sep 12 23:53:04.240501 disk-uuid[663]: Secondary Header is updated.
Sep 12 23:53:04.255372 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Sep 12 23:53:04.264038 kernel: GPT:disk_guids don't match.
Sep 12 23:53:04.264117 kernel: GPT: Use GNU Parted to correct GPT errors.
Sep 12 23:53:04.265084 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Sep 12 23:53:04.274371 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Sep 12 23:53:05.277372 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Sep 12 23:53:05.280929 disk-uuid[664]: The operation has completed successfully.
Sep 12 23:53:05.478672 systemd[1]: disk-uuid.service: Deactivated successfully.
Sep 12 23:53:05.480863 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Sep 12 23:53:05.502669 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Sep 12 23:53:05.525195 sh[1005]: Success
Sep 12 23:53:05.550409 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Sep 12 23:53:05.670828 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Sep 12 23:53:05.679577 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Sep 12 23:53:05.690410 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Sep 12 23:53:05.737214 kernel: BTRFS info (device dm-0): first mount of filesystem 29bc4da8-c689-46a2-a16a-b7bbc722db77
Sep 12 23:53:05.737278 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Sep 12 23:53:05.739179 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Sep 12 23:53:05.740580 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Sep 12 23:53:05.741764 kernel: BTRFS info (device dm-0): using free space tree
Sep 12 23:53:05.791353 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Sep 12 23:53:05.806940 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Sep 12 23:53:05.807973 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Sep 12 23:53:05.824590 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Sep 12 23:53:05.829011 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Sep 12 23:53:05.866505 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem abbcf5a1-cc71-42ce-94f9-860f3aeda368
Sep 12 23:53:05.866574 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Sep 12 23:53:05.868654 kernel: BTRFS info (device nvme0n1p6): using free space tree
Sep 12 23:53:05.888359 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Sep 12 23:53:05.909264 systemd[1]: mnt-oem.mount: Deactivated successfully.
Sep 12 23:53:05.911702 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem abbcf5a1-cc71-42ce-94f9-860f3aeda368
Sep 12 23:53:05.921406 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Sep 12 23:53:05.933716 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Sep 12 23:53:06.033803 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 12 23:53:06.049043 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 12 23:53:06.120011 systemd-networkd[1200]: lo: Link UP
Sep 12 23:53:06.120034 systemd-networkd[1200]: lo: Gained carrier
Sep 12 23:53:06.123105 systemd-networkd[1200]: Enumeration completed
Sep 12 23:53:06.123277 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 12 23:53:06.126248 systemd-networkd[1200]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 12 23:53:06.126256 systemd-networkd[1200]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 12 23:53:06.129120 systemd-networkd[1200]: eth0: Link UP
Sep 12 23:53:06.129129 systemd-networkd[1200]: eth0: Gained carrier
Sep 12 23:53:06.129148 systemd-networkd[1200]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 12 23:53:06.134187 systemd[1]: Reached target network.target - Network.
Sep 12 23:53:06.165355 systemd-networkd[1200]: eth0: DHCPv4 address 172.31.17.186/20, gateway 172.31.16.1 acquired from 172.31.16.1
Sep 12 23:53:06.202912 ignition[1126]: Ignition 2.19.0
Sep 12 23:53:06.202946 ignition[1126]: Stage: fetch-offline
Sep 12 23:53:06.204752 ignition[1126]: no configs at "/usr/lib/ignition/base.d"
Sep 12 23:53:06.204778 ignition[1126]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Sep 12 23:53:06.205562 ignition[1126]: Ignition finished successfully
Sep 12 23:53:06.214166 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 12 23:53:06.229776 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Sep 12 23:53:06.256983 ignition[1209]: Ignition 2.19.0
Sep 12 23:53:06.257004 ignition[1209]: Stage: fetch
Sep 12 23:53:06.258196 ignition[1209]: no configs at "/usr/lib/ignition/base.d"
Sep 12 23:53:06.258223 ignition[1209]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Sep 12 23:53:06.258427 ignition[1209]: PUT http://169.254.169.254/latest/api/token: attempt #1
Sep 12 23:53:06.280572 ignition[1209]: PUT result: OK
Sep 12 23:53:06.284052 ignition[1209]: parsed url from cmdline: ""
Sep 12 23:53:06.284214 ignition[1209]: no config URL provided
Sep 12 23:53:06.284239 ignition[1209]: reading system config file "/usr/lib/ignition/user.ign"
Sep 12 23:53:06.284300 ignition[1209]: no config at "/usr/lib/ignition/user.ign"
Sep 12 23:53:06.284569 ignition[1209]: PUT http://169.254.169.254/latest/api/token: attempt #1
Sep 12 23:53:06.293110 ignition[1209]: PUT result: OK
Sep 12 23:53:06.293304 ignition[1209]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1
Sep 12 23:53:06.297940 ignition[1209]: GET result: OK
Sep 12 23:53:06.298372 ignition[1209]: parsing config with SHA512: bc8b8f6df242b47da51d6b69a551221f2e163625afde4951461ebc5d110dca4cb505075287bf1ccdcdf76965a88bd290a589b8b4c1e54841183ff7113c8c4af7
Sep 12 23:53:06.309060 unknown[1209]: fetched base config from "system"
Sep 12 23:53:06.309415 unknown[1209]: fetched base config from "system"
Sep 12 23:53:06.309431 unknown[1209]: fetched user config from "aws"
Sep 12 23:53:06.311612 ignition[1209]: fetch: fetch complete
Sep 12 23:53:06.311643 ignition[1209]: fetch: fetch passed
Sep 12 23:53:06.311796 ignition[1209]: Ignition finished successfully
Sep 12 23:53:06.322680 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Sep 12 23:53:06.335719 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Sep 12 23:53:06.365591 ignition[1216]: Ignition 2.19.0
Sep 12 23:53:06.366098 ignition[1216]: Stage: kargs
Sep 12 23:53:06.367636 ignition[1216]: no configs at "/usr/lib/ignition/base.d"
Sep 12 23:53:06.367668 ignition[1216]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Sep 12 23:53:06.367835 ignition[1216]: PUT http://169.254.169.254/latest/api/token: attempt #1
Sep 12 23:53:06.376063 ignition[1216]: PUT result: OK
Sep 12 23:53:06.381394 ignition[1216]: kargs: kargs passed
Sep 12 23:53:06.381552 ignition[1216]: Ignition finished successfully
Sep 12 23:53:06.386363 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Sep 12 23:53:06.404250 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Sep 12 23:53:06.427822 ignition[1222]: Ignition 2.19.0
Sep 12 23:53:06.427842 ignition[1222]: Stage: disks
Sep 12 23:53:06.429017 ignition[1222]: no configs at "/usr/lib/ignition/base.d"
Sep 12 23:53:06.429044 ignition[1222]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Sep 12 23:53:06.430289 ignition[1222]: PUT http://169.254.169.254/latest/api/token: attempt #1
Sep 12 23:53:06.442096 ignition[1222]: PUT result: OK
Sep 12 23:53:06.446632 ignition[1222]: disks: disks passed
Sep 12 23:53:06.446956 ignition[1222]: Ignition finished successfully
Sep 12 23:53:06.450539 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Sep 12 23:53:06.455783 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Sep 12 23:53:06.458310 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Sep 12 23:53:06.463818 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 12 23:53:06.468902 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 12 23:53:06.473315 systemd[1]: Reached target basic.target - Basic System.
Sep 12 23:53:06.489765 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Sep 12 23:53:06.538945 systemd-fsck[1230]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Sep 12 23:53:06.546393 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Sep 12 23:53:06.558774 systemd[1]: Mounting sysroot.mount - /sysroot...
Sep 12 23:53:06.645376 kernel: EXT4-fs (nvme0n1p9): mounted filesystem d35fd879-6758-447b-9fdd-bb21dd7c5b2b r/w with ordered data mode. Quota mode: none.
Sep 12 23:53:06.647015 systemd[1]: Mounted sysroot.mount - /sysroot.
Sep 12 23:53:06.651577 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Sep 12 23:53:06.671528 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 12 23:53:06.679810 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Sep 12 23:53:06.682343 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Sep 12 23:53:06.682446 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Sep 12 23:53:06.682493 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 12 23:53:06.716353 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/nvme0n1p6 scanned by mount (1249)
Sep 12 23:53:06.718224 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Sep 12 23:53:06.726964 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem abbcf5a1-cc71-42ce-94f9-860f3aeda368
Sep 12 23:53:06.727001 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Sep 12 23:53:06.727028 kernel: BTRFS info (device nvme0n1p6): using free space tree
Sep 12 23:53:06.736793 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Sep 12 23:53:06.747383 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Sep 12 23:53:06.750273 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 12 23:53:06.833361 initrd-setup-root[1273]: cut: /sysroot/etc/passwd: No such file or directory
Sep 12 23:53:06.843864 initrd-setup-root[1280]: cut: /sysroot/etc/group: No such file or directory
Sep 12 23:53:06.852942 initrd-setup-root[1287]: cut: /sysroot/etc/shadow: No such file or directory
Sep 12 23:53:06.861557 initrd-setup-root[1294]: cut: /sysroot/etc/gshadow: No such file or directory
Sep 12 23:53:07.017805 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Sep 12 23:53:07.028713 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Sep 12 23:53:07.033634 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Sep 12 23:53:07.058605 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Sep 12 23:53:07.063519 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem abbcf5a1-cc71-42ce-94f9-860f3aeda368
Sep 12 23:53:07.093056 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Sep 12 23:53:07.121988 ignition[1363]: INFO : Ignition 2.19.0
Sep 12 23:53:07.125170 ignition[1363]: INFO : Stage: mount
Sep 12 23:53:07.125170 ignition[1363]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 12 23:53:07.125170 ignition[1363]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Sep 12 23:53:07.125170 ignition[1363]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Sep 12 23:53:07.140182 ignition[1363]: INFO : PUT result: OK
Sep 12 23:53:07.147365 ignition[1363]: INFO : mount: mount passed
Sep 12 23:53:07.150421 ignition[1363]: INFO : Ignition finished successfully
Sep 12 23:53:07.156131 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Sep 12 23:53:07.167684 systemd[1]: Starting ignition-files.service - Ignition (files)...
Sep 12 23:53:07.190480 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 12 23:53:07.231372 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 scanned by mount (1373)
Sep 12 23:53:07.235734 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem abbcf5a1-cc71-42ce-94f9-860f3aeda368
Sep 12 23:53:07.235784 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Sep 12 23:53:07.235811 kernel: BTRFS info (device nvme0n1p6): using free space tree
Sep 12 23:53:07.242359 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Sep 12 23:53:07.246585 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 12 23:53:07.285722 ignition[1391]: INFO : Ignition 2.19.0
Sep 12 23:53:07.285722 ignition[1391]: INFO : Stage: files
Sep 12 23:53:07.290086 ignition[1391]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 12 23:53:07.290086 ignition[1391]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Sep 12 23:53:07.290086 ignition[1391]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Sep 12 23:53:07.302688 ignition[1391]: INFO : PUT result: OK
Sep 12 23:53:07.307560 ignition[1391]: DEBUG : files: compiled without relabeling support, skipping
Sep 12 23:53:07.310847 ignition[1391]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Sep 12 23:53:07.310847 ignition[1391]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Sep 12 23:53:07.317990 ignition[1391]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Sep 12 23:53:07.321056 ignition[1391]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Sep 12 23:53:07.324255 unknown[1391]: wrote ssh authorized keys file for user: core
Sep 12 23:53:07.326664 ignition[1391]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Sep 12 23:53:07.332227 ignition[1391]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Sep 12 23:53:07.336582 ignition[1391]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Sep 12 23:53:07.437574 ignition[1391]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Sep 12 23:53:07.562557 systemd-networkd[1200]: eth0: Gained IPv6LL
Sep 12 23:53:08.372903 ignition[1391]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Sep 12 23:53:08.372903 ignition[1391]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Sep 12 23:53:08.372903 ignition[1391]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1
Sep 12 23:53:08.578514 ignition[1391]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Sep 12 23:53:08.710404 ignition[1391]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Sep 12 23:53:08.710404 ignition[1391]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Sep 12 23:53:08.710404 ignition[1391]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Sep 12 23:53:08.710404 ignition[1391]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Sep 12 23:53:08.725233 ignition[1391]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Sep 12 23:53:08.725233 ignition[1391]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 12 23:53:08.725233 ignition[1391]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 12 23:53:08.725233 ignition[1391]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 12 23:53:08.725233 ignition[1391]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 12 23:53:08.725233 ignition[1391]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Sep 12 23:53:08.725233 ignition[1391]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Sep 12 23:53:08.725233 ignition[1391]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Sep 12 23:53:08.725233 ignition[1391]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Sep 12 23:53:08.725233 ignition[1391]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Sep 12 23:53:08.725233 ignition[1391]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-arm64.raw: attempt #1
Sep 12 23:53:08.990193 ignition[1391]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Sep 12 23:53:09.352515 ignition[1391]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Sep 12 23:53:09.352515 ignition[1391]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Sep 12 23:53:09.359961 ignition[1391]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 12 23:53:09.359961 ignition[1391]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 12 23:53:09.359961 ignition[1391]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Sep 12 23:53:09.359961 ignition[1391]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service"
Sep 12 23:53:09.359961 ignition[1391]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service"
Sep 12 23:53:09.359961 ignition[1391]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json"
Sep 12 23:53:09.359961 ignition[1391]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json"
Sep 12 23:53:09.359961 ignition[1391]: INFO : files: files passed
Sep 12 23:53:09.359961 ignition[1391]: INFO : Ignition finished successfully
Sep 12 23:53:09.376095 systemd[1]: Finished ignition-files.service - Ignition (files).
Sep 12 23:53:09.393732 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Sep 12 23:53:09.403694 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Sep 12 23:53:09.418495 systemd[1]: ignition-quench.service: Deactivated successfully.
Sep 12 23:53:09.421463 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Sep 12 23:53:09.441593 initrd-setup-root-after-ignition[1418]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 12 23:53:09.441593 initrd-setup-root-after-ignition[1418]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Sep 12 23:53:09.450169 initrd-setup-root-after-ignition[1422]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 12 23:53:09.455465 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 12 23:53:09.459106 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Sep 12 23:53:09.476584 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Sep 12 23:53:09.536600 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Sep 12 23:53:09.536810 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Sep 12 23:53:09.539984 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Sep 12 23:53:09.542379 systemd[1]: Reached target initrd.target - Initrd Default Target.
Sep 12 23:53:09.545276 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Sep 12 23:53:09.560632 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Sep 12 23:53:09.603793 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 12 23:53:09.614849 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Sep 12 23:53:09.649699 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Sep 12 23:53:09.652242 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 12 23:53:09.652747 systemd[1]: Stopped target timers.target - Timer Units.
Sep 12 23:53:09.653444 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Sep 12 23:53:09.653776 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 12 23:53:09.655235 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Sep 12 23:53:09.661286 systemd[1]: Stopped target basic.target - Basic System.
Sep 12 23:53:09.661749 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Sep 12 23:53:09.662108 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 12 23:53:09.663225 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Sep 12 23:53:09.666903 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Sep 12 23:53:09.667682 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 12 23:53:09.667967 systemd[1]: Stopped target sysinit.target - System Initialization.
Sep 12 23:53:09.668435 systemd[1]: Stopped target local-fs.target - Local File Systems.
Sep 12 23:53:09.668793 systemd[1]: Stopped target swap.target - Swaps.
Sep 12 23:53:09.669060 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Sep 12 23:53:09.669399 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Sep 12 23:53:09.670208 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Sep 12 23:53:09.671049 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 12 23:53:09.673606 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Sep 12 23:53:09.687621 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 12 23:53:09.690085 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Sep 12 23:53:09.690356 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Sep 12 23:53:09.691132 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Sep 12 23:53:09.692073 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 12 23:53:09.753341 systemd[1]: ignition-files.service: Deactivated successfully.
Sep 12 23:53:09.754069 systemd[1]: Stopped ignition-files.service - Ignition (files).
Sep 12 23:53:09.769393 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Sep 12 23:53:09.776832 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Sep 12 23:53:09.781488 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Sep 12 23:53:09.782459 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 12 23:53:09.794316 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Sep 12 23:53:09.796924 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 12 23:53:09.816255 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Sep 12 23:53:09.817187 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Sep 12 23:53:09.836222 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Sep 12 23:53:09.843619 ignition[1442]: INFO : Ignition 2.19.0
Sep 12 23:53:09.843619 ignition[1442]: INFO : Stage: umount
Sep 12 23:53:09.847563 ignition[1442]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 12 23:53:09.847563 ignition[1442]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Sep 12 23:53:09.847563 ignition[1442]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Sep 12 23:53:09.858063 ignition[1442]: INFO : PUT result: OK
Sep 12 23:53:09.861383 systemd[1]: sysroot-boot.service: Deactivated successfully.
Sep 12 23:53:09.863509 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Sep 12 23:53:09.868988 ignition[1442]: INFO : umount: umount passed
Sep 12 23:53:09.868988 ignition[1442]: INFO : Ignition finished successfully
Sep 12 23:53:09.870132 systemd[1]: ignition-mount.service: Deactivated successfully.
Sep 12 23:53:09.870812 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Sep 12 23:53:09.871852 systemd[1]: ignition-disks.service: Deactivated successfully.
Sep 12 23:53:09.871953 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Sep 12 23:53:09.878269 systemd[1]: ignition-kargs.service: Deactivated successfully.
Sep 12 23:53:09.878406 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Sep 12 23:53:09.882522 systemd[1]: ignition-fetch.service: Deactivated successfully.
Sep 12 23:53:09.882625 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Sep 12 23:53:09.886869 systemd[1]: Stopped target network.target - Network.
Sep 12 23:53:09.899280 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Sep 12 23:53:09.899579 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 12 23:53:09.908188 systemd[1]: Stopped target paths.target - Path Units.
Sep 12 23:53:09.912952 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Sep 12 23:53:09.919012 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 12 23:53:09.924988 systemd[1]: Stopped target slices.target - Slice Units.
Sep 12 23:53:09.929409 systemd[1]: Stopped target sockets.target - Socket Units.
Sep 12 23:53:09.931639 systemd[1]: iscsid.socket: Deactivated successfully.
Sep 12 23:53:09.931732 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Sep 12 23:53:09.940753 systemd[1]: iscsiuio.socket: Deactivated successfully.
Sep 12 23:53:09.940852 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 12 23:53:09.943196 systemd[1]: ignition-setup.service: Deactivated successfully.
Sep 12 23:53:09.943299 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Sep 12 23:53:09.945547 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Sep 12 23:53:09.945642 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Sep 12 23:53:09.948277 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Sep 12 23:53:09.948405 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Sep 12 23:53:09.951070 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Sep 12 23:53:09.953626 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Sep 12 23:53:09.962488 systemd-networkd[1200]: eth0: DHCPv6 lease lost
Sep 12 23:53:09.962816 systemd[1]: systemd-resolved.service: Deactivated successfully.
Sep 12 23:53:09.963079 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Sep 12 23:53:09.989127 systemd[1]: systemd-networkd.service: Deactivated successfully.
Sep 12 23:53:09.993790 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Sep 12 23:53:10.000563 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Sep 12 23:53:10.000658 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Sep 12 23:53:10.012651 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Sep 12 23:53:10.015023 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Sep 12 23:53:10.016077 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 12 23:53:10.021441 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Sep 12 23:53:10.021555 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Sep 12 23:53:10.026246 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Sep 12 23:53:10.026380 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Sep 12 23:53:10.026557 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Sep 12 23:53:10.026635 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 12 23:53:10.040601 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 12 23:53:10.075072 systemd[1]: systemd-udevd.service: Deactivated successfully.
Sep 12 23:53:10.075431 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 12 23:53:10.079520 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Sep 12 23:53:10.079693 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Sep 12 23:53:10.082546 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Sep 12 23:53:10.082651 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 12 23:53:10.083065 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Sep 12 23:53:10.083162 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Sep 12 23:53:10.086209 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Sep 12 23:53:10.086310 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Sep 12 23:53:10.103678 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 12 23:53:10.105740 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 12 23:53:10.125627 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Sep 12 23:53:10.130761 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Sep 12 23:53:10.130890 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 12 23:53:10.134273 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Sep 12 23:53:10.134405 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 12 23:53:10.137737 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Sep 12 23:53:10.137831 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 12 23:53:10.140884 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 12 23:53:10.140975 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 12 23:53:10.146137 systemd[1]: network-cleanup.service: Deactivated successfully.
Sep 12 23:53:10.146782 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Sep 12 23:53:10.188069 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Sep 12 23:53:10.190530 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Sep 12 23:53:10.196644 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Sep 12 23:53:10.209619 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Sep 12 23:53:10.230175 systemd[1]: Switching root.
Sep 12 23:53:10.277209 systemd-journald[250]: Journal stopped
Sep 12 23:53:12.474854 systemd-journald[250]: Received SIGTERM from PID 1 (systemd).
Sep 12 23:53:12.475009 kernel: SELinux: policy capability network_peer_controls=1
Sep 12 23:53:12.475056 kernel: SELinux: policy capability open_perms=1
Sep 12 23:53:12.475089 kernel: SELinux: policy capability extended_socket_class=1
Sep 12 23:53:12.475121 kernel: SELinux: policy capability always_check_network=0
Sep 12 23:53:12.475157 kernel: SELinux: policy capability cgroup_seclabel=1
Sep 12 23:53:12.475190 kernel: SELinux: policy capability nnp_nosuid_transition=1
Sep 12 23:53:12.475221 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Sep 12 23:53:12.475252 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Sep 12 23:53:12.475282 kernel: audit: type=1403 audit(1757721190.667:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Sep 12 23:53:12.481419 systemd[1]: Successfully loaded SELinux policy in 56.753ms.
Sep 12 23:53:12.481521 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 27.797ms.
Sep 12 23:53:12.481557 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Sep 12 23:53:12.481590 systemd[1]: Detected virtualization amazon.
Sep 12 23:53:12.481631 systemd[1]: Detected architecture arm64.
Sep 12 23:53:12.481663 systemd[1]: Detected first boot.
Sep 12 23:53:12.481709 systemd[1]: Initializing machine ID from VM UUID.
Sep 12 23:53:12.481744 zram_generator::config[1484]: No configuration found.
Sep 12 23:53:12.481783 systemd[1]: Populated /etc with preset unit settings.
Sep 12 23:53:12.481817 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Sep 12 23:53:12.481851 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Sep 12 23:53:12.481886 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Sep 12 23:53:12.481931 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Sep 12 23:53:12.481968 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Sep 12 23:53:12.482002 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Sep 12 23:53:12.482034 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Sep 12 23:53:12.482070 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Sep 12 23:53:12.482105 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Sep 12 23:53:12.482138 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Sep 12 23:53:12.482170 systemd[1]: Created slice user.slice - User and Session Slice.
Sep 12 23:53:12.482202 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 12 23:53:12.482241 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 12 23:53:12.482274 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Sep 12 23:53:12.482307 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Sep 12 23:53:12.482383 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Sep 12 23:53:12.482422 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 12 23:53:12.482456 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Sep 12 23:53:12.482492 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 12 23:53:12.482526 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Sep 12 23:53:12.482560 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Sep 12 23:53:12.482603 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Sep 12 23:53:12.482635 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Sep 12 23:53:12.482671 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 12 23:53:12.482704 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 12 23:53:12.482737 systemd[1]: Reached target slices.target - Slice Units.
Sep 12 23:53:12.482771 systemd[1]: Reached target swap.target - Swaps.
Sep 12 23:53:12.482802 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Sep 12 23:53:12.485392 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Sep 12 23:53:12.485474 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 12 23:53:12.485507 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 12 23:53:12.485541 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 12 23:53:12.485573 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Sep 12 23:53:12.485604 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Sep 12 23:53:12.485634 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Sep 12 23:53:12.485667 systemd[1]: Mounting media.mount - External Media Directory...
Sep 12 23:53:12.485699 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Sep 12 23:53:12.485734 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Sep 12 23:53:12.485772 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Sep 12 23:53:12.485806 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Sep 12 23:53:12.485842 systemd[1]: Reached target machines.target - Containers.
Sep 12 23:53:12.485875 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Sep 12 23:53:12.485907 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 12 23:53:12.485940 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 12 23:53:12.485971 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Sep 12 23:53:12.486003 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 12 23:53:12.486044 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 12 23:53:12.486078 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 12 23:53:12.486110 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Sep 12 23:53:12.486141 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 12 23:53:12.486173 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Sep 12 23:53:12.486209 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Sep 12 23:53:12.486241 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Sep 12 23:53:12.486275 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Sep 12 23:53:12.486306 systemd[1]: Stopped systemd-fsck-usr.service.
Sep 12 23:53:12.486415 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 12 23:53:12.486449 kernel: ACPI: bus type drm_connector registered
Sep 12 23:53:12.486480 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 12 23:53:12.486510 kernel: loop: module loaded
Sep 12 23:53:12.486546 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Sep 12 23:53:12.486580 kernel: fuse: init (API version 7.39)
Sep 12 23:53:12.486610 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Sep 12 23:53:12.486640 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 12 23:53:12.486674 systemd[1]: verity-setup.service: Deactivated successfully.
Sep 12 23:53:12.486713 systemd[1]: Stopped verity-setup.service.
Sep 12 23:53:12.486747 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Sep 12 23:53:12.486780 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Sep 12 23:53:12.486821 systemd[1]: Mounted media.mount - External Media Directory.
Sep 12 23:53:12.486853 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Sep 12 23:53:12.486884 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Sep 12 23:53:12.486916 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Sep 12 23:53:12.486955 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 12 23:53:12.486986 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Sep 12 23:53:12.487016 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Sep 12 23:53:12.487047 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 12 23:53:12.487082 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 12 23:53:12.487113 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 12 23:53:12.487151 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 12 23:53:12.487186 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 12 23:53:12.487221 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 12 23:53:12.487254 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Sep 12 23:53:12.487286 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Sep 12 23:53:12.489391 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Sep 12 23:53:12.489525 systemd-journald[1566]: Collecting audit messages is disabled.
Sep 12 23:53:12.489599 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 12 23:53:12.489633 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 12 23:53:12.489665 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 12 23:53:12.489700 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Sep 12 23:53:12.489735 systemd-journald[1566]: Journal started
Sep 12 23:53:12.489792 systemd-journald[1566]: Runtime Journal (/run/log/journal/ec25038928d70bd341b700b06966426d) is 8.0M, max 75.3M, 67.3M free.
Sep 12 23:53:11.767275 systemd[1]: Queued start job for default target multi-user.target.
Sep 12 23:53:11.793757 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6.
Sep 12 23:53:11.794708 systemd[1]: systemd-journald.service: Deactivated successfully.
Sep 12 23:53:12.492578 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 12 23:53:12.500854 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Sep 12 23:53:12.534447 systemd[1]: Reached target network-pre.target - Preparation for Network.
Sep 12 23:53:12.549678 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Sep 12 23:53:12.561584 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Sep 12 23:53:12.568087 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Sep 12 23:53:12.568167 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 12 23:53:12.575196 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Sep 12 23:53:12.589702 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Sep 12 23:53:12.596282 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Sep 12 23:53:12.601408 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 12 23:53:12.615649 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Sep 12 23:53:12.624033 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Sep 12 23:53:12.626752 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 12 23:53:12.630877 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Sep 12 23:53:12.633557 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 12 23:53:12.638593 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 12 23:53:12.658483 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Sep 12 23:53:12.666697 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 12 23:53:12.676408 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 12 23:53:12.680255 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Sep 12 23:53:12.685382 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Sep 12 23:53:12.690437 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Sep 12 23:53:12.702210 systemd-journald[1566]: Time spent on flushing to /var/log/journal/ec25038928d70bd341b700b06966426d is 86.378ms for 915 entries.
Sep 12 23:53:12.702210 systemd-journald[1566]: System Journal (/var/log/journal/ec25038928d70bd341b700b06966426d) is 8.0M, max 195.6M, 187.6M free.
Sep 12 23:53:12.826031 systemd-journald[1566]: Received client request to flush runtime journal.
Sep 12 23:53:12.826142 kernel: loop0: detected capacity change from 0 to 52536
Sep 12 23:53:12.717819 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Sep 12 23:53:12.759712 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 12 23:53:12.772193 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Sep 12 23:53:12.780213 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Sep 12 23:53:12.801722 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Sep 12 23:53:12.829601 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Sep 12 23:53:12.870236 udevadm[1620]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Sep 12 23:53:12.881099 systemd-tmpfiles[1614]: ACLs are not supported, ignoring.
Sep 12 23:53:12.881137 systemd-tmpfiles[1614]: ACLs are not supported, ignoring.
Sep 12 23:53:12.892993 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Sep 12 23:53:12.904671 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 12 23:53:12.909021 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Sep 12 23:53:12.921425 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Sep 12 23:53:12.925577 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Sep 12 23:53:12.972397 kernel: loop1: detected capacity change from 0 to 114432
Sep 12 23:53:13.001942 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Sep 12 23:53:13.016598 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 12 23:53:13.043401 kernel: loop2: detected capacity change from 0 to 203944
Sep 12 23:53:13.073277 systemd-tmpfiles[1637]: ACLs are not supported, ignoring.
Sep 12 23:53:13.076037 systemd-tmpfiles[1637]: ACLs are not supported, ignoring.
Sep 12 23:53:13.096251 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 12 23:53:13.189559 kernel: loop3: detected capacity change from 0 to 114328
Sep 12 23:53:13.253952 kernel: loop4: detected capacity change from 0 to 52536
Sep 12 23:53:13.279424 kernel: loop5: detected capacity change from 0 to 114432
Sep 12 23:53:13.310416 kernel: loop6: detected capacity change from 0 to 203944
Sep 12 23:53:13.349386 kernel: loop7: detected capacity change from 0 to 114328
Sep 12 23:53:13.383756 (sd-merge)[1643]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'.
Sep 12 23:53:13.385558 (sd-merge)[1643]: Merged extensions into '/usr'.
Sep 12 23:53:13.400579 systemd[1]: Reloading requested from client PID 1613 ('systemd-sysext') (unit systemd-sysext.service)...
Sep 12 23:53:13.400616 systemd[1]: Reloading...
Sep 12 23:53:13.615444 ldconfig[1608]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Sep 12 23:53:13.653406 zram_generator::config[1669]: No configuration found.
Sep 12 23:53:13.959806 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 12 23:53:14.089388 systemd[1]: Reloading finished in 687 ms.
Sep 12 23:53:14.151447 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Sep 12 23:53:14.155143 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Sep 12 23:53:14.158814 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Sep 12 23:53:14.175776 systemd[1]: Starting ensure-sysext.service...
Sep 12 23:53:14.188742 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 12 23:53:14.207629 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 12 23:53:14.214417 systemd[1]: Reloading requested from client PID 1722 ('systemctl') (unit ensure-sysext.service)...
Sep 12 23:53:14.214458 systemd[1]: Reloading...
Sep 12 23:53:14.295761 systemd-tmpfiles[1723]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Sep 12 23:53:14.298684 systemd-tmpfiles[1723]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Sep 12 23:53:14.303906 systemd-tmpfiles[1723]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Sep 12 23:53:14.304657 systemd-tmpfiles[1723]: ACLs are not supported, ignoring.
Sep 12 23:53:14.304838 systemd-tmpfiles[1723]: ACLs are not supported, ignoring.
Sep 12 23:53:14.310196 systemd-udevd[1724]: Using default interface naming scheme 'v255'.
Sep 12 23:53:14.315936 systemd-tmpfiles[1723]: Detected autofs mount point /boot during canonicalization of boot.
Sep 12 23:53:14.315967 systemd-tmpfiles[1723]: Skipping /boot
Sep 12 23:53:14.361588 systemd-tmpfiles[1723]: Detected autofs mount point /boot during canonicalization of boot.
Sep 12 23:53:14.361620 systemd-tmpfiles[1723]: Skipping /boot
Sep 12 23:53:14.404478 zram_generator::config[1749]: No configuration found.
Sep 12 23:53:14.627590 (udev-worker)[1770]: Network interface NamePolicy= disabled on kernel command line.
Sep 12 23:53:14.784401 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 37 scanned by (udev-worker) (1767)
Sep 12 23:53:14.952754 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 12 23:53:15.119997 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Sep 12 23:53:15.120711 systemd[1]: Reloading finished in 905 ms.
Sep 12 23:53:15.162235 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 12 23:53:15.183284 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 12 23:53:15.257434 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Sep 12 23:53:15.282547 systemd[1]: Finished ensure-sysext.service.
Sep 12 23:53:15.310420 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Sep 12 23:53:15.320756 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Sep 12 23:53:15.335914 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Sep 12 23:53:15.344015 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 12 23:53:15.346701 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Sep 12 23:53:15.355574 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 12 23:53:15.361611 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 12 23:53:15.371764 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 12 23:53:15.401189 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 12 23:53:15.406997 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 12 23:53:15.411760 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Sep 12 23:53:15.419762 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Sep 12 23:53:15.430682 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 12 23:53:15.441626 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 12 23:53:15.445453 systemd[1]: Reached target time-set.target - System Time Set.
Sep 12 23:53:15.458817 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Sep 12 23:53:15.468289 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 12 23:53:15.473234 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 12 23:53:15.474716 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 12 23:53:15.478050 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 12 23:53:15.479534 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 12 23:53:15.501851 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 12 23:53:15.530945 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Sep 12 23:53:15.534181 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 12 23:53:15.535592 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 12 23:53:15.541238 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 12 23:53:15.541682 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 12 23:53:15.550899 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 12 23:53:15.561580 lvm[1920]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Sep 12 23:53:15.611182 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Sep 12 23:53:15.623644 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Sep 12 23:53:15.673679 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Sep 12 23:53:15.686751 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Sep 12 23:53:15.694304 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Sep 12 23:53:15.700048 augenrules[1956]: No rules
Sep 12 23:53:15.703212 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Sep 12 23:53:15.712032 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Sep 12 23:53:15.713707 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 12 23:53:15.728857 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Sep 12 23:53:15.729030 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Sep 12 23:53:15.753718 lvm[1965]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Sep 12 23:53:15.766862 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Sep 12 23:53:15.794613 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Sep 12 23:53:15.818203 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Sep 12 23:53:15.871876 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 12 23:53:15.948448 systemd-networkd[1933]: lo: Link UP
Sep 12 23:53:15.948473 systemd-networkd[1933]: lo: Gained carrier
Sep 12 23:53:15.951729 systemd-networkd[1933]: Enumeration completed
Sep 12 23:53:15.951957 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 12 23:53:15.955095 systemd-networkd[1933]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 12 23:53:15.955103 systemd-networkd[1933]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 12 23:53:15.960315 systemd-networkd[1933]: eth0: Link UP
Sep 12 23:53:15.962801 systemd-networkd[1933]: eth0: Gained carrier
Sep 12 23:53:15.962850 systemd-networkd[1933]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 12 23:53:15.963572 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Sep 12 23:53:15.975916 systemd-resolved[1934]: Positive Trust Anchors:
Sep 12 23:53:15.975967 systemd-resolved[1934]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 12 23:53:15.976034 systemd-resolved[1934]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 12 23:53:15.978027 systemd-networkd[1933]: eth0: DHCPv4 address 172.31.17.186/20, gateway 172.31.16.1 acquired from 172.31.16.1
Sep 12 23:53:15.992600 systemd-resolved[1934]: Defaulting to hostname 'linux'.
Sep 12 23:53:15.996546 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 12 23:53:15.999489 systemd[1]: Reached target network.target - Network.
Sep 12 23:53:16.001818 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 12 23:53:16.004713 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 12 23:53:16.007567 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Sep 12 23:53:16.010664 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Sep 12 23:53:16.014069 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Sep 12 23:53:16.016960 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Sep 12 23:53:16.020030 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Sep 12 23:53:16.022914 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Sep 12 23:53:16.023197 systemd[1]: Reached target paths.target - Path Units. Sep 12 23:53:16.025435 systemd[1]: Reached target timers.target - Timer Units. Sep 12 23:53:16.029464 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Sep 12 23:53:16.035005 systemd[1]: Starting docker.socket - Docker Socket for the API... Sep 12 23:53:16.052943 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Sep 12 23:53:16.056646 systemd[1]: Listening on docker.socket - Docker Socket for the API. Sep 12 23:53:16.059575 systemd[1]: Reached target sockets.target - Socket Units. Sep 12 23:53:16.061881 systemd[1]: Reached target basic.target - Basic System. Sep 12 23:53:16.064219 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Sep 12 23:53:16.064564 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Sep 12 23:53:16.075668 systemd[1]: Starting containerd.service - containerd container runtime... Sep 12 23:53:16.084982 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Sep 12 23:53:16.091814 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Sep 12 23:53:16.097886 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Sep 12 23:53:16.111812 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Sep 12 23:53:16.114748 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Sep 12 23:53:16.120160 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Sep 12 23:53:16.128476 systemd[1]: Started ntpd.service - Network Time Service. Sep 12 23:53:16.154532 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Sep 12 23:53:16.160903 systemd[1]: Starting setup-oem.service - Setup OEM... 
Sep 12 23:53:16.168538 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Sep 12 23:53:16.176724 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Sep 12 23:53:16.189767 systemd[1]: Starting systemd-logind.service - User Login Management... Sep 12 23:53:16.194222 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Sep 12 23:53:16.197276 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Sep 12 23:53:16.221429 jq[1984]: false Sep 12 23:53:16.202770 systemd[1]: Starting update-engine.service - Update Engine... Sep 12 23:53:16.211911 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Sep 12 23:53:16.224706 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Sep 12 23:53:16.229008 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Sep 12 23:53:16.258548 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Sep 12 23:53:16.260466 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. 
Sep 12 23:53:16.305090 extend-filesystems[1985]: Found loop4 Sep 12 23:53:16.309503 extend-filesystems[1985]: Found loop5 Sep 12 23:53:16.309503 extend-filesystems[1985]: Found loop6 Sep 12 23:53:16.309503 extend-filesystems[1985]: Found loop7 Sep 12 23:53:16.309503 extend-filesystems[1985]: Found nvme0n1 Sep 12 23:53:16.309503 extend-filesystems[1985]: Found nvme0n1p1 Sep 12 23:53:16.309503 extend-filesystems[1985]: Found nvme0n1p2 Sep 12 23:53:16.309503 extend-filesystems[1985]: Found nvme0n1p3 Sep 12 23:53:16.309503 extend-filesystems[1985]: Found usr Sep 12 23:53:16.345025 extend-filesystems[1985]: Found nvme0n1p4 Sep 12 23:53:16.345025 extend-filesystems[1985]: Found nvme0n1p6 Sep 12 23:53:16.345025 extend-filesystems[1985]: Found nvme0n1p7 Sep 12 23:53:16.345025 extend-filesystems[1985]: Found nvme0n1p9 Sep 12 23:53:16.345025 extend-filesystems[1985]: Checking size of /dev/nvme0n1p9 Sep 12 23:53:16.370199 dbus-daemon[1983]: [system] SELinux support is enabled Sep 12 23:53:16.376706 systemd[1]: Started dbus.service - D-Bus System Message Bus. Sep 12 23:53:16.386117 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Sep 12 23:53:16.396260 dbus-daemon[1983]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1933 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Sep 12 23:53:16.386185 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Sep 12 23:53:16.389597 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). 
Sep 12 23:53:16.389639 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Sep 12 23:53:16.400001 extend-filesystems[1985]: Resized partition /dev/nvme0n1p9 Sep 12 23:53:16.416896 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Sep 12 23:53:16.440598 extend-filesystems[2015]: resize2fs 1.47.1 (20-May-2024) Sep 12 23:53:16.449760 ntpd[1987]: ntpd 4.2.8p17@1.4004-o Fri Sep 12 22:00:00 UTC 2025 (1): Starting Sep 12 23:53:16.458380 ntpd[1987]: 12 Sep 23:53:16 ntpd[1987]: ntpd 4.2.8p17@1.4004-o Fri Sep 12 22:00:00 UTC 2025 (1): Starting Sep 12 23:53:16.458380 ntpd[1987]: 12 Sep 23:53:16 ntpd[1987]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Sep 12 23:53:16.458380 ntpd[1987]: 12 Sep 23:53:16 ntpd[1987]: ---------------------------------------------------- Sep 12 23:53:16.458380 ntpd[1987]: 12 Sep 23:53:16 ntpd[1987]: ntp-4 is maintained by Network Time Foundation, Sep 12 23:53:16.458380 ntpd[1987]: 12 Sep 23:53:16 ntpd[1987]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Sep 12 23:53:16.458380 ntpd[1987]: 12 Sep 23:53:16 ntpd[1987]: corporation. Support and training for ntp-4 are Sep 12 23:53:16.458380 ntpd[1987]: 12 Sep 23:53:16 ntpd[1987]: available at https://www.nwtime.org/support Sep 12 23:53:16.458380 ntpd[1987]: 12 Sep 23:53:16 ntpd[1987]: ---------------------------------------------------- Sep 12 23:53:16.453465 ntpd[1987]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Sep 12 23:53:16.459667 jq[1996]: true Sep 12 23:53:16.453489 ntpd[1987]: ---------------------------------------------------- Sep 12 23:53:16.453509 ntpd[1987]: ntp-4 is maintained by Network Time Foundation, Sep 12 23:53:16.453528 ntpd[1987]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Sep 12 23:53:16.453548 ntpd[1987]: corporation. 
Support and training for ntp-4 are Sep 12 23:53:16.453568 ntpd[1987]: available at https://www.nwtime.org/support Sep 12 23:53:16.453587 ntpd[1987]: ---------------------------------------------------- Sep 12 23:53:16.464246 ntpd[1987]: 12 Sep 23:53:16 ntpd[1987]: proto: precision = 0.096 usec (-23) Sep 12 23:53:16.463849 ntpd[1987]: proto: precision = 0.096 usec (-23) Sep 12 23:53:16.467743 ntpd[1987]: basedate set to 2025-08-31 Sep 12 23:53:16.471559 ntpd[1987]: 12 Sep 23:53:16 ntpd[1987]: basedate set to 2025-08-31 Sep 12 23:53:16.471559 ntpd[1987]: 12 Sep 23:53:16 ntpd[1987]: gps base set to 2025-08-31 (week 2382) Sep 12 23:53:16.467796 ntpd[1987]: gps base set to 2025-08-31 (week 2382) Sep 12 23:53:16.474992 ntpd[1987]: Listen and drop on 0 v6wildcard [::]:123 Sep 12 23:53:16.475687 ntpd[1987]: 12 Sep 23:53:16 ntpd[1987]: Listen and drop on 0 v6wildcard [::]:123 Sep 12 23:53:16.475687 ntpd[1987]: 12 Sep 23:53:16 ntpd[1987]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Sep 12 23:53:16.475086 ntpd[1987]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Sep 12 23:53:16.475991 ntpd[1987]: Listen normally on 2 lo 127.0.0.1:123 Sep 12 23:53:16.476157 ntpd[1987]: 12 Sep 23:53:16 ntpd[1987]: Listen normally on 2 lo 127.0.0.1:123 Sep 12 23:53:16.476348 ntpd[1987]: Listen normally on 3 eth0 172.31.17.186:123 Sep 12 23:53:16.476497 ntpd[1987]: 12 Sep 23:53:16 ntpd[1987]: Listen normally on 3 eth0 172.31.17.186:123 Sep 12 23:53:16.476637 ntpd[1987]: Listen normally on 4 lo [::1]:123 Sep 12 23:53:16.476752 ntpd[1987]: 12 Sep 23:53:16 ntpd[1987]: Listen normally on 4 lo [::1]:123 Sep 12 23:53:16.476935 ntpd[1987]: bind(21) AF_INET6 fe80::4c6:23ff:fe22:eb21%2#123 flags 0x11 failed: Cannot assign requested address Sep 12 23:53:16.477871 ntpd[1987]: 12 Sep 23:53:16 ntpd[1987]: bind(21) AF_INET6 fe80::4c6:23ff:fe22:eb21%2#123 flags 0x11 failed: Cannot assign requested address Sep 12 23:53:16.478070 ntpd[1987]: unable to create socket on eth0 (5) for fe80::4c6:23ff:fe22:eb21%2#123 Sep 
12 23:53:16.483390 ntpd[1987]: 12 Sep 23:53:16 ntpd[1987]: unable to create socket on eth0 (5) for fe80::4c6:23ff:fe22:eb21%2#123 Sep 12 23:53:16.483390 ntpd[1987]: 12 Sep 23:53:16 ntpd[1987]: failed to init interface for address fe80::4c6:23ff:fe22:eb21%2 Sep 12 23:53:16.483390 ntpd[1987]: 12 Sep 23:53:16 ntpd[1987]: Listening on routing socket on fd #21 for interface updates Sep 12 23:53:16.480126 ntpd[1987]: failed to init interface for address fe80::4c6:23ff:fe22:eb21%2 Sep 12 23:53:16.480230 ntpd[1987]: Listening on routing socket on fd #21 for interface updates Sep 12 23:53:16.491030 tar[1998]: linux-arm64/helm Sep 12 23:53:16.494390 ntpd[1987]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Sep 12 23:53:16.494605 ntpd[1987]: 12 Sep 23:53:16 ntpd[1987]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Sep 12 23:53:16.494713 ntpd[1987]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Sep 12 23:53:16.494836 ntpd[1987]: 12 Sep 23:53:16 ntpd[1987]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Sep 12 23:53:16.512386 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks Sep 12 23:53:16.529053 systemd-logind[1993]: Watching system buttons on /dev/input/event0 (Power Button) Sep 12 23:53:16.530937 systemd-logind[1993]: Watching system buttons on /dev/input/event1 (Sleep Button) Sep 12 23:53:16.532503 systemd-logind[1993]: New seat seat0. Sep 12 23:53:16.536846 systemd[1]: Started systemd-logind.service - User Login Management. Sep 12 23:53:16.568282 (ntainerd)[2019]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Sep 12 23:53:16.571822 systemd[1]: motdgen.service: Deactivated successfully. Sep 12 23:53:16.572251 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. 
Sep 12 23:53:16.595494 jq[2024]: true Sep 12 23:53:16.709621 update_engine[1994]: I20250912 23:53:16.707141 1994 main.cc:92] Flatcar Update Engine starting Sep 12 23:53:16.728188 systemd[1]: Finished setup-oem.service - Setup OEM. Sep 12 23:53:16.731241 systemd[1]: Started update-engine.service - Update Engine. Sep 12 23:53:16.742476 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915 Sep 12 23:53:16.742580 update_engine[1994]: I20250912 23:53:16.739645 1994 update_check_scheduler.cc:74] Next update check in 8m8s Sep 12 23:53:16.748010 systemd[1]: Started locksmithd.service - Cluster reboot manager. Sep 12 23:53:16.755413 bash[2050]: Updated "/home/core/.ssh/authorized_keys" Sep 12 23:53:16.757212 extend-filesystems[2015]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Sep 12 23:53:16.757212 extend-filesystems[2015]: old_desc_blocks = 1, new_desc_blocks = 1 Sep 12 23:53:16.757212 extend-filesystems[2015]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long. Sep 12 23:53:16.773337 extend-filesystems[1985]: Resized filesystem in /dev/nvme0n1p9 Sep 12 23:53:16.762273 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Sep 12 23:53:16.791293 systemd[1]: Starting sshkeys.service... Sep 12 23:53:16.793885 systemd[1]: extend-filesystems.service: Deactivated successfully. Sep 12 23:53:16.794398 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Sep 12 23:53:16.897106 coreos-metadata[1982]: Sep 12 23:53:16.897 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Sep 12 23:53:16.913536 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. 
Sep 12 23:53:16.920596 coreos-metadata[1982]: Sep 12 23:53:16.913 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Sep 12 23:53:16.927957 coreos-metadata[1982]: Sep 12 23:53:16.927 INFO Fetch successful Sep 12 23:53:16.929717 coreos-metadata[1982]: Sep 12 23:53:16.929 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Sep 12 23:53:16.942938 coreos-metadata[1982]: Sep 12 23:53:16.942 INFO Fetch successful Sep 12 23:53:16.942938 coreos-metadata[1982]: Sep 12 23:53:16.942 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Sep 12 23:53:16.943761 coreos-metadata[1982]: Sep 12 23:53:16.943 INFO Fetch successful Sep 12 23:53:16.943761 coreos-metadata[1982]: Sep 12 23:53:16.943 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Sep 12 23:53:16.951991 coreos-metadata[1982]: Sep 12 23:53:16.951 INFO Fetch successful Sep 12 23:53:16.951991 coreos-metadata[1982]: Sep 12 23:53:16.951 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Sep 12 23:53:16.954006 coreos-metadata[1982]: Sep 12 23:53:16.953 INFO Fetch failed with 404: resource not found Sep 12 23:53:16.954006 coreos-metadata[1982]: Sep 12 23:53:16.954 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Sep 12 23:53:16.958373 coreos-metadata[1982]: Sep 12 23:53:16.957 INFO Fetch successful Sep 12 23:53:16.958373 coreos-metadata[1982]: Sep 12 23:53:16.957 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Sep 12 23:53:16.960666 coreos-metadata[1982]: Sep 12 23:53:16.960 INFO Fetch successful Sep 12 23:53:16.960666 coreos-metadata[1982]: Sep 12 23:53:16.960 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Sep 12 23:53:16.968937 coreos-metadata[1982]: Sep 12 23:53:16.968 INFO Fetch successful Sep 12 23:53:16.968937 coreos-metadata[1982]: Sep 12 
23:53:16.968 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Sep 12 23:53:16.970389 coreos-metadata[1982]: Sep 12 23:53:16.969 INFO Fetch successful Sep 12 23:53:16.970389 coreos-metadata[1982]: Sep 12 23:53:16.969 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Sep 12 23:53:16.975857 coreos-metadata[1982]: Sep 12 23:53:16.975 INFO Fetch successful Sep 12 23:53:16.993149 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Sep 12 23:53:17.008658 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 37 scanned by (udev-worker) (1767) Sep 12 23:53:17.034532 systemd-networkd[1933]: eth0: Gained IPv6LL Sep 12 23:53:17.051796 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Sep 12 23:53:17.055898 systemd[1]: Reached target network-online.target - Network is Online. Sep 12 23:53:17.130000 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. Sep 12 23:53:17.151947 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 23:53:17.163314 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Sep 12 23:53:17.231057 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Sep 12 23:53:17.234724 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Sep 12 23:53:17.330341 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. 
Sep 12 23:53:17.348947 coreos-metadata[2063]: Sep 12 23:53:17.348 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Sep 12 23:53:17.361598 coreos-metadata[2063]: Sep 12 23:53:17.356 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Sep 12 23:53:17.369236 coreos-metadata[2063]: Sep 12 23:53:17.363 INFO Fetch successful Sep 12 23:53:17.369236 coreos-metadata[2063]: Sep 12 23:53:17.363 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Sep 12 23:53:17.370545 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Sep 12 23:53:17.386499 coreos-metadata[2063]: Sep 12 23:53:17.370 INFO Fetch successful Sep 12 23:53:17.387059 locksmithd[2053]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 12 23:53:17.404589 unknown[2063]: wrote ssh authorized keys file for user: core Sep 12 23:53:17.442582 dbus-daemon[1983]: [system] Successfully activated service 'org.freedesktop.hostname1' Sep 12 23:53:17.446046 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Sep 12 23:53:17.462541 dbus-daemon[1983]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.7' (uid=0 pid=2013 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Sep 12 23:53:17.470734 amazon-ssm-agent[2103]: Initializing new seelog logger Sep 12 23:53:17.470734 amazon-ssm-agent[2103]: New Seelog Logger Creation Complete Sep 12 23:53:17.470734 amazon-ssm-agent[2103]: 2025/09/12 23:53:17 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Sep 12 23:53:17.470734 amazon-ssm-agent[2103]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Sep 12 23:53:17.476355 amazon-ssm-agent[2103]: 2025/09/12 23:53:17 processing appconfig overrides Sep 12 23:53:17.490051 amazon-ssm-agent[2103]: 2025/09/12 23:53:17 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. 
Sep 12 23:53:17.490051 amazon-ssm-agent[2103]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Sep 12 23:53:17.490204 amazon-ssm-agent[2103]: 2025/09/12 23:53:17 processing appconfig overrides Sep 12 23:53:17.491660 amazon-ssm-agent[2103]: 2025/09/12 23:53:17 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Sep 12 23:53:17.491660 amazon-ssm-agent[2103]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Sep 12 23:53:17.491794 amazon-ssm-agent[2103]: 2025/09/12 23:53:17 processing appconfig overrides Sep 12 23:53:17.497129 amazon-ssm-agent[2103]: 2025-09-12 23:53:17 INFO Proxy environment variables: Sep 12 23:53:17.513382 amazon-ssm-agent[2103]: 2025/09/12 23:53:17 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Sep 12 23:53:17.513382 amazon-ssm-agent[2103]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Sep 12 23:53:17.513382 amazon-ssm-agent[2103]: 2025/09/12 23:53:17 processing appconfig overrides Sep 12 23:53:17.518301 systemd[1]: Starting polkit.service - Authorization Manager... Sep 12 23:53:17.555090 update-ssh-keys[2157]: Updated "/home/core/.ssh/authorized_keys" Sep 12 23:53:17.559459 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Sep 12 23:53:17.573077 systemd[1]: Finished sshkeys.service. 
Sep 12 23:53:17.607577 polkitd[2158]: Started polkitd version 121 Sep 12 23:53:17.611930 amazon-ssm-agent[2103]: 2025-09-12 23:53:17 INFO https_proxy: Sep 12 23:53:17.686296 polkitd[2158]: Loading rules from directory /etc/polkit-1/rules.d Sep 12 23:53:17.686439 polkitd[2158]: Loading rules from directory /usr/share/polkit-1/rules.d Sep 12 23:53:17.696337 polkitd[2158]: Finished loading, compiling and executing 2 rules Sep 12 23:53:17.708309 amazon-ssm-agent[2103]: 2025-09-12 23:53:17 INFO http_proxy: Sep 12 23:53:17.712687 dbus-daemon[1983]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Sep 12 23:53:17.714451 systemd[1]: Started polkit.service - Authorization Manager. Sep 12 23:53:17.721673 polkitd[2158]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Sep 12 23:53:17.764374 containerd[2019]: time="2025-09-12T23:53:17.761029152Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Sep 12 23:53:17.781224 systemd-hostnamed[2013]: Hostname set to (transient) Sep 12 23:53:17.781592 systemd-resolved[1934]: System hostname changed to 'ip-172-31-17-186'. Sep 12 23:53:17.813075 amazon-ssm-agent[2103]: 2025-09-12 23:53:17 INFO no_proxy: Sep 12 23:53:17.914741 amazon-ssm-agent[2103]: 2025-09-12 23:53:17 INFO Checking if agent identity type OnPrem can be assumed Sep 12 23:53:17.989357 sshd_keygen[2030]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 12 23:53:17.993519 containerd[2019]: time="2025-09-12T23:53:17.991966489Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Sep 12 23:53:18.003847 containerd[2019]: time="2025-09-12T23:53:18.003264501Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.106-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Sep 12 23:53:18.003847 containerd[2019]: time="2025-09-12T23:53:18.003428997Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Sep 12 23:53:18.003847 containerd[2019]: time="2025-09-12T23:53:18.003487869Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Sep 12 23:53:18.003847 containerd[2019]: time="2025-09-12T23:53:18.003818337Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Sep 12 23:53:18.004109 containerd[2019]: time="2025-09-12T23:53:18.003862953Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Sep 12 23:53:18.004109 containerd[2019]: time="2025-09-12T23:53:18.004001973Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Sep 12 23:53:18.004109 containerd[2019]: time="2025-09-12T23:53:18.004031805Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Sep 12 23:53:18.007986 containerd[2019]: time="2025-09-12T23:53:18.007624653Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 12 23:53:18.007986 containerd[2019]: time="2025-09-12T23:53:18.007685901Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." 
type=io.containerd.snapshotter.v1 Sep 12 23:53:18.007986 containerd[2019]: time="2025-09-12T23:53:18.007734837Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Sep 12 23:53:18.007986 containerd[2019]: time="2025-09-12T23:53:18.007764381Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Sep 12 23:53:18.008238 containerd[2019]: time="2025-09-12T23:53:18.008013225Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Sep 12 23:53:18.014094 containerd[2019]: time="2025-09-12T23:53:18.013535662Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Sep 12 23:53:18.014094 containerd[2019]: time="2025-09-12T23:53:18.013853758Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 12 23:53:18.014094 containerd[2019]: time="2025-09-12T23:53:18.013891882Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Sep 12 23:53:18.014687 containerd[2019]: time="2025-09-12T23:53:18.014138170Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Sep 12 23:53:18.014687 containerd[2019]: time="2025-09-12T23:53:18.014250622Z" level=info msg="metadata content store policy set" policy=shared Sep 12 23:53:18.016443 amazon-ssm-agent[2103]: 2025-09-12 23:53:17 INFO Checking if agent identity type EC2 can be assumed Sep 12 23:53:18.024463 containerd[2019]: time="2025-09-12T23:53:18.024379102Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." 
type=io.containerd.gc.v1 Sep 12 23:53:18.024614 containerd[2019]: time="2025-09-12T23:53:18.024492586Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Sep 12 23:53:18.024614 containerd[2019]: time="2025-09-12T23:53:18.024541330Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Sep 12 23:53:18.024614 containerd[2019]: time="2025-09-12T23:53:18.024585490Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Sep 12 23:53:18.024739 containerd[2019]: time="2025-09-12T23:53:18.024631246Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Sep 12 23:53:18.024949 containerd[2019]: time="2025-09-12T23:53:18.024907018Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Sep 12 23:53:18.025399 containerd[2019]: time="2025-09-12T23:53:18.025304770Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Sep 12 23:53:18.025655 containerd[2019]: time="2025-09-12T23:53:18.025612186Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Sep 12 23:53:18.025735 containerd[2019]: time="2025-09-12T23:53:18.025659334Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Sep 12 23:53:18.025735 containerd[2019]: time="2025-09-12T23:53:18.025693462Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Sep 12 23:53:18.025735 containerd[2019]: time="2025-09-12T23:53:18.025726426Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." 
type=io.containerd.service.v1 Sep 12 23:53:18.025857 containerd[2019]: time="2025-09-12T23:53:18.025758202Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Sep 12 23:53:18.025857 containerd[2019]: time="2025-09-12T23:53:18.025790758Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Sep 12 23:53:18.025857 containerd[2019]: time="2025-09-12T23:53:18.025824922Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Sep 12 23:53:18.025983 containerd[2019]: time="2025-09-12T23:53:18.025857190Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Sep 12 23:53:18.025983 containerd[2019]: time="2025-09-12T23:53:18.025886938Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Sep 12 23:53:18.025983 containerd[2019]: time="2025-09-12T23:53:18.025918498Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Sep 12 23:53:18.025983 containerd[2019]: time="2025-09-12T23:53:18.025948270Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Sep 12 23:53:18.026136 containerd[2019]: time="2025-09-12T23:53:18.025989286Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Sep 12 23:53:18.026136 containerd[2019]: time="2025-09-12T23:53:18.026020222Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Sep 12 23:53:18.026136 containerd[2019]: time="2025-09-12T23:53:18.026049358Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." 
type=io.containerd.grpc.v1 Sep 12 23:53:18.026136 containerd[2019]: time="2025-09-12T23:53:18.026093050Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Sep 12 23:53:18.026136 containerd[2019]: time="2025-09-12T23:53:18.026122942Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Sep 12 23:53:18.026388 containerd[2019]: time="2025-09-12T23:53:18.026153482Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Sep 12 23:53:18.026388 containerd[2019]: time="2025-09-12T23:53:18.026181562Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Sep 12 23:53:18.026388 containerd[2019]: time="2025-09-12T23:53:18.026211022Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Sep 12 23:53:18.026388 containerd[2019]: time="2025-09-12T23:53:18.026240566Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Sep 12 23:53:18.026388 containerd[2019]: time="2025-09-12T23:53:18.026273578Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Sep 12 23:53:18.026388 containerd[2019]: time="2025-09-12T23:53:18.026301202Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Sep 12 23:53:18.033400 containerd[2019]: time="2025-09-12T23:53:18.031504138Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Sep 12 23:53:18.033400 containerd[2019]: time="2025-09-12T23:53:18.031578130Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Sep 12 23:53:18.033400 containerd[2019]: time="2025-09-12T23:53:18.031631794Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." 
type=io.containerd.transfer.v1 Sep 12 23:53:18.033400 containerd[2019]: time="2025-09-12T23:53:18.031691158Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Sep 12 23:53:18.033400 containerd[2019]: time="2025-09-12T23:53:18.031721614Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Sep 12 23:53:18.033400 containerd[2019]: time="2025-09-12T23:53:18.031749706Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Sep 12 23:53:18.033400 containerd[2019]: time="2025-09-12T23:53:18.031988182Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Sep 12 23:53:18.033400 containerd[2019]: time="2025-09-12T23:53:18.032278534Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Sep 12 23:53:18.033400 containerd[2019]: time="2025-09-12T23:53:18.032308138Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Sep 12 23:53:18.033400 containerd[2019]: time="2025-09-12T23:53:18.032575150Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Sep 12 23:53:18.033400 containerd[2019]: time="2025-09-12T23:53:18.032607598Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Sep 12 23:53:18.033400 containerd[2019]: time="2025-09-12T23:53:18.032640238Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Sep 12 23:53:18.033400 containerd[2019]: time="2025-09-12T23:53:18.032664094Z" level=info msg="NRI interface is disabled by configuration." 
Sep 12 23:53:18.033400 containerd[2019]: time="2025-09-12T23:53:18.032689630Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Sep 12 23:53:18.036889 containerd[2019]: time="2025-09-12T23:53:18.036726646Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} 
MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Sep 12 23:53:18.037148 containerd[2019]: time="2025-09-12T23:53:18.036890434Z" level=info msg="Connect containerd service" Sep 12 23:53:18.037148 containerd[2019]: time="2025-09-12T23:53:18.036961138Z" level=info msg="using legacy CRI server" Sep 12 23:53:18.037148 containerd[2019]: time="2025-09-12T23:53:18.036979702Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Sep 12 23:53:18.037312 containerd[2019]: time="2025-09-12T23:53:18.037174078Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Sep 12 23:53:18.054202 containerd[2019]: time="2025-09-12T23:53:18.054071338Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 12 23:53:18.057283 containerd[2019]: time="2025-09-12T23:53:18.057166930Z" level=info msg="Start subscribing containerd event" Sep 12 23:53:18.057432 containerd[2019]: time="2025-09-12T23:53:18.057305122Z" level=info msg="Start recovering state" Sep 12 23:53:18.057542 containerd[2019]: 
time="2025-09-12T23:53:18.057493258Z" level=info msg="Start event monitor" Sep 12 23:53:18.057638 containerd[2019]: time="2025-09-12T23:53:18.057539614Z" level=info msg="Start snapshots syncer" Sep 12 23:53:18.057638 containerd[2019]: time="2025-09-12T23:53:18.057565810Z" level=info msg="Start cni network conf syncer for default" Sep 12 23:53:18.057638 containerd[2019]: time="2025-09-12T23:53:18.057586534Z" level=info msg="Start streaming server" Sep 12 23:53:18.059345 containerd[2019]: time="2025-09-12T23:53:18.058228426Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 12 23:53:18.063098 containerd[2019]: time="2025-09-12T23:53:18.062618158Z" level=info msg=serving... address=/run/containerd/containerd.sock Sep 12 23:53:18.062873 systemd[1]: Started containerd.service - containerd container runtime. Sep 12 23:53:18.064540 containerd[2019]: time="2025-09-12T23:53:18.063863134Z" level=info msg="containerd successfully booted in 0.308127s" Sep 12 23:53:18.125053 amazon-ssm-agent[2103]: 2025-09-12 23:53:17 INFO Agent will take identity from EC2 Sep 12 23:53:18.145593 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Sep 12 23:53:18.158854 systemd[1]: Starting issuegen.service - Generate /run/issue... Sep 12 23:53:18.165168 systemd[1]: Started sshd@0-172.31.17.186:22-147.75.109.163:58748.service - OpenSSH per-connection server daemon (147.75.109.163:58748). Sep 12 23:53:18.204914 systemd[1]: issuegen.service: Deactivated successfully. Sep 12 23:53:18.205721 systemd[1]: Finished issuegen.service - Generate /run/issue. Sep 12 23:53:18.218917 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Sep 12 23:53:18.225066 amazon-ssm-agent[2103]: 2025-09-12 23:53:17 INFO [amazon-ssm-agent] using named pipe channel for IPC Sep 12 23:53:18.291167 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Sep 12 23:53:18.308047 systemd[1]: Started getty@tty1.service - Getty on tty1. 
Sep 12 23:53:18.325126 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Sep 12 23:53:18.328037 systemd[1]: Reached target getty.target - Login Prompts. Sep 12 23:53:18.340290 amazon-ssm-agent[2103]: 2025-09-12 23:53:17 INFO [amazon-ssm-agent] using named pipe channel for IPC Sep 12 23:53:18.442521 amazon-ssm-agent[2103]: 2025-09-12 23:53:17 INFO [amazon-ssm-agent] using named pipe channel for IPC Sep 12 23:53:18.488646 sshd[2214]: Accepted publickey for core from 147.75.109.163 port 58748 ssh2: RSA SHA256:hzqoQUQMDNGIX4spfLoTi9cnhX+EaAcejntAjTQoGoc Sep 12 23:53:18.494992 sshd[2214]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 23:53:18.528151 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Sep 12 23:53:18.543492 amazon-ssm-agent[2103]: 2025-09-12 23:53:17 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0 Sep 12 23:53:18.543893 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Sep 12 23:53:18.560106 systemd-logind[1993]: New session 1 of user core. Sep 12 23:53:18.588431 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Sep 12 23:53:18.608392 systemd[1]: Starting user@500.service - User Manager for UID 500... Sep 12 23:53:18.630832 tar[1998]: linux-arm64/LICENSE Sep 12 23:53:18.630832 tar[1998]: linux-arm64/README.md Sep 12 23:53:18.636055 (systemd)[2227]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 12 23:53:18.651818 amazon-ssm-agent[2103]: 2025-09-12 23:53:17 INFO [amazon-ssm-agent] OS: linux, Arch: arm64 Sep 12 23:53:18.678722 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Sep 12 23:53:18.752314 amazon-ssm-agent[2103]: 2025-09-12 23:53:17 INFO [amazon-ssm-agent] Starting Core Agent Sep 12 23:53:18.852675 amazon-ssm-agent[2103]: 2025-09-12 23:53:17 INFO [amazon-ssm-agent] registrar detected. 
Attempting registration Sep 12 23:53:18.924578 systemd[2227]: Queued start job for default target default.target. Sep 12 23:53:18.936685 systemd[2227]: Created slice app.slice - User Application Slice. Sep 12 23:53:18.936755 systemd[2227]: Reached target paths.target - Paths. Sep 12 23:53:18.936791 systemd[2227]: Reached target timers.target - Timers. Sep 12 23:53:18.940173 systemd[2227]: Starting dbus.socket - D-Bus User Message Bus Socket... Sep 12 23:53:18.952635 amazon-ssm-agent[2103]: 2025-09-12 23:53:17 INFO [Registrar] Starting registrar module Sep 12 23:53:18.980881 amazon-ssm-agent[2103]: 2025-09-12 23:53:17 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration Sep 12 23:53:18.980881 amazon-ssm-agent[2103]: 2025-09-12 23:53:18 INFO [EC2Identity] EC2 registration was successful. Sep 12 23:53:18.980881 amazon-ssm-agent[2103]: 2025-09-12 23:53:18 INFO [CredentialRefresher] credentialRefresher has started Sep 12 23:53:18.980881 amazon-ssm-agent[2103]: 2025-09-12 23:53:18 INFO [CredentialRefresher] Starting credentials refresher loop Sep 12 23:53:18.980881 amazon-ssm-agent[2103]: 2025-09-12 23:53:18 INFO EC2RoleProvider Successfully connected with instance profile role credentials Sep 12 23:53:18.986899 systemd[2227]: Listening on dbus.socket - D-Bus User Message Bus Socket. Sep 12 23:53:18.987184 systemd[2227]: Reached target sockets.target - Sockets. Sep 12 23:53:18.987221 systemd[2227]: Reached target basic.target - Basic System. Sep 12 23:53:18.987513 systemd[1]: Started user@500.service - User Manager for UID 500. Sep 12 23:53:18.988854 systemd[2227]: Reached target default.target - Main User Target. Sep 12 23:53:18.989525 systemd[2227]: Startup finished in 330ms. Sep 12 23:53:18.995765 systemd[1]: Started session-1.scope - Session 1 of User core. 
Sep 12 23:53:19.053199 amazon-ssm-agent[2103]: 2025-09-12 23:53:18 INFO [CredentialRefresher] Next credential rotation will be in 31.183324399233335 minutes Sep 12 23:53:19.158930 systemd[1]: Started sshd@1-172.31.17.186:22-147.75.109.163:58764.service - OpenSSH per-connection server daemon (147.75.109.163:58764). Sep 12 23:53:19.362272 sshd[2241]: Accepted publickey for core from 147.75.109.163 port 58764 ssh2: RSA SHA256:hzqoQUQMDNGIX4spfLoTi9cnhX+EaAcejntAjTQoGoc Sep 12 23:53:19.365151 sshd[2241]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 23:53:19.375253 systemd-logind[1993]: New session 2 of user core. Sep 12 23:53:19.382641 systemd[1]: Started session-2.scope - Session 2 of User core. Sep 12 23:53:19.454642 ntpd[1987]: Listen normally on 6 eth0 [fe80::4c6:23ff:fe22:eb21%2]:123 Sep 12 23:53:19.455481 ntpd[1987]: 12 Sep 23:53:19 ntpd[1987]: Listen normally on 6 eth0 [fe80::4c6:23ff:fe22:eb21%2]:123 Sep 12 23:53:19.514563 sshd[2241]: pam_unix(sshd:session): session closed for user core Sep 12 23:53:19.521215 systemd[1]: sshd@1-172.31.17.186:22-147.75.109.163:58764.service: Deactivated successfully. Sep 12 23:53:19.524605 systemd[1]: session-2.scope: Deactivated successfully. Sep 12 23:53:19.528722 systemd-logind[1993]: Session 2 logged out. Waiting for processes to exit. Sep 12 23:53:19.532497 systemd-logind[1993]: Removed session 2. Sep 12 23:53:19.554063 systemd[1]: Started sshd@2-172.31.17.186:22-147.75.109.163:58776.service - OpenSSH per-connection server daemon (147.75.109.163:58776). Sep 12 23:53:19.721666 sshd[2248]: Accepted publickey for core from 147.75.109.163 port 58776 ssh2: RSA SHA256:hzqoQUQMDNGIX4spfLoTi9cnhX+EaAcejntAjTQoGoc Sep 12 23:53:19.725146 sshd[2248]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 23:53:19.734586 systemd-logind[1993]: New session 3 of user core. Sep 12 23:53:19.746735 systemd[1]: Started session-3.scope - Session 3 of User core. 
Sep 12 23:53:19.876499 sshd[2248]: pam_unix(sshd:session): session closed for user core Sep 12 23:53:19.882466 systemd-logind[1993]: Session 3 logged out. Waiting for processes to exit. Sep 12 23:53:19.883314 systemd[1]: sshd@2-172.31.17.186:22-147.75.109.163:58776.service: Deactivated successfully. Sep 12 23:53:19.886137 systemd[1]: session-3.scope: Deactivated successfully. Sep 12 23:53:19.890744 systemd-logind[1993]: Removed session 3. Sep 12 23:53:20.009666 amazon-ssm-agent[2103]: 2025-09-12 23:53:20 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Sep 12 23:53:20.111388 amazon-ssm-agent[2103]: 2025-09-12 23:53:20 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2255) started Sep 12 23:53:20.211987 amazon-ssm-agent[2103]: 2025-09-12 23:53:20 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Sep 12 23:53:20.483681 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 23:53:20.487302 systemd[1]: Reached target multi-user.target - Multi-User System. Sep 12 23:53:20.491543 systemd[1]: Startup finished in 1.210s (kernel) + 8.846s (initrd) + 9.879s (userspace) = 19.936s. 
Sep 12 23:53:20.493904 (kubelet)[2269]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 12 23:53:21.791391 kubelet[2269]: E0912 23:53:21.791298 2269 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 12 23:53:21.796234 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 12 23:53:21.797058 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 12 23:53:21.798544 systemd[1]: kubelet.service: Consumed 1.474s CPU time. Sep 12 23:53:23.789027 systemd-resolved[1934]: Clock change detected. Flushing caches. Sep 12 23:53:30.252215 systemd[1]: Started sshd@3-172.31.17.186:22-147.75.109.163:59020.service - OpenSSH per-connection server daemon (147.75.109.163:59020). Sep 12 23:53:30.416415 sshd[2282]: Accepted publickey for core from 147.75.109.163 port 59020 ssh2: RSA SHA256:hzqoQUQMDNGIX4spfLoTi9cnhX+EaAcejntAjTQoGoc Sep 12 23:53:30.419054 sshd[2282]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 23:53:30.427958 systemd-logind[1993]: New session 4 of user core. Sep 12 23:53:30.438014 systemd[1]: Started session-4.scope - Session 4 of User core. Sep 12 23:53:30.563875 sshd[2282]: pam_unix(sshd:session): session closed for user core Sep 12 23:53:30.569524 systemd[1]: sshd@3-172.31.17.186:22-147.75.109.163:59020.service: Deactivated successfully. Sep 12 23:53:30.570137 systemd-logind[1993]: Session 4 logged out. Waiting for processes to exit. Sep 12 23:53:30.572296 systemd[1]: session-4.scope: Deactivated successfully. Sep 12 23:53:30.577074 systemd-logind[1993]: Removed session 4. 
Sep 12 23:53:30.605224 systemd[1]: Started sshd@4-172.31.17.186:22-147.75.109.163:59028.service - OpenSSH per-connection server daemon (147.75.109.163:59028). Sep 12 23:53:30.775862 sshd[2289]: Accepted publickey for core from 147.75.109.163 port 59028 ssh2: RSA SHA256:hzqoQUQMDNGIX4spfLoTi9cnhX+EaAcejntAjTQoGoc Sep 12 23:53:30.778407 sshd[2289]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 23:53:30.787893 systemd-logind[1993]: New session 5 of user core. Sep 12 23:53:30.794991 systemd[1]: Started session-5.scope - Session 5 of User core. Sep 12 23:53:30.914275 sshd[2289]: pam_unix(sshd:session): session closed for user core Sep 12 23:53:30.921375 systemd[1]: sshd@4-172.31.17.186:22-147.75.109.163:59028.service: Deactivated successfully. Sep 12 23:53:30.925605 systemd[1]: session-5.scope: Deactivated successfully. Sep 12 23:53:30.926882 systemd-logind[1993]: Session 5 logged out. Waiting for processes to exit. Sep 12 23:53:30.928461 systemd-logind[1993]: Removed session 5. Sep 12 23:53:30.947007 systemd[1]: Started sshd@5-172.31.17.186:22-147.75.109.163:59044.service - OpenSSH per-connection server daemon (147.75.109.163:59044). Sep 12 23:53:31.125952 sshd[2296]: Accepted publickey for core from 147.75.109.163 port 59044 ssh2: RSA SHA256:hzqoQUQMDNGIX4spfLoTi9cnhX+EaAcejntAjTQoGoc Sep 12 23:53:31.128802 sshd[2296]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 23:53:31.139150 systemd-logind[1993]: New session 6 of user core. Sep 12 23:53:31.151008 systemd[1]: Started session-6.scope - Session 6 of User core. Sep 12 23:53:31.276955 sshd[2296]: pam_unix(sshd:session): session closed for user core Sep 12 23:53:31.282210 systemd-logind[1993]: Session 6 logged out. Waiting for processes to exit. Sep 12 23:53:31.283454 systemd[1]: sshd@5-172.31.17.186:22-147.75.109.163:59044.service: Deactivated successfully. Sep 12 23:53:31.287428 systemd[1]: session-6.scope: Deactivated successfully. 
Sep 12 23:53:31.292116 systemd-logind[1993]: Removed session 6. Sep 12 23:53:31.319172 systemd[1]: Started sshd@6-172.31.17.186:22-147.75.109.163:59058.service - OpenSSH per-connection server daemon (147.75.109.163:59058). Sep 12 23:53:31.485703 sshd[2303]: Accepted publickey for core from 147.75.109.163 port 59058 ssh2: RSA SHA256:hzqoQUQMDNGIX4spfLoTi9cnhX+EaAcejntAjTQoGoc Sep 12 23:53:31.488377 sshd[2303]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 23:53:31.497858 systemd-logind[1993]: New session 7 of user core. Sep 12 23:53:31.504030 systemd[1]: Started session-7.scope - Session 7 of User core. Sep 12 23:53:31.623112 sudo[2306]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Sep 12 23:53:31.623790 sudo[2306]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 12 23:53:31.638709 sudo[2306]: pam_unix(sudo:session): session closed for user root Sep 12 23:53:31.662355 sshd[2303]: pam_unix(sshd:session): session closed for user core Sep 12 23:53:31.667899 systemd-logind[1993]: Session 7 logged out. Waiting for processes to exit. Sep 12 23:53:31.669531 systemd[1]: sshd@6-172.31.17.186:22-147.75.109.163:59058.service: Deactivated successfully. Sep 12 23:53:31.672629 systemd[1]: session-7.scope: Deactivated successfully. Sep 12 23:53:31.676545 systemd-logind[1993]: Removed session 7. Sep 12 23:53:31.704251 systemd[1]: Started sshd@7-172.31.17.186:22-147.75.109.163:59068.service - OpenSSH per-connection server daemon (147.75.109.163:59068). Sep 12 23:53:31.877418 sshd[2311]: Accepted publickey for core from 147.75.109.163 port 59068 ssh2: RSA SHA256:hzqoQUQMDNGIX4spfLoTi9cnhX+EaAcejntAjTQoGoc Sep 12 23:53:31.880246 sshd[2311]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 23:53:31.890094 systemd-logind[1993]: New session 8 of user core. Sep 12 23:53:31.897078 systemd[1]: Started session-8.scope - Session 8 of User core. 
Sep 12 23:53:32.002938 sudo[2315]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Sep 12 23:53:32.004117 sudo[2315]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 12 23:53:32.010524 sudo[2315]: pam_unix(sudo:session): session closed for user root Sep 12 23:53:32.020463 sudo[2314]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Sep 12 23:53:32.021113 sudo[2314]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 12 23:53:32.041807 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Sep 12 23:53:32.058466 auditctl[2318]: No rules Sep 12 23:53:32.059263 systemd[1]: audit-rules.service: Deactivated successfully. Sep 12 23:53:32.059624 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Sep 12 23:53:32.067939 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Sep 12 23:53:32.116386 augenrules[2336]: No rules Sep 12 23:53:32.119313 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Sep 12 23:53:32.123542 sudo[2314]: pam_unix(sudo:session): session closed for user root Sep 12 23:53:32.148055 sshd[2311]: pam_unix(sshd:session): session closed for user core Sep 12 23:53:32.153516 systemd-logind[1993]: Session 8 logged out. Waiting for processes to exit. Sep 12 23:53:32.154121 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Sep 12 23:53:32.155117 systemd[1]: sshd@7-172.31.17.186:22-147.75.109.163:59068.service: Deactivated successfully. Sep 12 23:53:32.157794 systemd[1]: session-8.scope: Deactivated successfully. Sep 12 23:53:32.167153 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 23:53:32.170438 systemd-logind[1993]: Removed session 8. 
Sep 12 23:53:32.188330 systemd[1]: Started sshd@8-172.31.17.186:22-147.75.109.163:59078.service - OpenSSH per-connection server daemon (147.75.109.163:59078). Sep 12 23:53:32.371300 sshd[2347]: Accepted publickey for core from 147.75.109.163 port 59078 ssh2: RSA SHA256:hzqoQUQMDNGIX4spfLoTi9cnhX+EaAcejntAjTQoGoc Sep 12 23:53:32.374538 sshd[2347]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 23:53:32.386299 systemd-logind[1993]: New session 9 of user core. Sep 12 23:53:32.398090 systemd[1]: Started session-9.scope - Session 9 of User core. Sep 12 23:53:32.510159 sudo[2352]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 12 23:53:32.510971 sudo[2352]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 12 23:53:32.541169 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 23:53:32.543452 (kubelet)[2359]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 12 23:53:32.640199 kubelet[2359]: E0912 23:53:32.640114 2359 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 12 23:53:32.647510 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 12 23:53:32.649087 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 12 23:53:33.040283 systemd[1]: Starting docker.service - Docker Application Container Engine... 
Sep 12 23:53:33.043852 (dockerd)[2378]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Sep 12 23:53:33.457030 dockerd[2378]: time="2025-09-12T23:53:33.456801880Z" level=info msg="Starting up" Sep 12 23:53:33.621852 systemd[1]: var-lib-docker-metacopy\x2dcheck3248074776-merged.mount: Deactivated successfully. Sep 12 23:53:33.641229 dockerd[2378]: time="2025-09-12T23:53:33.641122192Z" level=info msg="Loading containers: start." Sep 12 23:53:33.819776 kernel: Initializing XFRM netlink socket Sep 12 23:53:33.856019 (udev-worker)[2400]: Network interface NamePolicy= disabled on kernel command line. Sep 12 23:53:33.947804 systemd-networkd[1933]: docker0: Link UP Sep 12 23:53:33.978591 dockerd[2378]: time="2025-09-12T23:53:33.978514266Z" level=info msg="Loading containers: done." Sep 12 23:53:34.003687 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2521095406-merged.mount: Deactivated successfully. Sep 12 23:53:34.013955 dockerd[2378]: time="2025-09-12T23:53:34.013869158Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Sep 12 23:53:34.014269 dockerd[2378]: time="2025-09-12T23:53:34.014035514Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Sep 12 23:53:34.014352 dockerd[2378]: time="2025-09-12T23:53:34.014266850Z" level=info msg="Daemon has completed initialization" Sep 12 23:53:34.092753 dockerd[2378]: time="2025-09-12T23:53:34.092435043Z" level=info msg="API listen on /run/docker.sock" Sep 12 23:53:34.095000 systemd[1]: Started docker.service - Docker Application Container Engine. 
Sep 12 23:53:35.191311 containerd[2019]: time="2025-09-12T23:53:35.190883128Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.13\"" Sep 12 23:53:35.862909 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1816533947.mount: Deactivated successfully. Sep 12 23:53:37.353927 containerd[2019]: time="2025-09-12T23:53:37.353829271Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:53:37.357043 containerd[2019]: time="2025-09-12T23:53:37.356950375Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.13: active requests=0, bytes read=25687325" Sep 12 23:53:37.362591 containerd[2019]: time="2025-09-12T23:53:37.362519119Z" level=info msg="ImageCreate event name:\"sha256:0b1c07d8fd4a3526d5c44502e682df3627a3b01c1e608e5e24c3519c8fb337b6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:53:37.369045 containerd[2019]: time="2025-09-12T23:53:37.368969875Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:9abeb8a2d3e53e356d1f2e5d5dc2081cf28f23242651b0552c9e38f4a7ae960e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:53:37.371562 containerd[2019]: time="2025-09-12T23:53:37.371499295Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.13\" with image id \"sha256:0b1c07d8fd4a3526d5c44502e682df3627a3b01c1e608e5e24c3519c8fb337b6\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.13\", repo digest \"registry.k8s.io/kube-apiserver@sha256:9abeb8a2d3e53e356d1f2e5d5dc2081cf28f23242651b0552c9e38f4a7ae960e\", size \"25683924\" in 2.180549291s" Sep 12 23:53:37.371834 containerd[2019]: time="2025-09-12T23:53:37.371796775Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.13\" returns image reference \"sha256:0b1c07d8fd4a3526d5c44502e682df3627a3b01c1e608e5e24c3519c8fb337b6\"" Sep 12 23:53:37.376333 containerd[2019]: 
time="2025-09-12T23:53:37.376277803Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.13\"" Sep 12 23:53:38.799754 containerd[2019]: time="2025-09-12T23:53:38.799616446Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:53:38.801757 containerd[2019]: time="2025-09-12T23:53:38.801678382Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.13: active requests=0, bytes read=22459767" Sep 12 23:53:38.802749 containerd[2019]: time="2025-09-12T23:53:38.802165306Z" level=info msg="ImageCreate event name:\"sha256:c359cb88f3d2147f2cb4c5ada4fbdeadc4b1c009d66c8f33f3856efaf04ee6ef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:53:38.808750 containerd[2019]: time="2025-09-12T23:53:38.807883222Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:facc91288697a288a691520949fe4eec40059ef065c89da8e10481d14e131b09\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:53:38.810585 containerd[2019]: time="2025-09-12T23:53:38.810322678Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.13\" with image id \"sha256:c359cb88f3d2147f2cb4c5ada4fbdeadc4b1c009d66c8f33f3856efaf04ee6ef\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.13\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:facc91288697a288a691520949fe4eec40059ef065c89da8e10481d14e131b09\", size \"24028542\" in 1.433982523s" Sep 12 23:53:38.810585 containerd[2019]: time="2025-09-12T23:53:38.810384118Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.13\" returns image reference \"sha256:c359cb88f3d2147f2cb4c5ada4fbdeadc4b1c009d66c8f33f3856efaf04ee6ef\"" Sep 12 23:53:38.811514 containerd[2019]: time="2025-09-12T23:53:38.811243366Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.13\"" Sep 12 
23:53:39.989774 containerd[2019]: time="2025-09-12T23:53:39.989062956Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:53:39.991337 containerd[2019]: time="2025-09-12T23:53:39.991237440Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.13: active requests=0, bytes read=17127506" Sep 12 23:53:39.992771 containerd[2019]: time="2025-09-12T23:53:39.992051028Z" level=info msg="ImageCreate event name:\"sha256:5e3cbe2ba7db787c6aebfcf4484156dd4ebd7ede811ef72e8929593e59a5fa27\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:53:39.998778 containerd[2019]: time="2025-09-12T23:53:39.998195868Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:c5ce150dcce2419fdef9f9875fef43014355ccebf937846ed3a2971953f9b241\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:53:40.001439 containerd[2019]: time="2025-09-12T23:53:40.000772436Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.13\" with image id \"sha256:5e3cbe2ba7db787c6aebfcf4484156dd4ebd7ede811ef72e8929593e59a5fa27\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.13\", repo digest \"registry.k8s.io/kube-scheduler@sha256:c5ce150dcce2419fdef9f9875fef43014355ccebf937846ed3a2971953f9b241\", size \"18696299\" in 1.189452678s" Sep 12 23:53:40.001439 containerd[2019]: time="2025-09-12T23:53:40.000837824Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.13\" returns image reference \"sha256:5e3cbe2ba7db787c6aebfcf4484156dd4ebd7ede811ef72e8929593e59a5fa27\"" Sep 12 23:53:40.001634 containerd[2019]: time="2025-09-12T23:53:40.001592684Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.13\"" Sep 12 23:53:41.339194 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3925460866.mount: Deactivated successfully. 
Sep 12 23:53:41.915702 containerd[2019]: time="2025-09-12T23:53:41.914909642Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:53:41.917651 containerd[2019]: time="2025-09-12T23:53:41.917244038Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.13: active requests=0, bytes read=26954907" Sep 12 23:53:41.918828 containerd[2019]: time="2025-09-12T23:53:41.918753026Z" level=info msg="ImageCreate event name:\"sha256:c15699f0b7002450249485b10f20211982dfd2bec4d61c86c35acebc659e794e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:53:41.928786 containerd[2019]: time="2025-09-12T23:53:41.928416794Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:a39637326e88d128d38da6ff2b2ceb4e856475887bfcb5f7a55734d4f63d9fae\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:53:41.930405 containerd[2019]: time="2025-09-12T23:53:41.930330878Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.13\" with image id \"sha256:c15699f0b7002450249485b10f20211982dfd2bec4d61c86c35acebc659e794e\", repo tag \"registry.k8s.io/kube-proxy:v1.31.13\", repo digest \"registry.k8s.io/kube-proxy@sha256:a39637326e88d128d38da6ff2b2ceb4e856475887bfcb5f7a55734d4f63d9fae\", size \"26953926\" in 1.92867805s" Sep 12 23:53:41.930786 containerd[2019]: time="2025-09-12T23:53:41.930593030Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.13\" returns image reference \"sha256:c15699f0b7002450249485b10f20211982dfd2bec4d61c86c35acebc659e794e\"" Sep 12 23:53:41.932169 containerd[2019]: time="2025-09-12T23:53:41.932114486Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Sep 12 23:53:42.486482 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1561402249.mount: Deactivated successfully. 
Sep 12 23:53:42.898455 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Sep 12 23:53:42.906372 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 23:53:43.321232 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 23:53:43.326652 (kubelet)[2620]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 12 23:53:43.419418 kubelet[2620]: E0912 23:53:43.419328 2620 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 12 23:53:43.424230 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 12 23:53:43.425224 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Sep 12 23:53:43.980800 containerd[2019]: time="2025-09-12T23:53:43.980525116Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:53:43.983263 containerd[2019]: time="2025-09-12T23:53:43.983193664Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951622" Sep 12 23:53:43.986074 containerd[2019]: time="2025-09-12T23:53:43.985968100Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:53:43.993357 containerd[2019]: time="2025-09-12T23:53:43.993249544Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:53:43.997289 containerd[2019]: time="2025-09-12T23:53:43.996336400Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 2.06415463s" Sep 12 23:53:43.997289 containerd[2019]: time="2025-09-12T23:53:43.996415108Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" Sep 12 23:53:43.997854 containerd[2019]: time="2025-09-12T23:53:43.997797412Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Sep 12 23:53:44.562609 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3718364539.mount: Deactivated successfully. 
Sep 12 23:53:44.573564 containerd[2019]: time="2025-09-12T23:53:44.573467187Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:53:44.575782 containerd[2019]: time="2025-09-12T23:53:44.575550759Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268703" Sep 12 23:53:44.578301 containerd[2019]: time="2025-09-12T23:53:44.578205375Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:53:44.584064 containerd[2019]: time="2025-09-12T23:53:44.583940307Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:53:44.586203 containerd[2019]: time="2025-09-12T23:53:44.585929823Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 587.905935ms" Sep 12 23:53:44.586203 containerd[2019]: time="2025-09-12T23:53:44.586000887Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Sep 12 23:53:44.587244 containerd[2019]: time="2025-09-12T23:53:44.586942959Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Sep 12 23:53:45.176679 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3491317500.mount: Deactivated successfully. 
Sep 12 23:53:47.660777 containerd[2019]: time="2025-09-12T23:53:47.659837670Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:53:47.662486 containerd[2019]: time="2025-09-12T23:53:47.662395638Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=66537161" Sep 12 23:53:47.664629 containerd[2019]: time="2025-09-12T23:53:47.664508922Z" level=info msg="ImageCreate event name:\"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:53:47.674387 containerd[2019]: time="2025-09-12T23:53:47.673138794Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:53:47.676194 containerd[2019]: time="2025-09-12T23:53:47.676106814Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"66535646\" in 3.089101359s" Sep 12 23:53:47.676194 containerd[2019]: time="2025-09-12T23:53:47.676182258Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\"" Sep 12 23:53:48.151438 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Sep 12 23:53:53.599646 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Sep 12 23:53:53.609196 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 23:53:53.969274 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Sep 12 23:53:53.974181 (kubelet)[2747]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 12 23:53:54.054113 kubelet[2747]: E0912 23:53:54.054048 2747 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 12 23:53:54.057668 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 12 23:53:54.058051 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 12 23:53:54.733063 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 23:53:54.750222 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 23:53:54.805787 systemd[1]: Reloading requested from client PID 2761 ('systemctl') (unit session-9.scope)... Sep 12 23:53:54.805820 systemd[1]: Reloading... Sep 12 23:53:55.034023 zram_generator::config[2802]: No configuration found. Sep 12 23:53:55.281564 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 12 23:53:55.453596 systemd[1]: Reloading finished in 647 ms. Sep 12 23:53:55.541056 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Sep 12 23:53:55.541296 systemd[1]: kubelet.service: Failed with result 'signal'. Sep 12 23:53:55.541872 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 23:53:55.548336 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 23:53:55.862590 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Sep 12 23:53:55.879254 (kubelet)[2863]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 12 23:53:55.953769 kubelet[2863]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 12 23:53:55.953769 kubelet[2863]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Sep 12 23:53:55.953769 kubelet[2863]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 12 23:53:55.953769 kubelet[2863]: I0912 23:53:55.953180 2863 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 12 23:53:57.099098 kubelet[2863]: I0912 23:53:57.098522 2863 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Sep 12 23:53:57.099098 kubelet[2863]: I0912 23:53:57.098577 2863 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 12 23:53:57.099098 kubelet[2863]: I0912 23:53:57.099029 2863 server.go:934] "Client rotation is on, will bootstrap in background" Sep 12 23:53:57.240753 kubelet[2863]: I0912 23:53:57.240525 2863 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 12 23:53:57.244147 kubelet[2863]: E0912 23:53:57.244099 2863 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post 
\"https://172.31.17.186:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.17.186:6443: connect: connection refused" logger="UnhandledError" Sep 12 23:53:57.266504 kubelet[2863]: E0912 23:53:57.266241 2863 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 12 23:53:57.266504 kubelet[2863]: I0912 23:53:57.266293 2863 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Sep 12 23:53:57.279777 kubelet[2863]: I0912 23:53:57.279528 2863 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Sep 12 23:53:57.280214 kubelet[2863]: I0912 23:53:57.280183 2863 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Sep 12 23:53:57.280532 kubelet[2863]: I0912 23:53:57.280476 2863 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 12 23:53:57.280844 kubelet[2863]: I0912 23:53:57.280534 2863 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"ip-172-31-17-186","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 12 23:53:57.281015 kubelet[2863]: I0912 23:53:57.280986 2863 topology_manager.go:138] "Creating topology manager with none policy" Sep 12 23:53:57.281015 kubelet[2863]: I0912 23:53:57.281011 2863 container_manager_linux.go:300] "Creating device plugin manager" Sep 12 23:53:57.281347 kubelet[2863]: I0912 23:53:57.281316 2863 state_mem.go:36] "Initialized new in-memory state store" Sep 12 23:53:57.316342 kubelet[2863]: I0912 23:53:57.315927 2863 kubelet.go:408] 
"Attempting to sync node with API server" Sep 12 23:53:57.316342 kubelet[2863]: I0912 23:53:57.315988 2863 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 12 23:53:57.316342 kubelet[2863]: I0912 23:53:57.316027 2863 kubelet.go:314] "Adding apiserver pod source" Sep 12 23:53:57.316342 kubelet[2863]: I0912 23:53:57.316073 2863 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 12 23:53:57.322997 kubelet[2863]: W0912 23:53:57.322756 2863 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.17.186:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-17-186&limit=500&resourceVersion=0": dial tcp 172.31.17.186:6443: connect: connection refused Sep 12 23:53:57.323163 kubelet[2863]: E0912 23:53:57.323019 2863 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.17.186:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-17-186&limit=500&resourceVersion=0\": dial tcp 172.31.17.186:6443: connect: connection refused" logger="UnhandledError" Sep 12 23:53:57.340501 kubelet[2863]: W0912 23:53:57.340395 2863 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.17.186:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.17.186:6443: connect: connection refused Sep 12 23:53:57.340501 kubelet[2863]: E0912 23:53:57.340500 2863 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.17.186:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.17.186:6443: connect: connection refused" logger="UnhandledError" Sep 12 23:53:57.344886 kubelet[2863]: I0912 23:53:57.343657 2863 kuberuntime_manager.go:262] 
"Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Sep 12 23:53:57.345137 kubelet[2863]: I0912 23:53:57.345109 2863 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 12 23:53:57.345552 kubelet[2863]: W0912 23:53:57.345527 2863 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Sep 12 23:53:57.349074 kubelet[2863]: I0912 23:53:57.349033 2863 server.go:1274] "Started kubelet" Sep 12 23:53:57.355711 kubelet[2863]: I0912 23:53:57.355512 2863 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Sep 12 23:53:57.358795 kubelet[2863]: I0912 23:53:57.358712 2863 server.go:449] "Adding debug handlers to kubelet server" Sep 12 23:53:57.363598 kubelet[2863]: I0912 23:53:57.363534 2863 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 12 23:53:57.374582 kubelet[2863]: I0912 23:53:57.374505 2863 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 12 23:53:57.377509 kubelet[2863]: I0912 23:53:57.377443 2863 volume_manager.go:289] "Starting Kubelet Volume Manager" Sep 12 23:53:57.378375 kubelet[2863]: E0912 23:53:57.378310 2863 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ip-172-31-17-186\" not found" Sep 12 23:53:57.384755 kubelet[2863]: I0912 23:53:57.383616 2863 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Sep 12 23:53:57.384755 kubelet[2863]: I0912 23:53:57.383763 2863 reconciler.go:26] "Reconciler: start to sync state" Sep 12 23:53:57.385938 kubelet[2863]: I0912 23:53:57.385095 2863 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 12 23:53:57.385938 kubelet[2863]: I0912 23:53:57.385497 2863 server.go:236] "Starting to serve the podresources API" 
endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 12 23:53:57.391089 kubelet[2863]: E0912 23:53:57.387918 2863 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.17.186:6443/api/v1/namespaces/default/events\": dial tcp 172.31.17.186:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-17-186.1864ae270e58a80a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-17-186,UID:ip-172-31-17-186,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-17-186,},FirstTimestamp:2025-09-12 23:53:57.348984842 +0000 UTC m=+1.462616396,LastTimestamp:2025-09-12 23:53:57.348984842 +0000 UTC m=+1.462616396,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-17-186,}" Sep 12 23:53:57.391509 kubelet[2863]: I0912 23:53:57.391455 2863 factory.go:221] Registration of the systemd container factory successfully Sep 12 23:53:57.391890 kubelet[2863]: I0912 23:53:57.391840 2863 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 12 23:53:57.393532 kubelet[2863]: E0912 23:53:57.393085 2863 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.17.186:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-17-186?timeout=10s\": dial tcp 172.31.17.186:6443: connect: connection refused" interval="200ms" Sep 12 23:53:57.399306 kubelet[2863]: W0912 23:53:57.399187 2863 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.17.186:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.17.186:6443: connect: 
connection refused Sep 12 23:53:57.399306 kubelet[2863]: E0912 23:53:57.399299 2863 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.17.186:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.17.186:6443: connect: connection refused" logger="UnhandledError" Sep 12 23:53:57.400350 kubelet[2863]: I0912 23:53:57.400039 2863 factory.go:221] Registration of the containerd container factory successfully Sep 12 23:53:57.402642 kubelet[2863]: E0912 23:53:57.402597 2863 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 12 23:53:57.428013 kubelet[2863]: I0912 23:53:57.427945 2863 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 12 23:53:57.431009 kubelet[2863]: I0912 23:53:57.430949 2863 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Sep 12 23:53:57.431009 kubelet[2863]: I0912 23:53:57.431001 2863 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 12 23:53:57.431207 kubelet[2863]: I0912 23:53:57.431066 2863 kubelet.go:2321] "Starting kubelet main sync loop" Sep 12 23:53:57.431275 kubelet[2863]: E0912 23:53:57.431181 2863 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 12 23:53:57.441979 kubelet[2863]: W0912 23:53:57.440651 2863 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.17.186:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.17.186:6443: connect: connection refused Sep 12 23:53:57.442107 kubelet[2863]: E0912 23:53:57.442010 2863 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.17.186:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.17.186:6443: connect: connection refused" logger="UnhandledError" Sep 12 23:53:57.467191 kubelet[2863]: I0912 23:53:57.467136 2863 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 12 23:53:57.467191 kubelet[2863]: I0912 23:53:57.467178 2863 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 12 23:53:57.467428 kubelet[2863]: I0912 23:53:57.467214 2863 state_mem.go:36] "Initialized new in-memory state store" Sep 12 23:53:57.479001 kubelet[2863]: E0912 23:53:57.478917 2863 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ip-172-31-17-186\" not found" Sep 12 23:53:57.490734 kubelet[2863]: I0912 23:53:57.490673 2863 policy_none.go:49] "None policy: Start" Sep 12 23:53:57.491988 kubelet[2863]: I0912 23:53:57.491951 2863 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 12 
23:53:57.492133 kubelet[2863]: I0912 23:53:57.492000 2863 state_mem.go:35] "Initializing new in-memory state store" Sep 12 23:53:57.529320 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Sep 12 23:53:57.531967 kubelet[2863]: E0912 23:53:57.531898 2863 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 12 23:53:57.547700 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Sep 12 23:53:57.556566 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Sep 12 23:53:57.564867 kubelet[2863]: I0912 23:53:57.564011 2863 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 12 23:53:57.564867 kubelet[2863]: I0912 23:53:57.564310 2863 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 12 23:53:57.564867 kubelet[2863]: I0912 23:53:57.564350 2863 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 12 23:53:57.564867 kubelet[2863]: I0912 23:53:57.564681 2863 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 12 23:53:57.570059 kubelet[2863]: E0912 23:53:57.569984 2863 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-17-186\" not found" Sep 12 23:53:57.594588 kubelet[2863]: E0912 23:53:57.594499 2863 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.17.186:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-17-186?timeout=10s\": dial tcp 172.31.17.186:6443: connect: connection refused" interval="400ms" Sep 12 23:53:57.667896 kubelet[2863]: I0912 23:53:57.667130 2863 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-17-186" Sep 12 23:53:57.670220 kubelet[2863]: E0912 
23:53:57.670156 2863 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.17.186:6443/api/v1/nodes\": dial tcp 172.31.17.186:6443: connect: connection refused" node="ip-172-31-17-186" Sep 12 23:53:57.752367 systemd[1]: Created slice kubepods-burstable-pod5e7e752434bb0dbb4a54893336d01860.slice - libcontainer container kubepods-burstable-pod5e7e752434bb0dbb4a54893336d01860.slice. Sep 12 23:53:57.778285 systemd[1]: Created slice kubepods-burstable-pod8fb34f898b4f1a58c209e4e1ea18518b.slice - libcontainer container kubepods-burstable-pod8fb34f898b4f1a58c209e4e1ea18518b.slice. Sep 12 23:53:57.785497 kubelet[2863]: I0912 23:53:57.785442 2863 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4003726b6c7d17a20089c3b721018afb-kubeconfig\") pod \"kube-controller-manager-ip-172-31-17-186\" (UID: \"4003726b6c7d17a20089c3b721018afb\") " pod="kube-system/kube-controller-manager-ip-172-31-17-186" Sep 12 23:53:57.785614 kubelet[2863]: I0912 23:53:57.785507 2863 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5e7e752434bb0dbb4a54893336d01860-k8s-certs\") pod \"kube-apiserver-ip-172-31-17-186\" (UID: \"5e7e752434bb0dbb4a54893336d01860\") " pod="kube-system/kube-apiserver-ip-172-31-17-186" Sep 12 23:53:57.785614 kubelet[2863]: I0912 23:53:57.785548 2863 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/4003726b6c7d17a20089c3b721018afb-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-17-186\" (UID: \"4003726b6c7d17a20089c3b721018afb\") " pod="kube-system/kube-controller-manager-ip-172-31-17-186" Sep 12 23:53:57.785614 kubelet[2863]: I0912 23:53:57.785594 2863 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4003726b6c7d17a20089c3b721018afb-ca-certs\") pod \"kube-controller-manager-ip-172-31-17-186\" (UID: \"4003726b6c7d17a20089c3b721018afb\") " pod="kube-system/kube-controller-manager-ip-172-31-17-186" Sep 12 23:53:57.785873 kubelet[2863]: I0912 23:53:57.785631 2863 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4003726b6c7d17a20089c3b721018afb-k8s-certs\") pod \"kube-controller-manager-ip-172-31-17-186\" (UID: \"4003726b6c7d17a20089c3b721018afb\") " pod="kube-system/kube-controller-manager-ip-172-31-17-186" Sep 12 23:53:57.785873 kubelet[2863]: I0912 23:53:57.785687 2863 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4003726b6c7d17a20089c3b721018afb-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-17-186\" (UID: \"4003726b6c7d17a20089c3b721018afb\") " pod="kube-system/kube-controller-manager-ip-172-31-17-186" Sep 12 23:53:57.785873 kubelet[2863]: I0912 23:53:57.785752 2863 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8fb34f898b4f1a58c209e4e1ea18518b-kubeconfig\") pod \"kube-scheduler-ip-172-31-17-186\" (UID: \"8fb34f898b4f1a58c209e4e1ea18518b\") " pod="kube-system/kube-scheduler-ip-172-31-17-186" Sep 12 23:53:57.785873 kubelet[2863]: I0912 23:53:57.785790 2863 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5e7e752434bb0dbb4a54893336d01860-ca-certs\") pod \"kube-apiserver-ip-172-31-17-186\" (UID: \"5e7e752434bb0dbb4a54893336d01860\") " pod="kube-system/kube-apiserver-ip-172-31-17-186" Sep 12 23:53:57.785873 kubelet[2863]: I0912 23:53:57.785829 2863 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5e7e752434bb0dbb4a54893336d01860-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-17-186\" (UID: \"5e7e752434bb0dbb4a54893336d01860\") " pod="kube-system/kube-apiserver-ip-172-31-17-186" Sep 12 23:53:57.795832 systemd[1]: Created slice kubepods-burstable-pod4003726b6c7d17a20089c3b721018afb.slice - libcontainer container kubepods-burstable-pod4003726b6c7d17a20089c3b721018afb.slice. Sep 12 23:53:57.873261 kubelet[2863]: I0912 23:53:57.873202 2863 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-17-186" Sep 12 23:53:57.873807 kubelet[2863]: E0912 23:53:57.873714 2863 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.17.186:6443/api/v1/nodes\": dial tcp 172.31.17.186:6443: connect: connection refused" node="ip-172-31-17-186" Sep 12 23:53:57.995385 kubelet[2863]: E0912 23:53:57.995246 2863 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.17.186:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-17-186?timeout=10s\": dial tcp 172.31.17.186:6443: connect: connection refused" interval="800ms" Sep 12 23:53:58.072343 containerd[2019]: time="2025-09-12T23:53:58.071974982Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-17-186,Uid:5e7e752434bb0dbb4a54893336d01860,Namespace:kube-system,Attempt:0,}" Sep 12 23:53:58.089639 containerd[2019]: time="2025-09-12T23:53:58.089563454Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-17-186,Uid:8fb34f898b4f1a58c209e4e1ea18518b,Namespace:kube-system,Attempt:0,}" Sep 12 23:53:58.104736 containerd[2019]: time="2025-09-12T23:53:58.104388314Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-17-186,Uid:4003726b6c7d17a20089c3b721018afb,Namespace:kube-system,Attempt:0,}" Sep 12 23:53:58.276569 kubelet[2863]: I0912 23:53:58.276432 2863 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-17-186" Sep 12 23:53:58.277179 kubelet[2863]: E0912 23:53:58.276980 2863 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.17.186:6443/api/v1/nodes\": dial tcp 172.31.17.186:6443: connect: connection refused" node="ip-172-31-17-186" Sep 12 23:53:58.374802 kubelet[2863]: W0912 23:53:58.374690 2863 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.17.186:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-17-186&limit=500&resourceVersion=0": dial tcp 172.31.17.186:6443: connect: connection refused Sep 12 23:53:58.375015 kubelet[2863]: E0912 23:53:58.374822 2863 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.17.186:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-17-186&limit=500&resourceVersion=0\": dial tcp 172.31.17.186:6443: connect: connection refused" logger="UnhandledError" Sep 12 23:53:58.461583 kubelet[2863]: E0912 23:53:58.461340 2863 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.17.186:6443/api/v1/namespaces/default/events\": dial tcp 172.31.17.186:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-17-186.1864ae270e58a80a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-17-186,UID:ip-172-31-17-186,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-17-186,},FirstTimestamp:2025-09-12 23:53:57.348984842 +0000 UTC 
m=+1.462616396,LastTimestamp:2025-09-12 23:53:57.348984842 +0000 UTC m=+1.462616396,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-17-186,}" Sep 12 23:53:58.582818 kubelet[2863]: W0912 23:53:58.582609 2863 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.17.186:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.17.186:6443: connect: connection refused Sep 12 23:53:58.582818 kubelet[2863]: E0912 23:53:58.582707 2863 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.17.186:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.17.186:6443: connect: connection refused" logger="UnhandledError" Sep 12 23:53:58.605478 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4258257635.mount: Deactivated successfully. 
Sep 12 23:53:58.622788 containerd[2019]: time="2025-09-12T23:53:58.622704377Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 12 23:53:58.627148 kubelet[2863]: W0912 23:53:58.626935 2863 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.17.186:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.17.186:6443: connect: connection refused Sep 12 23:53:58.627148 kubelet[2863]: E0912 23:53:58.627046 2863 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.17.186:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.17.186:6443: connect: connection refused" logger="UnhandledError" Sep 12 23:53:58.630759 containerd[2019]: time="2025-09-12T23:53:58.629156993Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 12 23:53:58.631353 containerd[2019]: time="2025-09-12T23:53:58.631309601Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173" Sep 12 23:53:58.633021 containerd[2019]: time="2025-09-12T23:53:58.632966201Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 12 23:53:58.635939 containerd[2019]: time="2025-09-12T23:53:58.635885861Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 12 23:53:58.638487 containerd[2019]: 
time="2025-09-12T23:53:58.638435921Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 12 23:53:58.639599 containerd[2019]: time="2025-09-12T23:53:58.639538937Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 12 23:53:58.644117 containerd[2019]: time="2025-09-12T23:53:58.644062853Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 12 23:53:58.648213 containerd[2019]: time="2025-09-12T23:53:58.648158765Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 543.655815ms" Sep 12 23:53:58.652115 containerd[2019]: time="2025-09-12T23:53:58.652035269Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 579.946419ms" Sep 12 23:53:58.681346 containerd[2019]: time="2025-09-12T23:53:58.681284333Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 591.581511ms" Sep 12 23:53:58.797734 
kubelet[2863]: E0912 23:53:58.797480 2863 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.17.186:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-17-186?timeout=10s\": dial tcp 172.31.17.186:6443: connect: connection refused" interval="1.6s" Sep 12 23:53:58.825634 containerd[2019]: time="2025-09-12T23:53:58.825098790Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 23:53:58.825634 containerd[2019]: time="2025-09-12T23:53:58.825203766Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 23:53:58.825634 containerd[2019]: time="2025-09-12T23:53:58.825233574Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 23:53:58.825634 containerd[2019]: time="2025-09-12T23:53:58.825425322Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 23:53:58.827637 kubelet[2863]: W0912 23:53:58.827362 2863 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.17.186:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.17.186:6443: connect: connection refused Sep 12 23:53:58.827637 kubelet[2863]: E0912 23:53:58.827444 2863 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.17.186:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.17.186:6443: connect: connection refused" logger="UnhandledError" Sep 12 23:53:58.836137 containerd[2019]: time="2025-09-12T23:53:58.835875426Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 23:53:58.836523 containerd[2019]: time="2025-09-12T23:53:58.836001654Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 23:53:58.838105 containerd[2019]: time="2025-09-12T23:53:58.837449610Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 23:53:58.840144 containerd[2019]: time="2025-09-12T23:53:58.839804046Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 23:53:58.840144 containerd[2019]: time="2025-09-12T23:53:58.839852442Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 23:53:58.840144 containerd[2019]: time="2025-09-12T23:53:58.840018858Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 23:53:58.841102 containerd[2019]: time="2025-09-12T23:53:58.836195118Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 23:53:58.841752 containerd[2019]: time="2025-09-12T23:53:58.841594650Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 23:53:58.880260 systemd[1]: Started cri-containerd-a1d94fceb13fc881fd89b0a48b50700d4370b9cc1922cd2f89e58b366766df4e.scope - libcontainer container a1d94fceb13fc881fd89b0a48b50700d4370b9cc1922cd2f89e58b366766df4e. Sep 12 23:53:58.915184 systemd[1]: Started cri-containerd-0df9cc4c51db9d9e6b833ae0350b9cac10b943c227b0c2b07b3371250e14a34b.scope - libcontainer container 0df9cc4c51db9d9e6b833ae0350b9cac10b943c227b0c2b07b3371250e14a34b. Sep 12 23:53:58.928169 systemd[1]: Started cri-containerd-bef8b3fa9306259b9c7b025498b58785ec5e59966e74538fb9b0a1bf661fa0cd.scope - libcontainer container bef8b3fa9306259b9c7b025498b58785ec5e59966e74538fb9b0a1bf661fa0cd. 
Sep 12 23:53:59.034805 containerd[2019]: time="2025-09-12T23:53:59.032706531Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-17-186,Uid:4003726b6c7d17a20089c3b721018afb,Namespace:kube-system,Attempt:0,} returns sandbox id \"a1d94fceb13fc881fd89b0a48b50700d4370b9cc1922cd2f89e58b366766df4e\"" Sep 12 23:53:59.046249 containerd[2019]: time="2025-09-12T23:53:59.046192695Z" level=info msg="CreateContainer within sandbox \"a1d94fceb13fc881fd89b0a48b50700d4370b9cc1922cd2f89e58b366766df4e\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 12 23:53:59.051669 containerd[2019]: time="2025-09-12T23:53:59.051618519Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-17-186,Uid:5e7e752434bb0dbb4a54893336d01860,Namespace:kube-system,Attempt:0,} returns sandbox id \"0df9cc4c51db9d9e6b833ae0350b9cac10b943c227b0c2b07b3371250e14a34b\"" Sep 12 23:53:59.059997 containerd[2019]: time="2025-09-12T23:53:59.059943015Z" level=info msg="CreateContainer within sandbox \"0df9cc4c51db9d9e6b833ae0350b9cac10b943c227b0c2b07b3371250e14a34b\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 12 23:53:59.075193 containerd[2019]: time="2025-09-12T23:53:59.075124563Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-17-186,Uid:8fb34f898b4f1a58c209e4e1ea18518b,Namespace:kube-system,Attempt:0,} returns sandbox id \"bef8b3fa9306259b9c7b025498b58785ec5e59966e74538fb9b0a1bf661fa0cd\"" Sep 12 23:53:59.082277 kubelet[2863]: I0912 23:53:59.082228 2863 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-17-186" Sep 12 23:53:59.082712 containerd[2019]: time="2025-09-12T23:53:59.082655055Z" level=info msg="CreateContainer within sandbox \"a1d94fceb13fc881fd89b0a48b50700d4370b9cc1922cd2f89e58b366766df4e\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id 
\"a98a27114f07418d43920af33431053b86bc2384e182400018c1be4ca8257c77\"" Sep 12 23:53:59.083653 kubelet[2863]: E0912 23:53:59.083583 2863 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.17.186:6443/api/v1/nodes\": dial tcp 172.31.17.186:6443: connect: connection refused" node="ip-172-31-17-186" Sep 12 23:53:59.087156 containerd[2019]: time="2025-09-12T23:53:59.086913879Z" level=info msg="StartContainer for \"a98a27114f07418d43920af33431053b86bc2384e182400018c1be4ca8257c77\"" Sep 12 23:53:59.088623 containerd[2019]: time="2025-09-12T23:53:59.088530135Z" level=info msg="CreateContainer within sandbox \"bef8b3fa9306259b9c7b025498b58785ec5e59966e74538fb9b0a1bf661fa0cd\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 12 23:53:59.117270 containerd[2019]: time="2025-09-12T23:53:59.117041259Z" level=info msg="CreateContainer within sandbox \"0df9cc4c51db9d9e6b833ae0350b9cac10b943c227b0c2b07b3371250e14a34b\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"3d985c68d5447d72e13239a6dcb8bd76b40356638343c1fe576da6ae22f7206d\"" Sep 12 23:53:59.118390 containerd[2019]: time="2025-09-12T23:53:59.118320747Z" level=info msg="StartContainer for \"3d985c68d5447d72e13239a6dcb8bd76b40356638343c1fe576da6ae22f7206d\"" Sep 12 23:53:59.145110 containerd[2019]: time="2025-09-12T23:53:59.144961119Z" level=info msg="CreateContainer within sandbox \"bef8b3fa9306259b9c7b025498b58785ec5e59966e74538fb9b0a1bf661fa0cd\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"2f6f22436bbd88a0c382843388a39eeb641b64648a0fb0586e1eaf8429642442\"" Sep 12 23:53:59.147308 containerd[2019]: time="2025-09-12T23:53:59.147232059Z" level=info msg="StartContainer for \"2f6f22436bbd88a0c382843388a39eeb641b64648a0fb0586e1eaf8429642442\"" Sep 12 23:53:59.173510 systemd[1]: Started cri-containerd-a98a27114f07418d43920af33431053b86bc2384e182400018c1be4ca8257c77.scope - libcontainer container 
a98a27114f07418d43920af33431053b86bc2384e182400018c1be4ca8257c77. Sep 12 23:53:59.205076 systemd[1]: Started cri-containerd-3d985c68d5447d72e13239a6dcb8bd76b40356638343c1fe576da6ae22f7206d.scope - libcontainer container 3d985c68d5447d72e13239a6dcb8bd76b40356638343c1fe576da6ae22f7206d. Sep 12 23:53:59.262630 systemd[1]: Started cri-containerd-2f6f22436bbd88a0c382843388a39eeb641b64648a0fb0586e1eaf8429642442.scope - libcontainer container 2f6f22436bbd88a0c382843388a39eeb641b64648a0fb0586e1eaf8429642442. Sep 12 23:53:59.317163 containerd[2019]: time="2025-09-12T23:53:59.317083660Z" level=info msg="StartContainer for \"a98a27114f07418d43920af33431053b86bc2384e182400018c1be4ca8257c77\" returns successfully" Sep 12 23:53:59.346519 kubelet[2863]: E0912 23:53:59.346220 2863 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.31.17.186:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.17.186:6443: connect: connection refused" logger="UnhandledError" Sep 12 23:53:59.371953 containerd[2019]: time="2025-09-12T23:53:59.370498672Z" level=info msg="StartContainer for \"3d985c68d5447d72e13239a6dcb8bd76b40356638343c1fe576da6ae22f7206d\" returns successfully" Sep 12 23:53:59.448599 containerd[2019]: time="2025-09-12T23:53:59.448503701Z" level=info msg="StartContainer for \"2f6f22436bbd88a0c382843388a39eeb641b64648a0fb0586e1eaf8429642442\" returns successfully" Sep 12 23:54:00.688200 kubelet[2863]: I0912 23:54:00.688125 2863 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-17-186" Sep 12 23:54:01.894775 update_engine[1994]: I20250912 23:54:01.892776 1994 update_attempter.cc:509] Updating boot flags... 
Sep 12 23:54:02.053810 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 37 scanned by (udev-worker) (3151) Sep 12 23:54:02.530765 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 37 scanned by (udev-worker) (3153) Sep 12 23:54:04.146375 kubelet[2863]: E0912 23:54:04.146311 2863 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-17-186\" not found" node="ip-172-31-17-186" Sep 12 23:54:04.234388 kubelet[2863]: I0912 23:54:04.233671 2863 kubelet_node_status.go:75] "Successfully registered node" node="ip-172-31-17-186" Sep 12 23:54:04.234388 kubelet[2863]: E0912 23:54:04.233753 2863 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"ip-172-31-17-186\": node \"ip-172-31-17-186\" not found" Sep 12 23:54:04.328755 kubelet[2863]: I0912 23:54:04.327786 2863 apiserver.go:52] "Watching apiserver" Sep 12 23:54:04.384187 kubelet[2863]: I0912 23:54:04.384095 2863 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Sep 12 23:54:06.376977 systemd[1]: Reloading requested from client PID 3322 ('systemctl') (unit session-9.scope)... Sep 12 23:54:06.377016 systemd[1]: Reloading... Sep 12 23:54:06.672774 zram_generator::config[3374]: No configuration found. Sep 12 23:54:06.945991 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 12 23:54:07.161528 systemd[1]: Reloading finished in 783 ms. Sep 12 23:54:07.247349 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 23:54:07.263569 systemd[1]: kubelet.service: Deactivated successfully. Sep 12 23:54:07.264185 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
Sep 12 23:54:07.264287 systemd[1]: kubelet.service: Consumed 2.088s CPU time, 130.7M memory peak, 0B memory swap peak. Sep 12 23:54:07.273279 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 23:54:07.672338 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 23:54:07.692326 (kubelet)[3422]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 12 23:54:07.808609 kubelet[3422]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 12 23:54:07.808609 kubelet[3422]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Sep 12 23:54:07.808609 kubelet[3422]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 12 23:54:07.808609 kubelet[3422]: I0912 23:54:07.808149 3422 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 12 23:54:07.828101 kubelet[3422]: I0912 23:54:07.828035 3422 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Sep 12 23:54:07.828101 kubelet[3422]: I0912 23:54:07.828091 3422 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 12 23:54:07.828604 kubelet[3422]: I0912 23:54:07.828562 3422 server.go:934] "Client rotation is on, will bootstrap in background" Sep 12 23:54:07.834486 kubelet[3422]: I0912 23:54:07.833540 3422 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Sep 12 23:54:07.839363 kubelet[3422]: I0912 23:54:07.839060 3422 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 12 23:54:07.854066 kubelet[3422]: E0912 23:54:07.853993 3422 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 12 23:54:07.854066 kubelet[3422]: I0912 23:54:07.854060 3422 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Sep 12 23:54:07.861632 kubelet[3422]: I0912 23:54:07.861374 3422 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Sep 12 23:54:07.862299 kubelet[3422]: I0912 23:54:07.861939 3422 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Sep 12 23:54:07.862299 kubelet[3422]: I0912 23:54:07.862195 3422 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 12 23:54:07.862554 kubelet[3422]: I0912 23:54:07.862252 3422 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"ip-172-31-17-186","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 12 23:54:07.862711 kubelet[3422]: I0912 23:54:07.862568 3422 topology_manager.go:138] "Creating topology manager with none policy" Sep 12 23:54:07.862711 kubelet[3422]: I0912 23:54:07.862592 3422 container_manager_linux.go:300] "Creating device plugin manager" Sep 12 23:54:07.862711 kubelet[3422]: I0912 23:54:07.862661 3422 state_mem.go:36] "Initialized new in-memory state store" Sep 12 23:54:07.864866 kubelet[3422]: I0912 23:54:07.863376 3422 kubelet.go:408] 
"Attempting to sync node with API server" Sep 12 23:54:07.864866 kubelet[3422]: I0912 23:54:07.864826 3422 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 12 23:54:07.864866 kubelet[3422]: I0912 23:54:07.864872 3422 kubelet.go:314] "Adding apiserver pod source" Sep 12 23:54:07.865129 kubelet[3422]: I0912 23:54:07.864911 3422 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 12 23:54:07.869065 kubelet[3422]: I0912 23:54:07.867693 3422 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Sep 12 23:54:07.869065 kubelet[3422]: I0912 23:54:07.869041 3422 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 12 23:54:07.883200 kubelet[3422]: I0912 23:54:07.883121 3422 server.go:1274] "Started kubelet" Sep 12 23:54:07.895594 sudo[3436]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Sep 12 23:54:07.897590 kubelet[3422]: I0912 23:54:07.896516 3422 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Sep 12 23:54:07.897306 sudo[3436]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Sep 12 23:54:07.919070 kubelet[3422]: I0912 23:54:07.918999 3422 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 12 23:54:07.925288 kubelet[3422]: I0912 23:54:07.896746 3422 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 12 23:54:07.925288 kubelet[3422]: I0912 23:54:07.924826 3422 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 12 23:54:07.928981 kubelet[3422]: I0912 23:54:07.928918 3422 server.go:449] "Adding debug handlers to kubelet server" Sep 12 23:54:07.930646 kubelet[3422]: I0912 23:54:07.914338 
3422 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 12 23:54:07.948444 kubelet[3422]: I0912 23:54:07.948370 3422 volume_manager.go:289] "Starting Kubelet Volume Manager" Sep 12 23:54:07.950041 kubelet[3422]: E0912 23:54:07.949978 3422 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ip-172-31-17-186\" not found" Sep 12 23:54:07.950417 kubelet[3422]: I0912 23:54:07.950376 3422 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Sep 12 23:54:07.953972 kubelet[3422]: I0912 23:54:07.953910 3422 reconciler.go:26] "Reconciler: start to sync state" Sep 12 23:54:07.982816 kubelet[3422]: I0912 23:54:07.977746 3422 factory.go:221] Registration of the systemd container factory successfully Sep 12 23:54:07.982816 kubelet[3422]: I0912 23:54:07.979970 3422 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 12 23:54:08.012256 kubelet[3422]: E0912 23:54:08.010809 3422 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 12 23:54:08.015870 kubelet[3422]: I0912 23:54:08.015121 3422 factory.go:221] Registration of the containerd container factory successfully Sep 12 23:54:08.024625 kubelet[3422]: I0912 23:54:08.024530 3422 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 12 23:54:08.039754 kubelet[3422]: I0912 23:54:08.038220 3422 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Sep 12 23:54:08.039754 kubelet[3422]: I0912 23:54:08.038409 3422 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 12 23:54:08.039754 kubelet[3422]: I0912 23:54:08.038604 3422 kubelet.go:2321] "Starting kubelet main sync loop" Sep 12 23:54:08.039754 kubelet[3422]: E0912 23:54:08.039551 3422 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 12 23:54:08.142683 kubelet[3422]: E0912 23:54:08.141905 3422 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 12 23:54:08.176257 kubelet[3422]: I0912 23:54:08.176128 3422 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 12 23:54:08.176257 kubelet[3422]: I0912 23:54:08.176163 3422 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 12 23:54:08.176257 kubelet[3422]: I0912 23:54:08.176201 3422 state_mem.go:36] "Initialized new in-memory state store" Sep 12 23:54:08.177554 kubelet[3422]: I0912 23:54:08.177487 3422 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 12 23:54:08.177554 kubelet[3422]: I0912 23:54:08.177533 3422 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 12 23:54:08.177945 kubelet[3422]: I0912 23:54:08.177573 3422 policy_none.go:49] "None policy: Start" Sep 12 23:54:08.179905 kubelet[3422]: I0912 23:54:08.179854 3422 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 12 23:54:08.179905 kubelet[3422]: I0912 23:54:08.179912 3422 state_mem.go:35] "Initializing new in-memory state store" Sep 12 23:54:08.180520 kubelet[3422]: I0912 23:54:08.180264 3422 state_mem.go:75] "Updated machine memory state" Sep 12 23:54:08.196352 kubelet[3422]: I0912 23:54:08.195944 3422 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 12 23:54:08.196895 kubelet[3422]: I0912 23:54:08.196869 
3422 eviction_manager.go:189] "Eviction manager: starting control loop"
Sep 12 23:54:08.197108 kubelet[3422]: I0912 23:54:08.197056 3422 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Sep 12 23:54:08.198484 kubelet[3422]: I0912 23:54:08.198074 3422 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Sep 12 23:54:08.325911 kubelet[3422]: I0912 23:54:08.325838 3422 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-17-186"
Sep 12 23:54:08.354302 kubelet[3422]: I0912 23:54:08.354238 3422 kubelet_node_status.go:111] "Node was previously registered" node="ip-172-31-17-186"
Sep 12 23:54:08.354437 kubelet[3422]: I0912 23:54:08.354371 3422 kubelet_node_status.go:75] "Successfully registered node" node="ip-172-31-17-186"
Sep 12 23:54:08.358030 kubelet[3422]: I0912 23:54:08.356401 3422 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4003726b6c7d17a20089c3b721018afb-k8s-certs\") pod \"kube-controller-manager-ip-172-31-17-186\" (UID: \"4003726b6c7d17a20089c3b721018afb\") " pod="kube-system/kube-controller-manager-ip-172-31-17-186"
Sep 12 23:54:08.358030 kubelet[3422]: I0912 23:54:08.356516 3422 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8fb34f898b4f1a58c209e4e1ea18518b-kubeconfig\") pod \"kube-scheduler-ip-172-31-17-186\" (UID: \"8fb34f898b4f1a58c209e4e1ea18518b\") " pod="kube-system/kube-scheduler-ip-172-31-17-186"
Sep 12 23:54:08.358030 kubelet[3422]: I0912 23:54:08.356593 3422 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5e7e752434bb0dbb4a54893336d01860-ca-certs\") pod \"kube-apiserver-ip-172-31-17-186\" (UID: \"5e7e752434bb0dbb4a54893336d01860\") " pod="kube-system/kube-apiserver-ip-172-31-17-186"
Sep 12 23:54:08.358030 kubelet[3422]: I0912 23:54:08.356706 3422 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5e7e752434bb0dbb4a54893336d01860-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-17-186\" (UID: \"5e7e752434bb0dbb4a54893336d01860\") " pod="kube-system/kube-apiserver-ip-172-31-17-186"
Sep 12 23:54:08.358030 kubelet[3422]: I0912 23:54:08.356799 3422 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/4003726b6c7d17a20089c3b721018afb-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-17-186\" (UID: \"4003726b6c7d17a20089c3b721018afb\") " pod="kube-system/kube-controller-manager-ip-172-31-17-186"
Sep 12 23:54:08.358511 kubelet[3422]: I0912 23:54:08.357151 3422 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4003726b6c7d17a20089c3b721018afb-kubeconfig\") pod \"kube-controller-manager-ip-172-31-17-186\" (UID: \"4003726b6c7d17a20089c3b721018afb\") " pod="kube-system/kube-controller-manager-ip-172-31-17-186"
Sep 12 23:54:08.358511 kubelet[3422]: I0912 23:54:08.357356 3422 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4003726b6c7d17a20089c3b721018afb-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-17-186\" (UID: \"4003726b6c7d17a20089c3b721018afb\") " pod="kube-system/kube-controller-manager-ip-172-31-17-186"
Sep 12 23:54:08.358511 kubelet[3422]: I0912 23:54:08.357560 3422 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5e7e752434bb0dbb4a54893336d01860-k8s-certs\") pod \"kube-apiserver-ip-172-31-17-186\" (UID: \"5e7e752434bb0dbb4a54893336d01860\") " pod="kube-system/kube-apiserver-ip-172-31-17-186"
Sep 12 23:54:08.358511 kubelet[3422]: I0912 23:54:08.357955 3422 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4003726b6c7d17a20089c3b721018afb-ca-certs\") pod \"kube-controller-manager-ip-172-31-17-186\" (UID: \"4003726b6c7d17a20089c3b721018afb\") " pod="kube-system/kube-controller-manager-ip-172-31-17-186"
Sep 12 23:54:08.880543 kubelet[3422]: I0912 23:54:08.880462 3422 apiserver.go:52] "Watching apiserver"
Sep 12 23:54:08.907912 sudo[3436]: pam_unix(sudo:session): session closed for user root
Sep 12 23:54:08.950974 kubelet[3422]: I0912 23:54:08.950889 3422 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world"
Sep 12 23:54:09.187488 kubelet[3422]: I0912 23:54:09.187101 3422 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-17-186" podStartSLOduration=1.187078393 podStartE2EDuration="1.187078393s" podCreationTimestamp="2025-09-12 23:54:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 23:54:09.160414069 +0000 UTC m=+1.456708328" watchObservedRunningTime="2025-09-12 23:54:09.187078393 +0000 UTC m=+1.483372652"
Sep 12 23:54:09.210288 kubelet[3422]: I0912 23:54:09.209932 3422 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-17-186" podStartSLOduration=1.209908189 podStartE2EDuration="1.209908189s" podCreationTimestamp="2025-09-12 23:54:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 23:54:09.189711985 +0000 UTC m=+1.486006256" watchObservedRunningTime="2025-09-12 23:54:09.209908189 +0000 UTC m=+1.506202424"
Sep 12 23:54:09.927467 kubelet[3422]: I0912 23:54:09.927371 3422 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-17-186" podStartSLOduration=1.927346553 podStartE2EDuration="1.927346553s" podCreationTimestamp="2025-09-12 23:54:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 23:54:09.212005933 +0000 UTC m=+1.508300192" watchObservedRunningTime="2025-09-12 23:54:09.927346553 +0000 UTC m=+2.223640800"
Sep 12 23:54:11.171198 sudo[2352]: pam_unix(sudo:session): session closed for user root
Sep 12 23:54:11.195986 sshd[2347]: pam_unix(sshd:session): session closed for user core
Sep 12 23:54:11.203034 systemd[1]: sshd@8-172.31.17.186:22-147.75.109.163:59078.service: Deactivated successfully.
Sep 12 23:54:11.203796 systemd-logind[1993]: Session 9 logged out. Waiting for processes to exit.
Sep 12 23:54:11.208290 systemd[1]: session-9.scope: Deactivated successfully.
Sep 12 23:54:11.208958 systemd[1]: session-9.scope: Consumed 10.416s CPU time, 149.1M memory peak, 0B memory swap peak.
Sep 12 23:54:11.213439 systemd-logind[1993]: Removed session 9.
Sep 12 23:54:11.885352 kubelet[3422]: I0912 23:54:11.885069 3422 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Sep 12 23:54:11.886158 containerd[2019]: time="2025-09-12T23:54:11.885643698Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Sep 12 23:54:11.889610 kubelet[3422]: I0912 23:54:11.887218 3422 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Sep 12 23:54:12.991774 kubelet[3422]: I0912 23:54:12.990304 3422 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-68jpk\" (UniqueName: \"kubernetes.io/projected/2c51fdd8-ee96-4d90-a043-e0ab58fbb641-kube-api-access-68jpk\") pod \"kube-proxy-tcgdf\" (UID: \"2c51fdd8-ee96-4d90-a043-e0ab58fbb641\") " pod="kube-system/kube-proxy-tcgdf"
Sep 12 23:54:12.991774 kubelet[3422]: I0912 23:54:12.990372 3422 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2c51fdd8-ee96-4d90-a043-e0ab58fbb641-lib-modules\") pod \"kube-proxy-tcgdf\" (UID: \"2c51fdd8-ee96-4d90-a043-e0ab58fbb641\") " pod="kube-system/kube-proxy-tcgdf"
Sep 12 23:54:12.991774 kubelet[3422]: I0912 23:54:12.990415 3422 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/2c51fdd8-ee96-4d90-a043-e0ab58fbb641-kube-proxy\") pod \"kube-proxy-tcgdf\" (UID: \"2c51fdd8-ee96-4d90-a043-e0ab58fbb641\") " pod="kube-system/kube-proxy-tcgdf"
Sep 12 23:54:12.991774 kubelet[3422]: I0912 23:54:12.990449 3422 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2c51fdd8-ee96-4d90-a043-e0ab58fbb641-xtables-lock\") pod \"kube-proxy-tcgdf\" (UID: \"2c51fdd8-ee96-4d90-a043-e0ab58fbb641\") " pod="kube-system/kube-proxy-tcgdf"
Sep 12 23:54:12.994122 systemd[1]: Created slice kubepods-besteffort-pod2c51fdd8_ee96_4d90_a043_e0ab58fbb641.slice - libcontainer container kubepods-besteffort-pod2c51fdd8_ee96_4d90_a043_e0ab58fbb641.slice.
Sep 12 23:54:12.998120 kubelet[3422]: W0912 23:54:12.995612 3422 reflector.go:561] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ip-172-31-17-186" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-17-186' and this object
Sep 12 23:54:12.998120 kubelet[3422]: E0912 23:54:12.995764 3422 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:ip-172-31-17-186\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ip-172-31-17-186' and this object" logger="UnhandledError"
Sep 12 23:54:12.998120 kubelet[3422]: W0912 23:54:12.995631 3422 reflector.go:561] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:ip-172-31-17-186" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-17-186' and this object
Sep 12 23:54:12.998120 kubelet[3422]: E0912 23:54:12.995881 3422 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-proxy\" is forbidden: User \"system:node:ip-172-31-17-186\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ip-172-31-17-186' and this object" logger="UnhandledError"
Sep 12 23:54:13.025380 systemd[1]: Created slice kubepods-burstable-poda643442b_876b_43e6_b967_8499aa9605e8.slice - libcontainer container kubepods-burstable-poda643442b_876b_43e6_b967_8499aa9605e8.slice.
Sep 12 23:54:13.091097 kubelet[3422]: I0912 23:54:13.090683 3422 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/a643442b-876b-43e6-b967-8499aa9605e8-cilium-run\") pod \"cilium-9tcc8\" (UID: \"a643442b-876b-43e6-b967-8499aa9605e8\") " pod="kube-system/cilium-9tcc8"
Sep 12 23:54:13.091097 kubelet[3422]: I0912 23:54:13.090816 3422 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/a643442b-876b-43e6-b967-8499aa9605e8-host-proc-sys-kernel\") pod \"cilium-9tcc8\" (UID: \"a643442b-876b-43e6-b967-8499aa9605e8\") " pod="kube-system/cilium-9tcc8"
Sep 12 23:54:13.091097 kubelet[3422]: I0912 23:54:13.090861 3422 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/a643442b-876b-43e6-b967-8499aa9605e8-hostproc\") pod \"cilium-9tcc8\" (UID: \"a643442b-876b-43e6-b967-8499aa9605e8\") " pod="kube-system/cilium-9tcc8"
Sep 12 23:54:13.091097 kubelet[3422]: I0912 23:54:13.090897 3422 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a643442b-876b-43e6-b967-8499aa9605e8-xtables-lock\") pod \"cilium-9tcc8\" (UID: \"a643442b-876b-43e6-b967-8499aa9605e8\") " pod="kube-system/cilium-9tcc8"
Sep 12 23:54:13.091097 kubelet[3422]: I0912 23:54:13.090936 3422 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/a643442b-876b-43e6-b967-8499aa9605e8-clustermesh-secrets\") pod \"cilium-9tcc8\" (UID: \"a643442b-876b-43e6-b967-8499aa9605e8\") " pod="kube-system/cilium-9tcc8"
Sep 12 23:54:13.091097 kubelet[3422]: I0912 23:54:13.090979 3422 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a643442b-876b-43e6-b967-8499aa9605e8-cilium-config-path\") pod \"cilium-9tcc8\" (UID: \"a643442b-876b-43e6-b967-8499aa9605e8\") " pod="kube-system/cilium-9tcc8"
Sep 12 23:54:13.091536 kubelet[3422]: I0912 23:54:13.091035 3422 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/a643442b-876b-43e6-b967-8499aa9605e8-bpf-maps\") pod \"cilium-9tcc8\" (UID: \"a643442b-876b-43e6-b967-8499aa9605e8\") " pod="kube-system/cilium-9tcc8"
Sep 12 23:54:13.091536 kubelet[3422]: I0912 23:54:13.091074 3422 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a643442b-876b-43e6-b967-8499aa9605e8-etc-cni-netd\") pod \"cilium-9tcc8\" (UID: \"a643442b-876b-43e6-b967-8499aa9605e8\") " pod="kube-system/cilium-9tcc8"
Sep 12 23:54:13.091536 kubelet[3422]: I0912 23:54:13.091117 3422 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/a643442b-876b-43e6-b967-8499aa9605e8-host-proc-sys-net\") pod \"cilium-9tcc8\" (UID: \"a643442b-876b-43e6-b967-8499aa9605e8\") " pod="kube-system/cilium-9tcc8"
Sep 12 23:54:13.091536 kubelet[3422]: I0912 23:54:13.091181 3422 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/a643442b-876b-43e6-b967-8499aa9605e8-cni-path\") pod \"cilium-9tcc8\" (UID: \"a643442b-876b-43e6-b967-8499aa9605e8\") " pod="kube-system/cilium-9tcc8"
Sep 12 23:54:13.091536 kubelet[3422]: I0912 23:54:13.091247 3422 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nqxx6\" (UniqueName: \"kubernetes.io/projected/a643442b-876b-43e6-b967-8499aa9605e8-kube-api-access-nqxx6\") pod \"cilium-9tcc8\" (UID: \"a643442b-876b-43e6-b967-8499aa9605e8\") " pod="kube-system/cilium-9tcc8"
Sep 12 23:54:13.091536 kubelet[3422]: I0912 23:54:13.091285 3422 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/a643442b-876b-43e6-b967-8499aa9605e8-hubble-tls\") pod \"cilium-9tcc8\" (UID: \"a643442b-876b-43e6-b967-8499aa9605e8\") " pod="kube-system/cilium-9tcc8"
Sep 12 23:54:13.091915 kubelet[3422]: I0912 23:54:13.091330 3422 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/a643442b-876b-43e6-b967-8499aa9605e8-cilium-cgroup\") pod \"cilium-9tcc8\" (UID: \"a643442b-876b-43e6-b967-8499aa9605e8\") " pod="kube-system/cilium-9tcc8"
Sep 12 23:54:13.091915 kubelet[3422]: I0912 23:54:13.091381 3422 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a643442b-876b-43e6-b967-8499aa9605e8-lib-modules\") pod \"cilium-9tcc8\" (UID: \"a643442b-876b-43e6-b967-8499aa9605e8\") " pod="kube-system/cilium-9tcc8"
Sep 12 23:54:13.106524 systemd[1]: Created slice kubepods-besteffort-podcff6c9f6_7166_41d7_b927_5ca900ff8d56.slice - libcontainer container kubepods-besteffort-podcff6c9f6_7166_41d7_b927_5ca900ff8d56.slice.
Sep 12 23:54:13.192577 kubelet[3422]: I0912 23:54:13.192490 3422 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j88bm\" (UniqueName: \"kubernetes.io/projected/cff6c9f6-7166-41d7-b927-5ca900ff8d56-kube-api-access-j88bm\") pod \"cilium-operator-5d85765b45-6ml49\" (UID: \"cff6c9f6-7166-41d7-b927-5ca900ff8d56\") " pod="kube-system/cilium-operator-5d85765b45-6ml49"
Sep 12 23:54:13.193458 kubelet[3422]: I0912 23:54:13.192777 3422 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/cff6c9f6-7166-41d7-b927-5ca900ff8d56-cilium-config-path\") pod \"cilium-operator-5d85765b45-6ml49\" (UID: \"cff6c9f6-7166-41d7-b927-5ca900ff8d56\") " pod="kube-system/cilium-operator-5d85765b45-6ml49"
Sep 12 23:54:14.093112 kubelet[3422]: E0912 23:54:14.093043 3422 configmap.go:193] Couldn't get configMap kube-system/kube-proxy: failed to sync configmap cache: timed out waiting for the condition
Sep 12 23:54:14.094064 kubelet[3422]: E0912 23:54:14.093189 3422 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/2c51fdd8-ee96-4d90-a043-e0ab58fbb641-kube-proxy podName:2c51fdd8-ee96-4d90-a043-e0ab58fbb641 nodeName:}" failed. No retries permitted until 2025-09-12 23:54:14.593156125 +0000 UTC m=+6.889450360 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/2c51fdd8-ee96-4d90-a043-e0ab58fbb641-kube-proxy") pod "kube-proxy-tcgdf" (UID: "2c51fdd8-ee96-4d90-a043-e0ab58fbb641") : failed to sync configmap cache: timed out waiting for the condition
Sep 12 23:54:14.242302 containerd[2019]: time="2025-09-12T23:54:14.242241990Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-9tcc8,Uid:a643442b-876b-43e6-b967-8499aa9605e8,Namespace:kube-system,Attempt:0,}"
Sep 12 23:54:14.297665 containerd[2019]: time="2025-09-12T23:54:14.297454410Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 12 23:54:14.297665 containerd[2019]: time="2025-09-12T23:54:14.297599526Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 12 23:54:14.297665 containerd[2019]: time="2025-09-12T23:54:14.297652926Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 12 23:54:14.298498 containerd[2019]: time="2025-09-12T23:54:14.297902694Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 12 23:54:14.318703 containerd[2019]: time="2025-09-12T23:54:14.318169158Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-6ml49,Uid:cff6c9f6-7166-41d7-b927-5ca900ff8d56,Namespace:kube-system,Attempt:0,}"
Sep 12 23:54:14.349468 systemd[1]: Started cri-containerd-ea17e6ae90e38d8419d8861e9cbc44bf37ac5aaa205ae427e794eb8918284ff9.scope - libcontainer container ea17e6ae90e38d8419d8861e9cbc44bf37ac5aaa205ae427e794eb8918284ff9.
Sep 12 23:54:14.403290 containerd[2019]: time="2025-09-12T23:54:14.400691575Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 12 23:54:14.403290 containerd[2019]: time="2025-09-12T23:54:14.402202771Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 12 23:54:14.403290 containerd[2019]: time="2025-09-12T23:54:14.402234955Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 12 23:54:14.403290 containerd[2019]: time="2025-09-12T23:54:14.402618907Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 12 23:54:14.454076 containerd[2019]: time="2025-09-12T23:54:14.454011283Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-9tcc8,Uid:a643442b-876b-43e6-b967-8499aa9605e8,Namespace:kube-system,Attempt:0,} returns sandbox id \"ea17e6ae90e38d8419d8861e9cbc44bf37ac5aaa205ae427e794eb8918284ff9\""
Sep 12 23:54:14.459118 systemd[1]: Started cri-containerd-20513a35a3c4efa1de106cbf06fe468826cc454e246d23f896987da74d917012.scope - libcontainer container 20513a35a3c4efa1de106cbf06fe468826cc454e246d23f896987da74d917012.
Sep 12 23:54:14.462574 containerd[2019]: time="2025-09-12T23:54:14.461916799Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Sep 12 23:54:14.536828 containerd[2019]: time="2025-09-12T23:54:14.536712656Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-6ml49,Uid:cff6c9f6-7166-41d7-b927-5ca900ff8d56,Namespace:kube-system,Attempt:0,} returns sandbox id \"20513a35a3c4efa1de106cbf06fe468826cc454e246d23f896987da74d917012\""
Sep 12 23:54:14.812151 containerd[2019]: time="2025-09-12T23:54:14.811756413Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-tcgdf,Uid:2c51fdd8-ee96-4d90-a043-e0ab58fbb641,Namespace:kube-system,Attempt:0,}"
Sep 12 23:54:14.860970 containerd[2019]: time="2025-09-12T23:54:14.860670609Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 12 23:54:14.860970 containerd[2019]: time="2025-09-12T23:54:14.860851593Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 12 23:54:14.860970 containerd[2019]: time="2025-09-12T23:54:14.860928321Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 12 23:54:14.861600 containerd[2019]: time="2025-09-12T23:54:14.861153669Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 12 23:54:14.896086 systemd[1]: Started cri-containerd-a2c449780f0f8d365c6d5b79cef293f31cea864412d57889aa3839f13ede67c6.scope - libcontainer container a2c449780f0f8d365c6d5b79cef293f31cea864412d57889aa3839f13ede67c6.
Sep 12 23:54:14.957526 containerd[2019]: time="2025-09-12T23:54:14.957350374Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-tcgdf,Uid:2c51fdd8-ee96-4d90-a043-e0ab58fbb641,Namespace:kube-system,Attempt:0,} returns sandbox id \"a2c449780f0f8d365c6d5b79cef293f31cea864412d57889aa3839f13ede67c6\""
Sep 12 23:54:14.965692 containerd[2019]: time="2025-09-12T23:54:14.965201578Z" level=info msg="CreateContainer within sandbox \"a2c449780f0f8d365c6d5b79cef293f31cea864412d57889aa3839f13ede67c6\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Sep 12 23:54:14.999026 containerd[2019]: time="2025-09-12T23:54:14.998949034Z" level=info msg="CreateContainer within sandbox \"a2c449780f0f8d365c6d5b79cef293f31cea864412d57889aa3839f13ede67c6\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"1856c6a8f3e47013dbd92908226abcb4d2d16293cd70942a525b2caa96415516\""
Sep 12 23:54:14.999898 containerd[2019]: time="2025-09-12T23:54:14.999844318Z" level=info msg="StartContainer for \"1856c6a8f3e47013dbd92908226abcb4d2d16293cd70942a525b2caa96415516\""
Sep 12 23:54:15.054066 systemd[1]: Started cri-containerd-1856c6a8f3e47013dbd92908226abcb4d2d16293cd70942a525b2caa96415516.scope - libcontainer container 1856c6a8f3e47013dbd92908226abcb4d2d16293cd70942a525b2caa96415516.
Sep 12 23:54:15.141433 containerd[2019]: time="2025-09-12T23:54:15.141364687Z" level=info msg="StartContainer for \"1856c6a8f3e47013dbd92908226abcb4d2d16293cd70942a525b2caa96415516\" returns successfully"
Sep 12 23:54:18.074920 kubelet[3422]: I0912 23:54:18.074809 3422 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-tcgdf" podStartSLOduration=6.074782809 podStartE2EDuration="6.074782809s" podCreationTimestamp="2025-09-12 23:54:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 23:54:16.189187112 +0000 UTC m=+8.485481395" watchObservedRunningTime="2025-09-12 23:54:18.074782809 +0000 UTC m=+10.371077104"
Sep 12 23:54:19.987995 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1962775031.mount: Deactivated successfully.
Sep 12 23:54:22.886898 containerd[2019]: time="2025-09-12T23:54:22.886800485Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 23:54:22.889793 containerd[2019]: time="2025-09-12T23:54:22.889359113Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710"
Sep 12 23:54:22.892468 containerd[2019]: time="2025-09-12T23:54:22.892351097Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 23:54:22.896618 containerd[2019]: time="2025-09-12T23:54:22.896491889Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 8.434492386s"
Sep 12 23:54:22.897103 containerd[2019]: time="2025-09-12T23:54:22.896873249Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\""
Sep 12 23:54:22.900674 containerd[2019]: time="2025-09-12T23:54:22.900337133Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Sep 12 23:54:22.904252 containerd[2019]: time="2025-09-12T23:54:22.904020989Z" level=info msg="CreateContainer within sandbox \"ea17e6ae90e38d8419d8861e9cbc44bf37ac5aaa205ae427e794eb8918284ff9\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Sep 12 23:54:22.937544 containerd[2019]: time="2025-09-12T23:54:22.937421993Z" level=info msg="CreateContainer within sandbox \"ea17e6ae90e38d8419d8861e9cbc44bf37ac5aaa205ae427e794eb8918284ff9\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"d3d74aec00b6de671e9d50a10819777615911e3e67ce5dcdac0ee78b664dc809\""
Sep 12 23:54:22.939994 containerd[2019]: time="2025-09-12T23:54:22.938758781Z" level=info msg="StartContainer for \"d3d74aec00b6de671e9d50a10819777615911e3e67ce5dcdac0ee78b664dc809\""
Sep 12 23:54:23.002102 systemd[1]: Started cri-containerd-d3d74aec00b6de671e9d50a10819777615911e3e67ce5dcdac0ee78b664dc809.scope - libcontainer container d3d74aec00b6de671e9d50a10819777615911e3e67ce5dcdac0ee78b664dc809.
Sep 12 23:54:23.062021 containerd[2019]: time="2025-09-12T23:54:23.061023770Z" level=info msg="StartContainer for \"d3d74aec00b6de671e9d50a10819777615911e3e67ce5dcdac0ee78b664dc809\" returns successfully"
Sep 12 23:54:23.093161 systemd[1]: cri-containerd-d3d74aec00b6de671e9d50a10819777615911e3e67ce5dcdac0ee78b664dc809.scope: Deactivated successfully.
Sep 12 23:54:23.925343 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d3d74aec00b6de671e9d50a10819777615911e3e67ce5dcdac0ee78b664dc809-rootfs.mount: Deactivated successfully.
Sep 12 23:54:24.065910 containerd[2019]: time="2025-09-12T23:54:24.065472819Z" level=info msg="shim disconnected" id=d3d74aec00b6de671e9d50a10819777615911e3e67ce5dcdac0ee78b664dc809 namespace=k8s.io
Sep 12 23:54:24.065910 containerd[2019]: time="2025-09-12T23:54:24.065579787Z" level=warning msg="cleaning up after shim disconnected" id=d3d74aec00b6de671e9d50a10819777615911e3e67ce5dcdac0ee78b664dc809 namespace=k8s.io
Sep 12 23:54:24.065910 containerd[2019]: time="2025-09-12T23:54:24.065602659Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 12 23:54:24.188620 containerd[2019]: time="2025-09-12T23:54:24.186838479Z" level=info msg="CreateContainer within sandbox \"ea17e6ae90e38d8419d8861e9cbc44bf37ac5aaa205ae427e794eb8918284ff9\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Sep 12 23:54:24.220193 containerd[2019]: time="2025-09-12T23:54:24.220097992Z" level=info msg="CreateContainer within sandbox \"ea17e6ae90e38d8419d8861e9cbc44bf37ac5aaa205ae427e794eb8918284ff9\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"98d663eff4026ed23d69b636f3fe6a0e3988c4890238ed0862227905f1a9beaf\""
Sep 12 23:54:24.231618 containerd[2019]: time="2025-09-12T23:54:24.221996788Z" level=info msg="StartContainer for \"98d663eff4026ed23d69b636f3fe6a0e3988c4890238ed0862227905f1a9beaf\""
Sep 12 23:54:24.224209 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount202741289.mount: Deactivated successfully.
Sep 12 23:54:24.304090 systemd[1]: Started cri-containerd-98d663eff4026ed23d69b636f3fe6a0e3988c4890238ed0862227905f1a9beaf.scope - libcontainer container 98d663eff4026ed23d69b636f3fe6a0e3988c4890238ed0862227905f1a9beaf.
Sep 12 23:54:24.355362 containerd[2019]: time="2025-09-12T23:54:24.355280224Z" level=info msg="StartContainer for \"98d663eff4026ed23d69b636f3fe6a0e3988c4890238ed0862227905f1a9beaf\" returns successfully"
Sep 12 23:54:24.391303 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Sep 12 23:54:24.393072 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Sep 12 23:54:24.393828 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables...
Sep 12 23:54:24.405247 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 12 23:54:24.407292 systemd[1]: cri-containerd-98d663eff4026ed23d69b636f3fe6a0e3988c4890238ed0862227905f1a9beaf.scope: Deactivated successfully.
Sep 12 23:54:24.456318 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 12 23:54:24.476783 containerd[2019]: time="2025-09-12T23:54:24.476455973Z" level=info msg="shim disconnected" id=98d663eff4026ed23d69b636f3fe6a0e3988c4890238ed0862227905f1a9beaf namespace=k8s.io
Sep 12 23:54:24.476783 containerd[2019]: time="2025-09-12T23:54:24.476532221Z" level=warning msg="cleaning up after shim disconnected" id=98d663eff4026ed23d69b636f3fe6a0e3988c4890238ed0862227905f1a9beaf namespace=k8s.io
Sep 12 23:54:24.476783 containerd[2019]: time="2025-09-12T23:54:24.476552597Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 12 23:54:24.930787 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-98d663eff4026ed23d69b636f3fe6a0e3988c4890238ed0862227905f1a9beaf-rootfs.mount: Deactivated successfully.
Sep 12 23:54:25.201143 containerd[2019]: time="2025-09-12T23:54:25.200796965Z" level=info msg="CreateContainer within sandbox \"ea17e6ae90e38d8419d8861e9cbc44bf37ac5aaa205ae427e794eb8918284ff9\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Sep 12 23:54:25.247988 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3560880643.mount: Deactivated successfully.
Sep 12 23:54:25.257873 containerd[2019]: time="2025-09-12T23:54:25.257490473Z" level=info msg="CreateContainer within sandbox \"ea17e6ae90e38d8419d8861e9cbc44bf37ac5aaa205ae427e794eb8918284ff9\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"b4d1ef2fdf591dbf6098a797f284a60d9c02b20f5e116e8e551d4ee22e7901f6\""
Sep 12 23:54:25.259266 containerd[2019]: time="2025-09-12T23:54:25.259180553Z" level=info msg="StartContainer for \"b4d1ef2fdf591dbf6098a797f284a60d9c02b20f5e116e8e551d4ee22e7901f6\""
Sep 12 23:54:25.366077 systemd[1]: Started cri-containerd-b4d1ef2fdf591dbf6098a797f284a60d9c02b20f5e116e8e551d4ee22e7901f6.scope - libcontainer container b4d1ef2fdf591dbf6098a797f284a60d9c02b20f5e116e8e551d4ee22e7901f6.
Sep 12 23:54:25.486438 containerd[2019]: time="2025-09-12T23:54:25.486194262Z" level=info msg="StartContainer for \"b4d1ef2fdf591dbf6098a797f284a60d9c02b20f5e116e8e551d4ee22e7901f6\" returns successfully"
Sep 12 23:54:25.488535 systemd[1]: cri-containerd-b4d1ef2fdf591dbf6098a797f284a60d9c02b20f5e116e8e551d4ee22e7901f6.scope: Deactivated successfully.
Sep 12 23:54:25.623258 containerd[2019]: time="2025-09-12T23:54:25.623173255Z" level=info msg="shim disconnected" id=b4d1ef2fdf591dbf6098a797f284a60d9c02b20f5e116e8e551d4ee22e7901f6 namespace=k8s.io
Sep 12 23:54:25.624415 containerd[2019]: time="2025-09-12T23:54:25.624356719Z" level=warning msg="cleaning up after shim disconnected" id=b4d1ef2fdf591dbf6098a797f284a60d9c02b20f5e116e8e551d4ee22e7901f6 namespace=k8s.io
Sep 12 23:54:25.624658 containerd[2019]: time="2025-09-12T23:54:25.624618847Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 12 23:54:25.657232 containerd[2019]: time="2025-09-12T23:54:25.657171763Z" level=warning msg="cleanup warnings time=\"2025-09-12T23:54:25Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Sep 12 23:54:25.757223 containerd[2019]: time="2025-09-12T23:54:25.757014655Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 23:54:25.760335 containerd[2019]: time="2025-09-12T23:54:25.760255003Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306"
Sep 12 23:54:25.764040 containerd[2019]: time="2025-09-12T23:54:25.763976947Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 23:54:25.769456 containerd[2019]: time="2025-09-12T23:54:25.769377883Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 2.868968726s"
Sep 12 23:54:25.769780 containerd[2019]: time="2025-09-12T23:54:25.769683391Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\""
Sep 12 23:54:25.777329 containerd[2019]: time="2025-09-12T23:54:25.777239215Z" level=info msg="CreateContainer within sandbox \"20513a35a3c4efa1de106cbf06fe468826cc454e246d23f896987da74d917012\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Sep 12 23:54:25.806113 containerd[2019]: time="2025-09-12T23:54:25.805889696Z" level=info msg="CreateContainer within sandbox \"20513a35a3c4efa1de106cbf06fe468826cc454e246d23f896987da74d917012\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"bd297d172ff81b193b23887973d4eb6b65eea374ece33dad5fc88e9ce3de74d1\""
Sep 12 23:54:25.807464 containerd[2019]: time="2025-09-12T23:54:25.807350084Z" level=info msg="StartContainer for \"bd297d172ff81b193b23887973d4eb6b65eea374ece33dad5fc88e9ce3de74d1\""
Sep 12 23:54:25.864122 systemd[1]: Started cri-containerd-bd297d172ff81b193b23887973d4eb6b65eea374ece33dad5fc88e9ce3de74d1.scope - libcontainer container bd297d172ff81b193b23887973d4eb6b65eea374ece33dad5fc88e9ce3de74d1.
Sep 12 23:54:25.931276 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b4d1ef2fdf591dbf6098a797f284a60d9c02b20f5e116e8e551d4ee22e7901f6-rootfs.mount: Deactivated successfully.
Sep 12 23:54:25.939318 containerd[2019]: time="2025-09-12T23:54:25.938863184Z" level=info msg="StartContainer for \"bd297d172ff81b193b23887973d4eb6b65eea374ece33dad5fc88e9ce3de74d1\" returns successfully"
Sep 12 23:54:26.222344 containerd[2019]: time="2025-09-12T23:54:26.222230502Z" level=info msg="CreateContainer within sandbox \"ea17e6ae90e38d8419d8861e9cbc44bf37ac5aaa205ae427e794eb8918284ff9\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Sep 12 23:54:26.270615 containerd[2019]: time="2025-09-12T23:54:26.270502914Z" level=info msg="CreateContainer within sandbox \"ea17e6ae90e38d8419d8861e9cbc44bf37ac5aaa205ae427e794eb8918284ff9\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"53605085f5a1bbddcd648475b65d80a70092222adbe949bebf5b5947eec31b19\""
Sep 12 23:54:26.272538 containerd[2019]: time="2025-09-12T23:54:26.272416998Z" level=info msg="StartContainer for \"53605085f5a1bbddcd648475b65d80a70092222adbe949bebf5b5947eec31b19\""
Sep 12 23:54:26.360195 kubelet[3422]: I0912 23:54:26.359786 3422 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-6ml49" podStartSLOduration=2.126676963 podStartE2EDuration="13.359712594s" podCreationTimestamp="2025-09-12 23:54:13 +0000 UTC" firstStartedPulling="2025-09-12 23:54:14.539709536 +0000 UTC m=+6.836003771" lastFinishedPulling="2025-09-12 23:54:25.772745155 +0000 UTC m=+18.069039402" observedRunningTime="2025-09-12 23:54:26.249057534 +0000 UTC m=+18.545351793" watchObservedRunningTime="2025-09-12 23:54:26.359712594 +0000 UTC m=+18.656006829"
Sep 12 23:54:26.393111 systemd[1]: Started cri-containerd-53605085f5a1bbddcd648475b65d80a70092222adbe949bebf5b5947eec31b19.scope - libcontainer container 53605085f5a1bbddcd648475b65d80a70092222adbe949bebf5b5947eec31b19.
Sep 12 23:54:26.498587 systemd[1]: cri-containerd-53605085f5a1bbddcd648475b65d80a70092222adbe949bebf5b5947eec31b19.scope: Deactivated successfully.
Sep 12 23:54:26.509884 containerd[2019]: time="2025-09-12T23:54:26.509800255Z" level=info msg="StartContainer for \"53605085f5a1bbddcd648475b65d80a70092222adbe949bebf5b5947eec31b19\" returns successfully"
Sep 12 23:54:26.601069 containerd[2019]: time="2025-09-12T23:54:26.600916099Z" level=info msg="shim disconnected" id=53605085f5a1bbddcd648475b65d80a70092222adbe949bebf5b5947eec31b19 namespace=k8s.io
Sep 12 23:54:26.601069 containerd[2019]: time="2025-09-12T23:54:26.601067107Z" level=warning msg="cleaning up after shim disconnected" id=53605085f5a1bbddcd648475b65d80a70092222adbe949bebf5b5947eec31b19 namespace=k8s.io
Sep 12 23:54:26.601826 containerd[2019]: time="2025-09-12T23:54:26.601092607Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 12 23:54:26.927112 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-53605085f5a1bbddcd648475b65d80a70092222adbe949bebf5b5947eec31b19-rootfs.mount: Deactivated successfully.
Sep 12 23:54:27.233758 containerd[2019]: time="2025-09-12T23:54:27.233546227Z" level=info msg="CreateContainer within sandbox \"ea17e6ae90e38d8419d8861e9cbc44bf37ac5aaa205ae427e794eb8918284ff9\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Sep 12 23:54:27.279452 containerd[2019]: time="2025-09-12T23:54:27.278761291Z" level=info msg="CreateContainer within sandbox \"ea17e6ae90e38d8419d8861e9cbc44bf37ac5aaa205ae427e794eb8918284ff9\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"45e4b9a62aaba427151b4060c79d08e6cd86d11a833f7dcfa6959811046d11bd\""
Sep 12 23:54:27.281787 containerd[2019]: time="2025-09-12T23:54:27.279955951Z" level=info msg="StartContainer for \"45e4b9a62aaba427151b4060c79d08e6cd86d11a833f7dcfa6959811046d11bd\""
Sep 12 23:54:27.383165 systemd[1]: run-containerd-runc-k8s.io-45e4b9a62aaba427151b4060c79d08e6cd86d11a833f7dcfa6959811046d11bd-runc.AZ7ceE.mount: Deactivated successfully.
Sep 12 23:54:27.404324 systemd[1]: Started cri-containerd-45e4b9a62aaba427151b4060c79d08e6cd86d11a833f7dcfa6959811046d11bd.scope - libcontainer container 45e4b9a62aaba427151b4060c79d08e6cd86d11a833f7dcfa6959811046d11bd.
Sep 12 23:54:27.578304 containerd[2019]: time="2025-09-12T23:54:27.578097560Z" level=info msg="StartContainer for \"45e4b9a62aaba427151b4060c79d08e6cd86d11a833f7dcfa6959811046d11bd\" returns successfully"
Sep 12 23:54:28.038496 kubelet[3422]: I0912 23:54:28.038404 3422 kubelet_node_status.go:488] "Fast updating node status as it just became ready"
Sep 12 23:54:28.185860 systemd[1]: Created slice kubepods-burstable-podae1b9aa6_181c_4118_826b_ac17ddd0808f.slice - libcontainer container kubepods-burstable-podae1b9aa6_181c_4118_826b_ac17ddd0808f.slice.
Sep 12 23:54:28.204688 systemd[1]: Created slice kubepods-burstable-pod005943a5_f657_43a6_868c_d326e29fdc8a.slice - libcontainer container kubepods-burstable-pod005943a5_f657_43a6_868c_d326e29fdc8a.slice.
Sep 12 23:54:28.221262 kubelet[3422]: I0912 23:54:28.220956 3422 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4tdnd\" (UniqueName: \"kubernetes.io/projected/005943a5-f657-43a6-868c-d326e29fdc8a-kube-api-access-4tdnd\") pod \"coredns-7c65d6cfc9-xclnz\" (UID: \"005943a5-f657-43a6-868c-d326e29fdc8a\") " pod="kube-system/coredns-7c65d6cfc9-xclnz"
Sep 12 23:54:28.221262 kubelet[3422]: I0912 23:54:28.221035 3422 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m4f8q\" (UniqueName: \"kubernetes.io/projected/ae1b9aa6-181c-4118-826b-ac17ddd0808f-kube-api-access-m4f8q\") pod \"coredns-7c65d6cfc9-7ghh2\" (UID: \"ae1b9aa6-181c-4118-826b-ac17ddd0808f\") " pod="kube-system/coredns-7c65d6cfc9-7ghh2"
Sep 12 23:54:28.221262 kubelet[3422]: I0912 23:54:28.221078 3422 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ae1b9aa6-181c-4118-826b-ac17ddd0808f-config-volume\") pod \"coredns-7c65d6cfc9-7ghh2\" (UID: \"ae1b9aa6-181c-4118-826b-ac17ddd0808f\") " pod="kube-system/coredns-7c65d6cfc9-7ghh2"
Sep 12 23:54:28.221262 kubelet[3422]: I0912 23:54:28.221126 3422 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/005943a5-f657-43a6-868c-d326e29fdc8a-config-volume\") pod \"coredns-7c65d6cfc9-xclnz\" (UID: \"005943a5-f657-43a6-868c-d326e29fdc8a\") " pod="kube-system/coredns-7c65d6cfc9-xclnz"
Sep 12 23:54:28.496358 containerd[2019]: time="2025-09-12T23:54:28.496293321Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-7ghh2,Uid:ae1b9aa6-181c-4118-826b-ac17ddd0808f,Namespace:kube-system,Attempt:0,}"
Sep 12 23:54:28.521708 containerd[2019]: time="2025-09-12T23:54:28.521544561Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-xclnz,Uid:005943a5-f657-43a6-868c-d326e29fdc8a,Namespace:kube-system,Attempt:0,}"
Sep 12 23:54:31.306136 (udev-worker)[4224]: Network interface NamePolicy= disabled on kernel command line.
Sep 12 23:54:31.307267 systemd-networkd[1933]: cilium_host: Link UP
Sep 12 23:54:31.308684 systemd-networkd[1933]: cilium_net: Link UP
Sep 12 23:54:31.309241 systemd-networkd[1933]: cilium_net: Gained carrier
Sep 12 23:54:31.309633 systemd-networkd[1933]: cilium_host: Gained carrier
Sep 12 23:54:31.309891 (udev-worker)[4222]: Network interface NamePolicy= disabled on kernel command line.
Sep 12 23:54:31.517930 systemd-networkd[1933]: cilium_vxlan: Link UP
Sep 12 23:54:31.517945 systemd-networkd[1933]: cilium_vxlan: Gained carrier
Sep 12 23:54:31.865025 systemd-networkd[1933]: cilium_net: Gained IPv6LL
Sep 12 23:54:32.106849 kernel: NET: Registered PF_ALG protocol family
Sep 12 23:54:32.313040 systemd-networkd[1933]: cilium_host: Gained IPv6LL
Sep 12 23:54:32.889588 systemd-networkd[1933]: cilium_vxlan: Gained IPv6LL
Sep 12 23:54:33.571335 (udev-worker)[4266]: Network interface NamePolicy= disabled on kernel command line.
Sep 12 23:54:33.586163 systemd-networkd[1933]: lxc_health: Link UP
Sep 12 23:54:33.596973 systemd-networkd[1933]: lxc_health: Gained carrier
Sep 12 23:54:34.145285 systemd-networkd[1933]: lxc5104641c160a: Link UP
Sep 12 23:54:34.157710 (udev-worker)[4265]: Network interface NamePolicy= disabled on kernel command line.
Sep 12 23:54:34.168990 kernel: eth0: renamed from tmp13359
Sep 12 23:54:34.173035 systemd-networkd[1933]: lxc2d3683066001: Link UP
Sep 12 23:54:34.179939 kernel: eth0: renamed from tmp5fac1
Sep 12 23:54:34.186755 systemd-networkd[1933]: lxc5104641c160a: Gained carrier
Sep 12 23:54:34.192676 systemd-networkd[1933]: lxc2d3683066001: Gained carrier
Sep 12 23:54:34.327172 kubelet[3422]: I0912 23:54:34.327066 3422 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-9tcc8" podStartSLOduration=13.885622376 podStartE2EDuration="22.327042374s" podCreationTimestamp="2025-09-12 23:54:12 +0000 UTC" firstStartedPulling="2025-09-12 23:54:14.457690483 +0000 UTC m=+6.753984718" lastFinishedPulling="2025-09-12 23:54:22.899110493 +0000 UTC m=+15.195404716" observedRunningTime="2025-09-12 23:54:28.468226353 +0000 UTC m=+20.764520612" watchObservedRunningTime="2025-09-12 23:54:34.327042374 +0000 UTC m=+26.623336693"
Sep 12 23:54:34.680938 systemd-networkd[1933]: lxc_health: Gained IPv6LL
Sep 12 23:54:35.960943 systemd-networkd[1933]: lxc2d3683066001: Gained IPv6LL
Sep 12 23:54:36.152939 systemd-networkd[1933]: lxc5104641c160a: Gained IPv6LL
Sep 12 23:54:38.789074 ntpd[1987]: Listen normally on 7 cilium_host 192.168.0.40:123
Sep 12 23:54:38.790310 ntpd[1987]: 12 Sep 23:54:38 ntpd[1987]: Listen normally on 7 cilium_host 192.168.0.40:123
Sep 12 23:54:38.790310 ntpd[1987]: 12 Sep 23:54:38 ntpd[1987]: Listen normally on 8 cilium_net [fe80::a8f1:1fff:fec8:9d53%4]:123
Sep 12 23:54:38.790310 ntpd[1987]: 12 Sep 23:54:38 ntpd[1987]: Listen normally on 9 cilium_host [fe80::dc0b:43ff:fe3c:e61f%5]:123
Sep 12 23:54:38.790310 ntpd[1987]: 12 Sep 23:54:38 ntpd[1987]: Listen normally on 10 cilium_vxlan [fe80::e082:eaff:fe9e:7c5d%6]:123
Sep 12 23:54:38.790310 ntpd[1987]: 12 Sep 23:54:38 ntpd[1987]: Listen normally on 11 lxc_health [fe80::8899:bdff:fea4:f208%8]:123
Sep 12 23:54:38.790310 ntpd[1987]: 12 Sep 23:54:38 ntpd[1987]: Listen normally on 12 lxc5104641c160a [fe80::d84c:11ff:fea1:273f%10]:123
Sep 12 23:54:38.790310 ntpd[1987]: 12 Sep 23:54:38 ntpd[1987]: Listen normally on 13 lxc2d3683066001 [fe80::e05e:b6ff:fef8:7fa3%12]:123
Sep 12 23:54:38.789220 ntpd[1987]: Listen normally on 8 cilium_net [fe80::a8f1:1fff:fec8:9d53%4]:123
Sep 12 23:54:38.789309 ntpd[1987]: Listen normally on 9 cilium_host [fe80::dc0b:43ff:fe3c:e61f%5]:123
Sep 12 23:54:38.789393 ntpd[1987]: Listen normally on 10 cilium_vxlan [fe80::e082:eaff:fe9e:7c5d%6]:123
Sep 12 23:54:38.789466 ntpd[1987]: Listen normally on 11 lxc_health [fe80::8899:bdff:fea4:f208%8]:123
Sep 12 23:54:38.789563 ntpd[1987]: Listen normally on 12 lxc5104641c160a [fe80::d84c:11ff:fea1:273f%10]:123
Sep 12 23:54:38.789637 ntpd[1987]: Listen normally on 13 lxc2d3683066001 [fe80::e05e:b6ff:fef8:7fa3%12]:123
Sep 12 23:54:43.737765 containerd[2019]: time="2025-09-12T23:54:43.736630513Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 12 23:54:43.737765 containerd[2019]: time="2025-09-12T23:54:43.736922605Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 12 23:54:43.737765 containerd[2019]: time="2025-09-12T23:54:43.737002453Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 12 23:54:43.737765 containerd[2019]: time="2025-09-12T23:54:43.737226373Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 12 23:54:43.811238 systemd[1]: Started cri-containerd-5fac1ed21d6865914097de94c087a9187ea4855cec26d9ceacb2f02f105bd81a.scope - libcontainer container 5fac1ed21d6865914097de94c087a9187ea4855cec26d9ceacb2f02f105bd81a.
Sep 12 23:54:43.840293 containerd[2019]: time="2025-09-12T23:54:43.839939641Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 12 23:54:43.840293 containerd[2019]: time="2025-09-12T23:54:43.840067801Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 12 23:54:43.840293 containerd[2019]: time="2025-09-12T23:54:43.840107725Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 12 23:54:43.843843 containerd[2019]: time="2025-09-12T23:54:43.841963069Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 12 23:54:43.910160 systemd[1]: Started cri-containerd-13359248b610242e10bbdf31084ad8274d6149374a31e792ee2ce6eccf3582f8.scope - libcontainer container 13359248b610242e10bbdf31084ad8274d6149374a31e792ee2ce6eccf3582f8.
Sep 12 23:54:43.984388 containerd[2019]: time="2025-09-12T23:54:43.984253358Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-7ghh2,Uid:ae1b9aa6-181c-4118-826b-ac17ddd0808f,Namespace:kube-system,Attempt:0,} returns sandbox id \"5fac1ed21d6865914097de94c087a9187ea4855cec26d9ceacb2f02f105bd81a\""
Sep 12 23:54:43.999794 containerd[2019]: time="2025-09-12T23:54:43.999237962Z" level=info msg="CreateContainer within sandbox \"5fac1ed21d6865914097de94c087a9187ea4855cec26d9ceacb2f02f105bd81a\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Sep 12 23:54:44.032938 containerd[2019]: time="2025-09-12T23:54:44.032846746Z" level=info msg="CreateContainer within sandbox \"5fac1ed21d6865914097de94c087a9187ea4855cec26d9ceacb2f02f105bd81a\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"538460d3d26f7ff3674fbb012f7c0b6fbc1ebf7d43c8cfcc8a46b35717f64eb1\""
Sep 12 23:54:44.036073 containerd[2019]: time="2025-09-12T23:54:44.036002386Z" level=info msg="StartContainer for \"538460d3d26f7ff3674fbb012f7c0b6fbc1ebf7d43c8cfcc8a46b35717f64eb1\""
Sep 12 23:54:44.083415 containerd[2019]: time="2025-09-12T23:54:44.083240098Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-xclnz,Uid:005943a5-f657-43a6-868c-d326e29fdc8a,Namespace:kube-system,Attempt:0,} returns sandbox id \"13359248b610242e10bbdf31084ad8274d6149374a31e792ee2ce6eccf3582f8\""
Sep 12 23:54:44.100948 containerd[2019]: time="2025-09-12T23:54:44.100696702Z" level=info msg="CreateContainer within sandbox \"13359248b610242e10bbdf31084ad8274d6149374a31e792ee2ce6eccf3582f8\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Sep 12 23:54:44.136076 systemd[1]: Started cri-containerd-538460d3d26f7ff3674fbb012f7c0b6fbc1ebf7d43c8cfcc8a46b35717f64eb1.scope - libcontainer container 538460d3d26f7ff3674fbb012f7c0b6fbc1ebf7d43c8cfcc8a46b35717f64eb1.
Sep 12 23:54:44.158230 containerd[2019]: time="2025-09-12T23:54:44.158144675Z" level=info msg="CreateContainer within sandbox \"13359248b610242e10bbdf31084ad8274d6149374a31e792ee2ce6eccf3582f8\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"cf7a5c05a097fe2b41fa5260d61d1bf75a20ec260a8b0e79988e357e67f40d10\""
Sep 12 23:54:44.159384 containerd[2019]: time="2025-09-12T23:54:44.159328331Z" level=info msg="StartContainer for \"cf7a5c05a097fe2b41fa5260d61d1bf75a20ec260a8b0e79988e357e67f40d10\""
Sep 12 23:54:44.241751 containerd[2019]: time="2025-09-12T23:54:44.240777023Z" level=info msg="StartContainer for \"538460d3d26f7ff3674fbb012f7c0b6fbc1ebf7d43c8cfcc8a46b35717f64eb1\" returns successfully"
Sep 12 23:54:44.269044 systemd[1]: Started cri-containerd-cf7a5c05a097fe2b41fa5260d61d1bf75a20ec260a8b0e79988e357e67f40d10.scope - libcontainer container cf7a5c05a097fe2b41fa5260d61d1bf75a20ec260a8b0e79988e357e67f40d10.
Sep 12 23:54:44.366596 containerd[2019]: time="2025-09-12T23:54:44.366511704Z" level=info msg="StartContainer for \"cf7a5c05a097fe2b41fa5260d61d1bf75a20ec260a8b0e79988e357e67f40d10\" returns successfully"
Sep 12 23:54:44.391617 kubelet[3422]: I0912 23:54:44.391426 3422 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-7ghh2" podStartSLOduration=31.391401468 podStartE2EDuration="31.391401468s" podCreationTimestamp="2025-09-12 23:54:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 23:54:44.390969204 +0000 UTC m=+36.687263451" watchObservedRunningTime="2025-09-12 23:54:44.391401468 +0000 UTC m=+36.687695715"
Sep 12 23:54:45.376335 kubelet[3422]: I0912 23:54:45.376008 3422 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-xclnz" podStartSLOduration=32.375977845 podStartE2EDuration="32.375977845s" podCreationTimestamp="2025-09-12 23:54:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 23:54:45.374986873 +0000 UTC m=+37.671281216" watchObservedRunningTime="2025-09-12 23:54:45.375977845 +0000 UTC m=+37.672272104"
Sep 12 23:54:53.279249 systemd[1]: Started sshd@9-172.31.17.186:22-147.75.109.163:48510.service - OpenSSH per-connection server daemon (147.75.109.163:48510).
Sep 12 23:54:53.462309 sshd[4806]: Accepted publickey for core from 147.75.109.163 port 48510 ssh2: RSA SHA256:hzqoQUQMDNGIX4spfLoTi9cnhX+EaAcejntAjTQoGoc
Sep 12 23:54:53.465164 sshd[4806]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 23:54:53.472646 systemd-logind[1993]: New session 10 of user core.
Sep 12 23:54:53.480010 systemd[1]: Started session-10.scope - Session 10 of User core.
Sep 12 23:54:53.780338 sshd[4806]: pam_unix(sshd:session): session closed for user core
Sep 12 23:54:53.786778 systemd[1]: sshd@9-172.31.17.186:22-147.75.109.163:48510.service: Deactivated successfully.
Sep 12 23:54:53.790492 systemd[1]: session-10.scope: Deactivated successfully.
Sep 12 23:54:53.795044 systemd-logind[1993]: Session 10 logged out. Waiting for processes to exit.
Sep 12 23:54:53.797453 systemd-logind[1993]: Removed session 10.
Sep 12 23:54:58.822251 systemd[1]: Started sshd@10-172.31.17.186:22-147.75.109.163:48514.service - OpenSSH per-connection server daemon (147.75.109.163:48514).
Sep 12 23:54:59.007189 sshd[4821]: Accepted publickey for core from 147.75.109.163 port 48514 ssh2: RSA SHA256:hzqoQUQMDNGIX4spfLoTi9cnhX+EaAcejntAjTQoGoc
Sep 12 23:54:59.009941 sshd[4821]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 23:54:59.018654 systemd-logind[1993]: New session 11 of user core.
Sep 12 23:54:59.027020 systemd[1]: Started session-11.scope - Session 11 of User core.
Sep 12 23:54:59.280422 sshd[4821]: pam_unix(sshd:session): session closed for user core
Sep 12 23:54:59.287347 systemd[1]: sshd@10-172.31.17.186:22-147.75.109.163:48514.service: Deactivated successfully.
Sep 12 23:54:59.292506 systemd[1]: session-11.scope: Deactivated successfully.
Sep 12 23:54:59.294446 systemd-logind[1993]: Session 11 logged out. Waiting for processes to exit.
Sep 12 23:54:59.296418 systemd-logind[1993]: Removed session 11.
Sep 12 23:55:04.323262 systemd[1]: Started sshd@11-172.31.17.186:22-147.75.109.163:45160.service - OpenSSH per-connection server daemon (147.75.109.163:45160).
Sep 12 23:55:04.498150 sshd[4835]: Accepted publickey for core from 147.75.109.163 port 45160 ssh2: RSA SHA256:hzqoQUQMDNGIX4spfLoTi9cnhX+EaAcejntAjTQoGoc
Sep 12 23:55:04.500787 sshd[4835]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 23:55:04.508304 systemd-logind[1993]: New session 12 of user core.
Sep 12 23:55:04.516031 systemd[1]: Started session-12.scope - Session 12 of User core.
Sep 12 23:55:04.753775 sshd[4835]: pam_unix(sshd:session): session closed for user core
Sep 12 23:55:04.758874 systemd[1]: sshd@11-172.31.17.186:22-147.75.109.163:45160.service: Deactivated successfully.
Sep 12 23:55:04.763286 systemd[1]: session-12.scope: Deactivated successfully.
Sep 12 23:55:04.768338 systemd-logind[1993]: Session 12 logged out. Waiting for processes to exit.
Sep 12 23:55:04.770242 systemd-logind[1993]: Removed session 12.
Sep 12 23:55:09.796255 systemd[1]: Started sshd@12-172.31.17.186:22-147.75.109.163:45172.service - OpenSSH per-connection server daemon (147.75.109.163:45172).
Sep 12 23:55:09.977801 sshd[4851]: Accepted publickey for core from 147.75.109.163 port 45172 ssh2: RSA SHA256:hzqoQUQMDNGIX4spfLoTi9cnhX+EaAcejntAjTQoGoc
Sep 12 23:55:09.980269 sshd[4851]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 23:55:09.989541 systemd-logind[1993]: New session 13 of user core.
Sep 12 23:55:09.998081 systemd[1]: Started session-13.scope - Session 13 of User core.
Sep 12 23:55:10.239482 sshd[4851]: pam_unix(sshd:session): session closed for user core
Sep 12 23:55:10.245930 systemd[1]: sshd@12-172.31.17.186:22-147.75.109.163:45172.service: Deactivated successfully.
Sep 12 23:55:10.249584 systemd[1]: session-13.scope: Deactivated successfully.
Sep 12 23:55:10.251711 systemd-logind[1993]: Session 13 logged out. Waiting for processes to exit.
Sep 12 23:55:10.254026 systemd-logind[1993]: Removed session 13.
Sep 12 23:55:10.281255 systemd[1]: Started sshd@13-172.31.17.186:22-147.75.109.163:55846.service - OpenSSH per-connection server daemon (147.75.109.163:55846).
Sep 12 23:55:10.454956 sshd[4865]: Accepted publickey for core from 147.75.109.163 port 55846 ssh2: RSA SHA256:hzqoQUQMDNGIX4spfLoTi9cnhX+EaAcejntAjTQoGoc
Sep 12 23:55:10.457702 sshd[4865]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 23:55:10.467362 systemd-logind[1993]: New session 14 of user core.
Sep 12 23:55:10.471281 systemd[1]: Started session-14.scope - Session 14 of User core.
Sep 12 23:55:10.804388 sshd[4865]: pam_unix(sshd:session): session closed for user core
Sep 12 23:55:10.814511 systemd[1]: sshd@13-172.31.17.186:22-147.75.109.163:55846.service: Deactivated successfully.
Sep 12 23:55:10.821679 systemd[1]: session-14.scope: Deactivated successfully.
Sep 12 23:55:10.828967 systemd-logind[1993]: Session 14 logged out. Waiting for processes to exit.
Sep 12 23:55:10.855217 systemd[1]: Started sshd@14-172.31.17.186:22-147.75.109.163:55858.service - OpenSSH per-connection server daemon (147.75.109.163:55858).
Sep 12 23:55:10.860157 systemd-logind[1993]: Removed session 14.
Sep 12 23:55:11.047436 sshd[4876]: Accepted publickey for core from 147.75.109.163 port 55858 ssh2: RSA SHA256:hzqoQUQMDNGIX4spfLoTi9cnhX+EaAcejntAjTQoGoc
Sep 12 23:55:11.050210 sshd[4876]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 23:55:11.059328 systemd-logind[1993]: New session 15 of user core.
Sep 12 23:55:11.069105 systemd[1]: Started session-15.scope - Session 15 of User core.
Sep 12 23:55:11.305916 sshd[4876]: pam_unix(sshd:session): session closed for user core
Sep 12 23:55:11.311709 systemd[1]: sshd@14-172.31.17.186:22-147.75.109.163:55858.service: Deactivated successfully.
Sep 12 23:55:11.312374 systemd-logind[1993]: Session 15 logged out. Waiting for processes to exit.
Sep 12 23:55:11.318230 systemd[1]: session-15.scope: Deactivated successfully.
Sep 12 23:55:11.322521 systemd-logind[1993]: Removed session 15.
Sep 12 23:55:16.349296 systemd[1]: Started sshd@15-172.31.17.186:22-147.75.109.163:55860.service - OpenSSH per-connection server daemon (147.75.109.163:55860).
Sep 12 23:55:16.512684 sshd[4892]: Accepted publickey for core from 147.75.109.163 port 55860 ssh2: RSA SHA256:hzqoQUQMDNGIX4spfLoTi9cnhX+EaAcejntAjTQoGoc
Sep 12 23:55:16.515952 sshd[4892]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 23:55:16.524683 systemd-logind[1993]: New session 16 of user core.
Sep 12 23:55:16.530052 systemd[1]: Started session-16.scope - Session 16 of User core.
Sep 12 23:55:16.768409 sshd[4892]: pam_unix(sshd:session): session closed for user core
Sep 12 23:55:16.774986 systemd-logind[1993]: Session 16 logged out. Waiting for processes to exit.
Sep 12 23:55:16.776565 systemd[1]: sshd@15-172.31.17.186:22-147.75.109.163:55860.service: Deactivated successfully.
Sep 12 23:55:16.780587 systemd[1]: session-16.scope: Deactivated successfully.
Sep 12 23:55:16.782507 systemd-logind[1993]: Removed session 16.
Sep 12 23:55:21.812266 systemd[1]: Started sshd@16-172.31.17.186:22-147.75.109.163:48368.service - OpenSSH per-connection server daemon (147.75.109.163:48368).
Sep 12 23:55:21.978453 sshd[4905]: Accepted publickey for core from 147.75.109.163 port 48368 ssh2: RSA SHA256:hzqoQUQMDNGIX4spfLoTi9cnhX+EaAcejntAjTQoGoc
Sep 12 23:55:21.981480 sshd[4905]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 23:55:21.990462 systemd-logind[1993]: New session 17 of user core.
Sep 12 23:55:21.994041 systemd[1]: Started session-17.scope - Session 17 of User core.
Sep 12 23:55:22.233309 sshd[4905]: pam_unix(sshd:session): session closed for user core
Sep 12 23:55:22.239413 systemd[1]: sshd@16-172.31.17.186:22-147.75.109.163:48368.service: Deactivated successfully.
Sep 12 23:55:22.246091 systemd[1]: session-17.scope: Deactivated successfully.
Sep 12 23:55:22.249157 systemd-logind[1993]: Session 17 logged out. Waiting for processes to exit.
Sep 12 23:55:22.251799 systemd-logind[1993]: Removed session 17.
Sep 12 23:55:27.275326 systemd[1]: Started sshd@17-172.31.17.186:22-147.75.109.163:48376.service - OpenSSH per-connection server daemon (147.75.109.163:48376).
Sep 12 23:55:27.456451 sshd[4919]: Accepted publickey for core from 147.75.109.163 port 48376 ssh2: RSA SHA256:hzqoQUQMDNGIX4spfLoTi9cnhX+EaAcejntAjTQoGoc
Sep 12 23:55:27.460032 sshd[4919]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 23:55:27.468539 systemd-logind[1993]: New session 18 of user core.
Sep 12 23:55:27.477075 systemd[1]: Started session-18.scope - Session 18 of User core.
Sep 12 23:55:27.719106 sshd[4919]: pam_unix(sshd:session): session closed for user core
Sep 12 23:55:27.724441 systemd[1]: sshd@17-172.31.17.186:22-147.75.109.163:48376.service: Deactivated successfully.
Sep 12 23:55:27.727520 systemd[1]: session-18.scope: Deactivated successfully.
Sep 12 23:55:27.731838 systemd-logind[1993]: Session 18 logged out. Waiting for processes to exit.
Sep 12 23:55:27.734002 systemd-logind[1993]: Removed session 18.
Sep 12 23:55:27.756248 systemd[1]: Started sshd@18-172.31.17.186:22-147.75.109.163:48384.service - OpenSSH per-connection server daemon (147.75.109.163:48384).
Sep 12 23:55:27.936227 sshd[4932]: Accepted publickey for core from 147.75.109.163 port 48384 ssh2: RSA SHA256:hzqoQUQMDNGIX4spfLoTi9cnhX+EaAcejntAjTQoGoc
Sep 12 23:55:27.939028 sshd[4932]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 23:55:27.947818 systemd-logind[1993]: New session 19 of user core.
Sep 12 23:55:27.958006 systemd[1]: Started session-19.scope - Session 19 of User core.
Sep 12 23:55:28.292654 sshd[4932]: pam_unix(sshd:session): session closed for user core
Sep 12 23:55:28.299320 systemd[1]: sshd@18-172.31.17.186:22-147.75.109.163:48384.service: Deactivated successfully.
Sep 12 23:55:28.302865 systemd[1]: session-19.scope: Deactivated successfully.
Sep 12 23:55:28.304828 systemd-logind[1993]: Session 19 logged out. Waiting for processes to exit.
Sep 12 23:55:28.307380 systemd-logind[1993]: Removed session 19.
Sep 12 23:55:28.333282 systemd[1]: Started sshd@19-172.31.17.186:22-147.75.109.163:48388.service - OpenSSH per-connection server daemon (147.75.109.163:48388).
Sep 12 23:55:28.511461 sshd[4942]: Accepted publickey for core from 147.75.109.163 port 48388 ssh2: RSA SHA256:hzqoQUQMDNGIX4spfLoTi9cnhX+EaAcejntAjTQoGoc
Sep 12 23:55:28.514190 sshd[4942]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 23:55:28.524864 systemd-logind[1993]: New session 20 of user core.
Sep 12 23:55:28.531046 systemd[1]: Started session-20.scope - Session 20 of User core.
Sep 12 23:55:31.010855 sshd[4942]: pam_unix(sshd:session): session closed for user core
Sep 12 23:55:31.019575 systemd[1]: sshd@19-172.31.17.186:22-147.75.109.163:48388.service: Deactivated successfully.
Sep 12 23:55:31.031301 systemd[1]: session-20.scope: Deactivated successfully.
Sep 12 23:55:31.035618 systemd-logind[1993]: Session 20 logged out. Waiting for processes to exit.
Sep 12 23:55:31.060040 systemd[1]: Started sshd@20-172.31.17.186:22-147.75.109.163:36184.service - OpenSSH per-connection server daemon (147.75.109.163:36184).
Sep 12 23:55:31.064592 systemd-logind[1993]: Removed session 20.
Sep 12 23:55:31.249868 sshd[4958]: Accepted publickey for core from 147.75.109.163 port 36184 ssh2: RSA SHA256:hzqoQUQMDNGIX4spfLoTi9cnhX+EaAcejntAjTQoGoc
Sep 12 23:55:31.252841 sshd[4958]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 23:55:31.260509 systemd-logind[1993]: New session 21 of user core.
Sep 12 23:55:31.274052 systemd[1]: Started session-21.scope - Session 21 of User core.
Sep 12 23:55:31.773997 sshd[4958]: pam_unix(sshd:session): session closed for user core
Sep 12 23:55:31.781382 systemd[1]: sshd@20-172.31.17.186:22-147.75.109.163:36184.service: Deactivated successfully.
Sep 12 23:55:31.784857 systemd[1]: session-21.scope: Deactivated successfully.
Sep 12 23:55:31.789020 systemd-logind[1993]: Session 21 logged out. Waiting for processes to exit.
Sep 12 23:55:31.791104 systemd-logind[1993]: Removed session 21.
Sep 12 23:55:31.813320 systemd[1]: Started sshd@21-172.31.17.186:22-147.75.109.163:36190.service - OpenSSH per-connection server daemon (147.75.109.163:36190).
Sep 12 23:55:31.987673 sshd[4971]: Accepted publickey for core from 147.75.109.163 port 36190 ssh2: RSA SHA256:hzqoQUQMDNGIX4spfLoTi9cnhX+EaAcejntAjTQoGoc
Sep 12 23:55:31.990823 sshd[4971]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 23:55:32.003225 systemd-logind[1993]: New session 22 of user core.
Sep 12 23:55:32.014127 systemd[1]: Started session-22.scope - Session 22 of User core.
Sep 12 23:55:32.253251 sshd[4971]: pam_unix(sshd:session): session closed for user core
Sep 12 23:55:32.261374 systemd[1]: sshd@21-172.31.17.186:22-147.75.109.163:36190.service: Deactivated successfully.
Sep 12 23:55:32.267825 systemd[1]: session-22.scope: Deactivated successfully.
Sep 12 23:55:32.269270 systemd-logind[1993]: Session 22 logged out. Waiting for processes to exit.
Sep 12 23:55:32.271340 systemd-logind[1993]: Removed session 22.
Sep 12 23:55:37.293955 systemd[1]: Started sshd@22-172.31.17.186:22-147.75.109.163:36202.service - OpenSSH per-connection server daemon (147.75.109.163:36202).
Sep 12 23:55:37.466268 sshd[4984]: Accepted publickey for core from 147.75.109.163 port 36202 ssh2: RSA SHA256:hzqoQUQMDNGIX4spfLoTi9cnhX+EaAcejntAjTQoGoc
Sep 12 23:55:37.469077 sshd[4984]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 23:55:37.477028 systemd-logind[1993]: New session 23 of user core.
Sep 12 23:55:37.490087 systemd[1]: Started session-23.scope - Session 23 of User core.
Sep 12 23:55:37.728236 sshd[4984]: pam_unix(sshd:session): session closed for user core
Sep 12 23:55:37.734566 systemd[1]: sshd@22-172.31.17.186:22-147.75.109.163:36202.service: Deactivated successfully.
Sep 12 23:55:37.739486 systemd[1]: session-23.scope: Deactivated successfully.
Sep 12 23:55:37.742093 systemd-logind[1993]: Session 23 logged out. Waiting for processes to exit.
Sep 12 23:55:37.744442 systemd-logind[1993]: Removed session 23.
Sep 12 23:55:42.768366 systemd[1]: Started sshd@23-172.31.17.186:22-147.75.109.163:41360.service - OpenSSH per-connection server daemon (147.75.109.163:41360).
Sep 12 23:55:42.938600 sshd[5000]: Accepted publickey for core from 147.75.109.163 port 41360 ssh2: RSA SHA256:hzqoQUQMDNGIX4spfLoTi9cnhX+EaAcejntAjTQoGoc
Sep 12 23:55:42.941474 sshd[5000]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 23:55:42.949483 systemd-logind[1993]: New session 24 of user core.
Sep 12 23:55:42.958031 systemd[1]: Started session-24.scope - Session 24 of User core.
Sep 12 23:55:43.193699 sshd[5000]: pam_unix(sshd:session): session closed for user core
Sep 12 23:55:43.200129 systemd[1]: sshd@23-172.31.17.186:22-147.75.109.163:41360.service: Deactivated successfully.
Sep 12 23:55:43.203714 systemd[1]: session-24.scope: Deactivated successfully.
Sep 12 23:55:43.205469 systemd-logind[1993]: Session 24 logged out. Waiting for processes to exit.
Sep 12 23:55:43.210049 systemd-logind[1993]: Removed session 24.
Sep 12 23:55:48.242247 systemd[1]: Started sshd@24-172.31.17.186:22-147.75.109.163:41372.service - OpenSSH per-connection server daemon (147.75.109.163:41372).
Sep 12 23:55:48.417017 sshd[5014]: Accepted publickey for core from 147.75.109.163 port 41372 ssh2: RSA SHA256:hzqoQUQMDNGIX4spfLoTi9cnhX+EaAcejntAjTQoGoc
Sep 12 23:55:48.419825 sshd[5014]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 23:55:48.429534 systemd-logind[1993]: New session 25 of user core.
Sep 12 23:55:48.438010 systemd[1]: Started session-25.scope - Session 25 of User core.
Sep 12 23:55:48.677507 sshd[5014]: pam_unix(sshd:session): session closed for user core
Sep 12 23:55:48.685421 systemd[1]: sshd@24-172.31.17.186:22-147.75.109.163:41372.service: Deactivated successfully.
Sep 12 23:55:48.690120 systemd[1]: session-25.scope: Deactivated successfully.
Sep 12 23:55:48.692530 systemd-logind[1993]: Session 25 logged out. Waiting for processes to exit.
Sep 12 23:55:48.694570 systemd-logind[1993]: Removed session 25.
Sep 12 23:55:53.718264 systemd[1]: Started sshd@25-172.31.17.186:22-147.75.109.163:53896.service - OpenSSH per-connection server daemon (147.75.109.163:53896).
Sep 12 23:55:53.892884 sshd[5027]: Accepted publickey for core from 147.75.109.163 port 53896 ssh2: RSA SHA256:hzqoQUQMDNGIX4spfLoTi9cnhX+EaAcejntAjTQoGoc
Sep 12 23:55:53.895556 sshd[5027]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 23:55:53.904589 systemd-logind[1993]: New session 26 of user core.
Sep 12 23:55:53.910040 systemd[1]: Started session-26.scope - Session 26 of User core.
Sep 12 23:55:54.153774 sshd[5027]: pam_unix(sshd:session): session closed for user core
Sep 12 23:55:54.159609 systemd[1]: sshd@25-172.31.17.186:22-147.75.109.163:53896.service: Deactivated successfully.
Sep 12 23:55:54.160115 systemd-logind[1993]: Session 26 logged out. Waiting for processes to exit.
Sep 12 23:55:54.164947 systemd[1]: session-26.scope: Deactivated successfully.
Sep 12 23:55:54.171082 systemd-logind[1993]: Removed session 26.
Sep 12 23:55:54.195365 systemd[1]: Started sshd@26-172.31.17.186:22-147.75.109.163:53904.service - OpenSSH per-connection server daemon (147.75.109.163:53904).
Sep 12 23:55:54.376707 sshd[5040]: Accepted publickey for core from 147.75.109.163 port 53904 ssh2: RSA SHA256:hzqoQUQMDNGIX4spfLoTi9cnhX+EaAcejntAjTQoGoc
Sep 12 23:55:54.379364 sshd[5040]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 23:55:54.393815 systemd-logind[1993]: New session 27 of user core.
Sep 12 23:55:54.399326 systemd[1]: Started session-27.scope - Session 27 of User core.
Sep 12 23:55:56.788937 containerd[2019]: time="2025-09-12T23:55:56.788642111Z" level=info msg="StopContainer for \"bd297d172ff81b193b23887973d4eb6b65eea374ece33dad5fc88e9ce3de74d1\" with timeout 30 (s)"
Sep 12 23:55:56.790211 containerd[2019]: time="2025-09-12T23:55:56.789884339Z" level=info msg="Stop container \"bd297d172ff81b193b23887973d4eb6b65eea374ece33dad5fc88e9ce3de74d1\" with signal terminated"
Sep 12 23:55:56.836610 systemd[1]: cri-containerd-bd297d172ff81b193b23887973d4eb6b65eea374ece33dad5fc88e9ce3de74d1.scope: Deactivated successfully.
Sep 12 23:55:56.850914 containerd[2019]: time="2025-09-12T23:55:56.849819216Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Sep 12 23:55:56.864713 containerd[2019]: time="2025-09-12T23:55:56.864620988Z" level=info msg="StopContainer for \"45e4b9a62aaba427151b4060c79d08e6cd86d11a833f7dcfa6959811046d11bd\" with timeout 2 (s)"
Sep 12 23:55:56.865823 containerd[2019]: time="2025-09-12T23:55:56.865709376Z" level=info msg="Stop container \"45e4b9a62aaba427151b4060c79d08e6cd86d11a833f7dcfa6959811046d11bd\" with signal terminated"
Sep 12 23:55:56.902058 systemd-networkd[1933]: lxc_health: Link DOWN
Sep 12 23:55:56.902081 systemd-networkd[1933]: lxc_health: Lost carrier
Sep 12 23:55:56.930426 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bd297d172ff81b193b23887973d4eb6b65eea374ece33dad5fc88e9ce3de74d1-rootfs.mount: Deactivated successfully.
Sep 12 23:55:56.946710 systemd[1]: cri-containerd-45e4b9a62aaba427151b4060c79d08e6cd86d11a833f7dcfa6959811046d11bd.scope: Deactivated successfully.
Sep 12 23:55:56.947835 systemd[1]: cri-containerd-45e4b9a62aaba427151b4060c79d08e6cd86d11a833f7dcfa6959811046d11bd.scope: Consumed 16.263s CPU time.
Sep 12 23:55:56.958214 containerd[2019]: time="2025-09-12T23:55:56.958031484Z" level=info msg="shim disconnected" id=bd297d172ff81b193b23887973d4eb6b65eea374ece33dad5fc88e9ce3de74d1 namespace=k8s.io
Sep 12 23:55:56.958214 containerd[2019]: time="2025-09-12T23:55:56.958114824Z" level=warning msg="cleaning up after shim disconnected" id=bd297d172ff81b193b23887973d4eb6b65eea374ece33dad5fc88e9ce3de74d1 namespace=k8s.io
Sep 12 23:55:56.958214 containerd[2019]: time="2025-09-12T23:55:56.958136604Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 12 23:55:57.004593 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-45e4b9a62aaba427151b4060c79d08e6cd86d11a833f7dcfa6959811046d11bd-rootfs.mount: Deactivated successfully.
Sep 12 23:55:57.008600 containerd[2019]: time="2025-09-12T23:55:57.008472249Z" level=info msg="shim disconnected" id=45e4b9a62aaba427151b4060c79d08e6cd86d11a833f7dcfa6959811046d11bd namespace=k8s.io
Sep 12 23:55:57.009248 containerd[2019]: time="2025-09-12T23:55:57.008614209Z" level=warning msg="cleaning up after shim disconnected" id=45e4b9a62aaba427151b4060c79d08e6cd86d11a833f7dcfa6959811046d11bd namespace=k8s.io
Sep 12 23:55:57.009248 containerd[2019]: time="2025-09-12T23:55:57.008640033Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 12 23:55:57.016543 containerd[2019]: time="2025-09-12T23:55:57.016245585Z" level=info msg="StopContainer for \"bd297d172ff81b193b23887973d4eb6b65eea374ece33dad5fc88e9ce3de74d1\" returns successfully"
Sep 12 23:55:57.018166 containerd[2019]: time="2025-09-12T23:55:57.017908161Z" level=info msg="StopPodSandbox for \"20513a35a3c4efa1de106cbf06fe468826cc454e246d23f896987da74d917012\""
Sep 12 23:55:57.018166 containerd[2019]: time="2025-09-12T23:55:57.017994141Z" level=info msg="Container to stop \"bd297d172ff81b193b23887973d4eb6b65eea374ece33dad5fc88e9ce3de74d1\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 12 23:55:57.022383 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-20513a35a3c4efa1de106cbf06fe468826cc454e246d23f896987da74d917012-shm.mount: Deactivated successfully.
Sep 12 23:55:57.040415 systemd[1]: cri-containerd-20513a35a3c4efa1de106cbf06fe468826cc454e246d23f896987da74d917012.scope: Deactivated successfully.
Sep 12 23:55:57.065942 containerd[2019]: time="2025-09-12T23:55:57.065559609Z" level=info msg="StopContainer for \"45e4b9a62aaba427151b4060c79d08e6cd86d11a833f7dcfa6959811046d11bd\" returns successfully"
Sep 12 23:55:57.066831 containerd[2019]: time="2025-09-12T23:55:57.066771045Z" level=info msg="StopPodSandbox for \"ea17e6ae90e38d8419d8861e9cbc44bf37ac5aaa205ae427e794eb8918284ff9\""
Sep 12 23:55:57.067745 containerd[2019]: time="2025-09-12T23:55:57.067490025Z" level=info msg="Container to stop \"d3d74aec00b6de671e9d50a10819777615911e3e67ce5dcdac0ee78b664dc809\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 12 23:55:57.067745 containerd[2019]: time="2025-09-12T23:55:57.067668861Z" level=info msg="Container to stop \"b4d1ef2fdf591dbf6098a797f284a60d9c02b20f5e116e8e551d4ee22e7901f6\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 12 23:55:57.068550 containerd[2019]: time="2025-09-12T23:55:57.068152077Z" level=info msg="Container to stop \"45e4b9a62aaba427151b4060c79d08e6cd86d11a833f7dcfa6959811046d11bd\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 12 23:55:57.068550 containerd[2019]: time="2025-09-12T23:55:57.068361681Z" level=info msg="Container to stop \"98d663eff4026ed23d69b636f3fe6a0e3988c4890238ed0862227905f1a9beaf\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 12 23:55:57.068550 containerd[2019]: time="2025-09-12T23:55:57.068391045Z" level=info msg="Container to stop \"53605085f5a1bbddcd648475b65d80a70092222adbe949bebf5b5947eec31b19\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 12 23:55:57.077335 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ea17e6ae90e38d8419d8861e9cbc44bf37ac5aaa205ae427e794eb8918284ff9-shm.mount: Deactivated successfully.
Sep 12 23:55:57.096946 systemd[1]: cri-containerd-ea17e6ae90e38d8419d8861e9cbc44bf37ac5aaa205ae427e794eb8918284ff9.scope: Deactivated successfully.
Sep 12 23:55:57.112113 containerd[2019]: time="2025-09-12T23:55:57.111803661Z" level=info msg="shim disconnected" id=20513a35a3c4efa1de106cbf06fe468826cc454e246d23f896987da74d917012 namespace=k8s.io
Sep 12 23:55:57.112113 containerd[2019]: time="2025-09-12T23:55:57.111919209Z" level=warning msg="cleaning up after shim disconnected" id=20513a35a3c4efa1de106cbf06fe468826cc454e246d23f896987da74d917012 namespace=k8s.io
Sep 12 23:55:57.112113 containerd[2019]: time="2025-09-12T23:55:57.111947025Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 12 23:55:57.145791 containerd[2019]: time="2025-09-12T23:55:57.145076061Z" level=warning msg="cleanup warnings time=\"2025-09-12T23:55:57Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Sep 12 23:55:57.150230 containerd[2019]: time="2025-09-12T23:55:57.150154701Z" level=info msg="TearDown network for sandbox \"20513a35a3c4efa1de106cbf06fe468826cc454e246d23f896987da74d917012\" successfully"
Sep 12 23:55:57.150230 containerd[2019]: time="2025-09-12T23:55:57.150211881Z" level=info msg="StopPodSandbox for \"20513a35a3c4efa1de106cbf06fe468826cc454e246d23f896987da74d917012\" returns successfully"
Sep 12 23:55:57.160168 containerd[2019]: time="2025-09-12T23:55:57.160078905Z" level=info msg="shim disconnected" id=ea17e6ae90e38d8419d8861e9cbc44bf37ac5aaa205ae427e794eb8918284ff9 namespace=k8s.io
Sep 12 23:55:57.160815 containerd[2019]: time="2025-09-12T23:55:57.160702449Z" level=warning msg="cleaning up after shim disconnected" id=ea17e6ae90e38d8419d8861e9cbc44bf37ac5aaa205ae427e794eb8918284ff9 namespace=k8s.io
Sep 12 23:55:57.160815 containerd[2019]: time="2025-09-12T23:55:57.160793817Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 12 23:55:57.201702 containerd[2019]: time="2025-09-12T23:55:57.201604738Z" level=info msg="TearDown network for sandbox \"ea17e6ae90e38d8419d8861e9cbc44bf37ac5aaa205ae427e794eb8918284ff9\" successfully"
Sep 12 23:55:57.201702 containerd[2019]: time="2025-09-12T23:55:57.201669430Z" level=info msg="StopPodSandbox for \"ea17e6ae90e38d8419d8861e9cbc44bf37ac5aaa205ae427e794eb8918284ff9\" returns successfully"
Sep 12 23:55:57.234309 kubelet[3422]: I0912 23:55:57.234219 3422 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/cff6c9f6-7166-41d7-b927-5ca900ff8d56-cilium-config-path\") pod \"cff6c9f6-7166-41d7-b927-5ca900ff8d56\" (UID: \"cff6c9f6-7166-41d7-b927-5ca900ff8d56\") "
Sep 12 23:55:57.234309 kubelet[3422]: I0912 23:55:57.234316 3422 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j88bm\" (UniqueName: \"kubernetes.io/projected/cff6c9f6-7166-41d7-b927-5ca900ff8d56-kube-api-access-j88bm\") pod \"cff6c9f6-7166-41d7-b927-5ca900ff8d56\" (UID: \"cff6c9f6-7166-41d7-b927-5ca900ff8d56\") "
Sep 12 23:55:57.243024 kubelet[3422]: I0912 23:55:57.242941 3422 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cff6c9f6-7166-41d7-b927-5ca900ff8d56-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "cff6c9f6-7166-41d7-b927-5ca900ff8d56" (UID: "cff6c9f6-7166-41d7-b927-5ca900ff8d56"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Sep 12 23:55:57.245096 kubelet[3422]: I0912 23:55:57.244758 3422 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cff6c9f6-7166-41d7-b927-5ca900ff8d56-kube-api-access-j88bm" (OuterVolumeSpecName: "kube-api-access-j88bm") pod "cff6c9f6-7166-41d7-b927-5ca900ff8d56" (UID: "cff6c9f6-7166-41d7-b927-5ca900ff8d56"). InnerVolumeSpecName "kube-api-access-j88bm". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 12 23:55:57.335263 kubelet[3422]: I0912 23:55:57.335051 3422 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a643442b-876b-43e6-b967-8499aa9605e8-lib-modules\") pod \"a643442b-876b-43e6-b967-8499aa9605e8\" (UID: \"a643442b-876b-43e6-b967-8499aa9605e8\") "
Sep 12 23:55:57.335879 kubelet[3422]: I0912 23:55:57.335820 3422 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/a643442b-876b-43e6-b967-8499aa9605e8-host-proc-sys-kernel\") pod \"a643442b-876b-43e6-b967-8499aa9605e8\" (UID: \"a643442b-876b-43e6-b967-8499aa9605e8\") "
Sep 12 23:55:57.336319 kubelet[3422]: I0912 23:55:57.336287 3422 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/a643442b-876b-43e6-b967-8499aa9605e8-hostproc\") pod \"a643442b-876b-43e6-b967-8499aa9605e8\" (UID: \"a643442b-876b-43e6-b967-8499aa9605e8\") "
Sep 12 23:55:57.336523 kubelet[3422]: I0912 23:55:57.336495 3422 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/a643442b-876b-43e6-b967-8499aa9605e8-cni-path\") pod \"a643442b-876b-43e6-b967-8499aa9605e8\" (UID: \"a643442b-876b-43e6-b967-8499aa9605e8\") "
Sep 12 23:55:57.336691 kubelet[3422]: I0912 23:55:57.336666 3422 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a643442b-876b-43e6-b967-8499aa9605e8-etc-cni-netd\") pod \"a643442b-876b-43e6-b967-8499aa9605e8\" (UID: \"a643442b-876b-43e6-b967-8499aa9605e8\") "
Sep 12 23:55:57.336919 kubelet[3422]: I0912 23:55:57.336892 3422 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/a643442b-876b-43e6-b967-8499aa9605e8-host-proc-sys-net\") pod \"a643442b-876b-43e6-b967-8499aa9605e8\" (UID: \"a643442b-876b-43e6-b967-8499aa9605e8\") "
Sep 12 23:55:57.337065 kubelet[3422]: I0912 23:55:57.337042 3422 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/a643442b-876b-43e6-b967-8499aa9605e8-cilium-run\") pod \"a643442b-876b-43e6-b967-8499aa9605e8\" (UID: \"a643442b-876b-43e6-b967-8499aa9605e8\") "
Sep 12 23:55:57.337279 kubelet[3422]: I0912 23:55:57.337240 3422 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/a643442b-876b-43e6-b967-8499aa9605e8-clustermesh-secrets\") pod \"a643442b-876b-43e6-b967-8499aa9605e8\" (UID: \"a643442b-876b-43e6-b967-8499aa9605e8\") "
Sep 12 23:55:57.337455 kubelet[3422]: I0912 23:55:57.337411 3422 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/a643442b-876b-43e6-b967-8499aa9605e8-bpf-maps\") pod \"a643442b-876b-43e6-b967-8499aa9605e8\" (UID: \"a643442b-876b-43e6-b967-8499aa9605e8\") "
Sep 12 23:55:57.337742 kubelet[3422]: I0912 23:55:57.337583 3422 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/a643442b-876b-43e6-b967-8499aa9605e8-cilium-cgroup\") pod \"a643442b-876b-43e6-b967-8499aa9605e8\" (UID: \"a643442b-876b-43e6-b967-8499aa9605e8\") "
Sep 12 23:55:57.337742 kubelet[3422]: I0912 23:55:57.337667 3422 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a643442b-876b-43e6-b967-8499aa9605e8-cilium-config-path\") pod \"a643442b-876b-43e6-b967-8499aa9605e8\" (UID: \"a643442b-876b-43e6-b967-8499aa9605e8\") "
Sep 12 23:55:57.338102 kubelet[3422]: I0912 23:55:57.337707 3422 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a643442b-876b-43e6-b967-8499aa9605e8-xtables-lock\") pod \"a643442b-876b-43e6-b967-8499aa9605e8\" (UID: \"a643442b-876b-43e6-b967-8499aa9605e8\") "
Sep 12 23:55:57.338102 kubelet[3422]: I0912 23:55:57.337977 3422 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/a643442b-876b-43e6-b967-8499aa9605e8-hubble-tls\") pod \"a643442b-876b-43e6-b967-8499aa9605e8\" (UID: \"a643442b-876b-43e6-b967-8499aa9605e8\") "
Sep 12 23:55:57.338102 kubelet[3422]: I0912 23:55:57.338047 3422 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nqxx6\" (UniqueName: \"kubernetes.io/projected/a643442b-876b-43e6-b967-8499aa9605e8-kube-api-access-nqxx6\") pod \"a643442b-876b-43e6-b967-8499aa9605e8\" (UID: \"a643442b-876b-43e6-b967-8499aa9605e8\") "
Sep 12 23:55:57.338977 kubelet[3422]: I0912 23:55:57.338243 3422 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/cff6c9f6-7166-41d7-b927-5ca900ff8d56-cilium-config-path\") on node \"ip-172-31-17-186\" DevicePath \"\""
Sep 12 23:55:57.338977 kubelet[3422]: I0912 23:55:57.335637 3422 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a643442b-876b-43e6-b967-8499aa9605e8-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "a643442b-876b-43e6-b967-8499aa9605e8" (UID: "a643442b-876b-43e6-b967-8499aa9605e8"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 12 23:55:57.338977 kubelet[3422]: I0912 23:55:57.338294 3422 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j88bm\" (UniqueName: \"kubernetes.io/projected/cff6c9f6-7166-41d7-b927-5ca900ff8d56-kube-api-access-j88bm\") on node \"ip-172-31-17-186\" DevicePath \"\""
Sep 12 23:55:57.338977 kubelet[3422]: I0912 23:55:57.336193 3422 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a643442b-876b-43e6-b967-8499aa9605e8-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "a643442b-876b-43e6-b967-8499aa9605e8" (UID: "a643442b-876b-43e6-b967-8499aa9605e8"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 12 23:55:57.338977 kubelet[3422]: I0912 23:55:57.338265 3422 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a643442b-876b-43e6-b967-8499aa9605e8-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "a643442b-876b-43e6-b967-8499aa9605e8" (UID: "a643442b-876b-43e6-b967-8499aa9605e8"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 12 23:55:57.339470 kubelet[3422]: I0912 23:55:57.338369 3422 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a643442b-876b-43e6-b967-8499aa9605e8-hostproc" (OuterVolumeSpecName: "hostproc") pod "a643442b-876b-43e6-b967-8499aa9605e8" (UID: "a643442b-876b-43e6-b967-8499aa9605e8"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 12 23:55:57.339470 kubelet[3422]: I0912 23:55:57.338409 3422 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a643442b-876b-43e6-b967-8499aa9605e8-cni-path" (OuterVolumeSpecName: "cni-path") pod "a643442b-876b-43e6-b967-8499aa9605e8" (UID: "a643442b-876b-43e6-b967-8499aa9605e8"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 12 23:55:57.339470 kubelet[3422]: I0912 23:55:57.338449 3422 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a643442b-876b-43e6-b967-8499aa9605e8-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "a643442b-876b-43e6-b967-8499aa9605e8" (UID: "a643442b-876b-43e6-b967-8499aa9605e8"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 12 23:55:57.339470 kubelet[3422]: I0912 23:55:57.338490 3422 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a643442b-876b-43e6-b967-8499aa9605e8-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "a643442b-876b-43e6-b967-8499aa9605e8" (UID: "a643442b-876b-43e6-b967-8499aa9605e8"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 12 23:55:57.339470 kubelet[3422]: I0912 23:55:57.338528 3422 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a643442b-876b-43e6-b967-8499aa9605e8-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "a643442b-876b-43e6-b967-8499aa9605e8" (UID: "a643442b-876b-43e6-b967-8499aa9605e8"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 12 23:55:57.341755 kubelet[3422]: I0912 23:55:57.341233 3422 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a643442b-876b-43e6-b967-8499aa9605e8-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "a643442b-876b-43e6-b967-8499aa9605e8" (UID: "a643442b-876b-43e6-b967-8499aa9605e8"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 12 23:55:57.342746 kubelet[3422]: I0912 23:55:57.342331 3422 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a643442b-876b-43e6-b967-8499aa9605e8-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "a643442b-876b-43e6-b967-8499aa9605e8" (UID: "a643442b-876b-43e6-b967-8499aa9605e8"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 12 23:55:57.354935 kubelet[3422]: I0912 23:55:57.354863 3422 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a643442b-876b-43e6-b967-8499aa9605e8-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "a643442b-876b-43e6-b967-8499aa9605e8" (UID: "a643442b-876b-43e6-b967-8499aa9605e8"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 12 23:55:57.355420 kubelet[3422]: I0912 23:55:57.355340 3422 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a643442b-876b-43e6-b967-8499aa9605e8-kube-api-access-nqxx6" (OuterVolumeSpecName: "kube-api-access-nqxx6") pod "a643442b-876b-43e6-b967-8499aa9605e8" (UID: "a643442b-876b-43e6-b967-8499aa9605e8"). InnerVolumeSpecName "kube-api-access-nqxx6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 12 23:55:57.358460 kubelet[3422]: I0912 23:55:57.358331 3422 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a643442b-876b-43e6-b967-8499aa9605e8-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "a643442b-876b-43e6-b967-8499aa9605e8" (UID: "a643442b-876b-43e6-b967-8499aa9605e8"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Sep 12 23:55:57.360141 kubelet[3422]: I0912 23:55:57.360056 3422 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a643442b-876b-43e6-b967-8499aa9605e8-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "a643442b-876b-43e6-b967-8499aa9605e8" (UID: "a643442b-876b-43e6-b967-8499aa9605e8"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Sep 12 23:55:57.438954 kubelet[3422]: I0912 23:55:57.438831 3422 reconciler_common.go:293] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a643442b-876b-43e6-b967-8499aa9605e8-xtables-lock\") on node \"ip-172-31-17-186\" DevicePath \"\""
Sep 12 23:55:57.438954 kubelet[3422]: I0912 23:55:57.438881 3422 reconciler_common.go:293] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/a643442b-876b-43e6-b967-8499aa9605e8-hubble-tls\") on node \"ip-172-31-17-186\" DevicePath \"\""
Sep 12 23:55:57.438954 kubelet[3422]: I0912 23:55:57.438904 3422 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nqxx6\" (UniqueName: \"kubernetes.io/projected/a643442b-876b-43e6-b967-8499aa9605e8-kube-api-access-nqxx6\") on node \"ip-172-31-17-186\" DevicePath \"\""
Sep 12 23:55:57.438954 kubelet[3422]: I0912 23:55:57.438926 3422 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a643442b-876b-43e6-b967-8499aa9605e8-lib-modules\") on node \"ip-172-31-17-186\" DevicePath \"\""
Sep 12 23:55:57.438954 kubelet[3422]: I0912 23:55:57.438954 3422 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/a643442b-876b-43e6-b967-8499aa9605e8-host-proc-sys-kernel\") on node \"ip-172-31-17-186\" DevicePath \"\""
Sep 12 23:55:57.439334 kubelet[3422]: I0912 23:55:57.438977 3422 reconciler_common.go:293] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/a643442b-876b-43e6-b967-8499aa9605e8-hostproc\") on node \"ip-172-31-17-186\" DevicePath \"\""
Sep 12 23:55:57.439334 kubelet[3422]: I0912 23:55:57.438997 3422 reconciler_common.go:293] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/a643442b-876b-43e6-b967-8499aa9605e8-cni-path\") on node \"ip-172-31-17-186\" DevicePath \"\""
Sep 12 23:55:57.439334 kubelet[3422]: I0912 23:55:57.439017 3422 reconciler_common.go:293] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a643442b-876b-43e6-b967-8499aa9605e8-etc-cni-netd\") on node \"ip-172-31-17-186\" DevicePath \"\""
Sep 12 23:55:57.439334 kubelet[3422]: I0912 23:55:57.439038 3422 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/a643442b-876b-43e6-b967-8499aa9605e8-host-proc-sys-net\") on node \"ip-172-31-17-186\" DevicePath \"\""
Sep 12 23:55:57.439334 kubelet[3422]: I0912 23:55:57.439058 3422 reconciler_common.go:293] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/a643442b-876b-43e6-b967-8499aa9605e8-clustermesh-secrets\") on node \"ip-172-31-17-186\" DevicePath \"\""
Sep 12 23:55:57.439334 kubelet[3422]: I0912 23:55:57.439079 3422 reconciler_common.go:293] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/a643442b-876b-43e6-b967-8499aa9605e8-cilium-run\") on node \"ip-172-31-17-186\" DevicePath \"\""
Sep 12 23:55:57.439334 kubelet[3422]: I0912 23:55:57.439099 3422 reconciler_common.go:293] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/a643442b-876b-43e6-b967-8499aa9605e8-bpf-maps\") on node \"ip-172-31-17-186\" DevicePath \"\""
Sep 12 23:55:57.439334 kubelet[3422]: I0912 23:55:57.439121 3422 reconciler_common.go:293] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/a643442b-876b-43e6-b967-8499aa9605e8-cilium-cgroup\") on node \"ip-172-31-17-186\" DevicePath \"\""
Sep 12 23:55:57.439839 kubelet[3422]: I0912 23:55:57.439143 3422 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a643442b-876b-43e6-b967-8499aa9605e8-cilium-config-path\") on node \"ip-172-31-17-186\" DevicePath \"\""
Sep 12 23:55:57.543787 kubelet[3422]: I0912 23:55:57.543336 3422 scope.go:117] "RemoveContainer" containerID="bd297d172ff81b193b23887973d4eb6b65eea374ece33dad5fc88e9ce3de74d1"
Sep 12 23:55:57.550876 containerd[2019]: time="2025-09-12T23:55:57.550323599Z" level=info msg="RemoveContainer for \"bd297d172ff81b193b23887973d4eb6b65eea374ece33dad5fc88e9ce3de74d1\""
Sep 12 23:55:57.563837 containerd[2019]: time="2025-09-12T23:55:57.563541623Z" level=info msg="RemoveContainer for \"bd297d172ff81b193b23887973d4eb6b65eea374ece33dad5fc88e9ce3de74d1\" returns successfully"
Sep 12 23:55:57.570267 systemd[1]: Removed slice kubepods-besteffort-podcff6c9f6_7166_41d7_b927_5ca900ff8d56.slice - libcontainer container kubepods-besteffort-podcff6c9f6_7166_41d7_b927_5ca900ff8d56.slice.
Sep 12 23:55:57.571933 kubelet[3422]: I0912 23:55:57.571290 3422 scope.go:117] "RemoveContainer" containerID="bd297d172ff81b193b23887973d4eb6b65eea374ece33dad5fc88e9ce3de74d1"
Sep 12 23:55:57.580579 containerd[2019]: time="2025-09-12T23:55:57.579711035Z" level=error msg="ContainerStatus for \"bd297d172ff81b193b23887973d4eb6b65eea374ece33dad5fc88e9ce3de74d1\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"bd297d172ff81b193b23887973d4eb6b65eea374ece33dad5fc88e9ce3de74d1\": not found"
Sep 12 23:55:57.580767 kubelet[3422]: E0912 23:55:57.580646 3422 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"bd297d172ff81b193b23887973d4eb6b65eea374ece33dad5fc88e9ce3de74d1\": not found" containerID="bd297d172ff81b193b23887973d4eb6b65eea374ece33dad5fc88e9ce3de74d1"
Sep 12 23:55:57.582236 kubelet[3422]: I0912 23:55:57.580697 3422 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"bd297d172ff81b193b23887973d4eb6b65eea374ece33dad5fc88e9ce3de74d1"} err="failed to get container status \"bd297d172ff81b193b23887973d4eb6b65eea374ece33dad5fc88e9ce3de74d1\": rpc error: code = NotFound desc = an error occurred when try to find container \"bd297d172ff81b193b23887973d4eb6b65eea374ece33dad5fc88e9ce3de74d1\": not found"
Sep 12 23:55:57.582236 kubelet[3422]: I0912 23:55:57.580932 3422 scope.go:117] "RemoveContainer" containerID="45e4b9a62aaba427151b4060c79d08e6cd86d11a833f7dcfa6959811046d11bd"
Sep 12 23:55:57.590508 systemd[1]: Removed slice kubepods-burstable-poda643442b_876b_43e6_b967_8499aa9605e8.slice - libcontainer container kubepods-burstable-poda643442b_876b_43e6_b967_8499aa9605e8.slice.
Sep 12 23:55:57.590967 systemd[1]: kubepods-burstable-poda643442b_876b_43e6_b967_8499aa9605e8.slice: Consumed 16.436s CPU time.
Sep 12 23:55:57.594160 containerd[2019]: time="2025-09-12T23:55:57.594064091Z" level=info msg="RemoveContainer for \"45e4b9a62aaba427151b4060c79d08e6cd86d11a833f7dcfa6959811046d11bd\""
Sep 12 23:55:57.603426 containerd[2019]: time="2025-09-12T23:55:57.603024336Z" level=info msg="RemoveContainer for \"45e4b9a62aaba427151b4060c79d08e6cd86d11a833f7dcfa6959811046d11bd\" returns successfully"
Sep 12 23:55:57.604101 kubelet[3422]: I0912 23:55:57.603875 3422 scope.go:117] "RemoveContainer" containerID="53605085f5a1bbddcd648475b65d80a70092222adbe949bebf5b5947eec31b19"
Sep 12 23:55:57.610071 containerd[2019]: time="2025-09-12T23:55:57.610011000Z" level=info msg="RemoveContainer for \"53605085f5a1bbddcd648475b65d80a70092222adbe949bebf5b5947eec31b19\""
Sep 12 23:55:57.616574 containerd[2019]: time="2025-09-12T23:55:57.616453308Z" level=info msg="RemoveContainer for \"53605085f5a1bbddcd648475b65d80a70092222adbe949bebf5b5947eec31b19\" returns successfully"
Sep 12 23:55:57.618250 kubelet[3422]: I0912 23:55:57.617976 3422 scope.go:117] "RemoveContainer" containerID="b4d1ef2fdf591dbf6098a797f284a60d9c02b20f5e116e8e551d4ee22e7901f6"
Sep 12 23:55:57.623010 containerd[2019]: time="2025-09-12T23:55:57.622674168Z" level=info msg="RemoveContainer for \"b4d1ef2fdf591dbf6098a797f284a60d9c02b20f5e116e8e551d4ee22e7901f6\""
Sep 12 23:55:57.631675 containerd[2019]: time="2025-09-12T23:55:57.631169904Z" level=info msg="RemoveContainer for \"b4d1ef2fdf591dbf6098a797f284a60d9c02b20f5e116e8e551d4ee22e7901f6\" returns successfully"
Sep 12 23:55:57.631881 kubelet[3422]: I0912 23:55:57.631521 3422 scope.go:117] "RemoveContainer" containerID="98d663eff4026ed23d69b636f3fe6a0e3988c4890238ed0862227905f1a9beaf"
Sep 12 23:55:57.635922 containerd[2019]: time="2025-09-12T23:55:57.635331456Z" level=info msg="RemoveContainer for \"98d663eff4026ed23d69b636f3fe6a0e3988c4890238ed0862227905f1a9beaf\""
Sep 12 23:55:57.643470 containerd[2019]: time="2025-09-12T23:55:57.643271700Z" level=info msg="RemoveContainer for \"98d663eff4026ed23d69b636f3fe6a0e3988c4890238ed0862227905f1a9beaf\" returns successfully"
Sep 12 23:55:57.644406 kubelet[3422]: I0912 23:55:57.644068 3422 scope.go:117] "RemoveContainer" containerID="d3d74aec00b6de671e9d50a10819777615911e3e67ce5dcdac0ee78b664dc809"
Sep 12 23:55:57.648766 containerd[2019]: time="2025-09-12T23:55:57.647940888Z" level=info msg="RemoveContainer for \"d3d74aec00b6de671e9d50a10819777615911e3e67ce5dcdac0ee78b664dc809\""
Sep 12 23:55:57.656417 containerd[2019]: time="2025-09-12T23:55:57.656365668Z" level=info msg="RemoveContainer for \"d3d74aec00b6de671e9d50a10819777615911e3e67ce5dcdac0ee78b664dc809\" returns successfully"
Sep 12 23:55:57.657319 kubelet[3422]: I0912 23:55:57.656896 3422 scope.go:117] "RemoveContainer" containerID="45e4b9a62aaba427151b4060c79d08e6cd86d11a833f7dcfa6959811046d11bd"
Sep 12 23:55:57.657470 containerd[2019]: time="2025-09-12T23:55:57.657237324Z" level=error msg="ContainerStatus for \"45e4b9a62aaba427151b4060c79d08e6cd86d11a833f7dcfa6959811046d11bd\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"45e4b9a62aaba427151b4060c79d08e6cd86d11a833f7dcfa6959811046d11bd\": not found"
Sep 12 23:55:57.657907 kubelet[3422]: E0912 23:55:57.657693 3422 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"45e4b9a62aaba427151b4060c79d08e6cd86d11a833f7dcfa6959811046d11bd\": not found" containerID="45e4b9a62aaba427151b4060c79d08e6cd86d11a833f7dcfa6959811046d11bd"
Sep 12 23:55:57.657907 kubelet[3422]: I0912 23:55:57.657763 3422 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"45e4b9a62aaba427151b4060c79d08e6cd86d11a833f7dcfa6959811046d11bd"} err="failed to get container status \"45e4b9a62aaba427151b4060c79d08e6cd86d11a833f7dcfa6959811046d11bd\": rpc error: code = NotFound desc = an error occurred when try to find container \"45e4b9a62aaba427151b4060c79d08e6cd86d11a833f7dcfa6959811046d11bd\": not found"
Sep 12 23:55:57.657907 kubelet[3422]: I0912 23:55:57.657802 3422 scope.go:117] "RemoveContainer" containerID="53605085f5a1bbddcd648475b65d80a70092222adbe949bebf5b5947eec31b19"
Sep 12 23:55:57.658389 containerd[2019]: time="2025-09-12T23:55:57.658340688Z" level=error msg="ContainerStatus for \"53605085f5a1bbddcd648475b65d80a70092222adbe949bebf5b5947eec31b19\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"53605085f5a1bbddcd648475b65d80a70092222adbe949bebf5b5947eec31b19\": not found"
Sep 12 23:55:57.659020 kubelet[3422]: E0912 23:55:57.658841 3422 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"53605085f5a1bbddcd648475b65d80a70092222adbe949bebf5b5947eec31b19\": not found" containerID="53605085f5a1bbddcd648475b65d80a70092222adbe949bebf5b5947eec31b19"
Sep 12 23:55:57.659020 kubelet[3422]: I0912 23:55:57.658885 3422 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"53605085f5a1bbddcd648475b65d80a70092222adbe949bebf5b5947eec31b19"} err="failed to get container status \"53605085f5a1bbddcd648475b65d80a70092222adbe949bebf5b5947eec31b19\": rpc error: code = NotFound desc = an error occurred when try to find container \"53605085f5a1bbddcd648475b65d80a70092222adbe949bebf5b5947eec31b19\": not found"
Sep 12 23:55:57.659020 kubelet[3422]: I0912 23:55:57.658917 3422 scope.go:117] "RemoveContainer" containerID="b4d1ef2fdf591dbf6098a797f284a60d9c02b20f5e116e8e551d4ee22e7901f6"
Sep 12 23:55:57.659738 containerd[2019]: time="2025-09-12T23:55:57.659434320Z" level=error msg="ContainerStatus for \"b4d1ef2fdf591dbf6098a797f284a60d9c02b20f5e116e8e551d4ee22e7901f6\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container
\"b4d1ef2fdf591dbf6098a797f284a60d9c02b20f5e116e8e551d4ee22e7901f6\": not found" Sep 12 23:55:57.660143 kubelet[3422]: E0912 23:55:57.659903 3422 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b4d1ef2fdf591dbf6098a797f284a60d9c02b20f5e116e8e551d4ee22e7901f6\": not found" containerID="b4d1ef2fdf591dbf6098a797f284a60d9c02b20f5e116e8e551d4ee22e7901f6" Sep 12 23:55:57.660143 kubelet[3422]: I0912 23:55:57.659958 3422 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b4d1ef2fdf591dbf6098a797f284a60d9c02b20f5e116e8e551d4ee22e7901f6"} err="failed to get container status \"b4d1ef2fdf591dbf6098a797f284a60d9c02b20f5e116e8e551d4ee22e7901f6\": rpc error: code = NotFound desc = an error occurred when try to find container \"b4d1ef2fdf591dbf6098a797f284a60d9c02b20f5e116e8e551d4ee22e7901f6\": not found" Sep 12 23:55:57.660143 kubelet[3422]: I0912 23:55:57.660008 3422 scope.go:117] "RemoveContainer" containerID="98d663eff4026ed23d69b636f3fe6a0e3988c4890238ed0862227905f1a9beaf" Sep 12 23:55:57.660533 containerd[2019]: time="2025-09-12T23:55:57.660387684Z" level=error msg="ContainerStatus for \"98d663eff4026ed23d69b636f3fe6a0e3988c4890238ed0862227905f1a9beaf\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"98d663eff4026ed23d69b636f3fe6a0e3988c4890238ed0862227905f1a9beaf\": not found" Sep 12 23:55:57.660743 kubelet[3422]: E0912 23:55:57.660661 3422 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"98d663eff4026ed23d69b636f3fe6a0e3988c4890238ed0862227905f1a9beaf\": not found" containerID="98d663eff4026ed23d69b636f3fe6a0e3988c4890238ed0862227905f1a9beaf" Sep 12 23:55:57.660809 kubelet[3422]: I0912 23:55:57.660714 3422 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"containerd","ID":"98d663eff4026ed23d69b636f3fe6a0e3988c4890238ed0862227905f1a9beaf"} err="failed to get container status \"98d663eff4026ed23d69b636f3fe6a0e3988c4890238ed0862227905f1a9beaf\": rpc error: code = NotFound desc = an error occurred when try to find container \"98d663eff4026ed23d69b636f3fe6a0e3988c4890238ed0862227905f1a9beaf\": not found" Sep 12 23:55:57.660809 kubelet[3422]: I0912 23:55:57.660780 3422 scope.go:117] "RemoveContainer" containerID="d3d74aec00b6de671e9d50a10819777615911e3e67ce5dcdac0ee78b664dc809" Sep 12 23:55:57.661282 containerd[2019]: time="2025-09-12T23:55:57.661101168Z" level=error msg="ContainerStatus for \"d3d74aec00b6de671e9d50a10819777615911e3e67ce5dcdac0ee78b664dc809\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d3d74aec00b6de671e9d50a10819777615911e3e67ce5dcdac0ee78b664dc809\": not found" Sep 12 23:55:57.661411 kubelet[3422]: E0912 23:55:57.661362 3422 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d3d74aec00b6de671e9d50a10819777615911e3e67ce5dcdac0ee78b664dc809\": not found" containerID="d3d74aec00b6de671e9d50a10819777615911e3e67ce5dcdac0ee78b664dc809" Sep 12 23:55:57.661486 kubelet[3422]: I0912 23:55:57.661436 3422 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d3d74aec00b6de671e9d50a10819777615911e3e67ce5dcdac0ee78b664dc809"} err="failed to get container status \"d3d74aec00b6de671e9d50a10819777615911e3e67ce5dcdac0ee78b664dc809\": rpc error: code = NotFound desc = an error occurred when try to find container \"d3d74aec00b6de671e9d50a10819777615911e3e67ce5dcdac0ee78b664dc809\": not found" Sep 12 23:55:57.805206 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-20513a35a3c4efa1de106cbf06fe468826cc454e246d23f896987da74d917012-rootfs.mount: Deactivated successfully. 
Sep 12 23:55:57.805384 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ea17e6ae90e38d8419d8861e9cbc44bf37ac5aaa205ae427e794eb8918284ff9-rootfs.mount: Deactivated successfully. Sep 12 23:55:57.805524 systemd[1]: var-lib-kubelet-pods-a643442b\x2d876b\x2d43e6\x2db967\x2d8499aa9605e8-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dnqxx6.mount: Deactivated successfully. Sep 12 23:55:57.805663 systemd[1]: var-lib-kubelet-pods-cff6c9f6\x2d7166\x2d41d7\x2db927\x2d5ca900ff8d56-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dj88bm.mount: Deactivated successfully. Sep 12 23:55:57.805857 systemd[1]: var-lib-kubelet-pods-a643442b\x2d876b\x2d43e6\x2db967\x2d8499aa9605e8-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Sep 12 23:55:57.806033 systemd[1]: var-lib-kubelet-pods-a643442b\x2d876b\x2d43e6\x2db967\x2d8499aa9605e8-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Sep 12 23:55:58.043569 kubelet[3422]: I0912 23:55:58.043498 3422 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a643442b-876b-43e6-b967-8499aa9605e8" path="/var/lib/kubelet/pods/a643442b-876b-43e6-b967-8499aa9605e8/volumes" Sep 12 23:55:58.045360 kubelet[3422]: I0912 23:55:58.045297 3422 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cff6c9f6-7166-41d7-b927-5ca900ff8d56" path="/var/lib/kubelet/pods/cff6c9f6-7166-41d7-b927-5ca900ff8d56/volumes" Sep 12 23:55:58.243972 kubelet[3422]: E0912 23:55:58.243876 3422 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Sep 12 23:55:58.704534 sshd[5040]: pam_unix(sshd:session): session closed for user core Sep 12 23:55:58.712232 systemd[1]: sshd@26-172.31.17.186:22-147.75.109.163:53904.service: Deactivated successfully. 
Sep 12 23:55:58.716787 systemd[1]: session-27.scope: Deactivated successfully. Sep 12 23:55:58.717529 systemd[1]: session-27.scope: Consumed 1.622s CPU time. Sep 12 23:55:58.718752 systemd-logind[1993]: Session 27 logged out. Waiting for processes to exit. Sep 12 23:55:58.722172 systemd-logind[1993]: Removed session 27. Sep 12 23:55:58.747332 systemd[1]: Started sshd@27-172.31.17.186:22-147.75.109.163:53908.service - OpenSSH per-connection server daemon (147.75.109.163:53908). Sep 12 23:55:58.940683 sshd[5198]: Accepted publickey for core from 147.75.109.163 port 53908 ssh2: RSA SHA256:hzqoQUQMDNGIX4spfLoTi9cnhX+EaAcejntAjTQoGoc Sep 12 23:55:58.943505 sshd[5198]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 23:55:58.950878 systemd-logind[1993]: New session 28 of user core. Sep 12 23:55:58.961008 systemd[1]: Started session-28.scope - Session 28 of User core. Sep 12 23:55:59.789026 ntpd[1987]: Deleting interface #11 lxc_health, fe80::8899:bdff:fea4:f208%8#123, interface stats: received=0, sent=0, dropped=0, active_time=81 secs Sep 12 23:55:59.789553 ntpd[1987]: 12 Sep 23:55:59 ntpd[1987]: Deleting interface #11 lxc_health, fe80::8899:bdff:fea4:f208%8#123, interface stats: received=0, sent=0, dropped=0, active_time=81 secs Sep 12 23:56:00.895130 kubelet[3422]: I0912 23:56:00.895021 3422 setters.go:600] "Node became not ready" node="ip-172-31-17-186" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-12T23:56:00Z","lastTransitionTime":"2025-09-12T23:56:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Sep 12 23:56:00.916557 sshd[5198]: pam_unix(sshd:session): session closed for user core Sep 12 23:56:00.932167 systemd-logind[1993]: Session 28 logged out. Waiting for processes to exit. 
Sep 12 23:56:00.934014 systemd[1]: sshd@27-172.31.17.186:22-147.75.109.163:53908.service: Deactivated successfully. Sep 12 23:56:00.948171 systemd[1]: session-28.scope: Deactivated successfully. Sep 12 23:56:00.949111 systemd[1]: session-28.scope: Consumed 1.737s CPU time. Sep 12 23:56:00.969068 systemd-logind[1993]: Removed session 28. Sep 12 23:56:00.976427 systemd[1]: Started sshd@28-172.31.17.186:22-147.75.109.163:37734.service - OpenSSH per-connection server daemon (147.75.109.163:37734). Sep 12 23:56:01.139825 kubelet[3422]: E0912 23:56:01.139756 3422 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="a643442b-876b-43e6-b967-8499aa9605e8" containerName="apply-sysctl-overwrites" Sep 12 23:56:01.139825 kubelet[3422]: E0912 23:56:01.139817 3422 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="a643442b-876b-43e6-b967-8499aa9605e8" containerName="mount-bpf-fs" Sep 12 23:56:01.139825 kubelet[3422]: E0912 23:56:01.139836 3422 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="cff6c9f6-7166-41d7-b927-5ca900ff8d56" containerName="cilium-operator" Sep 12 23:56:01.140114 kubelet[3422]: E0912 23:56:01.139853 3422 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="a643442b-876b-43e6-b967-8499aa9605e8" containerName="clean-cilium-state" Sep 12 23:56:01.140114 kubelet[3422]: E0912 23:56:01.139870 3422 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="a643442b-876b-43e6-b967-8499aa9605e8" containerName="cilium-agent" Sep 12 23:56:01.140114 kubelet[3422]: E0912 23:56:01.139888 3422 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="a643442b-876b-43e6-b967-8499aa9605e8" containerName="mount-cgroup" Sep 12 23:56:01.140114 kubelet[3422]: I0912 23:56:01.139944 3422 memory_manager.go:354] "RemoveStaleState removing state" podUID="cff6c9f6-7166-41d7-b927-5ca900ff8d56" containerName="cilium-operator" Sep 12 23:56:01.140114 kubelet[3422]: I0912 23:56:01.139961 3422 
memory_manager.go:354] "RemoveStaleState removing state" podUID="a643442b-876b-43e6-b967-8499aa9605e8" containerName="cilium-agent" Sep 12 23:56:01.164368 systemd[1]: Created slice kubepods-burstable-podb4c60ffd_9bf4_4c9d_aa04_a673ff02d9cb.slice - libcontainer container kubepods-burstable-podb4c60ffd_9bf4_4c9d_aa04_a673ff02d9cb.slice. Sep 12 23:56:01.172435 kubelet[3422]: W0912 23:56:01.172119 3422 reflector.go:561] object-"kube-system"/"cilium-ipsec-keys": failed to list *v1.Secret: secrets "cilium-ipsec-keys" is forbidden: User "system:node:ip-172-31-17-186" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-17-186' and this object Sep 12 23:56:01.172435 kubelet[3422]: E0912 23:56:01.172199 3422 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"cilium-ipsec-keys\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"cilium-ipsec-keys\" is forbidden: User \"system:node:ip-172-31-17-186\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ip-172-31-17-186' and this object" logger="UnhandledError" Sep 12 23:56:01.175769 kubelet[3422]: W0912 23:56:01.175146 3422 reflector.go:561] object-"kube-system"/"cilium-config": failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:ip-172-31-17-186" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-17-186' and this object Sep 12 23:56:01.175769 kubelet[3422]: E0912 23:56:01.175216 3422 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"cilium-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"cilium-config\" is forbidden: User \"system:node:ip-172-31-17-186\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ip-172-31-17-186' 
and this object" logger="UnhandledError" Sep 12 23:56:01.179634 kubelet[3422]: W0912 23:56:01.179460 3422 reflector.go:561] object-"kube-system"/"hubble-server-certs": failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:ip-172-31-17-186" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-17-186' and this object Sep 12 23:56:01.179634 kubelet[3422]: E0912 23:56:01.179526 3422 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"hubble-server-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"hubble-server-certs\" is forbidden: User \"system:node:ip-172-31-17-186\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ip-172-31-17-186' and this object" logger="UnhandledError" Sep 12 23:56:01.190861 sshd[5210]: Accepted publickey for core from 147.75.109.163 port 37734 ssh2: RSA SHA256:hzqoQUQMDNGIX4spfLoTi9cnhX+EaAcejntAjTQoGoc Sep 12 23:56:01.193017 sshd[5210]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 23:56:01.212694 systemd-logind[1993]: New session 29 of user core. Sep 12 23:56:01.219298 systemd[1]: Started session-29.scope - Session 29 of User core. 
Sep 12 23:56:01.267237 kubelet[3422]: I0912 23:56:01.267108 3422 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b4c60ffd-9bf4-4c9d-aa04-a673ff02d9cb-cilium-config-path\") pod \"cilium-tx6qx\" (UID: \"b4c60ffd-9bf4-4c9d-aa04-a673ff02d9cb\") " pod="kube-system/cilium-tx6qx" Sep 12 23:56:01.267237 kubelet[3422]: I0912 23:56:01.267199 3422 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b4c60ffd-9bf4-4c9d-aa04-a673ff02d9cb-hostproc\") pod \"cilium-tx6qx\" (UID: \"b4c60ffd-9bf4-4c9d-aa04-a673ff02d9cb\") " pod="kube-system/cilium-tx6qx" Sep 12 23:56:01.267237 kubelet[3422]: I0912 23:56:01.267244 3422 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b4c60ffd-9bf4-4c9d-aa04-a673ff02d9cb-xtables-lock\") pod \"cilium-tx6qx\" (UID: \"b4c60ffd-9bf4-4c9d-aa04-a673ff02d9cb\") " pod="kube-system/cilium-tx6qx" Sep 12 23:56:01.267610 kubelet[3422]: I0912 23:56:01.267283 3422 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b4c60ffd-9bf4-4c9d-aa04-a673ff02d9cb-lib-modules\") pod \"cilium-tx6qx\" (UID: \"b4c60ffd-9bf4-4c9d-aa04-a673ff02d9cb\") " pod="kube-system/cilium-tx6qx" Sep 12 23:56:01.267610 kubelet[3422]: I0912 23:56:01.267320 3422 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/b4c60ffd-9bf4-4c9d-aa04-a673ff02d9cb-cilium-ipsec-secrets\") pod \"cilium-tx6qx\" (UID: \"b4c60ffd-9bf4-4c9d-aa04-a673ff02d9cb\") " pod="kube-system/cilium-tx6qx" Sep 12 23:56:01.267610 kubelet[3422]: I0912 23:56:01.267359 3422 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b4c60ffd-9bf4-4c9d-aa04-a673ff02d9cb-host-proc-sys-net\") pod \"cilium-tx6qx\" (UID: \"b4c60ffd-9bf4-4c9d-aa04-a673ff02d9cb\") " pod="kube-system/cilium-tx6qx" Sep 12 23:56:01.267610 kubelet[3422]: I0912 23:56:01.267399 3422 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b4c60ffd-9bf4-4c9d-aa04-a673ff02d9cb-host-proc-sys-kernel\") pod \"cilium-tx6qx\" (UID: \"b4c60ffd-9bf4-4c9d-aa04-a673ff02d9cb\") " pod="kube-system/cilium-tx6qx" Sep 12 23:56:01.267610 kubelet[3422]: I0912 23:56:01.267438 3422 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b4c60ffd-9bf4-4c9d-aa04-a673ff02d9cb-bpf-maps\") pod \"cilium-tx6qx\" (UID: \"b4c60ffd-9bf4-4c9d-aa04-a673ff02d9cb\") " pod="kube-system/cilium-tx6qx" Sep 12 23:56:01.267610 kubelet[3422]: I0912 23:56:01.267475 3422 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b4c60ffd-9bf4-4c9d-aa04-a673ff02d9cb-hubble-tls\") pod \"cilium-tx6qx\" (UID: \"b4c60ffd-9bf4-4c9d-aa04-a673ff02d9cb\") " pod="kube-system/cilium-tx6qx" Sep 12 23:56:01.268029 kubelet[3422]: I0912 23:56:01.267523 3422 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b4c60ffd-9bf4-4c9d-aa04-a673ff02d9cb-cilium-run\") pod \"cilium-tx6qx\" (UID: \"b4c60ffd-9bf4-4c9d-aa04-a673ff02d9cb\") " pod="kube-system/cilium-tx6qx" Sep 12 23:56:01.268029 kubelet[3422]: I0912 23:56:01.267565 3422 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: 
\"kubernetes.io/host-path/b4c60ffd-9bf4-4c9d-aa04-a673ff02d9cb-cilium-cgroup\") pod \"cilium-tx6qx\" (UID: \"b4c60ffd-9bf4-4c9d-aa04-a673ff02d9cb\") " pod="kube-system/cilium-tx6qx" Sep 12 23:56:01.268029 kubelet[3422]: I0912 23:56:01.267623 3422 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b4c60ffd-9bf4-4c9d-aa04-a673ff02d9cb-cni-path\") pod \"cilium-tx6qx\" (UID: \"b4c60ffd-9bf4-4c9d-aa04-a673ff02d9cb\") " pod="kube-system/cilium-tx6qx" Sep 12 23:56:01.268029 kubelet[3422]: I0912 23:56:01.267686 3422 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b4c60ffd-9bf4-4c9d-aa04-a673ff02d9cb-etc-cni-netd\") pod \"cilium-tx6qx\" (UID: \"b4c60ffd-9bf4-4c9d-aa04-a673ff02d9cb\") " pod="kube-system/cilium-tx6qx" Sep 12 23:56:01.268029 kubelet[3422]: I0912 23:56:01.267753 3422 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b4c60ffd-9bf4-4c9d-aa04-a673ff02d9cb-clustermesh-secrets\") pod \"cilium-tx6qx\" (UID: \"b4c60ffd-9bf4-4c9d-aa04-a673ff02d9cb\") " pod="kube-system/cilium-tx6qx" Sep 12 23:56:01.268029 kubelet[3422]: I0912 23:56:01.267795 3422 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vxdgj\" (UniqueName: \"kubernetes.io/projected/b4c60ffd-9bf4-4c9d-aa04-a673ff02d9cb-kube-api-access-vxdgj\") pod \"cilium-tx6qx\" (UID: \"b4c60ffd-9bf4-4c9d-aa04-a673ff02d9cb\") " pod="kube-system/cilium-tx6qx" Sep 12 23:56:01.347205 sshd[5210]: pam_unix(sshd:session): session closed for user core Sep 12 23:56:01.355836 systemd[1]: sshd@28-172.31.17.186:22-147.75.109.163:37734.service: Deactivated successfully. Sep 12 23:56:01.359949 systemd[1]: session-29.scope: Deactivated successfully. 
Sep 12 23:56:01.362144 systemd-logind[1993]: Session 29 logged out. Waiting for processes to exit. Sep 12 23:56:01.365708 systemd-logind[1993]: Removed session 29. Sep 12 23:56:01.428579 systemd[1]: Started sshd@29-172.31.17.186:22-147.75.109.163:37746.service - OpenSSH per-connection server daemon (147.75.109.163:37746). Sep 12 23:56:01.605229 sshd[5220]: Accepted publickey for core from 147.75.109.163 port 37746 ssh2: RSA SHA256:hzqoQUQMDNGIX4spfLoTi9cnhX+EaAcejntAjTQoGoc Sep 12 23:56:01.608972 sshd[5220]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 23:56:01.619241 systemd-logind[1993]: New session 30 of user core. Sep 12 23:56:01.626081 systemd[1]: Started session-30.scope - Session 30 of User core. Sep 12 23:56:02.370546 kubelet[3422]: E0912 23:56:02.370479 3422 configmap.go:193] Couldn't get configMap kube-system/cilium-config: failed to sync configmap cache: timed out waiting for the condition Sep 12 23:56:02.371252 kubelet[3422]: E0912 23:56:02.370596 3422 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b4c60ffd-9bf4-4c9d-aa04-a673ff02d9cb-cilium-config-path podName:b4c60ffd-9bf4-4c9d-aa04-a673ff02d9cb nodeName:}" failed. No retries permitted until 2025-09-12 23:56:02.870569615 +0000 UTC m=+115.166863862 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cilium-config-path" (UniqueName: "kubernetes.io/configmap/b4c60ffd-9bf4-4c9d-aa04-a673ff02d9cb-cilium-config-path") pod "cilium-tx6qx" (UID: "b4c60ffd-9bf4-4c9d-aa04-a673ff02d9cb") : failed to sync configmap cache: timed out waiting for the condition Sep 12 23:56:02.978487 containerd[2019]: time="2025-09-12T23:56:02.978374274Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-tx6qx,Uid:b4c60ffd-9bf4-4c9d-aa04-a673ff02d9cb,Namespace:kube-system,Attempt:0,}" Sep 12 23:56:03.022528 containerd[2019]: time="2025-09-12T23:56:03.022040234Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 23:56:03.022528 containerd[2019]: time="2025-09-12T23:56:03.022149242Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 23:56:03.022528 containerd[2019]: time="2025-09-12T23:56:03.022196750Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 23:56:03.022528 containerd[2019]: time="2025-09-12T23:56:03.022368614Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 23:56:03.071101 systemd[1]: Started cri-containerd-ce75d497c80d25c661940267c034d71c8d86b301875d0ba587f4f4a60b83d211.scope - libcontainer container ce75d497c80d25c661940267c034d71c8d86b301875d0ba587f4f4a60b83d211. 
Sep 12 23:56:03.119526 containerd[2019]: time="2025-09-12T23:56:03.119467971Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-tx6qx,Uid:b4c60ffd-9bf4-4c9d-aa04-a673ff02d9cb,Namespace:kube-system,Attempt:0,} returns sandbox id \"ce75d497c80d25c661940267c034d71c8d86b301875d0ba587f4f4a60b83d211\"" Sep 12 23:56:03.126463 containerd[2019]: time="2025-09-12T23:56:03.126084303Z" level=info msg="CreateContainer within sandbox \"ce75d497c80d25c661940267c034d71c8d86b301875d0ba587f4f4a60b83d211\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 12 23:56:03.159416 containerd[2019]: time="2025-09-12T23:56:03.159306243Z" level=info msg="CreateContainer within sandbox \"ce75d497c80d25c661940267c034d71c8d86b301875d0ba587f4f4a60b83d211\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"96165270fa89717353495fe2d5de9d25a32aef14e894cd4d611eaebee30ad453\"" Sep 12 23:56:03.161771 containerd[2019]: time="2025-09-12T23:56:03.160520727Z" level=info msg="StartContainer for \"96165270fa89717353495fe2d5de9d25a32aef14e894cd4d611eaebee30ad453\"" Sep 12 23:56:03.220136 systemd[1]: Started cri-containerd-96165270fa89717353495fe2d5de9d25a32aef14e894cd4d611eaebee30ad453.scope - libcontainer container 96165270fa89717353495fe2d5de9d25a32aef14e894cd4d611eaebee30ad453. Sep 12 23:56:03.246175 kubelet[3422]: E0912 23:56:03.245972 3422 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Sep 12 23:56:03.282030 containerd[2019]: time="2025-09-12T23:56:03.281946652Z" level=info msg="StartContainer for \"96165270fa89717353495fe2d5de9d25a32aef14e894cd4d611eaebee30ad453\" returns successfully" Sep 12 23:56:03.300239 systemd[1]: cri-containerd-96165270fa89717353495fe2d5de9d25a32aef14e894cd4d611eaebee30ad453.scope: Deactivated successfully. 
Sep 12 23:56:03.363430 containerd[2019]: time="2025-09-12T23:56:03.363037840Z" level=info msg="shim disconnected" id=96165270fa89717353495fe2d5de9d25a32aef14e894cd4d611eaebee30ad453 namespace=k8s.io Sep 12 23:56:03.363430 containerd[2019]: time="2025-09-12T23:56:03.363119164Z" level=warning msg="cleaning up after shim disconnected" id=96165270fa89717353495fe2d5de9d25a32aef14e894cd4d611eaebee30ad453 namespace=k8s.io Sep 12 23:56:03.363430 containerd[2019]: time="2025-09-12T23:56:03.363158452Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 12 23:56:03.593751 containerd[2019]: time="2025-09-12T23:56:03.593347565Z" level=info msg="CreateContainer within sandbox \"ce75d497c80d25c661940267c034d71c8d86b301875d0ba587f4f4a60b83d211\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Sep 12 23:56:03.619472 containerd[2019]: time="2025-09-12T23:56:03.618607733Z" level=info msg="CreateContainer within sandbox \"ce75d497c80d25c661940267c034d71c8d86b301875d0ba587f4f4a60b83d211\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"621923da0540f5a57b6d48ca04bcccc2d6b988f99ceec002ed7cb65778444417\"" Sep 12 23:56:03.621909 containerd[2019]: time="2025-09-12T23:56:03.620923253Z" level=info msg="StartContainer for \"621923da0540f5a57b6d48ca04bcccc2d6b988f99ceec002ed7cb65778444417\"" Sep 12 23:56:03.711096 systemd[1]: Started cri-containerd-621923da0540f5a57b6d48ca04bcccc2d6b988f99ceec002ed7cb65778444417.scope - libcontainer container 621923da0540f5a57b6d48ca04bcccc2d6b988f99ceec002ed7cb65778444417. Sep 12 23:56:03.773435 containerd[2019]: time="2025-09-12T23:56:03.773356386Z" level=info msg="StartContainer for \"621923da0540f5a57b6d48ca04bcccc2d6b988f99ceec002ed7cb65778444417\" returns successfully" Sep 12 23:56:03.789548 systemd[1]: cri-containerd-621923da0540f5a57b6d48ca04bcccc2d6b988f99ceec002ed7cb65778444417.scope: Deactivated successfully. 
Sep 12 23:56:03.839453 containerd[2019]: time="2025-09-12T23:56:03.839320218Z" level=info msg="shim disconnected" id=621923da0540f5a57b6d48ca04bcccc2d6b988f99ceec002ed7cb65778444417 namespace=k8s.io Sep 12 23:56:03.839453 containerd[2019]: time="2025-09-12T23:56:03.839424774Z" level=warning msg="cleaning up after shim disconnected" id=621923da0540f5a57b6d48ca04bcccc2d6b988f99ceec002ed7cb65778444417 namespace=k8s.io Sep 12 23:56:03.839453 containerd[2019]: time="2025-09-12T23:56:03.839449530Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 12 23:56:04.601467 containerd[2019]: time="2025-09-12T23:56:04.601371678Z" level=info msg="CreateContainer within sandbox \"ce75d497c80d25c661940267c034d71c8d86b301875d0ba587f4f4a60b83d211\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Sep 12 23:56:04.641680 containerd[2019]: time="2025-09-12T23:56:04.641589954Z" level=info msg="CreateContainer within sandbox \"ce75d497c80d25c661940267c034d71c8d86b301875d0ba587f4f4a60b83d211\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"541c337aac6f4d95031e434bcb43747aed32d6a41ca15cd9a46c58af9778f13a\"" Sep 12 23:56:04.644767 containerd[2019]: time="2025-09-12T23:56:04.642998238Z" level=info msg="StartContainer for \"541c337aac6f4d95031e434bcb43747aed32d6a41ca15cd9a46c58af9778f13a\"" Sep 12 23:56:04.709061 systemd[1]: Started cri-containerd-541c337aac6f4d95031e434bcb43747aed32d6a41ca15cd9a46c58af9778f13a.scope - libcontainer container 541c337aac6f4d95031e434bcb43747aed32d6a41ca15cd9a46c58af9778f13a. Sep 12 23:56:04.775197 containerd[2019]: time="2025-09-12T23:56:04.775088035Z" level=info msg="StartContainer for \"541c337aac6f4d95031e434bcb43747aed32d6a41ca15cd9a46c58af9778f13a\" returns successfully" Sep 12 23:56:04.780896 systemd[1]: cri-containerd-541c337aac6f4d95031e434bcb43747aed32d6a41ca15cd9a46c58af9778f13a.scope: Deactivated successfully. 
Sep 12 23:56:04.832141 containerd[2019]: time="2025-09-12T23:56:04.831976627Z" level=info msg="shim disconnected" id=541c337aac6f4d95031e434bcb43747aed32d6a41ca15cd9a46c58af9778f13a namespace=k8s.io Sep 12 23:56:04.832141 containerd[2019]: time="2025-09-12T23:56:04.832111279Z" level=warning msg="cleaning up after shim disconnected" id=541c337aac6f4d95031e434bcb43747aed32d6a41ca15cd9a46c58af9778f13a namespace=k8s.io Sep 12 23:56:04.832141 containerd[2019]: time="2025-09-12T23:56:04.832135063Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 12 23:56:04.997314 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-541c337aac6f4d95031e434bcb43747aed32d6a41ca15cd9a46c58af9778f13a-rootfs.mount: Deactivated successfully. Sep 12 23:56:05.606349 containerd[2019]: time="2025-09-12T23:56:05.606252211Z" level=info msg="CreateContainer within sandbox \"ce75d497c80d25c661940267c034d71c8d86b301875d0ba587f4f4a60b83d211\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Sep 12 23:56:05.636960 containerd[2019]: time="2025-09-12T23:56:05.636707887Z" level=info msg="CreateContainer within sandbox \"ce75d497c80d25c661940267c034d71c8d86b301875d0ba587f4f4a60b83d211\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"cc662eb66472396cfab632f016513381617169217b8d5546ee0082641962b07f\"" Sep 12 23:56:05.637771 containerd[2019]: time="2025-09-12T23:56:05.637533607Z" level=info msg="StartContainer for \"cc662eb66472396cfab632f016513381617169217b8d5546ee0082641962b07f\"" Sep 12 23:56:05.702068 systemd[1]: Started cri-containerd-cc662eb66472396cfab632f016513381617169217b8d5546ee0082641962b07f.scope - libcontainer container cc662eb66472396cfab632f016513381617169217b8d5546ee0082641962b07f. Sep 12 23:56:05.752224 systemd[1]: cri-containerd-cc662eb66472396cfab632f016513381617169217b8d5546ee0082641962b07f.scope: Deactivated successfully. 
Sep 12 23:56:05.760397 containerd[2019]: time="2025-09-12T23:56:05.760188452Z" level=info msg="StartContainer for \"cc662eb66472396cfab632f016513381617169217b8d5546ee0082641962b07f\" returns successfully"
Sep 12 23:56:05.803400 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cc662eb66472396cfab632f016513381617169217b8d5546ee0082641962b07f-rootfs.mount: Deactivated successfully.
Sep 12 23:56:05.807327 containerd[2019]: time="2025-09-12T23:56:05.806911880Z" level=info msg="shim disconnected" id=cc662eb66472396cfab632f016513381617169217b8d5546ee0082641962b07f namespace=k8s.io
Sep 12 23:56:05.807327 containerd[2019]: time="2025-09-12T23:56:05.806996912Z" level=warning msg="cleaning up after shim disconnected" id=cc662eb66472396cfab632f016513381617169217b8d5546ee0082641962b07f namespace=k8s.io
Sep 12 23:56:05.807327 containerd[2019]: time="2025-09-12T23:56:05.807019880Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 12 23:56:05.830442 containerd[2019]: time="2025-09-12T23:56:05.830306516Z" level=warning msg="cleanup warnings time=\"2025-09-12T23:56:05Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Sep 12 23:56:06.620852 containerd[2019]: time="2025-09-12T23:56:06.619850732Z" level=info msg="CreateContainer within sandbox \"ce75d497c80d25c661940267c034d71c8d86b301875d0ba587f4f4a60b83d211\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Sep 12 23:56:06.664194 containerd[2019]: time="2025-09-12T23:56:06.664110561Z" level=info msg="CreateContainer within sandbox \"ce75d497c80d25c661940267c034d71c8d86b301875d0ba587f4f4a60b83d211\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"fef6af3f4502ceffbe1acebafad7593363327a51d0543ade1e13d1031a75fdad\""
Sep 12 23:56:06.665808 containerd[2019]: time="2025-09-12T23:56:06.665142537Z" level=info msg="StartContainer for \"fef6af3f4502ceffbe1acebafad7593363327a51d0543ade1e13d1031a75fdad\""
Sep 12 23:56:06.732153 systemd[1]: Started cri-containerd-fef6af3f4502ceffbe1acebafad7593363327a51d0543ade1e13d1031a75fdad.scope - libcontainer container fef6af3f4502ceffbe1acebafad7593363327a51d0543ade1e13d1031a75fdad.
Sep 12 23:56:06.797611 containerd[2019]: time="2025-09-12T23:56:06.797290437Z" level=info msg="StartContainer for \"fef6af3f4502ceffbe1acebafad7593363327a51d0543ade1e13d1031a75fdad\" returns successfully"
Sep 12 23:56:07.648673 systemd[1]: run-containerd-runc-k8s.io-fef6af3f4502ceffbe1acebafad7593363327a51d0543ade1e13d1031a75fdad-runc.r0QRM4.mount: Deactivated successfully.
Sep 12 23:56:07.784873 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce))
Sep 12 23:56:08.018904 containerd[2019]: time="2025-09-12T23:56:08.018539695Z" level=info msg="StopPodSandbox for \"ea17e6ae90e38d8419d8861e9cbc44bf37ac5aaa205ae427e794eb8918284ff9\""
Sep 12 23:56:08.018904 containerd[2019]: time="2025-09-12T23:56:08.018751387Z" level=info msg="TearDown network for sandbox \"ea17e6ae90e38d8419d8861e9cbc44bf37ac5aaa205ae427e794eb8918284ff9\" successfully"
Sep 12 23:56:08.018904 containerd[2019]: time="2025-09-12T23:56:08.018783199Z" level=info msg="StopPodSandbox for \"ea17e6ae90e38d8419d8861e9cbc44bf37ac5aaa205ae427e794eb8918284ff9\" returns successfully"
Sep 12 23:56:08.022804 containerd[2019]: time="2025-09-12T23:56:08.021172531Z" level=info msg="RemovePodSandbox for \"ea17e6ae90e38d8419d8861e9cbc44bf37ac5aaa205ae427e794eb8918284ff9\""
Sep 12 23:56:08.022804 containerd[2019]: time="2025-09-12T23:56:08.021264547Z" level=info msg="Forcibly stopping sandbox \"ea17e6ae90e38d8419d8861e9cbc44bf37ac5aaa205ae427e794eb8918284ff9\""
Sep 12 23:56:08.022804 containerd[2019]: time="2025-09-12T23:56:08.021428695Z" level=info msg="TearDown network for sandbox \"ea17e6ae90e38d8419d8861e9cbc44bf37ac5aaa205ae427e794eb8918284ff9\" successfully"
Sep 12 23:56:08.031555 containerd[2019]: time="2025-09-12T23:56:08.031058239Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ea17e6ae90e38d8419d8861e9cbc44bf37ac5aaa205ae427e794eb8918284ff9\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Sep 12 23:56:08.031555 containerd[2019]: time="2025-09-12T23:56:08.031160455Z" level=info msg="RemovePodSandbox \"ea17e6ae90e38d8419d8861e9cbc44bf37ac5aaa205ae427e794eb8918284ff9\" returns successfully"
Sep 12 23:56:08.034315 containerd[2019]: time="2025-09-12T23:56:08.034129423Z" level=info msg="StopPodSandbox for \"20513a35a3c4efa1de106cbf06fe468826cc454e246d23f896987da74d917012\""
Sep 12 23:56:08.034315 containerd[2019]: time="2025-09-12T23:56:08.034305739Z" level=info msg="TearDown network for sandbox \"20513a35a3c4efa1de106cbf06fe468826cc454e246d23f896987da74d917012\" successfully"
Sep 12 23:56:08.034315 containerd[2019]: time="2025-09-12T23:56:08.034340539Z" level=info msg="StopPodSandbox for \"20513a35a3c4efa1de106cbf06fe468826cc454e246d23f896987da74d917012\" returns successfully"
Sep 12 23:56:08.037011 containerd[2019]: time="2025-09-12T23:56:08.036236671Z" level=info msg="RemovePodSandbox for \"20513a35a3c4efa1de106cbf06fe468826cc454e246d23f896987da74d917012\""
Sep 12 23:56:08.037011 containerd[2019]: time="2025-09-12T23:56:08.036294871Z" level=info msg="Forcibly stopping sandbox \"20513a35a3c4efa1de106cbf06fe468826cc454e246d23f896987da74d917012\""
Sep 12 23:56:08.037011 containerd[2019]: time="2025-09-12T23:56:08.036408367Z" level=info msg="TearDown network for sandbox \"20513a35a3c4efa1de106cbf06fe468826cc454e246d23f896987da74d917012\" successfully"
Sep 12 23:56:08.049005 containerd[2019]: time="2025-09-12T23:56:08.048144151Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"20513a35a3c4efa1de106cbf06fe468826cc454e246d23f896987da74d917012\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Sep 12 23:56:08.049005 containerd[2019]: time="2025-09-12T23:56:08.048230755Z" level=info msg="RemovePodSandbox \"20513a35a3c4efa1de106cbf06fe468826cc454e246d23f896987da74d917012\" returns successfully"
Sep 12 23:56:12.456619 (udev-worker)[6055]: Network interface NamePolicy= disabled on kernel command line.
Sep 12 23:56:12.458635 systemd-networkd[1933]: lxc_health: Link UP
Sep 12 23:56:12.466416 (udev-worker)[6056]: Network interface NamePolicy= disabled on kernel command line.
Sep 12 23:56:12.475986 systemd-networkd[1933]: lxc_health: Gained carrier
Sep 12 23:56:13.024855 kubelet[3422]: I0912 23:56:13.024319 3422 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-tx6qx" podStartSLOduration=12.0242919 podStartE2EDuration="12.0242919s" podCreationTimestamp="2025-09-12 23:56:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 23:56:07.68841667 +0000 UTC m=+119.984710917" watchObservedRunningTime="2025-09-12 23:56:13.0242919 +0000 UTC m=+125.320586135"
Sep 12 23:56:14.008980 systemd-networkd[1933]: lxc_health: Gained IPv6LL
Sep 12 23:56:15.111403 systemd[1]: run-containerd-runc-k8s.io-fef6af3f4502ceffbe1acebafad7593363327a51d0543ade1e13d1031a75fdad-runc.G2di9m.mount: Deactivated successfully.
Sep 12 23:56:16.789158 ntpd[1987]: Listen normally on 14 lxc_health [fe80::7c22:84ff:fe92:e50e%14]:123
Sep 12 23:56:16.789892 ntpd[1987]: 12 Sep 23:56:16 ntpd[1987]: Listen normally on 14 lxc_health [fe80::7c22:84ff:fe92:e50e%14]:123
Sep 12 23:56:17.525969 sshd[5220]: pam_unix(sshd:session): session closed for user core
Sep 12 23:56:17.533625 systemd[1]: sshd@29-172.31.17.186:22-147.75.109.163:37746.service: Deactivated successfully.
Sep 12 23:56:17.542961 systemd[1]: session-30.scope: Deactivated successfully.
Sep 12 23:56:17.547193 systemd-logind[1993]: Session 30 logged out. Waiting for processes to exit.
Sep 12 23:56:17.550703 systemd-logind[1993]: Removed session 30.