Sep 12 17:09:47.260478 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083] Sep 12 17:09:47.260531 kernel: Linux version 6.6.106-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Fri Sep 12 15:59:19 -00 2025 Sep 12 17:09:47.260558 kernel: KASLR disabled due to lack of seed Sep 12 17:09:47.260575 kernel: efi: EFI v2.7 by EDK II Sep 12 17:09:47.260591 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7affea98 MEMRESERVE=0x7852ee18 Sep 12 17:09:47.260606 kernel: ACPI: Early table checksum verification disabled Sep 12 17:09:47.260624 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON) Sep 12 17:09:47.260639 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001 01000013) Sep 12 17:09:47.260655 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001) Sep 12 17:09:47.260671 kernel: ACPI: DSDT 0x0000000078640000 00159D (v02 AMAZON AMZNDSDT 00000001 INTL 20160527) Sep 12 17:09:47.260692 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001) Sep 12 17:09:47.260708 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001) Sep 12 17:09:47.260723 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001) Sep 12 17:09:47.260739 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001) Sep 12 17:09:47.260758 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001) Sep 12 17:09:47.260778 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001) Sep 12 17:09:47.260796 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001) Sep 12 17:09:47.260812 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200 Sep 12 17:09:47.260829 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200') Sep 12 17:09:47.260845 kernel: printk: bootconsole [uart0] enabled Sep 12 17:09:47.260862 kernel: NUMA: Failed to initialise from firmware Sep 12 17:09:47.260881 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff] Sep 12 17:09:47.260898 kernel: NUMA: NODE_DATA [mem 0x4b583f800-0x4b5844fff] Sep 12 17:09:47.260915 kernel: Zone ranges: Sep 12 17:09:47.260932 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff] Sep 12 17:09:47.260948 kernel: DMA32 empty Sep 12 17:09:47.260969 kernel: Normal [mem 0x0000000100000000-0x00000004b5ffffff] Sep 12 17:09:47.260986 kernel: Movable zone start for each node Sep 12 17:09:47.261003 kernel: Early memory node ranges Sep 12 17:09:47.261019 kernel: node 0: [mem 0x0000000040000000-0x000000007862ffff] Sep 12 17:09:47.261035 kernel: node 0: [mem 0x0000000078630000-0x000000007863ffff] Sep 12 17:09:47.261051 kernel: node 0: [mem 0x0000000078640000-0x00000000786effff] Sep 12 17:09:47.261068 kernel: node 0: [mem 0x00000000786f0000-0x000000007872ffff] Sep 12 17:09:47.261084 kernel: node 0: [mem 0x0000000078730000-0x000000007bbfffff] Sep 12 17:09:47.261100 kernel: node 0: [mem 0x000000007bc00000-0x000000007bfdffff] Sep 12 17:09:47.261117 kernel: node 0: [mem 0x000000007bfe0000-0x000000007fffffff] Sep 12 17:09:47.261133 kernel: node 0: [mem 0x0000000400000000-0x00000004b5ffffff] Sep 12 17:09:47.261149 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000004b5ffffff] Sep 12 17:09:47.261170 kernel: On 
node 0, zone Normal: 8192 pages in unavailable ranges Sep 12 17:09:47.261187 kernel: psci: probing for conduit method from ACPI. Sep 12 17:09:47.261211 kernel: psci: PSCIv1.0 detected in firmware. Sep 12 17:09:47.261228 kernel: psci: Using standard PSCI v0.2 function IDs Sep 12 17:09:47.261247 kernel: psci: Trusted OS migration not required Sep 12 17:09:47.261268 kernel: psci: SMC Calling Convention v1.1 Sep 12 17:09:47.261286 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000001) Sep 12 17:09:47.261303 kernel: percpu: Embedded 31 pages/cpu s86632 r8192 d32152 u126976 Sep 12 17:09:47.261321 kernel: pcpu-alloc: s86632 r8192 d32152 u126976 alloc=31*4096 Sep 12 17:09:47.262444 kernel: pcpu-alloc: [0] 0 [0] 1 Sep 12 17:09:47.262476 kernel: Detected PIPT I-cache on CPU0 Sep 12 17:09:47.262495 kernel: CPU features: detected: GIC system register CPU interface Sep 12 17:09:47.262513 kernel: CPU features: detected: Spectre-v2 Sep 12 17:09:47.262531 kernel: CPU features: detected: Spectre-v3a Sep 12 17:09:47.262549 kernel: CPU features: detected: Spectre-BHB Sep 12 17:09:47.262567 kernel: CPU features: detected: ARM erratum 1742098 Sep 12 17:09:47.262595 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923 Sep 12 17:09:47.262614 kernel: alternatives: applying boot alternatives Sep 12 17:09:47.262633 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=1e63d3057914877efa0eb5f75703bd3a3d4c120bdf4a7ab97f41083e29183e56 Sep 12 17:09:47.262654 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Sep 12 17:09:47.262673 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Sep 12 17:09:47.262693 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Sep 12 17:09:47.262711 kernel: Fallback order for Node 0: 0 Sep 12 17:09:47.262730 kernel: Built 1 zonelists, mobility grouping on. Total pages: 991872 Sep 12 17:09:47.262747 kernel: Policy zone: Normal Sep 12 17:09:47.262765 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Sep 12 17:09:47.262784 kernel: software IO TLB: area num 2. Sep 12 17:09:47.262808 kernel: software IO TLB: mapped [mem 0x000000007c000000-0x0000000080000000] (64MB) Sep 12 17:09:47.262826 kernel: Memory: 3820024K/4030464K available (10304K kernel code, 2186K rwdata, 8108K rodata, 39488K init, 897K bss, 210440K reserved, 0K cma-reserved) Sep 12 17:09:47.262845 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Sep 12 17:09:47.262862 kernel: rcu: Preemptible hierarchical RCU implementation. Sep 12 17:09:47.262881 kernel: rcu: RCU event tracing is enabled. Sep 12 17:09:47.262900 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Sep 12 17:09:47.262918 kernel: Trampoline variant of Tasks RCU enabled. Sep 12 17:09:47.262936 kernel: Tracing variant of Tasks RCU enabled. Sep 12 17:09:47.262954 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Sep 12 17:09:47.262972 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Sep 12 17:09:47.262989 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Sep 12 17:09:47.263012 kernel: GICv3: 96 SPIs implemented Sep 12 17:09:47.263029 kernel: GICv3: 0 Extended SPIs implemented Sep 12 17:09:47.263047 kernel: Root IRQ handler: gic_handle_irq Sep 12 17:09:47.263064 kernel: GICv3: GICv3 features: 16 PPIs Sep 12 17:09:47.263082 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000 Sep 12 17:09:47.263099 kernel: ITS [mem 0x10080000-0x1009ffff] Sep 12 17:09:47.263117 kernel: ITS@0x0000000010080000: allocated 8192 Devices @4000b0000 (indirect, esz 8, psz 64K, shr 1) Sep 12 17:09:47.263136 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @4000c0000 (flat, esz 8, psz 64K, shr 1) Sep 12 17:09:47.263153 kernel: GICv3: using LPI property table @0x00000004000d0000 Sep 12 17:09:47.263171 kernel: ITS: Using hypervisor restricted LPI range [128] Sep 12 17:09:47.263188 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000004000e0000 Sep 12 17:09:47.263206 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Sep 12 17:09:47.263228 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt). Sep 12 17:09:47.263246 kernel: clocksource: arch_sys_counter: mask: 0x1ffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns Sep 12 17:09:47.263264 kernel: sched_clock: 57 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns Sep 12 17:09:47.263282 kernel: Console: colour dummy device 80x25 Sep 12 17:09:47.263300 kernel: printk: console [tty1] enabled Sep 12 17:09:47.263317 kernel: ACPI: Core revision 20230628 Sep 12 17:09:47.264379 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 166.66 BogoMIPS (lpj=83333) Sep 12 17:09:47.264417 kernel: pid_max: default: 32768 minimum: 301 Sep 12 17:09:47.264436 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Sep 12 17:09:47.264463 kernel: landlock: Up and running. Sep 12 17:09:47.264481 kernel: SELinux: Initializing. Sep 12 17:09:47.264499 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Sep 12 17:09:47.264517 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Sep 12 17:09:47.264535 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Sep 12 17:09:47.264554 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Sep 12 17:09:47.264571 kernel: rcu: Hierarchical SRCU implementation. Sep 12 17:09:47.264590 kernel: rcu: Max phase no-delay instances is 400. Sep 12 17:09:47.264608 kernel: Platform MSI: ITS@0x10080000 domain created Sep 12 17:09:47.264630 kernel: PCI/MSI: ITS@0x10080000 domain created Sep 12 17:09:47.264648 kernel: Remapping and enabling EFI services. Sep 12 17:09:47.264666 kernel: smp: Bringing up secondary CPUs ... Sep 12 17:09:47.264683 kernel: Detected PIPT I-cache on CPU1 Sep 12 17:09:47.264701 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000 Sep 12 17:09:47.264719 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000004000f0000 Sep 12 17:09:47.264737 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083] Sep 12 17:09:47.264755 kernel: smp: Brought up 1 node, 2 CPUs Sep 12 17:09:47.264773 kernel: SMP: Total of 2 processors activated. 
Sep 12 17:09:47.264792 kernel: CPU features: detected: 32-bit EL0 Support Sep 12 17:09:47.264816 kernel: CPU features: detected: 32-bit EL1 Support Sep 12 17:09:47.264835 kernel: CPU features: detected: CRC32 instructions Sep 12 17:09:47.264866 kernel: CPU: All CPU(s) started at EL1 Sep 12 17:09:47.264890 kernel: alternatives: applying system-wide alternatives Sep 12 17:09:47.264909 kernel: devtmpfs: initialized Sep 12 17:09:47.264928 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Sep 12 17:09:47.264947 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Sep 12 17:09:47.264966 kernel: pinctrl core: initialized pinctrl subsystem Sep 12 17:09:47.264984 kernel: SMBIOS 3.0.0 present. Sep 12 17:09:47.265007 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018 Sep 12 17:09:47.265026 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Sep 12 17:09:47.265044 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Sep 12 17:09:47.265063 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Sep 12 17:09:47.265081 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Sep 12 17:09:47.265100 kernel: audit: initializing netlink subsys (disabled) Sep 12 17:09:47.265118 kernel: audit: type=2000 audit(0.289:1): state=initialized audit_enabled=0 res=1 Sep 12 17:09:47.265141 kernel: thermal_sys: Registered thermal governor 'step_wise' Sep 12 17:09:47.265160 kernel: cpuidle: using governor menu Sep 12 17:09:47.265179 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. Sep 12 17:09:47.265197 kernel: ASID allocator initialised with 65536 entries Sep 12 17:09:47.265216 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Sep 12 17:09:47.265234 kernel: Serial: AMBA PL011 UART driver Sep 12 17:09:47.265252 kernel: Modules: 17472 pages in range for non-PLT usage Sep 12 17:09:47.265271 kernel: Modules: 508992 pages in range for PLT usage Sep 12 17:09:47.265290 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Sep 12 17:09:47.265313 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Sep 12 17:09:47.267363 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Sep 12 17:09:47.267400 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Sep 12 17:09:47.267421 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Sep 12 17:09:47.267442 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Sep 12 17:09:47.267462 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Sep 12 17:09:47.267483 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Sep 12 17:09:47.267503 kernel: ACPI: Added _OSI(Module Device) Sep 12 17:09:47.267523 kernel: ACPI: Added _OSI(Processor Device) Sep 12 17:09:47.267551 kernel: ACPI: Added _OSI(Processor Aggregator Device) Sep 12 17:09:47.267571 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Sep 12 17:09:47.267591 kernel: ACPI: Interpreter enabled Sep 12 17:09:47.267611 kernel: ACPI: Using GIC for interrupt routing Sep 12 17:09:47.267631 kernel: ACPI: MCFG table detected, 1 entries Sep 12 17:09:47.267651 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-0f]) Sep 12 17:09:47.267994 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Sep 12 17:09:47.268218 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] Sep 12 17:09:47.268501 
kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Sep 12 17:09:47.268715 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x20ffffff] reserved by PNP0C02:00 Sep 12 17:09:47.268920 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x20ffffff] for [bus 00-0f] Sep 12 17:09:47.268945 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io 0x0000-0xffff window] Sep 12 17:09:47.268965 kernel: acpiphp: Slot [1] registered Sep 12 17:09:47.268984 kernel: acpiphp: Slot [2] registered Sep 12 17:09:47.269002 kernel: acpiphp: Slot [3] registered Sep 12 17:09:47.269020 kernel: acpiphp: Slot [4] registered Sep 12 17:09:47.269046 kernel: acpiphp: Slot [5] registered Sep 12 17:09:47.269065 kernel: acpiphp: Slot [6] registered Sep 12 17:09:47.269083 kernel: acpiphp: Slot [7] registered Sep 12 17:09:47.269101 kernel: acpiphp: Slot [8] registered Sep 12 17:09:47.269120 kernel: acpiphp: Slot [9] registered Sep 12 17:09:47.269138 kernel: acpiphp: Slot [10] registered Sep 12 17:09:47.269157 kernel: acpiphp: Slot [11] registered Sep 12 17:09:47.269176 kernel: acpiphp: Slot [12] registered Sep 12 17:09:47.269194 kernel: acpiphp: Slot [13] registered Sep 12 17:09:47.269212 kernel: acpiphp: Slot [14] registered Sep 12 17:09:47.269236 kernel: acpiphp: Slot [15] registered Sep 12 17:09:47.269255 kernel: acpiphp: Slot [16] registered Sep 12 17:09:47.269273 kernel: acpiphp: Slot [17] registered Sep 12 17:09:47.269292 kernel: acpiphp: Slot [18] registered Sep 12 17:09:47.269310 kernel: acpiphp: Slot [19] registered Sep 12 17:09:47.269328 kernel: acpiphp: Slot [20] registered Sep 12 17:09:47.270429 kernel: acpiphp: Slot [21] registered Sep 12 17:09:47.270449 kernel: acpiphp: Slot [22] registered Sep 12 17:09:47.270468 kernel: acpiphp: Slot [23] registered Sep 12 17:09:47.270495 kernel: acpiphp: Slot [24] registered Sep 12 17:09:47.270515 kernel: acpiphp: Slot [25] registered Sep 12 17:09:47.270533 kernel: acpiphp: Slot [26] registered Sep 12 17:09:47.270551 kernel: acpiphp: Slot [27] registered Sep 12 17:09:47.270569 kernel: acpiphp: Slot [28] registered Sep 12 17:09:47.270587 kernel: acpiphp: Slot [29] registered Sep 12 17:09:47.270606 kernel: acpiphp: Slot [30] registered Sep 12 17:09:47.270624 kernel: acpiphp: Slot [31] registered Sep 12 17:09:47.270643 kernel: PCI host bridge to bus 0000:00 Sep 12 17:09:47.270910 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window] Sep 12 17:09:47.271108 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window] Sep 12 17:09:47.271303 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window] Sep 12 17:09:47.275612 kernel: pci_bus 0000:00: root bus resource [bus 00-0f] Sep 12 17:09:47.275931 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000 Sep 12 17:09:47.276207 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003 Sep 12 17:09:47.276552 kernel: pci 0000:00:01.0: reg 0x10: [mem 0x80118000-0x80118fff] Sep 12 17:09:47.276867 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802 Sep 12 17:09:47.277110 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80114000-0x80117fff] Sep 12 17:09:47.279436 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold Sep 12 17:09:47.279778 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000 Sep 12 17:09:47.280026 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80110000-0x80113fff] Sep 12 17:09:47.280262 kernel: pci 0000:00:05.0: reg 0x18: [mem 0x80000000-0x800fffff pref] Sep 12 17:09:47.280910 kernel: pci 0000:00:05.0: reg 0x20: 
[mem 0x80100000-0x8010ffff] Sep 12 17:09:47.281203 kernel: pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold Sep 12 17:09:47.281586 kernel: pci 0000:00:05.0: BAR 2: assigned [mem 0x80000000-0x800fffff pref] Sep 12 17:09:47.281934 kernel: pci 0000:00:05.0: BAR 4: assigned [mem 0x80100000-0x8010ffff] Sep 12 17:09:47.282238 kernel: pci 0000:00:04.0: BAR 0: assigned [mem 0x80110000-0x80113fff] Sep 12 17:09:47.282600 kernel: pci 0000:00:05.0: BAR 0: assigned [mem 0x80114000-0x80117fff] Sep 12 17:09:47.282864 kernel: pci 0000:00:01.0: BAR 0: assigned [mem 0x80118000-0x80118fff] Sep 12 17:09:47.283072 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window] Sep 12 17:09:47.283281 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] Sep 12 17:09:47.283543 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window] Sep 12 17:09:47.283573 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 Sep 12 17:09:47.283594 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 Sep 12 17:09:47.283615 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 Sep 12 17:09:47.283635 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 Sep 12 17:09:47.283655 kernel: iommu: Default domain type: Translated Sep 12 17:09:47.283673 kernel: iommu: DMA domain TLB invalidation policy: strict mode Sep 12 17:09:47.283702 kernel: efivars: Registered efivars operations Sep 12 17:09:47.283722 kernel: vgaarb: loaded Sep 12 17:09:47.283741 kernel: clocksource: Switched to clocksource arch_sys_counter Sep 12 17:09:47.283761 kernel: VFS: Disk quotas dquot_6.6.0 Sep 12 17:09:47.283780 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Sep 12 17:09:47.283800 kernel: pnp: PnP ACPI init Sep 12 17:09:47.284036 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved Sep 12 17:09:47.284065 kernel: pnp: PnP ACPI: found 1 devices Sep 12 17:09:47.284092 kernel: NET: Registered PF_INET protocol family Sep 12 17:09:47.284112 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Sep 12 17:09:47.284131 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Sep 12 17:09:47.284150 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Sep 12 17:09:47.284168 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Sep 12 17:09:47.284187 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Sep 12 17:09:47.284207 kernel: TCP: Hash tables configured (established 32768 bind 32768) Sep 12 17:09:47.284225 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Sep 12 17:09:47.284244 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Sep 12 17:09:47.284268 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Sep 12 17:09:47.284287 kernel: PCI: CLS 0 bytes, default 64 Sep 12 17:09:47.284305 kernel: kvm [1]: HYP mode not available Sep 12 17:09:47.284324 kernel: Initialise system trusted keyrings Sep 12 17:09:47.284377 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Sep 12 17:09:47.284397 kernel: Key type asymmetric registered Sep 12 17:09:47.284416 kernel: Asymmetric key parser 'x509' registered Sep 12 17:09:47.284434 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Sep 12 17:09:47.284453 kernel: io scheduler mq-deadline registered Sep 12 17:09:47.284480 kernel: io scheduler kyber registered Sep 12 17:09:47.284499 kernel: io 
scheduler bfq registered Sep 12 17:09:47.284731 kernel: pl061_gpio ARMH0061:00: PL061 GPIO chip registered Sep 12 17:09:47.284758 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Sep 12 17:09:47.284777 kernel: ACPI: button: Power Button [PWRB] Sep 12 17:09:47.284796 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input1 Sep 12 17:09:47.284815 kernel: ACPI: button: Sleep Button [SLPB] Sep 12 17:09:47.284833 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Sep 12 17:09:47.284859 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37 Sep 12 17:09:47.285107 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012) Sep 12 17:09:47.285136 kernel: printk: console [ttyS0] disabled Sep 12 17:09:47.285156 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A Sep 12 17:09:47.285174 kernel: printk: console [ttyS0] enabled Sep 12 17:09:47.285193 kernel: printk: bootconsole [uart0] disabled Sep 12 17:09:47.285211 kernel: thunder_xcv, ver 1.0 Sep 12 17:09:47.285230 kernel: thunder_bgx, ver 1.0 Sep 12 17:09:47.285248 kernel: nicpf, ver 1.0 Sep 12 17:09:47.285274 kernel: nicvf, ver 1.0 Sep 12 17:09:47.285556 kernel: rtc-efi rtc-efi.0: registered as rtc0 Sep 12 17:09:47.285764 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-09-12T17:09:46 UTC (1757696986) Sep 12 17:09:47.285790 kernel: hid: raw HID events driver (C) Jiri Kosina Sep 12 17:09:47.285809 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 counters available Sep 12 17:09:47.285828 kernel: watchdog: Delayed init of the lockup detector failed: -19 Sep 12 17:09:47.285846 kernel: watchdog: Hard watchdog permanently disabled Sep 12 17:09:47.285865 kernel: NET: Registered PF_INET6 protocol family Sep 12 17:09:47.285890 kernel: Segment Routing with IPv6 Sep 12 17:09:47.285909 kernel: In-situ OAM (IOAM) with IPv6 Sep 12 17:09:47.285927 kernel: NET: Registered PF_PACKET protocol family Sep 12 17:09:47.285946 kernel: Key type dns_resolver registered Sep 12 17:09:47.285984 kernel: registered taskstats version 1 Sep 12 17:09:47.286003 kernel: Loading compiled-in X.509 certificates Sep 12 17:09:47.286022 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.106-flatcar: 2d576b5e69e6c5de2f731966fe8b55173c144d02' Sep 12 17:09:47.286041 kernel: Key type .fscrypt registered Sep 12 17:09:47.286059 kernel: Key type fscrypt-provisioning registered Sep 12 17:09:47.286083 kernel: ima: No TPM chip found, activating TPM-bypass! 
Sep 12 17:09:47.286102 kernel: ima: Allocated hash algorithm: sha1 Sep 12 17:09:47.286121 kernel: ima: No architecture policies found Sep 12 17:09:47.286140 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Sep 12 17:09:47.286158 kernel: clk: Disabling unused clocks Sep 12 17:09:47.286176 kernel: Freeing unused kernel memory: 39488K Sep 12 17:09:47.286194 kernel: Run /init as init process Sep 12 17:09:47.286212 kernel: with arguments: Sep 12 17:09:47.286231 kernel: /init Sep 12 17:09:47.286249 kernel: with environment: Sep 12 17:09:47.286273 kernel: HOME=/ Sep 12 17:09:47.286292 kernel: TERM=linux Sep 12 17:09:47.286311 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Sep 12 17:09:47.286411 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Sep 12 17:09:47.286441 systemd[1]: Detected virtualization amazon. Sep 12 17:09:47.286462 systemd[1]: Detected architecture arm64. Sep 12 17:09:47.286482 systemd[1]: Running in initrd. Sep 12 17:09:47.286508 systemd[1]: No hostname configured, using default hostname. Sep 12 17:09:47.286528 systemd[1]: Hostname set to . Sep 12 17:09:47.286549 systemd[1]: Initializing machine ID from VM UUID. Sep 12 17:09:47.286569 systemd[1]: Queued start job for default target initrd.target. Sep 12 17:09:47.286589 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 12 17:09:47.286610 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 12 17:09:47.286631 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Sep 12 17:09:47.286652 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 12 17:09:47.286678 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Sep 12 17:09:47.286699 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Sep 12 17:09:47.286723 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Sep 12 17:09:47.286745 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Sep 12 17:09:47.286765 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 12 17:09:47.286786 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 12 17:09:47.286807 systemd[1]: Reached target paths.target - Path Units. Sep 12 17:09:47.286832 systemd[1]: Reached target slices.target - Slice Units. Sep 12 17:09:47.286853 systemd[1]: Reached target swap.target - Swaps. Sep 12 17:09:47.286874 systemd[1]: Reached target timers.target - Timer Units. Sep 12 17:09:47.286895 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Sep 12 17:09:47.286915 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 12 17:09:47.286935 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Sep 12 17:09:47.286956 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Sep 12 17:09:47.286976 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. 
Sep 12 17:09:47.286996 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 12 17:09:47.287022 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 12 17:09:47.287043 systemd[1]: Reached target sockets.target - Socket Units. Sep 12 17:09:47.287063 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Sep 12 17:09:47.287084 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 12 17:09:47.287104 systemd[1]: Finished network-cleanup.service - Network Cleanup. Sep 12 17:09:47.287124 systemd[1]: Starting systemd-fsck-usr.service... Sep 12 17:09:47.287144 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 12 17:09:47.287165 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 12 17:09:47.287189 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 12 17:09:47.287210 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Sep 12 17:09:47.287230 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 12 17:09:47.287251 systemd[1]: Finished systemd-fsck-usr.service. Sep 12 17:09:47.287313 systemd-journald[251]: Collecting audit messages is disabled. Sep 12 17:09:47.288813 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Sep 12 17:09:47.288838 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Sep 12 17:09:47.288860 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 12 17:09:47.288891 systemd-journald[251]: Journal started Sep 12 17:09:47.288930 systemd-journald[251]: Runtime Journal (/run/log/journal/ec22c8fa2ed988b5b9e569f8713d7f32) is 8.0M, max 75.3M, 67.3M free. Sep 12 17:09:47.249871 systemd-modules-load[252]: Inserted module 'overlay' Sep 12 17:09:47.294464 systemd[1]: Started systemd-journald.service - Journal Service. Sep 12 17:09:47.299129 kernel: Bridge firewalling registered Sep 12 17:09:47.296901 systemd-modules-load[252]: Inserted module 'br_netfilter' Sep 12 17:09:47.306917 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 12 17:09:47.312824 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 12 17:09:47.325584 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 12 17:09:47.333918 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 12 17:09:47.342175 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 12 17:09:47.353688 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 12 17:09:47.386432 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 12 17:09:47.401174 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 12 17:09:47.414684 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 12 17:09:47.417688 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 12 17:09:47.425664 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 12 17:09:47.436918 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... 
Sep 12 17:09:47.487046 dracut-cmdline[288]: dracut-dracut-053 Sep 12 17:09:47.498599 dracut-cmdline[288]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=1e63d3057914877efa0eb5f75703bd3a3d4c120bdf4a7ab97f41083e29183e56 Sep 12 17:09:47.504586 systemd-resolved[285]: Positive Trust Anchors: Sep 12 17:09:47.504607 systemd-resolved[285]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 12 17:09:47.504668 systemd-resolved[285]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 12 17:09:47.669389 kernel: SCSI subsystem initialized Sep 12 17:09:47.677372 kernel: Loading iSCSI transport class v2.0-870. Sep 12 17:09:47.691429 kernel: iscsi: registered transport (tcp) Sep 12 17:09:47.717976 kernel: iscsi: registered transport (qla4xxx) Sep 12 17:09:47.718098 kernel: QLogic iSCSI HBA Driver Sep 12 17:09:47.774375 kernel: random: crng init done Sep 12 17:09:47.774867 systemd-resolved[285]: Defaulting to hostname 'linux'. Sep 12 17:09:47.778853 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 12 17:09:47.783801 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 12 17:09:47.812013 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Sep 12 17:09:47.828944 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Sep 12 17:09:47.862384 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Sep 12 17:09:47.862463 kernel: device-mapper: uevent: version 1.0.3 Sep 12 17:09:47.864215 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Sep 12 17:09:47.931384 kernel: raid6: neonx8 gen() 6752 MB/s Sep 12 17:09:47.948371 kernel: raid6: neonx4 gen() 6563 MB/s Sep 12 17:09:47.965376 kernel: raid6: neonx2 gen() 5441 MB/s Sep 12 17:09:47.982396 kernel: raid6: neonx1 gen() 3936 MB/s Sep 12 17:09:47.999371 kernel: raid6: int64x8 gen() 3812 MB/s Sep 12 17:09:48.016370 kernel: raid6: int64x4 gen() 3726 MB/s Sep 12 17:09:48.033368 kernel: raid6: int64x2 gen() 3621 MB/s Sep 12 17:09:48.051352 kernel: raid6: int64x1 gen() 2764 MB/s Sep 12 17:09:48.051404 kernel: raid6: using algorithm neonx8 gen() 6752 MB/s Sep 12 17:09:48.069318 kernel: raid6: .... 
xor() 4800 MB/s, rmw enabled Sep 12 17:09:48.069383 kernel: raid6: using neon recovery algorithm Sep 12 17:09:48.077374 kernel: xor: measuring software checksum speed Sep 12 17:09:48.078368 kernel: 8regs : 10245 MB/sec Sep 12 17:09:48.080757 kernel: 32regs : 11003 MB/sec Sep 12 17:09:48.080792 kernel: arm64_neon : 9544 MB/sec Sep 12 17:09:48.080817 kernel: xor: using function: 32regs (11003 MB/sec) Sep 12 17:09:48.166735 kernel: Btrfs loaded, zoned=no, fsverity=no Sep 12 17:09:48.186227 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Sep 12 17:09:48.198660 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 12 17:09:48.244242 systemd-udevd[470]: Using default interface naming scheme 'v255'. Sep 12 17:09:48.254527 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 12 17:09:48.268939 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Sep 12 17:09:48.301136 dracut-pre-trigger[472]: rd.md=0: removing MD RAID activation Sep 12 17:09:48.362203 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Sep 12 17:09:48.374712 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 12 17:09:48.489776 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 12 17:09:48.507273 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Sep 12 17:09:48.555681 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Sep 12 17:09:48.567226 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Sep 12 17:09:48.573259 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 12 17:09:48.578630 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 12 17:09:48.590753 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Sep 12 17:09:48.637264 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Sep 12 17:09:48.711169 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Sep 12 17:09:48.711243 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012) Sep 12 17:09:48.720665 kernel: ena 0000:00:05.0: ENA device version: 0.10 Sep 12 17:09:48.721120 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1 Sep 12 17:09:48.721388 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35 Sep 12 17:09:48.723977 kernel: nvme nvme0: pci function 0000:00:04.0 Sep 12 17:09:48.730977 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 12 17:09:48.731721 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 12 17:09:48.748403 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80114000, mac addr 06:2a:cc:98:18:e7 Sep 12 17:09:48.749190 kernel: nvme nvme0: 2/0/0 default/read/poll queues Sep 12 17:09:48.738938 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 12 17:09:48.742435 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 12 17:09:48.742551 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 12 17:09:48.745212 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Sep 12 17:09:48.759088 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 12 17:09:48.776640 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. 
Sep 12 17:09:48.776866 kernel: GPT:9289727 != 16777215 Sep 12 17:09:48.776897 kernel: GPT:Alternate GPT header not at the end of the disk. Sep 12 17:09:48.776923 kernel: GPT:9289727 != 16777215 Sep 12 17:09:48.776948 kernel: GPT: Use GNU Parted to correct GPT errors. Sep 12 17:09:48.776982 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Sep 12 17:09:48.780217 (udev-worker)[529]: Network interface NamePolicy= disabled on kernel command line. Sep 12 17:09:48.805902 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 12 17:09:48.819675 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 12 17:09:48.872608 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 12 17:09:48.907368 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/nvme0n1p6 scanned by (udev-worker) (530) Sep 12 17:09:48.914379 kernel: BTRFS: device fsid 5a23a06a-00d4-4606-89bf-13e31a563129 devid 1 transid 36 /dev/nvme0n1p3 scanned by (udev-worker) (520) Sep 12 17:09:49.004992 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM. Sep 12 17:09:49.026164 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT. Sep 12 17:09:49.045874 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Sep 12 17:09:49.070661 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A. Sep 12 17:09:49.078067 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A. Sep 12 17:09:49.105772 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Sep 12 17:09:49.117686 disk-uuid[660]: Primary Header is updated. Sep 12 17:09:49.117686 disk-uuid[660]: Secondary Entries is updated. Sep 12 17:09:49.117686 disk-uuid[660]: Secondary Header is updated. Sep 12 17:09:49.131365 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Sep 12 17:09:49.140435 kernel: GPT:disk_guids don't match. Sep 12 17:09:49.140498 kernel: GPT: Use GNU Parted to correct GPT errors. Sep 12 17:09:49.140524 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Sep 12 17:09:49.150373 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Sep 12 17:09:50.155401 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Sep 12 17:09:50.156495 disk-uuid[661]: The operation has completed successfully. Sep 12 17:09:50.343193 systemd[1]: disk-uuid.service: Deactivated successfully. Sep 12 17:09:50.344443 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Sep 12 17:09:50.398607 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Sep 12 17:09:50.408928 sh[1005]: Success Sep 12 17:09:50.428366 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Sep 12 17:09:50.537321 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Sep 12 17:09:50.542731 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Sep 12 17:09:50.555545 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... 
Sep 12 17:09:50.597587 kernel: BTRFS info (device dm-0): first mount of filesystem 5a23a06a-00d4-4606-89bf-13e31a563129 Sep 12 17:09:50.597674 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Sep 12 17:09:50.597702 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Sep 12 17:09:50.600895 kernel: BTRFS info (device dm-0): disabling log replay at mount time Sep 12 17:09:50.600965 kernel: BTRFS info (device dm-0): using free space tree Sep 12 17:09:50.706373 kernel: BTRFS info (device dm-0): enabling ssd optimizations Sep 12 17:09:50.741635 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Sep 12 17:09:50.746192 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Sep 12 17:09:50.758613 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Sep 12 17:09:50.769671 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Sep 12 17:09:50.793089 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem daec7f45-8bde-44bd-bec0-4b8eac931d0c Sep 12 17:09:50.793161 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Sep 12 17:09:50.794737 kernel: BTRFS info (device nvme0n1p6): using free space tree Sep 12 17:09:50.811870 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Sep 12 17:09:50.828448 systemd[1]: mnt-oem.mount: Deactivated successfully. Sep 12 17:09:50.832021 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem daec7f45-8bde-44bd-bec0-4b8eac931d0c Sep 12 17:09:50.842511 systemd[1]: Finished ignition-setup.service - Ignition (setup). Sep 12 17:09:50.858780 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Sep 12 17:09:50.956730 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 12 17:09:50.974768 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 12 17:09:51.034466 systemd-networkd[1197]: lo: Link UP Sep 12 17:09:51.034941 systemd-networkd[1197]: lo: Gained carrier Sep 12 17:09:51.038131 systemd-networkd[1197]: Enumeration completed Sep 12 17:09:51.039834 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 12 17:09:51.041948 systemd-networkd[1197]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 12 17:09:51.041957 systemd-networkd[1197]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 12 17:09:51.048295 systemd[1]: Reached target network.target - Network. Sep 12 17:09:51.055328 systemd-networkd[1197]: eth0: Link UP Sep 12 17:09:51.055360 systemd-networkd[1197]: eth0: Gained carrier Sep 12 17:09:51.055381 systemd-networkd[1197]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Sep 12 17:09:51.078457 systemd-networkd[1197]: eth0: DHCPv4 address 172.31.21.20/20, gateway 172.31.16.1 acquired from 172.31.16.1 Sep 12 17:09:51.273410 ignition[1113]: Ignition 2.19.0 Sep 12 17:09:51.274014 ignition[1113]: Stage: fetch-offline Sep 12 17:09:51.276123 ignition[1113]: no configs at "/usr/lib/ignition/base.d" Sep 12 17:09:51.276148 ignition[1113]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Sep 12 17:09:51.276757 ignition[1113]: Ignition finished successfully Sep 12 17:09:51.285008 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Sep 12 17:09:51.297648 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Sep 12 17:09:51.324849 ignition[1207]: Ignition 2.19.0 Sep 12 17:09:51.330263 ignition[1207]: Stage: fetch Sep 12 17:09:51.332434 ignition[1207]: no configs at "/usr/lib/ignition/base.d" Sep 12 17:09:51.332476 ignition[1207]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Sep 12 17:09:51.334487 ignition[1207]: PUT http://169.254.169.254/latest/api/token: attempt #1 Sep 12 17:09:51.369407 ignition[1207]: PUT result: OK Sep 12 17:09:51.374190 ignition[1207]: parsed url from cmdline: "" Sep 12 17:09:51.374359 ignition[1207]: no config URL provided Sep 12 17:09:51.374386 ignition[1207]: reading system config file "/usr/lib/ignition/user.ign" Sep 12 17:09:51.374416 ignition[1207]: no config at "/usr/lib/ignition/user.ign" Sep 12 17:09:51.374452 ignition[1207]: PUT http://169.254.169.254/latest/api/token: attempt #1 Sep 12 17:09:51.381124 ignition[1207]: PUT result: OK Sep 12 17:09:51.381210 ignition[1207]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1 Sep 12 17:09:51.388023 ignition[1207]: GET result: OK Sep 12 17:09:51.389713 ignition[1207]: parsing config with SHA512: f467768da07e3b87b18a6ea810a4885e472aa158a6eb101b30056a3a78fe4cb327d535a005c9c6d76e9517aac90f6e6b39e8269880a194492e58496d07a8d27b Sep 12 17:09:51.398767 unknown[1207]: fetched base config from "system" Sep 12 17:09:51.398795 unknown[1207]: fetched base config from "system" Sep 12 17:09:51.399545 ignition[1207]: fetch: fetch complete Sep 12 17:09:51.398810 unknown[1207]: fetched user config from "aws" Sep 12 17:09:51.399557 ignition[1207]: fetch: fetch passed Sep 12 17:09:51.407426 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Sep 12 17:09:51.399645 ignition[1207]: Ignition finished successfully Sep 12 17:09:51.424669 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Sep 12 17:09:51.451856 ignition[1213]: Ignition 2.19.0 Sep 12 17:09:51.451886 ignition[1213]: Stage: kargs Sep 12 17:09:51.453824 ignition[1213]: no configs at "/usr/lib/ignition/base.d" Sep 12 17:09:51.453852 ignition[1213]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Sep 12 17:09:51.454038 ignition[1213]: PUT http://169.254.169.254/latest/api/token: attempt #1 Sep 12 17:09:51.462129 ignition[1213]: PUT result: OK Sep 12 17:09:51.474274 ignition[1213]: kargs: kargs passed Sep 12 17:09:51.474639 ignition[1213]: Ignition finished successfully Sep 12 17:09:51.479943 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Sep 12 17:09:51.489652 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
Sep 12 17:09:51.526567 ignition[1220]: Ignition 2.19.0 Sep 12 17:09:51.527106 ignition[1220]: Stage: disks Sep 12 17:09:51.527838 ignition[1220]: no configs at "/usr/lib/ignition/base.d" Sep 12 17:09:51.527865 ignition[1220]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Sep 12 17:09:51.528018 ignition[1220]: PUT http://169.254.169.254/latest/api/token: attempt #1 Sep 12 17:09:51.536874 ignition[1220]: PUT result: OK Sep 12 17:09:51.544721 ignition[1220]: disks: disks passed Sep 12 17:09:51.544882 ignition[1220]: Ignition finished successfully Sep 12 17:09:51.547975 systemd[1]: Finished ignition-disks.service - Ignition (disks). Sep 12 17:09:51.553728 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Sep 12 17:09:51.556586 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Sep 12 17:09:51.564121 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 12 17:09:51.566350 systemd[1]: Reached target sysinit.target - System Initialization. Sep 12 17:09:51.568689 systemd[1]: Reached target basic.target - Basic System. Sep 12 17:09:51.582666 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Sep 12 17:09:51.642292 systemd-fsck[1228]: ROOT: clean, 14/553520 files, 52654/553472 blocks Sep 12 17:09:51.650375 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Sep 12 17:09:51.665712 systemd[1]: Mounting sysroot.mount - /sysroot... Sep 12 17:09:51.755454 kernel: EXT4-fs (nvme0n1p9): mounted filesystem fc6c61a7-153d-4e7f-95c0-bffdb4824d71 r/w with ordered data mode. Quota mode: none. Sep 12 17:09:51.755596 systemd[1]: Mounted sysroot.mount - /sysroot. Sep 12 17:09:51.759919 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Sep 12 17:09:51.775526 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 12 17:09:51.784599 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Sep 12 17:09:51.789500 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Sep 12 17:09:51.789815 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Sep 12 17:09:51.789867 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Sep 12 17:09:51.818377 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/nvme0n1p6 scanned by mount (1247) Sep 12 17:09:51.826087 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem daec7f45-8bde-44bd-bec0-4b8eac931d0c Sep 12 17:09:51.826162 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Sep 12 17:09:51.826551 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Sep 12 17:09:51.838072 kernel: BTRFS info (device nvme0n1p6): using free space tree Sep 12 17:09:51.844130 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Sep 12 17:09:51.854376 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Sep 12 17:09:51.857273 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Sep 12 17:09:52.315549 systemd-networkd[1197]: eth0: Gained IPv6LL Sep 12 17:09:52.364771 initrd-setup-root[1271]: cut: /sysroot/etc/passwd: No such file or directory Sep 12 17:09:52.399509 initrd-setup-root[1278]: cut: /sysroot/etc/group: No such file or directory Sep 12 17:09:52.408373 initrd-setup-root[1285]: cut: /sysroot/etc/shadow: No such file or directory Sep 12 17:09:52.417073 initrd-setup-root[1292]: cut: /sysroot/etc/gshadow: No such file or directory Sep 12 17:09:52.770665 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Sep 12 17:09:52.780549 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Sep 12 17:09:52.803555 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Sep 12 17:09:52.817678 systemd[1]: sysroot-oem.mount: Deactivated successfully. Sep 12 17:09:52.820468 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem daec7f45-8bde-44bd-bec0-4b8eac931d0c Sep 12 17:09:52.855432 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Sep 12 17:09:52.868359 ignition[1360]: INFO : Ignition 2.19.0 Sep 12 17:09:52.868359 ignition[1360]: INFO : Stage: mount Sep 12 17:09:52.871983 ignition[1360]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 12 17:09:52.871983 ignition[1360]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Sep 12 17:09:52.876863 ignition[1360]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Sep 12 17:09:52.880702 ignition[1360]: INFO : PUT result: OK Sep 12 17:09:52.885195 ignition[1360]: INFO : mount: mount passed Sep 12 17:09:52.886904 ignition[1360]: INFO : Ignition finished successfully Sep 12 17:09:52.892651 systemd[1]: Finished ignition-mount.service - Ignition (mount). Sep 12 17:09:52.904545 systemd[1]: Starting ignition-files.service - Ignition (files)... Sep 12 17:09:52.932515 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 12 17:09:52.957225 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 scanned by mount (1372) Sep 12 17:09:52.957300 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem daec7f45-8bde-44bd-bec0-4b8eac931d0c Sep 12 17:09:52.959039 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Sep 12 17:09:52.960403 kernel: BTRFS info (device nvme0n1p6): using free space tree Sep 12 17:09:52.966399 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Sep 12 17:09:52.969740 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Sep 12 17:09:53.011377 ignition[1389]: INFO : Ignition 2.19.0 Sep 12 17:09:53.011377 ignition[1389]: INFO : Stage: files Sep 12 17:09:53.011377 ignition[1389]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 12 17:09:53.011377 ignition[1389]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Sep 12 17:09:53.011377 ignition[1389]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Sep 12 17:09:53.023754 ignition[1389]: INFO : PUT result: OK Sep 12 17:09:53.028740 ignition[1389]: DEBUG : files: compiled without relabeling support, skipping Sep 12 17:09:53.035827 ignition[1389]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Sep 12 17:09:53.035827 ignition[1389]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Sep 12 17:09:53.086894 ignition[1389]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Sep 12 17:09:53.090086 ignition[1389]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Sep 12 17:09:53.090086 ignition[1389]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Sep 12 17:09:53.088291 unknown[1389]: wrote ssh authorized keys file for user: core Sep 12 17:09:53.098364 ignition[1389]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz" Sep 12 17:09:53.098364 ignition[1389]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz: attempt #1 Sep 12 17:09:53.192721 ignition[1389]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Sep 12 17:09:53.494953 ignition[1389]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz" Sep 12 17:09:53.494953 ignition[1389]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Sep 12 17:09:53.502908 ignition[1389]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1 Sep 12 17:09:53.727771 ignition[1389]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Sep 12 17:09:53.873982 ignition[1389]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Sep 12 17:09:53.877726 ignition[1389]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Sep 12 17:09:53.877726 ignition[1389]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Sep 12 17:09:53.877726 ignition[1389]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Sep 12 17:09:53.877726 ignition[1389]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Sep 12 17:09:53.877726 ignition[1389]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 12 17:09:53.877726 ignition[1389]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 12 17:09:53.877726 ignition[1389]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 
12 17:09:53.877726 ignition[1389]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 12 17:09:53.877726 ignition[1389]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Sep 12 17:09:53.914172 ignition[1389]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Sep 12 17:09:53.914172 ignition[1389]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Sep 12 17:09:53.914172 ignition[1389]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Sep 12 17:09:53.914172 ignition[1389]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Sep 12 17:09:53.914172 ignition[1389]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-arm64.raw: attempt #1 Sep 12 17:09:54.136254 ignition[1389]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Sep 12 17:09:54.515318 ignition[1389]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Sep 12 17:09:54.515318 ignition[1389]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Sep 12 17:09:54.522899 ignition[1389]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 12 17:09:54.522899 ignition[1389]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 12 17:09:54.522899 ignition[1389]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Sep 12 17:09:54.522899 ignition[1389]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service" Sep 12 17:09:54.522899 ignition[1389]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service" Sep 12 17:09:54.522899 ignition[1389]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json" Sep 12 17:09:54.522899 ignition[1389]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json" Sep 12 17:09:54.522899 ignition[1389]: INFO : files: files passed Sep 12 17:09:54.522899 ignition[1389]: INFO : Ignition finished successfully Sep 12 17:09:54.555117 systemd[1]: Finished ignition-files.service - Ignition (files). Sep 12 17:09:54.566708 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Sep 12 17:09:54.583654 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Sep 12 17:09:54.598756 systemd[1]: ignition-quench.service: Deactivated successfully. Sep 12 17:09:54.600033 systemd[1]: Finished ignition-quench.service - Ignition (record completion). 
Sep 12 17:09:54.610387 initrd-setup-root-after-ignition[1418]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 12 17:09:54.610387 initrd-setup-root-after-ignition[1418]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Sep 12 17:09:54.619575 initrd-setup-root-after-ignition[1422]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 12 17:09:54.626709 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 12 17:09:54.632624 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Sep 12 17:09:54.646653 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Sep 12 17:09:54.703540 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Sep 12 17:09:54.705440 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Sep 12 17:09:54.711864 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Sep 12 17:09:54.720274 systemd[1]: Reached target initrd.target - Initrd Default Target. Sep 12 17:09:54.724999 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Sep 12 17:09:54.734719 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Sep 12 17:09:54.763815 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 12 17:09:54.776720 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Sep 12 17:09:54.804368 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Sep 12 17:09:54.809705 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 12 17:09:54.815500 systemd[1]: Stopped target timers.target - Timer Units. Sep 12 17:09:54.815945 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Sep 12 17:09:54.816198 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 12 17:09:54.828866 systemd[1]: Stopped target initrd.target - Initrd Default Target. Sep 12 17:09:54.831887 systemd[1]: Stopped target basic.target - Basic System. Sep 12 17:09:54.837988 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Sep 12 17:09:54.840591 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Sep 12 17:09:54.843807 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Sep 12 17:09:54.853679 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Sep 12 17:09:54.856697 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Sep 12 17:09:54.864289 systemd[1]: Stopped target sysinit.target - System Initialization. Sep 12 17:09:54.867215 systemd[1]: Stopped target local-fs.target - Local File Systems. Sep 12 17:09:54.871687 systemd[1]: Stopped target swap.target - Swaps. Sep 12 17:09:54.877406 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Sep 12 17:09:54.879865 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Sep 12 17:09:54.884980 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Sep 12 17:09:54.891125 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 12 17:09:54.896638 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. 
Sep 12 17:09:54.899013 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 12 17:09:54.905701 systemd[1]: dracut-initqueue.service: Deactivated successfully. Sep 12 17:09:54.905957 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Sep 12 17:09:54.908875 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Sep 12 17:09:54.909103 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 12 17:09:54.912703 systemd[1]: ignition-files.service: Deactivated successfully. Sep 12 17:09:54.912907 systemd[1]: Stopped ignition-files.service - Ignition (files). Sep 12 17:09:54.935522 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Sep 12 17:09:54.942935 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Sep 12 17:09:54.951753 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Sep 12 17:09:54.952916 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Sep 12 17:09:54.967295 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Sep 12 17:09:54.967579 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Sep 12 17:09:54.978462 ignition[1442]: INFO : Ignition 2.19.0 Sep 12 17:09:54.978462 ignition[1442]: INFO : Stage: umount Sep 12 17:09:54.986559 ignition[1442]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 12 17:09:54.986559 ignition[1442]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Sep 12 17:09:54.986559 ignition[1442]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Sep 12 17:09:54.985767 systemd[1]: initrd-cleanup.service: Deactivated successfully. Sep 12 17:09:55.004295 ignition[1442]: INFO : PUT result: OK Sep 12 17:09:54.988402 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Sep 12 17:09:55.011150 ignition[1442]: INFO : umount: umount passed Sep 12 17:09:55.013793 ignition[1442]: INFO : Ignition finished successfully Sep 12 17:09:55.017383 systemd[1]: ignition-mount.service: Deactivated successfully. Sep 12 17:09:55.021784 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Sep 12 17:09:55.025877 systemd[1]: ignition-disks.service: Deactivated successfully. Sep 12 17:09:55.025994 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Sep 12 17:09:55.031455 systemd[1]: ignition-kargs.service: Deactivated successfully. Sep 12 17:09:55.031575 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Sep 12 17:09:55.040897 systemd[1]: ignition-fetch.service: Deactivated successfully. Sep 12 17:09:55.041000 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Sep 12 17:09:55.044162 systemd[1]: Stopped target network.target - Network. Sep 12 17:09:55.046663 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Sep 12 17:09:55.046777 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Sep 12 17:09:55.064298 systemd[1]: Stopped target paths.target - Path Units. Sep 12 17:09:55.067454 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Sep 12 17:09:55.070986 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 12 17:09:55.074156 systemd[1]: Stopped target slices.target - Slice Units. Sep 12 17:09:55.087157 systemd[1]: Stopped target sockets.target - Socket Units. Sep 12 17:09:55.090279 systemd[1]: iscsid.socket: Deactivated successfully. 
Sep 12 17:09:55.090390 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Sep 12 17:09:55.093022 systemd[1]: iscsiuio.socket: Deactivated successfully. Sep 12 17:09:55.093097 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 12 17:09:55.099416 systemd[1]: ignition-setup.service: Deactivated successfully. Sep 12 17:09:55.099528 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Sep 12 17:09:55.102536 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Sep 12 17:09:55.102634 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Sep 12 17:09:55.105795 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Sep 12 17:09:55.112451 systemd-networkd[1197]: eth0: DHCPv6 lease lost Sep 12 17:09:55.114505 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Sep 12 17:09:55.125691 systemd[1]: sysroot-boot.mount: Deactivated successfully. Sep 12 17:09:55.127259 systemd[1]: systemd-resolved.service: Deactivated successfully. Sep 12 17:09:55.127927 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Sep 12 17:09:55.136027 systemd[1]: systemd-networkd.service: Deactivated successfully. Sep 12 17:09:55.136932 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Sep 12 17:09:55.149787 systemd[1]: sysroot-boot.service: Deactivated successfully. Sep 12 17:09:55.150026 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Sep 12 17:09:55.169270 systemd[1]: systemd-networkd.socket: Deactivated successfully. Sep 12 17:09:55.169483 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Sep 12 17:09:55.172066 systemd[1]: initrd-setup-root.service: Deactivated successfully. Sep 12 17:09:55.172158 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Sep 12 17:09:55.190225 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Sep 12 17:09:55.194563 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Sep 12 17:09:55.196996 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 12 17:09:55.202811 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 12 17:09:55.202920 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 12 17:09:55.205829 systemd[1]: systemd-modules-load.service: Deactivated successfully. Sep 12 17:09:55.205937 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Sep 12 17:09:55.219092 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Sep 12 17:09:55.219208 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 12 17:09:55.222119 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 12 17:09:55.246232 systemd[1]: systemd-udevd.service: Deactivated successfully. Sep 12 17:09:55.248571 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 12 17:09:55.257926 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Sep 12 17:09:55.258053 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Sep 12 17:09:55.262821 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Sep 12 17:09:55.263033 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Sep 12 17:09:55.267302 systemd[1]: dracut-pre-udev.service: Deactivated successfully. 
Sep 12 17:09:55.267450 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Sep 12 17:09:55.270045 systemd[1]: dracut-cmdline.service: Deactivated successfully. Sep 12 17:09:55.270150 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Sep 12 17:09:55.279128 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 12 17:09:55.279246 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 12 17:09:55.299590 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Sep 12 17:09:55.304372 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Sep 12 17:09:55.304512 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 12 17:09:55.312772 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Sep 12 17:09:55.312893 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 12 17:09:55.315848 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Sep 12 17:09:55.315973 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Sep 12 17:09:55.322230 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 12 17:09:55.322358 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 12 17:09:55.325857 systemd[1]: network-cleanup.service: Deactivated successfully. Sep 12 17:09:55.326086 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Sep 12 17:09:55.338155 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Sep 12 17:09:55.338385 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Sep 12 17:09:55.342867 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Sep 12 17:09:55.367658 systemd[1]: Starting initrd-switch-root.service - Switch Root... Sep 12 17:09:55.387447 systemd[1]: Switching root. Sep 12 17:09:55.449572 systemd-journald[251]: Journal stopped Sep 12 17:09:58.333521 systemd-journald[251]: Received SIGTERM from PID 1 (systemd). Sep 12 17:09:58.333649 kernel: SELinux: policy capability network_peer_controls=1 Sep 12 17:09:58.333694 kernel: SELinux: policy capability open_perms=1 Sep 12 17:09:58.333727 kernel: SELinux: policy capability extended_socket_class=1 Sep 12 17:09:58.333763 kernel: SELinux: policy capability always_check_network=0 Sep 12 17:09:58.333793 kernel: SELinux: policy capability cgroup_seclabel=1 Sep 12 17:09:58.334269 kernel: SELinux: policy capability nnp_nosuid_transition=1 Sep 12 17:09:58.343505 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Sep 12 17:09:58.343556 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Sep 12 17:09:58.343588 kernel: audit: type=1403 audit(1757696996.113:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Sep 12 17:09:58.343644 systemd[1]: Successfully loaded SELinux policy in 90.829ms. Sep 12 17:09:58.343697 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 23.989ms. Sep 12 17:09:58.343732 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Sep 12 17:09:58.343774 systemd[1]: Detected virtualization amazon. 
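
The kernel lines above show the SELinux policy being loaded during the switch to the real root. Whether the booted system is actually enforcing can be read back from selinuxfs; the sketch below assumes selinuxfs is mounted at its conventional location, /sys/fs/selinux, which is not something this log states explicitly.

    # Report SELinux mode by reading selinuxfs directly.
    # Assumes selinuxfs is mounted at /sys/fs/selinux (the conventional mount point);
    # /sys/fs/selinux/enforce contains "1" for enforcing and "0" for permissive.
    from pathlib import Path

    ENFORCE = Path("/sys/fs/selinux/enforce")

    def selinux_mode() -> str:
        if not ENFORCE.exists():
            return "disabled or selinuxfs not mounted"
        return "enforcing" if ENFORCE.read_text().strip() == "1" else "permissive"

    if __name__ == "__main__":
        print(f"SELinux mode: {selinux_mode()}")
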
Sep 12 17:09:58.343807 systemd[1]: Detected architecture arm64. Sep 12 17:09:58.343839 systemd[1]: Detected first boot. Sep 12 17:09:58.343871 systemd[1]: Initializing machine ID from VM UUID. Sep 12 17:09:58.343905 zram_generator::config[1484]: No configuration found. Sep 12 17:09:58.343941 systemd[1]: Populated /etc with preset unit settings. Sep 12 17:09:58.343974 systemd[1]: initrd-switch-root.service: Deactivated successfully. Sep 12 17:09:58.344006 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Sep 12 17:09:58.344044 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Sep 12 17:09:58.344078 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Sep 12 17:09:58.344111 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Sep 12 17:09:58.344153 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Sep 12 17:09:58.344185 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Sep 12 17:09:58.344215 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Sep 12 17:09:58.344249 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Sep 12 17:09:58.344280 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Sep 12 17:09:58.344309 systemd[1]: Created slice user.slice - User and Session Slice. Sep 12 17:09:58.344446 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 12 17:09:58.344484 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 12 17:09:58.344515 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Sep 12 17:09:58.344548 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Sep 12 17:09:58.344578 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Sep 12 17:09:58.344612 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 12 17:09:58.344643 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Sep 12 17:09:58.344684 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 12 17:09:58.344716 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Sep 12 17:09:58.344753 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Sep 12 17:09:58.344784 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Sep 12 17:09:58.344815 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Sep 12 17:09:58.344846 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 12 17:09:58.344877 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 12 17:09:58.344909 systemd[1]: Reached target slices.target - Slice Units. Sep 12 17:09:58.344940 systemd[1]: Reached target swap.target - Swaps. Sep 12 17:09:58.344970 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Sep 12 17:09:58.345005 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Sep 12 17:09:58.345034 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 12 17:09:58.345064 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. 
Sep 12 17:09:58.345096 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 12 17:09:58.345126 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Sep 12 17:09:58.345155 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Sep 12 17:09:58.345188 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Sep 12 17:09:58.345222 systemd[1]: Mounting media.mount - External Media Directory... Sep 12 17:09:58.345251 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Sep 12 17:09:58.345285 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Sep 12 17:09:58.345317 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Sep 12 17:09:58.347877 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Sep 12 17:09:58.348430 systemd[1]: Reached target machines.target - Containers. Sep 12 17:09:58.348629 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Sep 12 17:09:58.348967 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 12 17:09:58.349272 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 12 17:09:58.349311 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Sep 12 17:09:58.353416 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 12 17:09:58.353465 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 12 17:09:58.353496 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 12 17:09:58.353527 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Sep 12 17:09:58.353557 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 12 17:09:58.353591 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Sep 12 17:09:58.353621 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Sep 12 17:09:58.353651 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Sep 12 17:09:58.353685 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Sep 12 17:09:58.353717 systemd[1]: Stopped systemd-fsck-usr.service. Sep 12 17:09:58.353746 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 12 17:09:58.353778 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 12 17:09:58.353811 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Sep 12 17:09:58.353841 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Sep 12 17:09:58.353872 kernel: fuse: init (API version 7.39) Sep 12 17:09:58.353917 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 12 17:09:58.353954 systemd[1]: verity-setup.service: Deactivated successfully. Sep 12 17:09:58.353987 systemd[1]: Stopped verity-setup.service. Sep 12 17:09:58.354022 kernel: loop: module loaded Sep 12 17:09:58.354053 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Sep 12 17:09:58.354083 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. 
Sep 12 17:09:58.354112 systemd[1]: Mounted media.mount - External Media Directory. Sep 12 17:09:58.354141 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Sep 12 17:09:58.354171 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Sep 12 17:09:58.354200 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Sep 12 17:09:58.354233 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 12 17:09:58.354263 systemd[1]: modprobe@configfs.service: Deactivated successfully. Sep 12 17:09:58.354293 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Sep 12 17:09:58.354322 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 12 17:09:58.354374 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 12 17:09:58.354407 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 12 17:09:58.354494 systemd-journald[1562]: Collecting audit messages is disabled. Sep 12 17:09:58.354547 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 12 17:09:58.354578 systemd[1]: modprobe@fuse.service: Deactivated successfully. Sep 12 17:09:58.354608 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Sep 12 17:09:58.354642 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 12 17:09:58.354674 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 12 17:09:58.354709 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 12 17:09:58.354742 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Sep 12 17:09:58.354772 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Sep 12 17:09:58.354801 systemd-journald[1562]: Journal started Sep 12 17:09:58.354848 systemd-journald[1562]: Runtime Journal (/run/log/journal/ec22c8fa2ed988b5b9e569f8713d7f32) is 8.0M, max 75.3M, 67.3M free. Sep 12 17:09:57.641733 systemd[1]: Queued start job for default target multi-user.target. Sep 12 17:09:57.747571 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. Sep 12 17:09:57.748425 systemd[1]: systemd-journald.service: Deactivated successfully. Sep 12 17:09:58.369692 systemd[1]: Started systemd-journald.service - Journal Service. Sep 12 17:09:58.385546 systemd[1]: Reached target network-pre.target - Preparation for Network. Sep 12 17:09:58.397808 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Sep 12 17:09:58.405606 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Sep 12 17:09:58.408516 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Sep 12 17:09:58.408581 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 12 17:09:58.418710 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Sep 12 17:09:58.440178 kernel: ACPI: bus type drm_connector registered Sep 12 17:09:58.431638 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Sep 12 17:09:58.439914 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Sep 12 17:09:58.442646 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
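
Once systemd-journald is running, as shown above, entries like these can be pulled back out of the journal in machine-readable form. A minimal sketch using journalctl's JSON export follows; -b, -u and -o json are standard journalctl options, and the unit name is just an example taken from this boot (ignition-files.service appears earlier in the log).

    # Dump journal entries for one unit from the current boot as JSON.
    # `journalctl -b -u <unit> -o json` emits one JSON object per line.
    import json
    import subprocess

    def journal_entries(unit: str = "ignition-files.service"):
        proc = subprocess.run(
            ["journalctl", "-b", "-u", unit, "-o", "json"],
            capture_output=True, text=True, check=True,
        )
        for line in proc.stdout.splitlines():
            if line.strip():
                yield json.loads(line)

    if __name__ == "__main__":
        for entry in journal_entries():
            print(entry.get("__REALTIME_TIMESTAMP"), entry.get("MESSAGE"))
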
Sep 12 17:09:58.467743 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Sep 12 17:09:58.480770 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Sep 12 17:09:58.483489 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 12 17:09:58.488691 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Sep 12 17:09:58.491430 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 12 17:09:58.493800 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 12 17:09:58.500581 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Sep 12 17:09:58.508745 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Sep 12 17:09:58.518441 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Sep 12 17:09:58.521720 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 12 17:09:58.522812 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 12 17:09:58.525876 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Sep 12 17:09:58.530806 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Sep 12 17:09:58.535826 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Sep 12 17:09:58.606458 systemd-journald[1562]: Time spent on flushing to /var/log/journal/ec22c8fa2ed988b5b9e569f8713d7f32 is 76.819ms for 914 entries. Sep 12 17:09:58.606458 systemd-journald[1562]: System Journal (/var/log/journal/ec22c8fa2ed988b5b9e569f8713d7f32) is 8.0M, max 195.6M, 187.6M free. Sep 12 17:09:58.708920 systemd-journald[1562]: Received client request to flush runtime journal. Sep 12 17:09:58.709019 kernel: loop0: detected capacity change from 0 to 114328 Sep 12 17:09:58.641403 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Sep 12 17:09:58.644396 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Sep 12 17:09:58.657910 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Sep 12 17:09:58.714476 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Sep 12 17:09:58.729249 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 12 17:09:58.749104 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 12 17:09:58.753640 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Sep 12 17:09:58.755524 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Sep 12 17:09:58.771752 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Sep 12 17:09:58.780379 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Sep 12 17:09:58.781394 systemd-tmpfiles[1612]: ACLs are not supported, ignoring. Sep 12 17:09:58.781425 systemd-tmpfiles[1612]: ACLs are not supported, ignoring. Sep 12 17:09:58.795984 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 12 17:09:58.812672 systemd[1]: Starting systemd-sysusers.service - Create System Users... 
Sep 12 17:09:58.820395 kernel: loop1: detected capacity change from 0 to 52536 Sep 12 17:09:58.847173 udevadm[1630]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Sep 12 17:09:58.889928 systemd[1]: Finished systemd-sysusers.service - Create System Users. Sep 12 17:09:58.904688 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 12 17:09:58.952390 kernel: loop2: detected capacity change from 0 to 207008 Sep 12 17:09:58.982320 systemd-tmpfiles[1635]: ACLs are not supported, ignoring. Sep 12 17:09:58.983112 systemd-tmpfiles[1635]: ACLs are not supported, ignoring. Sep 12 17:09:58.993293 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 12 17:09:59.247565 kernel: loop3: detected capacity change from 0 to 114432 Sep 12 17:09:59.366378 kernel: loop4: detected capacity change from 0 to 114328 Sep 12 17:09:59.389387 kernel: loop5: detected capacity change from 0 to 52536 Sep 12 17:09:59.409384 kernel: loop6: detected capacity change from 0 to 207008 Sep 12 17:09:59.448441 kernel: loop7: detected capacity change from 0 to 114432 Sep 12 17:09:59.459356 (sd-merge)[1641]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'. Sep 12 17:09:59.461730 (sd-merge)[1641]: Merged extensions into '/usr'. Sep 12 17:09:59.472541 systemd[1]: Reloading requested from client PID 1611 ('systemd-sysext') (unit systemd-sysext.service)... Sep 12 17:09:59.472569 systemd[1]: Reloading... Sep 12 17:09:59.607437 zram_generator::config[1663]: No configuration found. Sep 12 17:09:59.968149 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 12 17:10:00.081236 systemd[1]: Reloading finished in 607 ms. Sep 12 17:10:00.102388 ldconfig[1606]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Sep 12 17:10:00.121579 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Sep 12 17:10:00.124725 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Sep 12 17:10:00.128313 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Sep 12 17:10:00.146672 systemd[1]: Starting ensure-sysext.service... Sep 12 17:10:00.156634 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 12 17:10:00.170922 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 12 17:10:00.183611 systemd[1]: Reloading requested from client PID 1720 ('systemctl') (unit ensure-sysext.service)... Sep 12 17:10:00.183657 systemd[1]: Reloading... Sep 12 17:10:00.241271 systemd-tmpfiles[1721]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Sep 12 17:10:00.242017 systemd-tmpfiles[1721]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Sep 12 17:10:00.245984 systemd-tmpfiles[1721]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Sep 12 17:10:00.250711 systemd-tmpfiles[1721]: ACLs are not supported, ignoring. Sep 12 17:10:00.250872 systemd-tmpfiles[1721]: ACLs are not supported, ignoring. 
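
The (sd-merge) lines above are systemd-sysext overlaying the extension images onto /usr, including the kubernetes-v1.32.4 image that Ignition placed under /opt/extensions and symlinked from /etc/extensions earlier in this log. A quick way to see what is currently merged is sketched below; `systemd-sysext status` is a standard systemd-sysext verb, and /etc/extensions is one of its usual search paths.

    # Show which system extension images systemd-sysext has merged into /usr.
    # Extension images are looked up in /etc/extensions (where Ignition wrote the
    # kubernetes.raw symlink above) among other search paths.
    import subprocess

    def sysext_status() -> str:
        return subprocess.run(
            ["systemd-sysext", "status"],
            capture_output=True, text=True, check=True,
        ).stdout

    if __name__ == "__main__":
        print(sysext_status())
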
Sep 12 17:10:00.263679 systemd-tmpfiles[1721]: Detected autofs mount point /boot during canonicalization of boot. Sep 12 17:10:00.263709 systemd-tmpfiles[1721]: Skipping /boot Sep 12 17:10:00.303872 systemd-udevd[1722]: Using default interface naming scheme 'v255'. Sep 12 17:10:00.305918 systemd-tmpfiles[1721]: Detected autofs mount point /boot during canonicalization of boot. Sep 12 17:10:00.305946 systemd-tmpfiles[1721]: Skipping /boot Sep 12 17:10:00.385222 zram_generator::config[1748]: No configuration found. Sep 12 17:10:00.578817 (udev-worker)[1763]: Network interface NamePolicy= disabled on kernel command line. Sep 12 17:10:00.775149 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 12 17:10:00.918875 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 36 scanned by (udev-worker) (1788) Sep 12 17:10:00.961640 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Sep 12 17:10:00.961931 systemd[1]: Reloading finished in 777 ms. Sep 12 17:10:01.007567 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 12 17:10:01.027444 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 12 17:10:01.093874 systemd[1]: Finished ensure-sysext.service. Sep 12 17:10:01.129244 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Sep 12 17:10:01.142250 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Sep 12 17:10:01.154673 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Sep 12 17:10:01.170707 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Sep 12 17:10:01.178693 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 12 17:10:01.181707 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Sep 12 17:10:01.190903 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 12 17:10:01.201675 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 12 17:10:01.209662 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 12 17:10:01.223678 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 12 17:10:01.229085 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 12 17:10:01.234326 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Sep 12 17:10:01.240385 lvm[1922]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 12 17:10:01.244708 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Sep 12 17:10:01.257864 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 12 17:10:01.268847 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 12 17:10:01.274357 systemd[1]: Reached target time-set.target - System Time Set. Sep 12 17:10:01.286774 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... 
Sep 12 17:10:01.299732 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 12 17:10:01.329735 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Sep 12 17:10:01.339618 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Sep 12 17:10:01.376242 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 12 17:10:01.377015 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 12 17:10:01.397486 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Sep 12 17:10:01.406206 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 12 17:10:01.406564 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 12 17:10:01.414221 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 12 17:10:01.421923 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 12 17:10:01.422795 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 12 17:10:01.423055 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 12 17:10:01.446899 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 12 17:10:01.458118 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Sep 12 17:10:01.458304 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 12 17:10:01.458632 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 12 17:10:01.458817 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 12 17:10:01.470233 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Sep 12 17:10:01.502105 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Sep 12 17:10:01.513280 lvm[1952]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 12 17:10:01.513859 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Sep 12 17:10:01.526950 systemd[1]: Starting systemd-update-done.service - Update is Completed... Sep 12 17:10:01.535428 augenrules[1958]: No rules Sep 12 17:10:01.540444 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Sep 12 17:10:01.577899 systemd[1]: Finished systemd-update-done.service - Update is Completed. Sep 12 17:10:01.593715 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Sep 12 17:10:01.603708 systemd[1]: Started systemd-userdbd.service - User Database Manager. Sep 12 17:10:01.662182 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 12 17:10:01.735433 systemd-networkd[1932]: lo: Link UP Sep 12 17:10:01.735972 systemd-networkd[1932]: lo: Gained carrier Sep 12 17:10:01.739183 systemd-networkd[1932]: Enumeration completed Sep 12 17:10:01.739696 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 12 17:10:01.742279 systemd-resolved[1933]: Positive Trust Anchors: Sep 12 17:10:01.742304 systemd-resolved[1933]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 12 17:10:01.742394 systemd-resolved[1933]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 12 17:10:01.746931 systemd-networkd[1932]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 12 17:10:01.746956 systemd-networkd[1932]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 12 17:10:01.754415 systemd-networkd[1932]: eth0: Link UP Sep 12 17:10:01.754584 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Sep 12 17:10:01.761685 systemd-resolved[1933]: Defaulting to hostname 'linux'. Sep 12 17:10:01.764621 systemd-networkd[1932]: eth0: Gained carrier Sep 12 17:10:01.764677 systemd-networkd[1932]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 12 17:10:01.765629 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 12 17:10:01.768688 systemd[1]: Reached target network.target - Network. Sep 12 17:10:01.769587 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 12 17:10:01.772227 systemd[1]: Reached target sysinit.target - System Initialization. Sep 12 17:10:01.776268 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Sep 12 17:10:01.776806 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Sep 12 17:10:01.777433 systemd[1]: Started logrotate.timer - Daily rotation of log files. Sep 12 17:10:01.780090 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Sep 12 17:10:01.782454 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Sep 12 17:10:01.782802 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Sep 12 17:10:01.782849 systemd[1]: Reached target paths.target - Path Units. Sep 12 17:10:01.783155 systemd[1]: Reached target timers.target - Timer Units. Sep 12 17:10:01.795008 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Sep 12 17:10:01.803490 systemd-networkd[1932]: eth0: DHCPv4 address 172.31.21.20/20, gateway 172.31.16.1 acquired from 172.31.16.1 Sep 12 17:10:01.803940 systemd[1]: Starting docker.socket - Docker Socket for the API... Sep 12 17:10:01.814789 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Sep 12 17:10:01.818326 systemd[1]: Listening on docker.socket - Docker Socket for the API. Sep 12 17:10:01.821011 systemd[1]: Reached target sockets.target - Socket Units. Sep 12 17:10:01.823246 systemd[1]: Reached target basic.target - Basic System. Sep 12 17:10:01.825405 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. 
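
The DHCPv4 lease logged above (172.31.21.20/20 with gateway 172.31.16.1) can be sanity-checked with the standard library: a /20 prefix puts both the address and the gateway inside 172.31.16.0/20. The values below are taken directly from the systemd-networkd entry.

    # Check that the gateway handed out by DHCP sits in the leased subnet.
    # Address, prefix, and gateway come from the systemd-networkd log entry above.
    import ipaddress

    iface = ipaddress.ip_interface("172.31.21.20/20")
    gateway = ipaddress.ip_address("172.31.16.1")

    print(iface.network)                # 172.31.16.0/20
    print(gateway in iface.network)     # True
    print(iface.network.num_addresses)  # 4096 addresses in a /20
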
Sep 12 17:10:01.825464 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Sep 12 17:10:01.832538 systemd[1]: Starting containerd.service - containerd container runtime... Sep 12 17:10:01.839653 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Sep 12 17:10:01.850855 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Sep 12 17:10:01.857605 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Sep 12 17:10:01.867698 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Sep 12 17:10:01.870618 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Sep 12 17:10:01.878775 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Sep 12 17:10:01.886676 systemd[1]: Started ntpd.service - Network Time Service. Sep 12 17:10:01.892529 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Sep 12 17:10:01.898544 jq[1984]: false Sep 12 17:10:01.900226 systemd[1]: Starting setup-oem.service - Setup OEM... Sep 12 17:10:01.913975 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Sep 12 17:10:01.921782 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Sep 12 17:10:01.946252 systemd[1]: Starting systemd-logind.service - User Login Management... Sep 12 17:10:01.950324 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Sep 12 17:10:01.953423 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Sep 12 17:10:01.958727 systemd[1]: Starting update-engine.service - Update Engine... Sep 12 17:10:01.965632 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Sep 12 17:10:01.971280 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Sep 12 17:10:01.971695 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Sep 12 17:10:01.975399 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Sep 12 17:10:01.976453 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Sep 12 17:10:02.002377 systemd[1]: Started dbus.service - D-Bus System Message Bus. Sep 12 17:10:01.994021 dbus-daemon[1983]: [system] SELinux support is enabled Sep 12 17:10:01.999092 dbus-daemon[1983]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1932 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Sep 12 17:10:02.016412 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Sep 12 17:10:02.016533 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Sep 12 17:10:02.019651 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). 
Sep 12 17:10:02.019705 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Sep 12 17:10:02.040782 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Sep 12 17:10:02.032649 dbus-daemon[1983]: [system] Successfully activated service 'org.freedesktop.systemd1' Sep 12 17:10:02.055363 jq[1994]: true Sep 12 17:10:02.105055 (ntainerd)[2010]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Sep 12 17:10:02.146380 jq[2009]: true Sep 12 17:10:02.155669 extend-filesystems[1985]: Found loop4 Sep 12 17:10:02.155669 extend-filesystems[1985]: Found loop5 Sep 12 17:10:02.155669 extend-filesystems[1985]: Found loop6 Sep 12 17:10:02.155669 extend-filesystems[1985]: Found loop7 Sep 12 17:10:02.155669 extend-filesystems[1985]: Found nvme0n1 Sep 12 17:10:02.155669 extend-filesystems[1985]: Found nvme0n1p1 Sep 12 17:10:02.155669 extend-filesystems[1985]: Found nvme0n1p2 Sep 12 17:10:02.155669 extend-filesystems[1985]: Found nvme0n1p3 Sep 12 17:10:02.155669 extend-filesystems[1985]: Found usr Sep 12 17:10:02.155669 extend-filesystems[1985]: Found nvme0n1p4 Sep 12 17:10:02.155669 extend-filesystems[1985]: Found nvme0n1p6 Sep 12 17:10:02.155669 extend-filesystems[1985]: Found nvme0n1p7 Sep 12 17:10:02.155669 extend-filesystems[1985]: Found nvme0n1p9 Sep 12 17:10:02.155669 extend-filesystems[1985]: Checking size of /dev/nvme0n1p9 Sep 12 17:10:02.237586 tar[1999]: linux-arm64/LICENSE Sep 12 17:10:02.237586 tar[1999]: linux-arm64/helm Sep 12 17:10:02.186843 systemd[1]: motdgen.service: Deactivated successfully. Sep 12 17:10:02.238235 extend-filesystems[1985]: Resized partition /dev/nvme0n1p9 Sep 12 17:10:02.187226 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Sep 12 17:10:02.250771 extend-filesystems[2030]: resize2fs 1.47.1 (20-May-2024) Sep 12 17:10:02.273240 ntpd[1987]: 12 Sep 17:10:02 ntpd[1987]: ntpd 4.2.8p17@1.4004-o Fri Sep 12 15:26:25 UTC 2025 (1): Starting Sep 12 17:10:02.273240 ntpd[1987]: 12 Sep 17:10:02 ntpd[1987]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Sep 12 17:10:02.273240 ntpd[1987]: 12 Sep 17:10:02 ntpd[1987]: ---------------------------------------------------- Sep 12 17:10:02.273240 ntpd[1987]: 12 Sep 17:10:02 ntpd[1987]: ntp-4 is maintained by Network Time Foundation, Sep 12 17:10:02.273240 ntpd[1987]: 12 Sep 17:10:02 ntpd[1987]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Sep 12 17:10:02.273240 ntpd[1987]: 12 Sep 17:10:02 ntpd[1987]: corporation. Support and training for ntp-4 are Sep 12 17:10:02.273240 ntpd[1987]: 12 Sep 17:10:02 ntpd[1987]: available at https://www.nwtime.org/support Sep 12 17:10:02.273240 ntpd[1987]: 12 Sep 17:10:02 ntpd[1987]: ---------------------------------------------------- Sep 12 17:10:02.267932 ntpd[1987]: ntpd 4.2.8p17@1.4004-o Fri Sep 12 15:26:25 UTC 2025 (1): Starting Sep 12 17:10:02.290432 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks Sep 12 17:10:02.267982 ntpd[1987]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Sep 12 17:10:02.268004 ntpd[1987]: ---------------------------------------------------- Sep 12 17:10:02.268023 ntpd[1987]: ntp-4 is maintained by Network Time Foundation, Sep 12 17:10:02.268043 ntpd[1987]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Sep 12 17:10:02.268061 ntpd[1987]: corporation. 
Support and training for ntp-4 are Sep 12 17:10:02.268081 ntpd[1987]: available at https://www.nwtime.org/support Sep 12 17:10:02.268100 ntpd[1987]: ---------------------------------------------------- Sep 12 17:10:02.306411 ntpd[1987]: proto: precision = 0.096 usec (-23) Sep 12 17:10:02.310242 ntpd[1987]: 12 Sep 17:10:02 ntpd[1987]: proto: precision = 0.096 usec (-23) Sep 12 17:10:02.311516 ntpd[1987]: basedate set to 2025-08-31 Sep 12 17:10:02.318254 ntpd[1987]: 12 Sep 17:10:02 ntpd[1987]: basedate set to 2025-08-31 Sep 12 17:10:02.318254 ntpd[1987]: 12 Sep 17:10:02 ntpd[1987]: gps base set to 2025-08-31 (week 2382) Sep 12 17:10:02.311555 ntpd[1987]: gps base set to 2025-08-31 (week 2382) Sep 12 17:10:02.334320 ntpd[1987]: Listen and drop on 0 v6wildcard [::]:123 Sep 12 17:10:02.337186 ntpd[1987]: 12 Sep 17:10:02 ntpd[1987]: Listen and drop on 0 v6wildcard [::]:123 Sep 12 17:10:02.337186 ntpd[1987]: 12 Sep 17:10:02 ntpd[1987]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Sep 12 17:10:02.334437 ntpd[1987]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Sep 12 17:10:02.337742 ntpd[1987]: Listen normally on 2 lo 127.0.0.1:123 Sep 12 17:10:02.353543 ntpd[1987]: 12 Sep 17:10:02 ntpd[1987]: Listen normally on 2 lo 127.0.0.1:123 Sep 12 17:10:02.353543 ntpd[1987]: 12 Sep 17:10:02 ntpd[1987]: Listen normally on 3 eth0 172.31.21.20:123 Sep 12 17:10:02.353543 ntpd[1987]: 12 Sep 17:10:02 ntpd[1987]: Listen normally on 4 lo [::1]:123 Sep 12 17:10:02.353543 ntpd[1987]: 12 Sep 17:10:02 ntpd[1987]: bind(21) AF_INET6 fe80::42a:ccff:fe98:18e7%2#123 flags 0x11 failed: Cannot assign requested address Sep 12 17:10:02.353543 ntpd[1987]: 12 Sep 17:10:02 ntpd[1987]: unable to create socket on eth0 (5) for fe80::42a:ccff:fe98:18e7%2#123 Sep 12 17:10:02.353543 ntpd[1987]: 12 Sep 17:10:02 ntpd[1987]: failed to init interface for address fe80::42a:ccff:fe98:18e7%2 Sep 12 17:10:02.353543 ntpd[1987]: 12 Sep 17:10:02 ntpd[1987]: Listening on routing socket on fd #21 for interface updates Sep 12 17:10:02.345521 ntpd[1987]: Listen normally on 3 eth0 172.31.21.20:123 Sep 12 17:10:02.345592 ntpd[1987]: Listen normally on 4 lo [::1]:123 Sep 12 17:10:02.345706 ntpd[1987]: bind(21) AF_INET6 fe80::42a:ccff:fe98:18e7%2#123 flags 0x11 failed: Cannot assign requested address Sep 12 17:10:02.345747 ntpd[1987]: unable to create socket on eth0 (5) for fe80::42a:ccff:fe98:18e7%2#123 Sep 12 17:10:02.345777 ntpd[1987]: failed to init interface for address fe80::42a:ccff:fe98:18e7%2 Sep 12 17:10:02.345837 ntpd[1987]: Listening on routing socket on fd #21 for interface updates Sep 12 17:10:02.391317 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915 Sep 12 17:10:02.400320 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 36 scanned by (udev-worker) (1788) Sep 12 17:10:02.400469 update_engine[1993]: I20250912 17:10:02.396255 1993 main.cc:92] Flatcar Update Engine starting Sep 12 17:10:02.446632 ntpd[1987]: 12 Sep 17:10:02 ntpd[1987]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Sep 12 17:10:02.446632 ntpd[1987]: 12 Sep 17:10:02 ntpd[1987]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Sep 12 17:10:02.393457 ntpd[1987]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Sep 12 17:10:02.446806 extend-filesystems[2030]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Sep 12 17:10:02.446806 extend-filesystems[2030]: old_desc_blocks = 1, new_desc_blocks = 1 Sep 12 17:10:02.446806 extend-filesystems[2030]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long. 
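
The EXT4 resize reported above is easy to put into bytes: both block counts use the 4 KiB ("4k") block size shown in the resize2fs output, so the root filesystem grows from roughly 2.1 GiB to roughly 5.7 GiB.

    # Convert the block counts from the EXT4 resize above into sizes.
    # Block size is the 4 KiB ("4k") reported by resize2fs.
    BLOCK = 4096
    old_blocks, new_blocks = 553_472, 1_489_915

    for label, blocks in (("before", old_blocks), ("after", new_blocks)):
        print(f"{label}: {blocks} blocks = {blocks * BLOCK / 2**30:.2f} GiB")
    # before: 553472 blocks = 2.11 GiB
    # after: 1489915 blocks = 5.68 GiB
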
Sep 12 17:10:02.407037 systemd[1]: Started update-engine.service - Update Engine. Sep 12 17:10:02.458730 update_engine[1993]: I20250912 17:10:02.410634 1993 update_check_scheduler.cc:74] Next update check in 10m57s Sep 12 17:10:02.393511 ntpd[1987]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Sep 12 17:10:02.458897 extend-filesystems[1985]: Resized filesystem in /dev/nvme0n1p9 Sep 12 17:10:02.475764 systemd[1]: Started locksmithd.service - Cluster reboot manager. Sep 12 17:10:02.482297 systemd[1]: extend-filesystems.service: Deactivated successfully. Sep 12 17:10:02.482795 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Sep 12 17:10:02.490928 systemd[1]: Finished setup-oem.service - Setup OEM. Sep 12 17:10:02.627883 bash[2077]: Updated "/home/core/.ssh/authorized_keys" Sep 12 17:10:02.631246 coreos-metadata[1982]: Sep 12 17:10:02.630 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Sep 12 17:10:02.631246 coreos-metadata[1982]: Sep 12 17:10:02.631 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Sep 12 17:10:02.631246 coreos-metadata[1982]: Sep 12 17:10:02.631 INFO Fetch successful Sep 12 17:10:02.631246 coreos-metadata[1982]: Sep 12 17:10:02.631 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Sep 12 17:10:02.631246 coreos-metadata[1982]: Sep 12 17:10:02.631 INFO Fetch successful Sep 12 17:10:02.631246 coreos-metadata[1982]: Sep 12 17:10:02.631 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Sep 12 17:10:02.631246 coreos-metadata[1982]: Sep 12 17:10:02.631 INFO Fetch successful Sep 12 17:10:02.631246 coreos-metadata[1982]: Sep 12 17:10:02.631 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Sep 12 17:10:02.632133 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Sep 12 17:10:02.647806 systemd[1]: Starting sshkeys.service... 
Sep 12 17:10:02.662062 coreos-metadata[1982]: Sep 12 17:10:02.659 INFO Fetch successful Sep 12 17:10:02.662062 coreos-metadata[1982]: Sep 12 17:10:02.659 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Sep 12 17:10:02.662062 coreos-metadata[1982]: Sep 12 17:10:02.659 INFO Fetch failed with 404: resource not found Sep 12 17:10:02.662062 coreos-metadata[1982]: Sep 12 17:10:02.659 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Sep 12 17:10:02.662062 coreos-metadata[1982]: Sep 12 17:10:02.659 INFO Fetch successful Sep 12 17:10:02.662062 coreos-metadata[1982]: Sep 12 17:10:02.659 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Sep 12 17:10:02.662062 coreos-metadata[1982]: Sep 12 17:10:02.659 INFO Fetch successful Sep 12 17:10:02.662062 coreos-metadata[1982]: Sep 12 17:10:02.659 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Sep 12 17:10:02.662062 coreos-metadata[1982]: Sep 12 17:10:02.659 INFO Fetch successful Sep 12 17:10:02.662062 coreos-metadata[1982]: Sep 12 17:10:02.659 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Sep 12 17:10:02.662062 coreos-metadata[1982]: Sep 12 17:10:02.659 INFO Fetch successful Sep 12 17:10:02.662062 coreos-metadata[1982]: Sep 12 17:10:02.659 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Sep 12 17:10:02.662062 coreos-metadata[1982]: Sep 12 17:10:02.659 INFO Fetch successful Sep 12 17:10:02.729328 systemd-logind[1992]: Watching system buttons on /dev/input/event0 (Power Button) Sep 12 17:10:02.731452 systemd-logind[1992]: Watching system buttons on /dev/input/event1 (Sleep Button) Sep 12 17:10:02.738535 systemd-logind[1992]: New seat seat0. Sep 12 17:10:02.746592 systemd[1]: Started systemd-logind.service - User Login Management. Sep 12 17:10:02.761985 dbus-daemon[1983]: [system] Successfully activated service 'org.freedesktop.hostname1' Sep 12 17:10:02.768925 dbus-daemon[1983]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.5' (uid=0 pid=2004 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Sep 12 17:10:02.790071 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Sep 12 17:10:02.798093 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Sep 12 17:10:02.819858 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Sep 12 17:10:02.824993 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Sep 12 17:10:02.840072 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Sep 12 17:10:02.843095 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Sep 12 17:10:02.850622 systemd[1]: Starting polkit.service - Authorization Manager... 
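
The coreos-metadata entries above follow the IMDSv2 pattern: a PUT to /latest/api/token, then GETs against the metadata tree using the returned token. A minimal sketch of that exchange is below; the two X-aws-ec2-metadata-token header names are the standard AWS IMDSv2 headers rather than something taken from this log, while the endpoint version and metadata path match the URLs coreos-metadata fetches here.

    # Minimal IMDSv2 exchange: PUT for a session token, then GET a metadata path,
    # mirroring the PUT/GET pattern in the coreos-metadata log entries above.
    # Only works from inside an EC2 instance (169.254.169.254 is link-local).
    import urllib.request

    IMDS = "http://169.254.169.254"

    def imds_get(path: str, ttl: int = 60) -> str:
        token_req = urllib.request.Request(
            f"{IMDS}/latest/api/token",
            method="PUT",
            headers={"X-aws-ec2-metadata-token-ttl-seconds": str(ttl)},
        )
        with urllib.request.urlopen(token_req, timeout=2) as resp:
            token = resp.read().decode()
        data_req = urllib.request.Request(
            f"{IMDS}/{path}",
            headers={"X-aws-ec2-metadata-token": token},
        )
        with urllib.request.urlopen(data_req, timeout=2) as resp:
            return resp.read().decode()

    if __name__ == "__main__":
        # Same endpoint version that coreos-metadata queries above.
        print(imds_get("2021-01-03/meta-data/instance-id"))
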
Sep 12 17:10:02.882319 polkitd[2111]: Started polkitd version 121 Sep 12 17:10:02.905971 polkitd[2111]: Loading rules from directory /etc/polkit-1/rules.d Sep 12 17:10:02.906093 polkitd[2111]: Loading rules from directory /usr/share/polkit-1/rules.d Sep 12 17:10:02.906970 polkitd[2111]: Finished loading, compiling and executing 2 rules Sep 12 17:10:02.923586 dbus-daemon[1983]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Sep 12 17:10:02.924026 systemd[1]: Started polkit.service - Authorization Manager. Sep 12 17:10:02.926508 polkitd[2111]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Sep 12 17:10:02.964404 containerd[2010]: time="2025-09-12T17:10:02.964212145Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Sep 12 17:10:02.971601 systemd-hostnamed[2004]: Hostname set to (transient) Sep 12 17:10:02.971872 systemd-resolved[1933]: System hostname changed to 'ip-172-31-21-20'. Sep 12 17:10:03.069444 systemd-networkd[1932]: eth0: Gained IPv6LL Sep 12 17:10:03.086857 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Sep 12 17:10:03.092176 systemd[1]: Reached target network-online.target - Network is Online. Sep 12 17:10:03.140474 coreos-metadata[2108]: Sep 12 17:10:03.138 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Sep 12 17:10:03.161896 coreos-metadata[2108]: Sep 12 17:10:03.141 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Sep 12 17:10:03.161896 coreos-metadata[2108]: Sep 12 17:10:03.142 INFO Fetch successful Sep 12 17:10:03.161896 coreos-metadata[2108]: Sep 12 17:10:03.142 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Sep 12 17:10:03.161896 coreos-metadata[2108]: Sep 12 17:10:03.146 INFO Fetch successful Sep 12 17:10:03.162185 containerd[2010]: time="2025-09-12T17:10:03.146771962Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Sep 12 17:10:03.162185 containerd[2010]: time="2025-09-12T17:10:03.161521786Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.106-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Sep 12 17:10:03.162185 containerd[2010]: time="2025-09-12T17:10:03.161593846Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Sep 12 17:10:03.162185 containerd[2010]: time="2025-09-12T17:10:03.161629594Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Sep 12 17:10:03.162185 containerd[2010]: time="2025-09-12T17:10:03.161989498Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Sep 12 17:10:03.162185 containerd[2010]: time="2025-09-12T17:10:03.162029014Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Sep 12 17:10:03.162185 containerd[2010]: time="2025-09-12T17:10:03.162156046Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." 
error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Sep 12 17:10:03.150157 unknown[2108]: wrote ssh authorized keys file for user: core Sep 12 17:10:03.163017 containerd[2010]: time="2025-09-12T17:10:03.162186586Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Sep 12 17:10:03.151444 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. Sep 12 17:10:03.163723 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 17:10:03.178071 containerd[2010]: time="2025-09-12T17:10:03.163818790Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 12 17:10:03.178071 containerd[2010]: time="2025-09-12T17:10:03.163932658Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Sep 12 17:10:03.178071 containerd[2010]: time="2025-09-12T17:10:03.169070194Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Sep 12 17:10:03.178071 containerd[2010]: time="2025-09-12T17:10:03.169123510Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Sep 12 17:10:03.178071 containerd[2010]: time="2025-09-12T17:10:03.170515966Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Sep 12 17:10:03.178071 containerd[2010]: time="2025-09-12T17:10:03.171025210Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Sep 12 17:10:03.178071 containerd[2010]: time="2025-09-12T17:10:03.174668470Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 12 17:10:03.178071 containerd[2010]: time="2025-09-12T17:10:03.174731614Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Sep 12 17:10:03.178071 containerd[2010]: time="2025-09-12T17:10:03.175003630Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Sep 12 17:10:03.178071 containerd[2010]: time="2025-09-12T17:10:03.175125754Z" level=info msg="metadata content store policy set" policy=shared Sep 12 17:10:03.169519 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Sep 12 17:10:03.193923 containerd[2010]: time="2025-09-12T17:10:03.193836214Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Sep 12 17:10:03.198381 containerd[2010]: time="2025-09-12T17:10:03.194052118Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Sep 12 17:10:03.198381 containerd[2010]: time="2025-09-12T17:10:03.194119102Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Sep 12 17:10:03.198381 containerd[2010]: time="2025-09-12T17:10:03.194160514Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." 
type=io.containerd.streaming.v1 Sep 12 17:10:03.198381 containerd[2010]: time="2025-09-12T17:10:03.194197246Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Sep 12 17:10:03.198381 containerd[2010]: time="2025-09-12T17:10:03.194519170Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Sep 12 17:10:03.214003 containerd[2010]: time="2025-09-12T17:10:03.213370355Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Sep 12 17:10:03.227989 containerd[2010]: time="2025-09-12T17:10:03.225922703Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Sep 12 17:10:03.227989 containerd[2010]: time="2025-09-12T17:10:03.226007603Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Sep 12 17:10:03.227989 containerd[2010]: time="2025-09-12T17:10:03.226056299Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Sep 12 17:10:03.227989 containerd[2010]: time="2025-09-12T17:10:03.226102523Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Sep 12 17:10:03.227989 containerd[2010]: time="2025-09-12T17:10:03.226145975Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Sep 12 17:10:03.227989 containerd[2010]: time="2025-09-12T17:10:03.226187903Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Sep 12 17:10:03.227989 containerd[2010]: time="2025-09-12T17:10:03.226232387Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Sep 12 17:10:03.227989 containerd[2010]: time="2025-09-12T17:10:03.226274699Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Sep 12 17:10:03.227989 containerd[2010]: time="2025-09-12T17:10:03.226321631Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Sep 12 17:10:03.227989 containerd[2010]: time="2025-09-12T17:10:03.226392251Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Sep 12 17:10:03.227989 containerd[2010]: time="2025-09-12T17:10:03.226434227Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Sep 12 17:10:03.227989 containerd[2010]: time="2025-09-12T17:10:03.226491311Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Sep 12 17:10:03.227989 containerd[2010]: time="2025-09-12T17:10:03.226535627Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Sep 12 17:10:03.227989 containerd[2010]: time="2025-09-12T17:10:03.226576979Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Sep 12 17:10:03.228706 containerd[2010]: time="2025-09-12T17:10:03.226624511Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Sep 12 17:10:03.228706 containerd[2010]: time="2025-09-12T17:10:03.226657919Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." 
type=io.containerd.grpc.v1 Sep 12 17:10:03.228706 containerd[2010]: time="2025-09-12T17:10:03.226702895Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Sep 12 17:10:03.228706 containerd[2010]: time="2025-09-12T17:10:03.226757327Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Sep 12 17:10:03.228706 containerd[2010]: time="2025-09-12T17:10:03.226800047Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Sep 12 17:10:03.228706 containerd[2010]: time="2025-09-12T17:10:03.226842335Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Sep 12 17:10:03.228706 containerd[2010]: time="2025-09-12T17:10:03.226890251Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Sep 12 17:10:03.228706 containerd[2010]: time="2025-09-12T17:10:03.226930787Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Sep 12 17:10:03.228706 containerd[2010]: time="2025-09-12T17:10:03.226965059Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Sep 12 17:10:03.228706 containerd[2010]: time="2025-09-12T17:10:03.227006591Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Sep 12 17:10:03.228706 containerd[2010]: time="2025-09-12T17:10:03.227053343Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Sep 12 17:10:03.228706 containerd[2010]: time="2025-09-12T17:10:03.227116763Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Sep 12 17:10:03.228706 containerd[2010]: time="2025-09-12T17:10:03.227158799Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Sep 12 17:10:03.228706 containerd[2010]: time="2025-09-12T17:10:03.227202539Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Sep 12 17:10:03.249378 containerd[2010]: time="2025-09-12T17:10:03.238006235Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Sep 12 17:10:03.252386 containerd[2010]: time="2025-09-12T17:10:03.240216935Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Sep 12 17:10:03.252386 containerd[2010]: time="2025-09-12T17:10:03.250576391Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Sep 12 17:10:03.252386 containerd[2010]: time="2025-09-12T17:10:03.250628543Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Sep 12 17:10:03.252386 containerd[2010]: time="2025-09-12T17:10:03.250657679Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Sep 12 17:10:03.252386 containerd[2010]: time="2025-09-12T17:10:03.250695959Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Sep 12 17:10:03.252386 containerd[2010]: time="2025-09-12T17:10:03.250723631Z" level=info msg="NRI interface is disabled by configuration." 
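[Editor's note] The plugin-loading entries above show containerd probing each snapshotter and skipping the ones whose backing requirements are not met (no aufs module, /var/lib/containerd not on btrfs or zfs, devmapper unconfigured), leaving overlayfs in use. The rough sketch below mirrors only the idea of that probe, using nothing but /proc/self/mounts; it is not containerd's actual check.

import os

def fstype_of(path):
    # Resolve the filesystem type of the mount containing `path` by
    # taking the longest matching mount point from /proc/self/mounts.
    path = os.path.realpath(path)
    best_mnt, best_type = "", None
    with open("/proc/self/mounts") as mounts:
        for line in mounts:
            _dev, mnt, fstype, *_rest = line.split()
            if path == mnt or path.startswith(mnt.rstrip("/") + "/"):
                if len(mnt) > len(best_mnt):
                    best_mnt, best_type = mnt, fstype
    return best_type

if __name__ == "__main__":
    target = "/var/lib/containerd/io.containerd.snapshotter.v1.btrfs"
    # Walk up to the nearest existing ancestor before inspecting mounts.
    probe = target
    while not os.path.exists(probe):
        probe = os.path.dirname(probe)
    fs = fstype_of(probe)
    print(f"{target} sits on {fs!r}; btrfs snapshotter usable: {fs == 'btrfs'}")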
Sep 12 17:10:03.252386 containerd[2010]: time="2025-09-12T17:10:03.250751531Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Sep 12 17:10:03.255387 containerd[2010]: time="2025-09-12T17:10:03.254687663Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Sep 12 17:10:03.260368 containerd[2010]: time="2025-09-12T17:10:03.256803659Z" level=info msg="Connect containerd service" Sep 12 17:10:03.260368 containerd[2010]: time="2025-09-12T17:10:03.258771935Z" level=info msg="using legacy CRI server" Sep 12 17:10:03.260368 containerd[2010]: time="2025-09-12T17:10:03.258843167Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Sep 12 17:10:03.260368 containerd[2010]: time="2025-09-12T17:10:03.259159295Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Sep 12 17:10:03.266797 containerd[2010]: time="2025-09-12T17:10:03.266690843Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" 
error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 12 17:10:03.268992 containerd[2010]: time="2025-09-12T17:10:03.268428035Z" level=info msg="Start subscribing containerd event" Sep 12 17:10:03.268992 containerd[2010]: time="2025-09-12T17:10:03.268560239Z" level=info msg="Start recovering state" Sep 12 17:10:03.268992 containerd[2010]: time="2025-09-12T17:10:03.268705295Z" level=info msg="Start event monitor" Sep 12 17:10:03.268992 containerd[2010]: time="2025-09-12T17:10:03.268730099Z" level=info msg="Start snapshots syncer" Sep 12 17:10:03.268992 containerd[2010]: time="2025-09-12T17:10:03.268755203Z" level=info msg="Start cni network conf syncer for default" Sep 12 17:10:03.268992 containerd[2010]: time="2025-09-12T17:10:03.268776611Z" level=info msg="Start streaming server" Sep 12 17:10:03.280697 containerd[2010]: time="2025-09-12T17:10:03.280613603Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 12 17:10:03.282806 containerd[2010]: time="2025-09-12T17:10:03.282735767Z" level=info msg=serving... address=/run/containerd/containerd.sock Sep 12 17:10:03.290384 update-ssh-keys[2175]: Updated "/home/core/.ssh/authorized_keys" Sep 12 17:10:03.295839 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Sep 12 17:10:03.311458 systemd[1]: Finished sshkeys.service. Sep 12 17:10:03.314887 containerd[2010]: time="2025-09-12T17:10:03.313892915Z" level=info msg="containerd successfully booted in 0.358302s" Sep 12 17:10:03.314726 systemd[1]: Started containerd.service - containerd container runtime. Sep 12 17:10:03.338464 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Sep 12 17:10:03.380974 amazon-ssm-agent[2165]: Initializing new seelog logger Sep 12 17:10:03.380974 amazon-ssm-agent[2165]: New Seelog Logger Creation Complete Sep 12 17:10:03.380974 amazon-ssm-agent[2165]: 2025/09/12 17:10:03 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Sep 12 17:10:03.380974 amazon-ssm-agent[2165]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Sep 12 17:10:03.381685 amazon-ssm-agent[2165]: 2025/09/12 17:10:03 processing appconfig overrides Sep 12 17:10:03.382749 amazon-ssm-agent[2165]: 2025/09/12 17:10:03 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Sep 12 17:10:03.382749 amazon-ssm-agent[2165]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Sep 12 17:10:03.382749 amazon-ssm-agent[2165]: 2025/09/12 17:10:03 processing appconfig overrides Sep 12 17:10:03.382749 amazon-ssm-agent[2165]: 2025/09/12 17:10:03 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Sep 12 17:10:03.382749 amazon-ssm-agent[2165]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Sep 12 17:10:03.382749 amazon-ssm-agent[2165]: 2025/09/12 17:10:03 processing appconfig overrides Sep 12 17:10:03.385688 amazon-ssm-agent[2165]: 2025-09-12 17:10:03 INFO Proxy environment variables: Sep 12 17:10:03.394771 amazon-ssm-agent[2165]: 2025/09/12 17:10:03 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Sep 12 17:10:03.394771 amazon-ssm-agent[2165]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. 
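[Editor's note] The containerd error a few entries above ("no network config found in /etc/cni/net.d") is expected at this point in boot: the CRI plugin starts before any pod-network add-on has installed a CNI conflist. Purely to make the shape of that eventual file concrete, here is a hedged sketch; the network name, subnet and plugin choice are hypothetical, and in practice the add-on (flannel, calico, cilium, ...) writes this file, not an administrator.

import json, pathlib

# Hypothetical minimal bridge + portmap conflist for illustration only.
conflist = {
    "cniVersion": "0.4.0",
    "name": "example-net",
    "plugins": [
        {
            "type": "bridge",
            "bridge": "cni0",
            "isGateway": True,
            "ipMasq": True,
            "ipam": {
                "type": "host-local",
                "subnet": "10.244.0.0/24",
                "routes": [{"dst": "0.0.0.0/0"}],
            },
        },
        {"type": "portmap", "capabilities": {"portMappings": True}},
    ],
}

target = pathlib.Path("/etc/cni/net.d/10-example.conflist")
print(f"would write {target}:")
print(json.dumps(conflist, indent=2))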
Sep 12 17:10:03.394771 amazon-ssm-agent[2165]: 2025/09/12 17:10:03 processing appconfig overrides Sep 12 17:10:03.485691 amazon-ssm-agent[2165]: 2025-09-12 17:10:03 INFO https_proxy: Sep 12 17:10:03.504602 locksmithd[2060]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 12 17:10:03.585247 amazon-ssm-agent[2165]: 2025-09-12 17:10:03 INFO http_proxy: Sep 12 17:10:03.684215 amazon-ssm-agent[2165]: 2025-09-12 17:10:03 INFO no_proxy: Sep 12 17:10:03.791464 amazon-ssm-agent[2165]: 2025-09-12 17:10:03 INFO Checking if agent identity type OnPrem can be assumed Sep 12 17:10:03.894361 amazon-ssm-agent[2165]: 2025-09-12 17:10:03 INFO Checking if agent identity type EC2 can be assumed Sep 12 17:10:03.992516 amazon-ssm-agent[2165]: 2025-09-12 17:10:03 INFO Agent will take identity from EC2 Sep 12 17:10:04.091778 amazon-ssm-agent[2165]: 2025-09-12 17:10:03 INFO [amazon-ssm-agent] using named pipe channel for IPC Sep 12 17:10:04.193748 amazon-ssm-agent[2165]: 2025-09-12 17:10:03 INFO [amazon-ssm-agent] using named pipe channel for IPC Sep 12 17:10:04.293688 amazon-ssm-agent[2165]: 2025-09-12 17:10:03 INFO [amazon-ssm-agent] using named pipe channel for IPC Sep 12 17:10:04.388222 amazon-ssm-agent[2165]: 2025-09-12 17:10:03 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0 Sep 12 17:10:04.388222 amazon-ssm-agent[2165]: 2025-09-12 17:10:03 INFO [amazon-ssm-agent] OS: linux, Arch: arm64 Sep 12 17:10:04.388222 amazon-ssm-agent[2165]: 2025-09-12 17:10:03 INFO [amazon-ssm-agent] Starting Core Agent Sep 12 17:10:04.388912 amazon-ssm-agent[2165]: 2025-09-12 17:10:03 INFO [amazon-ssm-agent] registrar detected. Attempting registration Sep 12 17:10:04.388912 amazon-ssm-agent[2165]: 2025-09-12 17:10:03 INFO [Registrar] Starting registrar module Sep 12 17:10:04.388912 amazon-ssm-agent[2165]: 2025-09-12 17:10:03 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration Sep 12 17:10:04.388912 amazon-ssm-agent[2165]: 2025-09-12 17:10:04 INFO [EC2Identity] EC2 registration was successful. Sep 12 17:10:04.393383 amazon-ssm-agent[2165]: 2025-09-12 17:10:04 INFO [CredentialRefresher] credentialRefresher has started Sep 12 17:10:04.393383 amazon-ssm-agent[2165]: 2025-09-12 17:10:04 INFO [CredentialRefresher] Starting credentials refresher loop Sep 12 17:10:04.393383 amazon-ssm-agent[2165]: 2025-09-12 17:10:04 INFO EC2RoleProvider Successfully connected with instance profile role credentials Sep 12 17:10:04.393383 amazon-ssm-agent[2165]: 2025-09-12 17:10:04 INFO [CredentialRefresher] Next credential rotation will be in 30.608285590533335 minutes Sep 12 17:10:04.430661 tar[1999]: linux-arm64/README.md Sep 12 17:10:04.466827 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Sep 12 17:10:04.797824 sshd_keygen[2019]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 12 17:10:04.842456 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Sep 12 17:10:04.855887 systemd[1]: Starting issuegen.service - Generate /run/issue... Sep 12 17:10:04.860848 systemd[1]: Started sshd@0-172.31.21.20:22-147.75.109.163:34446.service - OpenSSH per-connection server daemon (147.75.109.163:34446). Sep 12 17:10:04.880844 systemd[1]: issuegen.service: Deactivated successfully. Sep 12 17:10:04.881431 systemd[1]: Finished issuegen.service - Generate /run/issue. Sep 12 17:10:04.896595 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... 
Sep 12 17:10:04.932978 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Sep 12 17:10:04.950019 systemd[1]: Started getty@tty1.service - Getty on tty1. Sep 12 17:10:04.957774 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Sep 12 17:10:04.961453 systemd[1]: Reached target getty.target - Login Prompts. Sep 12 17:10:05.078546 sshd[2221]: Accepted publickey for core from 147.75.109.163 port 34446 ssh2: RSA SHA256:MtueCMCElgMFpvQGHABlOh1LdmyEE9d8eacHhUBhK34 Sep 12 17:10:05.082500 sshd[2221]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:10:05.106588 systemd-logind[1992]: New session 1 of user core. Sep 12 17:10:05.108498 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Sep 12 17:10:05.119528 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Sep 12 17:10:05.157149 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Sep 12 17:10:05.169994 systemd[1]: Starting user@500.service - User Manager for UID 500... Sep 12 17:10:05.188128 (systemd)[2232]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 12 17:10:05.278440 ntpd[1987]: Listen normally on 6 eth0 [fe80::42a:ccff:fe98:18e7%2]:123 Sep 12 17:10:05.279096 ntpd[1987]: 12 Sep 17:10:05 ntpd[1987]: Listen normally on 6 eth0 [fe80::42a:ccff:fe98:18e7%2]:123 Sep 12 17:10:05.422587 systemd[2232]: Queued start job for default target default.target. Sep 12 17:10:05.430860 systemd[2232]: Created slice app.slice - User Application Slice. Sep 12 17:10:05.432679 amazon-ssm-agent[2165]: 2025-09-12 17:10:05 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Sep 12 17:10:05.431532 systemd[2232]: Reached target paths.target - Paths. Sep 12 17:10:05.431567 systemd[2232]: Reached target timers.target - Timers. Sep 12 17:10:05.435793 systemd[2232]: Starting dbus.socket - D-Bus User Message Bus Socket... Sep 12 17:10:05.467749 systemd[2232]: Listening on dbus.socket - D-Bus User Message Bus Socket. Sep 12 17:10:05.468015 systemd[2232]: Reached target sockets.target - Sockets. Sep 12 17:10:05.468049 systemd[2232]: Reached target basic.target - Basic System. Sep 12 17:10:05.468160 systemd[2232]: Reached target default.target - Main User Target. Sep 12 17:10:05.468226 systemd[2232]: Startup finished in 267ms. Sep 12 17:10:05.470043 systemd[1]: Started user@500.service - User Manager for UID 500. Sep 12 17:10:05.482739 systemd[1]: Started session-1.scope - Session 1 of User core. Sep 12 17:10:05.533405 amazon-ssm-agent[2165]: 2025-09-12 17:10:05 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2239) started Sep 12 17:10:05.634397 amazon-ssm-agent[2165]: 2025-09-12 17:10:05 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Sep 12 17:10:05.650244 systemd[1]: Started sshd@1-172.31.21.20:22-147.75.109.163:34450.service - OpenSSH per-connection server daemon (147.75.109.163:34450). Sep 12 17:10:05.871771 sshd[2250]: Accepted publickey for core from 147.75.109.163 port 34450 ssh2: RSA SHA256:MtueCMCElgMFpvQGHABlOh1LdmyEE9d8eacHhUBhK34 Sep 12 17:10:05.874546 sshd[2250]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:10:05.884433 systemd-logind[1992]: New session 2 of user core. 
Sep 12 17:10:05.890667 systemd[1]: Started session-2.scope - Session 2 of User core. Sep 12 17:10:06.023279 sshd[2250]: pam_unix(sshd:session): session closed for user core Sep 12 17:10:06.028868 systemd[1]: sshd@1-172.31.21.20:22-147.75.109.163:34450.service: Deactivated successfully. Sep 12 17:10:06.032239 systemd[1]: session-2.scope: Deactivated successfully. Sep 12 17:10:06.035442 systemd-logind[1992]: Session 2 logged out. Waiting for processes to exit. Sep 12 17:10:06.037346 systemd-logind[1992]: Removed session 2. Sep 12 17:10:06.068229 systemd[1]: Started sshd@2-172.31.21.20:22-147.75.109.163:34462.service - OpenSSH per-connection server daemon (147.75.109.163:34462). Sep 12 17:10:06.241585 sshd[2260]: Accepted publickey for core from 147.75.109.163 port 34462 ssh2: RSA SHA256:MtueCMCElgMFpvQGHABlOh1LdmyEE9d8eacHhUBhK34 Sep 12 17:10:06.244179 sshd[2260]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:10:06.251954 systemd-logind[1992]: New session 3 of user core. Sep 12 17:10:06.260604 systemd[1]: Started session-3.scope - Session 3 of User core. Sep 12 17:10:06.393677 sshd[2260]: pam_unix(sshd:session): session closed for user core Sep 12 17:10:06.400276 systemd[1]: sshd@2-172.31.21.20:22-147.75.109.163:34462.service: Deactivated successfully. Sep 12 17:10:06.403855 systemd[1]: session-3.scope: Deactivated successfully. Sep 12 17:10:06.405531 systemd-logind[1992]: Session 3 logged out. Waiting for processes to exit. Sep 12 17:10:06.407775 systemd-logind[1992]: Removed session 3. Sep 12 17:10:06.798467 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 17:10:06.802273 systemd[1]: Reached target multi-user.target - Multi-User System. Sep 12 17:10:06.810049 systemd[1]: Startup finished in 1.204s (kernel) + 9.253s (initrd) + 10.787s (userspace) = 21.245s. Sep 12 17:10:06.825988 (kubelet)[2271]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 12 17:10:08.021573 kubelet[2271]: E0912 17:10:08.021480 2271 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 12 17:10:08.025710 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 12 17:10:08.026067 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 12 17:10:08.026653 systemd[1]: kubelet.service: Consumed 1.403s CPU time. Sep 12 17:10:09.744912 systemd-resolved[1933]: Clock change detected. Flushing caches. Sep 12 17:10:10.455177 systemd[1]: Started sshd@3-172.31.21.20:22-34.224.213.230:19928.service - OpenSSH per-connection server daemon (34.224.213.230:19928). Sep 12 17:10:10.726355 sshd[2283]: Connection closed by 34.224.213.230 port 19928 [preauth] Sep 12 17:10:10.728635 systemd[1]: sshd@3-172.31.21.20:22-34.224.213.230:19928.service: Deactivated successfully. Sep 12 17:10:16.898196 systemd[1]: Started sshd@4-172.31.21.20:22-147.75.109.163:34252.service - OpenSSH per-connection server daemon (147.75.109.163:34252). 
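[Editor's note] The kubelet exit above (repeated on every scheduled restart later in the log) is caused by the missing /var/lib/kubelet/config.yaml; on a kubeadm-style node that file is produced by `kubeadm init`/`kubeadm join` rather than shipped with the OS image. To make the shape of the missing file concrete, a sketch that would write a minimal KubeletConfiguration follows; the field values are illustrative defaults and not taken from this machine.

import pathlib

# Illustrative KubeletConfiguration; a real one is generated by kubeadm
# and carries cluster-specific DNS, certificate and cgroup settings.
KUBELET_CONFIG = """\
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
clusterDNS:
  - 10.96.0.10
clusterDomain: cluster.local
"""

def main(path="/var/lib/kubelet/config.yaml", dry_run=True):
    target = pathlib.Path(path)
    if dry_run:
        print(f"would write {target}:\n{KUBELET_CONFIG}")
    else:
        target.parent.mkdir(parents=True, exist_ok=True)
        target.write_text(KUBELET_CONFIG)

if __name__ == "__main__":
    main()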
Sep 12 17:10:17.073223 sshd[2289]: Accepted publickey for core from 147.75.109.163 port 34252 ssh2: RSA SHA256:MtueCMCElgMFpvQGHABlOh1LdmyEE9d8eacHhUBhK34 Sep 12 17:10:17.076034 sshd[2289]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:10:17.083636 systemd-logind[1992]: New session 4 of user core. Sep 12 17:10:17.092960 systemd[1]: Started session-4.scope - Session 4 of User core. Sep 12 17:10:17.219030 sshd[2289]: pam_unix(sshd:session): session closed for user core Sep 12 17:10:17.225177 systemd-logind[1992]: Session 4 logged out. Waiting for processes to exit. Sep 12 17:10:17.225539 systemd[1]: sshd@4-172.31.21.20:22-147.75.109.163:34252.service: Deactivated successfully. Sep 12 17:10:17.228502 systemd[1]: session-4.scope: Deactivated successfully. Sep 12 17:10:17.232337 systemd-logind[1992]: Removed session 4. Sep 12 17:10:17.255153 systemd[1]: Started sshd@5-172.31.21.20:22-147.75.109.163:34258.service - OpenSSH per-connection server daemon (147.75.109.163:34258). Sep 12 17:10:17.436363 sshd[2296]: Accepted publickey for core from 147.75.109.163 port 34258 ssh2: RSA SHA256:MtueCMCElgMFpvQGHABlOh1LdmyEE9d8eacHhUBhK34 Sep 12 17:10:17.439096 sshd[2296]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:10:17.446891 systemd-logind[1992]: New session 5 of user core. Sep 12 17:10:17.458961 systemd[1]: Started session-5.scope - Session 5 of User core. Sep 12 17:10:17.578342 sshd[2296]: pam_unix(sshd:session): session closed for user core Sep 12 17:10:17.584733 systemd[1]: sshd@5-172.31.21.20:22-147.75.109.163:34258.service: Deactivated successfully. Sep 12 17:10:17.588010 systemd[1]: session-5.scope: Deactivated successfully. Sep 12 17:10:17.589202 systemd-logind[1992]: Session 5 logged out. Waiting for processes to exit. Sep 12 17:10:17.591043 systemd-logind[1992]: Removed session 5. Sep 12 17:10:17.613235 systemd[1]: Started sshd@6-172.31.21.20:22-147.75.109.163:34272.service - OpenSSH per-connection server daemon (147.75.109.163:34272). Sep 12 17:10:17.794627 sshd[2303]: Accepted publickey for core from 147.75.109.163 port 34272 ssh2: RSA SHA256:MtueCMCElgMFpvQGHABlOh1LdmyEE9d8eacHhUBhK34 Sep 12 17:10:17.797301 sshd[2303]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:10:17.805035 systemd-logind[1992]: New session 6 of user core. Sep 12 17:10:17.816939 systemd[1]: Started session-6.scope - Session 6 of User core. Sep 12 17:10:17.944319 sshd[2303]: pam_unix(sshd:session): session closed for user core Sep 12 17:10:17.951272 systemd-logind[1992]: Session 6 logged out. Waiting for processes to exit. Sep 12 17:10:17.952923 systemd[1]: sshd@6-172.31.21.20:22-147.75.109.163:34272.service: Deactivated successfully. Sep 12 17:10:17.956496 systemd[1]: session-6.scope: Deactivated successfully. Sep 12 17:10:17.959053 systemd-logind[1992]: Removed session 6. Sep 12 17:10:17.980023 systemd[1]: Started sshd@7-172.31.21.20:22-147.75.109.163:34278.service - OpenSSH per-connection server daemon (147.75.109.163:34278). Sep 12 17:10:18.163439 sshd[2310]: Accepted publickey for core from 147.75.109.163 port 34278 ssh2: RSA SHA256:MtueCMCElgMFpvQGHABlOh1LdmyEE9d8eacHhUBhK34 Sep 12 17:10:18.166218 sshd[2310]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:10:18.173809 systemd-logind[1992]: New session 7 of user core. Sep 12 17:10:18.181911 systemd[1]: Started session-7.scope - Session 7 of User core. 
Sep 12 17:10:18.336817 sudo[2313]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Sep 12 17:10:18.337477 sudo[2313]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 12 17:10:18.367366 sudo[2313]: pam_unix(sudo:session): session closed for user root Sep 12 17:10:18.390843 sshd[2310]: pam_unix(sshd:session): session closed for user core Sep 12 17:10:18.395864 systemd[1]: sshd@7-172.31.21.20:22-147.75.109.163:34278.service: Deactivated successfully. Sep 12 17:10:18.398590 systemd[1]: session-7.scope: Deactivated successfully. Sep 12 17:10:18.401626 systemd-logind[1992]: Session 7 logged out. Waiting for processes to exit. Sep 12 17:10:18.404300 systemd-logind[1992]: Removed session 7. Sep 12 17:10:18.428197 systemd[1]: Started sshd@8-172.31.21.20:22-147.75.109.163:34284.service - OpenSSH per-connection server daemon (147.75.109.163:34284). Sep 12 17:10:18.527066 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Sep 12 17:10:18.535076 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 17:10:18.610194 sshd[2318]: Accepted publickey for core from 147.75.109.163 port 34284 ssh2: RSA SHA256:MtueCMCElgMFpvQGHABlOh1LdmyEE9d8eacHhUBhK34 Sep 12 17:10:18.613820 sshd[2318]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:10:18.623833 systemd-logind[1992]: New session 8 of user core. Sep 12 17:10:18.636620 systemd[1]: Started session-8.scope - Session 8 of User core. Sep 12 17:10:18.745141 sudo[2325]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Sep 12 17:10:18.746329 sudo[2325]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 12 17:10:18.754333 sudo[2325]: pam_unix(sudo:session): session closed for user root Sep 12 17:10:18.765599 sudo[2324]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Sep 12 17:10:18.766876 sudo[2324]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 12 17:10:18.796901 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Sep 12 17:10:18.802417 auditctl[2328]: No rules Sep 12 17:10:18.804589 systemd[1]: audit-rules.service: Deactivated successfully. Sep 12 17:10:18.806793 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Sep 12 17:10:18.816336 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Sep 12 17:10:18.887266 augenrules[2348]: No rules Sep 12 17:10:18.892191 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Sep 12 17:10:18.896628 sudo[2324]: pam_unix(sudo:session): session closed for user root Sep 12 17:10:18.918986 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 17:10:18.921153 sshd[2318]: pam_unix(sshd:session): session closed for user core Sep 12 17:10:18.928227 (kubelet)[2356]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 12 17:10:18.928234 systemd[1]: sshd@8-172.31.21.20:22-147.75.109.163:34284.service: Deactivated successfully. Sep 12 17:10:18.931638 systemd[1]: session-8.scope: Deactivated successfully. Sep 12 17:10:18.936814 systemd-logind[1992]: Session 8 logged out. Waiting for processes to exit. Sep 12 17:10:18.939595 systemd-logind[1992]: Removed session 8. 
Sep 12 17:10:18.962357 systemd[1]: Started sshd@9-172.31.21.20:22-147.75.109.163:34288.service - OpenSSH per-connection server daemon (147.75.109.163:34288). Sep 12 17:10:19.019447 kubelet[2356]: E0912 17:10:19.019360 2356 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 12 17:10:19.026493 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 12 17:10:19.026856 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 12 17:10:19.144728 sshd[2364]: Accepted publickey for core from 147.75.109.163 port 34288 ssh2: RSA SHA256:MtueCMCElgMFpvQGHABlOh1LdmyEE9d8eacHhUBhK34 Sep 12 17:10:19.146678 sshd[2364]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:10:19.155521 systemd-logind[1992]: New session 9 of user core. Sep 12 17:10:19.164912 systemd[1]: Started session-9.scope - Session 9 of User core. Sep 12 17:10:19.268727 sudo[2369]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 12 17:10:19.269393 sudo[2369]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 12 17:10:19.915267 (dockerd)[2384]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Sep 12 17:10:19.915844 systemd[1]: Starting docker.service - Docker Application Container Engine... Sep 12 17:10:20.444363 dockerd[2384]: time="2025-09-12T17:10:20.444288365Z" level=info msg="Starting up" Sep 12 17:10:20.658618 dockerd[2384]: time="2025-09-12T17:10:20.658556226Z" level=info msg="Loading containers: start." Sep 12 17:10:20.863976 kernel: Initializing XFRM netlink socket Sep 12 17:10:20.933186 (udev-worker)[2407]: Network interface NamePolicy= disabled on kernel command line. Sep 12 17:10:21.027434 systemd-networkd[1932]: docker0: Link UP Sep 12 17:10:21.054084 dockerd[2384]: time="2025-09-12T17:10:21.054013168Z" level=info msg="Loading containers: done." Sep 12 17:10:21.078638 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2954669843-merged.mount: Deactivated successfully. Sep 12 17:10:21.083833 dockerd[2384]: time="2025-09-12T17:10:21.083752084Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Sep 12 17:10:21.084002 dockerd[2384]: time="2025-09-12T17:10:21.083910784Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Sep 12 17:10:21.084179 dockerd[2384]: time="2025-09-12T17:10:21.084117844Z" level=info msg="Daemon has completed initialization" Sep 12 17:10:21.145812 dockerd[2384]: time="2025-09-12T17:10:21.143895196Z" level=info msg="API listen on /run/docker.sock" Sep 12 17:10:21.146084 systemd[1]: Started docker.service - Docker Application Container Engine. Sep 12 17:10:22.564403 containerd[2010]: time="2025-09-12T17:10:22.563952475Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\"" Sep 12 17:10:23.206817 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1337355489.mount: Deactivated successfully. 
Sep 12 17:10:24.651807 containerd[2010]: time="2025-09-12T17:10:24.651746326Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:10:24.655279 containerd[2010]: time="2025-09-12T17:10:24.655233022Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.9: active requests=0, bytes read=26363685" Sep 12 17:10:24.657746 containerd[2010]: time="2025-09-12T17:10:24.656593738Z" level=info msg="ImageCreate event name:\"sha256:02ea53851f07db91ed471dab1ab11541f5c294802371cd8f0cfd423cd5c71002\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:10:24.662056 containerd[2010]: time="2025-09-12T17:10:24.662002618Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:6df11cc2ad9679b1117be34d3a0230add88bc0a08fd7a3ebc26b680575e8de97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:10:24.664515 containerd[2010]: time="2025-09-12T17:10:24.664445806Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.9\" with image id \"sha256:02ea53851f07db91ed471dab1ab11541f5c294802371cd8f0cfd423cd5c71002\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:6df11cc2ad9679b1117be34d3a0230add88bc0a08fd7a3ebc26b680575e8de97\", size \"26360284\" in 2.100414959s" Sep 12 17:10:24.664649 containerd[2010]: time="2025-09-12T17:10:24.664514374Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\" returns image reference \"sha256:02ea53851f07db91ed471dab1ab11541f5c294802371cd8f0cfd423cd5c71002\"" Sep 12 17:10:24.665876 containerd[2010]: time="2025-09-12T17:10:24.665803714Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\"" Sep 12 17:10:26.023179 containerd[2010]: time="2025-09-12T17:10:26.023112320Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:10:26.025285 containerd[2010]: time="2025-09-12T17:10:26.025206920Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.9: active requests=0, bytes read=22531200" Sep 12 17:10:26.026164 containerd[2010]: time="2025-09-12T17:10:26.026079068Z" level=info msg="ImageCreate event name:\"sha256:f0bcbad5082c944520b370596a2384affda710b9d7daf84e8a48352699af8e4b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:10:26.031990 containerd[2010]: time="2025-09-12T17:10:26.031899284Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:243c4b8e3bce271fcb1b78008ab996ab6976b1a20096deac08338fcd17979922\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:10:26.034391 containerd[2010]: time="2025-09-12T17:10:26.034340804Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.9\" with image id \"sha256:f0bcbad5082c944520b370596a2384affda710b9d7daf84e8a48352699af8e4b\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:243c4b8e3bce271fcb1b78008ab996ab6976b1a20096deac08338fcd17979922\", size \"24099975\" in 1.368447906s" Sep 12 17:10:26.034745 containerd[2010]: time="2025-09-12T17:10:26.034545764Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\" returns image reference \"sha256:f0bcbad5082c944520b370596a2384affda710b9d7daf84e8a48352699af8e4b\"" Sep 12 17:10:26.035634 
containerd[2010]: time="2025-09-12T17:10:26.035496464Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\"" Sep 12 17:10:27.211272 containerd[2010]: time="2025-09-12T17:10:27.211188046Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:10:27.215000 containerd[2010]: time="2025-09-12T17:10:27.213551254Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.9: active requests=0, bytes read=17484324" Sep 12 17:10:27.215000 containerd[2010]: time="2025-09-12T17:10:27.213922342Z" level=info msg="ImageCreate event name:\"sha256:1d625baf81b59592006d97a6741bc947698ed222b612ac10efa57b7aa96d2a27\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:10:27.220636 containerd[2010]: time="2025-09-12T17:10:27.220571494Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:50c49520dbd0e8b4076b6a5c77d8014df09ea3d59a73e8bafd2678d51ebb92d5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:10:27.223391 containerd[2010]: time="2025-09-12T17:10:27.223321234Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.9\" with image id \"sha256:1d625baf81b59592006d97a6741bc947698ed222b612ac10efa57b7aa96d2a27\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:50c49520dbd0e8b4076b6a5c77d8014df09ea3d59a73e8bafd2678d51ebb92d5\", size \"19053117\" in 1.187741286s" Sep 12 17:10:27.223610 containerd[2010]: time="2025-09-12T17:10:27.223574650Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\" returns image reference \"sha256:1d625baf81b59592006d97a6741bc947698ed222b612ac10efa57b7aa96d2a27\"" Sep 12 17:10:27.225443 containerd[2010]: time="2025-09-12T17:10:27.225371146Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\"" Sep 12 17:10:28.532917 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount387237191.mount: Deactivated successfully. 
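[Editor's note] Each "Pulled image ... in ..." entry reports both the image size and the wall-clock duration, so effective registry throughput can be read straight off the log; for example the kube-scheduler pull above moved 19,053,117 bytes in 1.187741286 s, roughly 16 MB/s. The small helper below does that arithmetic; the figures are copied from the containerd entries above.

def pull_throughput(size_bytes: int, seconds: float) -> str:
    mb_per_s = size_bytes / seconds / 1_000_000   # decimal megabytes
    mib_per_s = size_bytes / seconds / (1 << 20)  # binary mebibytes
    return f"{mb_per_s:.1f} MB/s ({mib_per_s:.1f} MiB/s)"

# Sizes and durations taken from the log lines above.
pulls = {
    "kube-apiserver:v1.32.9":          (26_360_284, 2.100414959),
    "kube-controller-manager:v1.32.9": (24_099_975, 1.368447906),
    "kube-scheduler:v1.32.9":          (19_053_117, 1.187741286),
}

for image, (size, secs) in pulls.items():
    print(f"{image}: {pull_throughput(size, secs)}")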
Sep 12 17:10:29.161555 containerd[2010]: time="2025-09-12T17:10:29.161493036Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:10:29.163707 containerd[2010]: time="2025-09-12T17:10:29.163486068Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.9: active requests=0, bytes read=27417817" Sep 12 17:10:29.163707 containerd[2010]: time="2025-09-12T17:10:29.163622184Z" level=info msg="ImageCreate event name:\"sha256:72b57ec14d31e8422925ef4c3eff44822cdc04a11fd30d13824f1897d83a16d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:10:29.167541 containerd[2010]: time="2025-09-12T17:10:29.167453604Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:886af02535dc34886e4618b902f8c140d89af57233a245621d29642224516064\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:10:29.169491 containerd[2010]: time="2025-09-12T17:10:29.168944988Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.9\" with image id \"sha256:72b57ec14d31e8422925ef4c3eff44822cdc04a11fd30d13824f1897d83a16d4\", repo tag \"registry.k8s.io/kube-proxy:v1.32.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:886af02535dc34886e4618b902f8c140d89af57233a245621d29642224516064\", size \"27416836\" in 1.943507458s" Sep 12 17:10:29.169491 containerd[2010]: time="2025-09-12T17:10:29.169006800Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\" returns image reference \"sha256:72b57ec14d31e8422925ef4c3eff44822cdc04a11fd30d13824f1897d83a16d4\"" Sep 12 17:10:29.170516 containerd[2010]: time="2025-09-12T17:10:29.170050908Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Sep 12 17:10:29.277156 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Sep 12 17:10:29.289049 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 17:10:29.613963 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 17:10:29.618219 (kubelet)[2607]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 12 17:10:29.699688 kubelet[2607]: E0912 17:10:29.698207 2607 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 12 17:10:29.702274 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 12 17:10:29.702591 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 12 17:10:29.781630 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4185319024.mount: Deactivated successfully. 
Sep 12 17:10:30.966403 containerd[2010]: time="2025-09-12T17:10:30.966340025Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:10:30.968849 containerd[2010]: time="2025-09-12T17:10:30.968778101Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951622" Sep 12 17:10:30.968975 containerd[2010]: time="2025-09-12T17:10:30.968938577Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:10:30.977490 containerd[2010]: time="2025-09-12T17:10:30.976593017Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:10:30.979062 containerd[2010]: time="2025-09-12T17:10:30.978995393Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.808868657s" Sep 12 17:10:30.979195 containerd[2010]: time="2025-09-12T17:10:30.979060853Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" Sep 12 17:10:30.980882 containerd[2010]: time="2025-09-12T17:10:30.980813345Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Sep 12 17:10:31.476089 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount751544547.mount: Deactivated successfully. 
Sep 12 17:10:31.493049 containerd[2010]: time="2025-09-12T17:10:31.492969064Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:10:31.495350 containerd[2010]: time="2025-09-12T17:10:31.494941720Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268703" Sep 12 17:10:31.497670 containerd[2010]: time="2025-09-12T17:10:31.497588368Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:10:31.503768 containerd[2010]: time="2025-09-12T17:10:31.503711920Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:10:31.505626 containerd[2010]: time="2025-09-12T17:10:31.505378816Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 524.278239ms" Sep 12 17:10:31.505626 containerd[2010]: time="2025-09-12T17:10:31.505436548Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Sep 12 17:10:31.506137 containerd[2010]: time="2025-09-12T17:10:31.506008660Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Sep 12 17:10:32.093405 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2546624039.mount: Deactivated successfully. Sep 12 17:10:33.475822 systemd[1]: systemd-hostnamed.service: Deactivated successfully. 
Sep 12 17:10:34.237728 containerd[2010]: time="2025-09-12T17:10:34.236473769Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:10:34.239624 containerd[2010]: time="2025-09-12T17:10:34.239553605Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=67943165" Sep 12 17:10:34.241960 containerd[2010]: time="2025-09-12T17:10:34.241878281Z" level=info msg="ImageCreate event name:\"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:10:34.249935 containerd[2010]: time="2025-09-12T17:10:34.249882017Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:10:34.252220 containerd[2010]: time="2025-09-12T17:10:34.251599745Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"67941650\" in 2.745543973s" Sep 12 17:10:34.252220 containerd[2010]: time="2025-09-12T17:10:34.251698541Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\"" Sep 12 17:10:39.777606 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Sep 12 17:10:39.790832 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 17:10:40.155083 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 17:10:40.165297 (kubelet)[2756]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 12 17:10:40.246698 kubelet[2756]: E0912 17:10:40.244460 2756 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 12 17:10:40.248961 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 12 17:10:40.249285 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 12 17:10:41.392186 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 17:10:41.404162 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 17:10:41.464621 systemd[1]: Reloading requested from client PID 2770 ('systemctl') (unit session-9.scope)... Sep 12 17:10:41.464675 systemd[1]: Reloading... Sep 12 17:10:41.717695 zram_generator::config[2813]: No configuration found. Sep 12 17:10:41.955326 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 12 17:10:42.128275 systemd[1]: Reloading finished in 662 ms. Sep 12 17:10:42.211065 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Sep 12 17:10:42.211231 systemd[1]: kubelet.service: Failed with result 'signal'. 
Sep 12 17:10:42.211759 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 17:10:42.229233 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 17:10:42.546971 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 17:10:42.569221 (kubelet)[2872]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 12 17:10:42.641841 kubelet[2872]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 12 17:10:42.642343 kubelet[2872]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 12 17:10:42.642427 kubelet[2872]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 12 17:10:42.642700 kubelet[2872]: I0912 17:10:42.642626 2872 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 12 17:10:43.816276 kubelet[2872]: I0912 17:10:43.816202 2872 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Sep 12 17:10:43.816276 kubelet[2872]: I0912 17:10:43.816255 2872 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 12 17:10:43.816935 kubelet[2872]: I0912 17:10:43.816743 2872 server.go:954] "Client rotation is on, will bootstrap in background" Sep 12 17:10:43.859921 kubelet[2872]: E0912 17:10:43.859834 2872 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.31.21.20:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.21.20:6443: connect: connection refused" logger="UnhandledError" Sep 12 17:10:43.863299 kubelet[2872]: I0912 17:10:43.863092 2872 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 12 17:10:43.872948 kubelet[2872]: E0912 17:10:43.872900 2872 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 12 17:10:43.873576 kubelet[2872]: I0912 17:10:43.873162 2872 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Sep 12 17:10:43.882089 kubelet[2872]: I0912 17:10:43.882027 2872 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 12 17:10:43.882585 kubelet[2872]: I0912 17:10:43.882517 2872 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 12 17:10:43.883048 kubelet[2872]: I0912 17:10:43.882577 2872 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-21-20","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 12 17:10:43.883221 kubelet[2872]: I0912 17:10:43.883203 2872 topology_manager.go:138] "Creating topology manager with none policy" Sep 12 17:10:43.883273 kubelet[2872]: I0912 17:10:43.883228 2872 container_manager_linux.go:304] "Creating device plugin manager" Sep 12 17:10:43.884016 kubelet[2872]: I0912 17:10:43.883593 2872 state_mem.go:36] "Initialized new in-memory state store" Sep 12 17:10:43.891016 kubelet[2872]: I0912 17:10:43.890421 2872 kubelet.go:446] "Attempting to sync node with API server" Sep 12 17:10:43.891016 kubelet[2872]: I0912 17:10:43.890481 2872 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 12 17:10:43.891016 kubelet[2872]: I0912 17:10:43.890519 2872 kubelet.go:352] "Adding apiserver pod source" Sep 12 17:10:43.891016 kubelet[2872]: I0912 17:10:43.890539 2872 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 12 17:10:43.896301 kubelet[2872]: W0912 17:10:43.896217 2872 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.21.20:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.21.20:6443: connect: connection refused Sep 12 17:10:43.896454 kubelet[2872]: E0912 17:10:43.896326 2872 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.21.20:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.21.20:6443: connect: connection refused" logger="UnhandledError" Sep 12 17:10:43.896516 kubelet[2872]: W0912 
17:10:43.896463 2872 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.21.20:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-21-20&limit=500&resourceVersion=0": dial tcp 172.31.21.20:6443: connect: connection refused Sep 12 17:10:43.896587 kubelet[2872]: E0912 17:10:43.896518 2872 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.21.20:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-21-20&limit=500&resourceVersion=0\": dial tcp 172.31.21.20:6443: connect: connection refused" logger="UnhandledError" Sep 12 17:10:43.898728 kubelet[2872]: I0912 17:10:43.897171 2872 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Sep 12 17:10:43.898728 kubelet[2872]: I0912 17:10:43.898204 2872 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 12 17:10:43.898728 kubelet[2872]: W0912 17:10:43.898431 2872 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Sep 12 17:10:43.900847 kubelet[2872]: I0912 17:10:43.900798 2872 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 12 17:10:43.900985 kubelet[2872]: I0912 17:10:43.900860 2872 server.go:1287] "Started kubelet" Sep 12 17:10:43.911046 kubelet[2872]: E0912 17:10:43.910328 2872 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.21.20:6443/api/v1/namespaces/default/events\": dial tcp 172.31.21.20:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-21-20.1864982614769ae1 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-21-20,UID:ip-172-31-21-20,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-21-20,},FirstTimestamp:2025-09-12 17:10:43.900832481 +0000 UTC m=+1.324544371,LastTimestamp:2025-09-12 17:10:43.900832481 +0000 UTC m=+1.324544371,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-21-20,}" Sep 12 17:10:43.911046 kubelet[2872]: I0912 17:10:43.910986 2872 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 12 17:10:43.917101 kubelet[2872]: E0912 17:10:43.916966 2872 kubelet.go:1555] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 12 17:10:43.920183 kubelet[2872]: I0912 17:10:43.920120 2872 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Sep 12 17:10:43.921690 kubelet[2872]: I0912 17:10:43.921623 2872 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 12 17:10:43.922085 kubelet[2872]: I0912 17:10:43.922060 2872 server.go:479] "Adding debug handlers to kubelet server" Sep 12 17:10:43.922876 kubelet[2872]: E0912 17:10:43.922830 2872 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-21-20\" not found" Sep 12 17:10:43.927022 kubelet[2872]: I0912 17:10:43.926922 2872 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 12 17:10:43.927543 kubelet[2872]: I0912 17:10:43.927518 2872 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 12 17:10:43.928062 kubelet[2872]: I0912 17:10:43.928029 2872 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 12 17:10:43.928988 kubelet[2872]: I0912 17:10:43.928956 2872 factory.go:221] Registration of the systemd container factory successfully Sep 12 17:10:43.929402 kubelet[2872]: I0912 17:10:43.929354 2872 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 12 17:10:43.931170 kubelet[2872]: I0912 17:10:43.931111 2872 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 12 17:10:43.931295 kubelet[2872]: I0912 17:10:43.931228 2872 reconciler.go:26] "Reconciler: start to sync state" Sep 12 17:10:43.933801 kubelet[2872]: I0912 17:10:43.933752 2872 factory.go:221] Registration of the containerd container factory successfully Sep 12 17:10:43.946279 kubelet[2872]: I0912 17:10:43.946082 2872 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 12 17:10:43.946279 kubelet[2872]: E0912 17:10:43.946162 2872 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.21.20:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-21-20?timeout=10s\": dial tcp 172.31.21.20:6443: connect: connection refused" interval="200ms" Sep 12 17:10:43.949725 kubelet[2872]: I0912 17:10:43.949614 2872 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Sep 12 17:10:43.950052 kubelet[2872]: I0912 17:10:43.949927 2872 status_manager.go:227] "Starting to sync pod status with apiserver" Sep 12 17:10:43.950052 kubelet[2872]: I0912 17:10:43.949975 2872 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
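The HardEvictionThresholds embedded in the container-manager config a few entries above reduce to five signal/threshold pairs (memory.available < 100Mi, nodefs.available < 10%, nodefs.inodesFree < 5%, imagefs.available < 15%, imagefs.inodesFree < 5%). A minimal sketch of how such thresholds are compared against observed capacity; the sample observations below are invented for illustration and the logic is a simplification, not the kubelet's eviction manager:

```go
package main

import "fmt"

// threshold mirrors one entry from the HardEvictionThresholds list logged above:
// either an absolute quantity in bytes or a percentage of capacity.
type threshold struct {
	signal   string
	absolute int64   // bytes; 0 means "use percentage"
	percent  float64 // fraction of capacity; used when absolute == 0
}

func breached(t threshold, available, capacity int64) bool {
	limit := t.absolute
	if limit == 0 {
		limit = int64(t.percent * float64(capacity))
	}
	return available < limit
}

func main() {
	thresholds := []threshold{
		{signal: "memory.available", absolute: 100 << 20}, // 100Mi
		{signal: "nodefs.available", percent: 0.10},
		{signal: "imagefs.available", percent: 0.15},
	}

	// Invented sample observations: {available, capacity} in bytes, chosen so nothing is breached,
	// which matches a freshly booted node.
	observations := map[string][2]int64{
		"memory.available":  {512 << 20, 4 << 30},
		"nodefs.available":  {40 << 30, 80 << 30},
		"imagefs.available": {40 << 30, 80 << 30},
	}

	for _, t := range thresholds {
		obs := observations[t.signal]
		fmt.Printf("%-18s breached=%v\n", t.signal, breached(t, obs[0], obs[1]))
	}
}
```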
Sep 12 17:10:43.950052 kubelet[2872]: I0912 17:10:43.949992 2872 kubelet.go:2382] "Starting kubelet main sync loop" Sep 12 17:10:43.950221 kubelet[2872]: E0912 17:10:43.950072 2872 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 12 17:10:43.963305 kubelet[2872]: W0912 17:10:43.963188 2872 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.21.20:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.21.20:6443: connect: connection refused Sep 12 17:10:43.963305 kubelet[2872]: E0912 17:10:43.963295 2872 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.21.20:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.21.20:6443: connect: connection refused" logger="UnhandledError" Sep 12 17:10:43.967877 kubelet[2872]: W0912 17:10:43.967589 2872 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.21.20:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.21.20:6443: connect: connection refused Sep 12 17:10:43.967877 kubelet[2872]: E0912 17:10:43.967882 2872 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.21.20:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.21.20:6443: connect: connection refused" logger="UnhandledError" Sep 12 17:10:43.978775 kubelet[2872]: I0912 17:10:43.978720 2872 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 12 17:10:43.978775 kubelet[2872]: I0912 17:10:43.978757 2872 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 12 17:10:43.979064 kubelet[2872]: I0912 17:10:43.978791 2872 state_mem.go:36] "Initialized new in-memory state store" Sep 12 17:10:43.981508 kubelet[2872]: I0912 17:10:43.981467 2872 policy_none.go:49] "None policy: Start" Sep 12 17:10:43.981508 kubelet[2872]: I0912 17:10:43.981513 2872 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 12 17:10:43.981770 kubelet[2872]: I0912 17:10:43.981539 2872 state_mem.go:35] "Initializing new in-memory state store" Sep 12 17:10:43.998086 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Sep 12 17:10:44.015727 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Sep 12 17:10:44.023682 kubelet[2872]: E0912 17:10:44.023610 2872 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-21-20\" not found" Sep 12 17:10:44.032928 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. 
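Every reflector error above is the same symptom: TCP connections to 172.31.21.20:6443 are refused because the kube-apiserver that should be listening there is itself one of the static pods this kubelet is about to start, so the client-go informers simply keep retrying until it comes up. A small sketch of that wait loop; the address is the one from the logs, while the interval and attempt cap are arbitrary choices for the sketch:

```go
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Address taken from the reflector errors above; nothing listens on it
	// until the kube-apiserver static pod has been started by this same kubelet.
	const addr = "172.31.21.20:6443"

	for attempt := 1; attempt <= 5; attempt++ {
		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
		if err == nil {
			conn.Close()
			fmt.Println("API server is accepting connections")
			return
		}
		// While the apiserver is down this prints "connect: connection refused",
		// matching the reflector log lines above.
		fmt.Printf("attempt %d: %v\n", attempt, err)
		time.Sleep(1 * time.Second)
	}
	fmt.Println("giving up for now; a real informer would keep retrying with backoff")
}
```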
Sep 12 17:10:44.036369 kubelet[2872]: I0912 17:10:44.035569 2872 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 12 17:10:44.036369 kubelet[2872]: I0912 17:10:44.035891 2872 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 12 17:10:44.036369 kubelet[2872]: I0912 17:10:44.035911 2872 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 12 17:10:44.036369 kubelet[2872]: I0912 17:10:44.036250 2872 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 12 17:10:44.039583 kubelet[2872]: E0912 17:10:44.039531 2872 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Sep 12 17:10:44.040014 kubelet[2872]: E0912 17:10:44.039988 2872 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-21-20\" not found" Sep 12 17:10:44.069568 systemd[1]: Created slice kubepods-burstable-podb33c02339eb707108888b4e0a8aceca7.slice - libcontainer container kubepods-burstable-podb33c02339eb707108888b4e0a8aceca7.slice. Sep 12 17:10:44.092776 kubelet[2872]: E0912 17:10:44.092620 2872 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-21-20\" not found" node="ip-172-31-21-20" Sep 12 17:10:44.101675 systemd[1]: Created slice kubepods-burstable-pod496e1dc6fa0400455c213fa68ad3ed1c.slice - libcontainer container kubepods-burstable-pod496e1dc6fa0400455c213fa68ad3ed1c.slice. Sep 12 17:10:44.107732 kubelet[2872]: E0912 17:10:44.107264 2872 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-21-20\" not found" node="ip-172-31-21-20" Sep 12 17:10:44.111639 systemd[1]: Created slice kubepods-burstable-pod1247aaf3e610c3f48883a9ba30ba26a3.slice - libcontainer container kubepods-burstable-pod1247aaf3e610c3f48883a9ba30ba26a3.slice. 
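The per-pod slice names systemd reports here are derived mechanically from the pod's QoS class and UID: with the systemd cgroup driver the kubelet asks for kubepods-<qos>-pod<uid>.slice, escaping any dashes in the UID to underscores (the static-pod UIDs above are dash-free config hashes, so they pass through unchanged). A small sketch of that naming rule as observed here, offered as an illustration rather than the kubelet's own helper:

```go
package main

import (
	"fmt"
	"strings"
)

// sliceName builds the systemd slice name requested for a pod, as seen in the
// "Created slice kubepods-burstable-pod<hash>.slice" entries above. Guaranteed
// pods normally sit directly under kubepods.slice without a QoS segment.
func sliceName(qos, uid string) string {
	// Dashes denote hierarchy in systemd slice names, so the UID's dashes are escaped.
	escaped := strings.ReplaceAll(uid, "-", "_")
	if qos == "guaranteed" {
		return "kubepods-pod" + escaped + ".slice"
	}
	return "kubepods-" + qos + "-pod" + escaped + ".slice"
}

func main() {
	// UIDs copied from the log: static-pod UIDs are config hashes, nothing to escape.
	fmt.Println(sliceName("burstable", "b33c02339eb707108888b4e0a8aceca7"))
	fmt.Println(sliceName("burstable", "1247aaf3e610c3f48883a9ba30ba26a3"))
	// A hypothetical API-created pod with a dashed UID would be escaped:
	fmt.Println(sliceName("besteffort", "0a1b2c3d-4e5f-6789-abcd-ef0123456789"))
}
```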
Sep 12 17:10:44.116089 kubelet[2872]: E0912 17:10:44.116039 2872 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-21-20\" not found" node="ip-172-31-21-20" Sep 12 17:10:44.137994 kubelet[2872]: I0912 17:10:44.137804 2872 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-21-20" Sep 12 17:10:44.138448 kubelet[2872]: E0912 17:10:44.138402 2872 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.21.20:6443/api/v1/nodes\": dial tcp 172.31.21.20:6443: connect: connection refused" node="ip-172-31-21-20" Sep 12 17:10:44.146937 kubelet[2872]: E0912 17:10:44.146891 2872 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.21.20:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-21-20?timeout=10s\": dial tcp 172.31.21.20:6443: connect: connection refused" interval="400ms" Sep 12 17:10:44.232857 kubelet[2872]: I0912 17:10:44.232759 2872 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b33c02339eb707108888b4e0a8aceca7-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-21-20\" (UID: \"b33c02339eb707108888b4e0a8aceca7\") " pod="kube-system/kube-apiserver-ip-172-31-21-20" Sep 12 17:10:44.232857 kubelet[2872]: I0912 17:10:44.232825 2872 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/496e1dc6fa0400455c213fa68ad3ed1c-ca-certs\") pod \"kube-controller-manager-ip-172-31-21-20\" (UID: \"496e1dc6fa0400455c213fa68ad3ed1c\") " pod="kube-system/kube-controller-manager-ip-172-31-21-20" Sep 12 17:10:44.233139 kubelet[2872]: I0912 17:10:44.232863 2872 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/496e1dc6fa0400455c213fa68ad3ed1c-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-21-20\" (UID: \"496e1dc6fa0400455c213fa68ad3ed1c\") " pod="kube-system/kube-controller-manager-ip-172-31-21-20" Sep 12 17:10:44.233139 kubelet[2872]: I0912 17:10:44.232901 2872 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/496e1dc6fa0400455c213fa68ad3ed1c-k8s-certs\") pod \"kube-controller-manager-ip-172-31-21-20\" (UID: \"496e1dc6fa0400455c213fa68ad3ed1c\") " pod="kube-system/kube-controller-manager-ip-172-31-21-20" Sep 12 17:10:44.233139 kubelet[2872]: I0912 17:10:44.232942 2872 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b33c02339eb707108888b4e0a8aceca7-ca-certs\") pod \"kube-apiserver-ip-172-31-21-20\" (UID: \"b33c02339eb707108888b4e0a8aceca7\") " pod="kube-system/kube-apiserver-ip-172-31-21-20" Sep 12 17:10:44.233139 kubelet[2872]: I0912 17:10:44.232979 2872 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b33c02339eb707108888b4e0a8aceca7-k8s-certs\") pod \"kube-apiserver-ip-172-31-21-20\" (UID: \"b33c02339eb707108888b4e0a8aceca7\") " pod="kube-system/kube-apiserver-ip-172-31-21-20" Sep 12 17:10:44.233139 kubelet[2872]: I0912 17:10:44.233018 2872 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/496e1dc6fa0400455c213fa68ad3ed1c-kubeconfig\") pod \"kube-controller-manager-ip-172-31-21-20\" (UID: \"496e1dc6fa0400455c213fa68ad3ed1c\") " pod="kube-system/kube-controller-manager-ip-172-31-21-20" Sep 12 17:10:44.233388 kubelet[2872]: I0912 17:10:44.233054 2872 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/496e1dc6fa0400455c213fa68ad3ed1c-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-21-20\" (UID: \"496e1dc6fa0400455c213fa68ad3ed1c\") " pod="kube-system/kube-controller-manager-ip-172-31-21-20" Sep 12 17:10:44.233388 kubelet[2872]: I0912 17:10:44.233093 2872 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1247aaf3e610c3f48883a9ba30ba26a3-kubeconfig\") pod \"kube-scheduler-ip-172-31-21-20\" (UID: \"1247aaf3e610c3f48883a9ba30ba26a3\") " pod="kube-system/kube-scheduler-ip-172-31-21-20" Sep 12 17:10:44.341386 kubelet[2872]: I0912 17:10:44.340853 2872 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-21-20" Sep 12 17:10:44.341386 kubelet[2872]: E0912 17:10:44.341291 2872 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.21.20:6443/api/v1/nodes\": dial tcp 172.31.21.20:6443: connect: connection refused" node="ip-172-31-21-20" Sep 12 17:10:44.395596 containerd[2010]: time="2025-09-12T17:10:44.395485972Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-21-20,Uid:b33c02339eb707108888b4e0a8aceca7,Namespace:kube-system,Attempt:0,}" Sep 12 17:10:44.408669 containerd[2010]: time="2025-09-12T17:10:44.408593296Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-21-20,Uid:496e1dc6fa0400455c213fa68ad3ed1c,Namespace:kube-system,Attempt:0,}" Sep 12 17:10:44.418647 containerd[2010]: time="2025-09-12T17:10:44.418314232Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-21-20,Uid:1247aaf3e610c3f48883a9ba30ba26a3,Namespace:kube-system,Attempt:0,}" Sep 12 17:10:44.548416 kubelet[2872]: E0912 17:10:44.548283 2872 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.21.20:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-21-20?timeout=10s\": dial tcp 172.31.21.20:6443: connect: connection refused" interval="800ms" Sep 12 17:10:44.743570 kubelet[2872]: I0912 17:10:44.743420 2872 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-21-20" Sep 12 17:10:44.744417 kubelet[2872]: E0912 17:10:44.744222 2872 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.21.20:6443/api/v1/nodes\": dial tcp 172.31.21.20:6443: connect: connection refused" node="ip-172-31-21-20" Sep 12 17:10:44.751962 kubelet[2872]: W0912 17:10:44.751821 2872 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.21.20:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-21-20&limit=500&resourceVersion=0": dial tcp 172.31.21.20:6443: connect: connection refused Sep 12 17:10:44.751962 kubelet[2872]: E0912 17:10:44.751912 2872 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: 
failed to list *v1.Node: Get \"https://172.31.21.20:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-21-20&limit=500&resourceVersion=0\": dial tcp 172.31.21.20:6443: connect: connection refused" logger="UnhandledError" Sep 12 17:10:44.925800 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3789917334.mount: Deactivated successfully. Sep 12 17:10:44.931389 containerd[2010]: time="2025-09-12T17:10:44.931269150Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 12 17:10:44.935711 containerd[2010]: time="2025-09-12T17:10:44.935631738Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173" Sep 12 17:10:44.936609 containerd[2010]: time="2025-09-12T17:10:44.936546690Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 12 17:10:44.937086 kubelet[2872]: W0912 17:10:44.936880 2872 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.21.20:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.21.20:6443: connect: connection refused Sep 12 17:10:44.937593 kubelet[2872]: E0912 17:10:44.937107 2872 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.21.20:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.21.20:6443: connect: connection refused" logger="UnhandledError" Sep 12 17:10:44.940460 containerd[2010]: time="2025-09-12T17:10:44.939718518Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 12 17:10:44.940460 containerd[2010]: time="2025-09-12T17:10:44.940310502Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 12 17:10:44.942015 containerd[2010]: time="2025-09-12T17:10:44.941958522Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 12 17:10:44.942548 containerd[2010]: time="2025-09-12T17:10:44.942479790Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 12 17:10:44.950786 containerd[2010]: time="2025-09-12T17:10:44.950702886Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 12 17:10:44.952986 containerd[2010]: time="2025-09-12T17:10:44.952699338Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 557.015894ms" Sep 12 17:10:44.963113 containerd[2010]: time="2025-09-12T17:10:44.963034302Z" level=info msg="Pulled image 
\"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 554.28341ms" Sep 12 17:10:44.970310 containerd[2010]: time="2025-09-12T17:10:44.969988326Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 551.561906ms" Sep 12 17:10:45.234859 kubelet[2872]: W0912 17:10:45.234760 2872 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.21.20:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.21.20:6443: connect: connection refused Sep 12 17:10:45.235055 kubelet[2872]: E0912 17:10:45.234867 2872 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.21.20:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.21.20:6443: connect: connection refused" logger="UnhandledError" Sep 12 17:10:45.255937 containerd[2010]: time="2025-09-12T17:10:45.253543276Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:10:45.255937 containerd[2010]: time="2025-09-12T17:10:45.253641004Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:10:45.255937 containerd[2010]: time="2025-09-12T17:10:45.253731592Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:10:45.255937 containerd[2010]: time="2025-09-12T17:10:45.253920328Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:10:45.263276 kubelet[2872]: W0912 17:10:45.263170 2872 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.21.20:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.21.20:6443: connect: connection refused Sep 12 17:10:45.263408 kubelet[2872]: E0912 17:10:45.263278 2872 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.21.20:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.21.20:6443: connect: connection refused" logger="UnhandledError" Sep 12 17:10:45.269478 containerd[2010]: time="2025-09-12T17:10:45.268170544Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:10:45.269478 containerd[2010]: time="2025-09-12T17:10:45.268285384Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:10:45.269478 containerd[2010]: time="2025-09-12T17:10:45.268900972Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:10:45.270834 containerd[2010]: time="2025-09-12T17:10:45.270630808Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:10:45.274903 containerd[2010]: time="2025-09-12T17:10:45.274594492Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:10:45.274903 containerd[2010]: time="2025-09-12T17:10:45.274766524Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:10:45.274903 containerd[2010]: time="2025-09-12T17:10:45.274835932Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:10:45.275675 containerd[2010]: time="2025-09-12T17:10:45.275424640Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:10:45.319011 systemd[1]: Started cri-containerd-69b9811643ad96ca5bfccddb8c90334b62f802973bc14754e7bdfc9318f865b1.scope - libcontainer container 69b9811643ad96ca5bfccddb8c90334b62f802973bc14754e7bdfc9318f865b1. Sep 12 17:10:45.334472 systemd[1]: Started cri-containerd-f6a5d068996ef51677e8fd35df3f62efae49be01d15332c39987e9e17acf16c9.scope - libcontainer container f6a5d068996ef51677e8fd35df3f62efae49be01d15332c39987e9e17acf16c9. Sep 12 17:10:45.350967 kubelet[2872]: E0912 17:10:45.350379 2872 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.21.20:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-21-20?timeout=10s\": dial tcp 172.31.21.20:6443: connect: connection refused" interval="1.6s" Sep 12 17:10:45.352004 systemd[1]: Started cri-containerd-85e2826a43a2c2a7fe0078ee5ce668dc5ac491c233990aaddad74bf4da51bc13.scope - libcontainer container 85e2826a43a2c2a7fe0078ee5ce668dc5ac491c233990aaddad74bf4da51bc13. 
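The "Failed to ensure lease exists, will retry" interval has now walked from 200ms through 400ms and 800ms up to 1.6s, i.e. a plain doubling backoff while the API server stays unreachable. A minimal reproduction of that schedule; the 7s ceiling used here is an assumption made for the sketch, not a value taken from the kubelet:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// The starting interval matches the first "interval=200ms" retry logged above;
	// the ceiling is an assumption for this sketch.
	interval := 200 * time.Millisecond
	const maxInterval = 7 * time.Second

	for i := 0; i < 6; i++ {
		fmt.Printf("retry %d after %s\n", i+1, interval)
		interval *= 2
		if interval > maxInterval {
			interval = maxInterval
		}
	}
}
```

Running this prints 200ms, 400ms, 800ms, 1.6s, 3.2s, 6.4s, i.e. the same progression visible in the lease-controller retries above.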
Sep 12 17:10:45.446700 containerd[2010]: time="2025-09-12T17:10:45.446043077Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-21-20,Uid:b33c02339eb707108888b4e0a8aceca7,Namespace:kube-system,Attempt:0,} returns sandbox id \"69b9811643ad96ca5bfccddb8c90334b62f802973bc14754e7bdfc9318f865b1\"" Sep 12 17:10:45.464933 containerd[2010]: time="2025-09-12T17:10:45.464859425Z" level=info msg="CreateContainer within sandbox \"69b9811643ad96ca5bfccddb8c90334b62f802973bc14754e7bdfc9318f865b1\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 12 17:10:45.476518 containerd[2010]: time="2025-09-12T17:10:45.476448593Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-21-20,Uid:1247aaf3e610c3f48883a9ba30ba26a3,Namespace:kube-system,Attempt:0,} returns sandbox id \"85e2826a43a2c2a7fe0078ee5ce668dc5ac491c233990aaddad74bf4da51bc13\"" Sep 12 17:10:45.484850 containerd[2010]: time="2025-09-12T17:10:45.484797929Z" level=info msg="CreateContainer within sandbox \"85e2826a43a2c2a7fe0078ee5ce668dc5ac491c233990aaddad74bf4da51bc13\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 12 17:10:45.486411 containerd[2010]: time="2025-09-12T17:10:45.485330921Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-21-20,Uid:496e1dc6fa0400455c213fa68ad3ed1c,Namespace:kube-system,Attempt:0,} returns sandbox id \"f6a5d068996ef51677e8fd35df3f62efae49be01d15332c39987e9e17acf16c9\"" Sep 12 17:10:45.493576 containerd[2010]: time="2025-09-12T17:10:45.493243961Z" level=info msg="CreateContainer within sandbox \"f6a5d068996ef51677e8fd35df3f62efae49be01d15332c39987e9e17acf16c9\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 12 17:10:45.495907 containerd[2010]: time="2025-09-12T17:10:45.495825473Z" level=info msg="CreateContainer within sandbox \"69b9811643ad96ca5bfccddb8c90334b62f802973bc14754e7bdfc9318f865b1\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"16aed41bcb35e348b465774e93750d9be4f623966071631cc1cd02378fbd4d43\"" Sep 12 17:10:45.498694 containerd[2010]: time="2025-09-12T17:10:45.496802273Z" level=info msg="StartContainer for \"16aed41bcb35e348b465774e93750d9be4f623966071631cc1cd02378fbd4d43\"" Sep 12 17:10:45.520634 containerd[2010]: time="2025-09-12T17:10:45.520573865Z" level=info msg="CreateContainer within sandbox \"85e2826a43a2c2a7fe0078ee5ce668dc5ac491c233990aaddad74bf4da51bc13\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"818cf7bfdc7a5d692a8490707713474584465235a177f1e7befbaadb5fc22233\"" Sep 12 17:10:45.521700 containerd[2010]: time="2025-09-12T17:10:45.521623313Z" level=info msg="StartContainer for \"818cf7bfdc7a5d692a8490707713474584465235a177f1e7befbaadb5fc22233\"" Sep 12 17:10:45.525843 containerd[2010]: time="2025-09-12T17:10:45.525785105Z" level=info msg="CreateContainer within sandbox \"f6a5d068996ef51677e8fd35df3f62efae49be01d15332c39987e9e17acf16c9\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"6251e6d418c9087110b91364fe42df4e37fbc780afa7304ef0315b109f4e0804\"" Sep 12 17:10:45.527046 containerd[2010]: time="2025-09-12T17:10:45.527000897Z" level=info msg="StartContainer for \"6251e6d418c9087110b91364fe42df4e37fbc780afa7304ef0315b109f4e0804\"" Sep 12 17:10:45.547839 kubelet[2872]: I0912 17:10:45.547780 2872 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-21-20" Sep 12 17:10:45.548394 kubelet[2872]: 
E0912 17:10:45.548334 2872 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.21.20:6443/api/v1/nodes\": dial tcp 172.31.21.20:6443: connect: connection refused" node="ip-172-31-21-20" Sep 12 17:10:45.564576 systemd[1]: Started cri-containerd-16aed41bcb35e348b465774e93750d9be4f623966071631cc1cd02378fbd4d43.scope - libcontainer container 16aed41bcb35e348b465774e93750d9be4f623966071631cc1cd02378fbd4d43. Sep 12 17:10:45.618374 systemd[1]: Started cri-containerd-818cf7bfdc7a5d692a8490707713474584465235a177f1e7befbaadb5fc22233.scope - libcontainer container 818cf7bfdc7a5d692a8490707713474584465235a177f1e7befbaadb5fc22233. Sep 12 17:10:45.639096 systemd[1]: Started cri-containerd-6251e6d418c9087110b91364fe42df4e37fbc780afa7304ef0315b109f4e0804.scope - libcontainer container 6251e6d418c9087110b91364fe42df4e37fbc780afa7304ef0315b109f4e0804. Sep 12 17:10:45.697602 containerd[2010]: time="2025-09-12T17:10:45.697524798Z" level=info msg="StartContainer for \"16aed41bcb35e348b465774e93750d9be4f623966071631cc1cd02378fbd4d43\" returns successfully" Sep 12 17:10:45.741746 containerd[2010]: time="2025-09-12T17:10:45.741174450Z" level=info msg="StartContainer for \"6251e6d418c9087110b91364fe42df4e37fbc780afa7304ef0315b109f4e0804\" returns successfully" Sep 12 17:10:45.802734 containerd[2010]: time="2025-09-12T17:10:45.802526431Z" level=info msg="StartContainer for \"818cf7bfdc7a5d692a8490707713474584465235a177f1e7befbaadb5fc22233\" returns successfully" Sep 12 17:10:45.877704 kubelet[2872]: E0912 17:10:45.877605 2872 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.31.21.20:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.21.20:6443: connect: connection refused" logger="UnhandledError" Sep 12 17:10:45.989153 kubelet[2872]: E0912 17:10:45.988865 2872 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-21-20\" not found" node="ip-172-31-21-20" Sep 12 17:10:45.994409 kubelet[2872]: E0912 17:10:45.994088 2872 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-21-20\" not found" node="ip-172-31-21-20" Sep 12 17:10:46.001686 kubelet[2872]: E0912 17:10:46.001165 2872 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-21-20\" not found" node="ip-172-31-21-20" Sep 12 17:10:47.004484 kubelet[2872]: E0912 17:10:47.003592 2872 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-21-20\" not found" node="ip-172-31-21-20" Sep 12 17:10:47.004484 kubelet[2872]: E0912 17:10:47.004172 2872 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-21-20\" not found" node="ip-172-31-21-20" Sep 12 17:10:47.151244 kubelet[2872]: I0912 17:10:47.151207 2872 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-21-20" Sep 12 17:10:48.007867 kubelet[2872]: E0912 17:10:48.007492 2872 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-21-20\" not found" node="ip-172-31-21-20" Sep 12 17:10:48.035699 update_engine[1993]: I20250912 17:10:48.033697 1993 
update_attempter.cc:509] Updating boot flags... Sep 12 17:10:48.183700 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 36 scanned by (udev-worker) (3162) Sep 12 17:10:48.587142 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 36 scanned by (udev-worker) (3163) Sep 12 17:10:49.781510 kubelet[2872]: E0912 17:10:49.781244 2872 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-21-20\" not found" node="ip-172-31-21-20" Sep 12 17:10:50.459433 kubelet[2872]: E0912 17:10:50.459360 2872 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-21-20\" not found" node="ip-172-31-21-20" Sep 12 17:10:50.613906 kubelet[2872]: E0912 17:10:50.612413 2872 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ip-172-31-21-20.1864982614769ae1 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-21-20,UID:ip-172-31-21-20,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-21-20,},FirstTimestamp:2025-09-12 17:10:43.900832481 +0000 UTC m=+1.324544371,LastTimestamp:2025-09-12 17:10:43.900832481 +0000 UTC m=+1.324544371,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-21-20,}" Sep 12 17:10:50.666620 kubelet[2872]: I0912 17:10:50.666267 2872 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-21-20" Sep 12 17:10:50.724640 kubelet[2872]: I0912 17:10:50.724502 2872 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-21-20" Sep 12 17:10:50.760616 kubelet[2872]: E0912 17:10:50.760330 2872 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ip-172-31-21-20.18649826156c7069 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-21-20,UID:ip-172-31-21-20,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:ip-172-31-21-20,},FirstTimestamp:2025-09-12 17:10:43.916943465 +0000 UTC m=+1.340655355,LastTimestamp:2025-09-12 17:10:43.916943465 +0000 UTC m=+1.340655355,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-21-20,}" Sep 12 17:10:50.781252 kubelet[2872]: E0912 17:10:50.780966 2872 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ip-172-31-21-20\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ip-172-31-21-20" Sep 12 17:10:50.781252 kubelet[2872]: I0912 17:10:50.781020 2872 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-21-20" Sep 12 17:10:50.786697 kubelet[2872]: E0912 17:10:50.786344 2872 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-21-20\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ip-172-31-21-20" Sep 12 17:10:50.786697 kubelet[2872]: I0912 17:10:50.786392 2872 
kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-21-20" Sep 12 17:10:50.791780 kubelet[2872]: E0912 17:10:50.791726 2872 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-21-20\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ip-172-31-21-20" Sep 12 17:10:50.901028 kubelet[2872]: I0912 17:10:50.900710 2872 apiserver.go:52] "Watching apiserver" Sep 12 17:10:50.931738 kubelet[2872]: I0912 17:10:50.931677 2872 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 12 17:10:52.548389 systemd[1]: Reloading requested from client PID 3331 ('systemctl') (unit session-9.scope)... Sep 12 17:10:52.548923 systemd[1]: Reloading... Sep 12 17:10:52.626994 kubelet[2872]: I0912 17:10:52.626946 2872 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-21-20" Sep 12 17:10:52.728708 zram_generator::config[3371]: No configuration found. Sep 12 17:10:52.962361 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 12 17:10:53.168097 systemd[1]: Reloading finished in 618 ms. Sep 12 17:10:53.242344 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 17:10:53.259409 systemd[1]: kubelet.service: Deactivated successfully. Sep 12 17:10:53.259904 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 17:10:53.260014 systemd[1]: kubelet.service: Consumed 2.113s CPU time, 129.4M memory peak, 0B memory swap peak. Sep 12 17:10:53.269222 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 17:10:53.615955 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 17:10:53.616266 (kubelet)[3431]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 12 17:10:53.727458 kubelet[3431]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 12 17:10:53.728479 kubelet[3431]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 12 17:10:53.728617 kubelet[3431]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
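The event bodies replayed above print timestamps such as "2025-09-12 17:10:43.900832481 +0000 UTC m=+1.324544371"; the "m=+…" suffix is Go's formatting of a time.Time that still carries a monotonic clock reading, and in practice it reads as seconds since the process started, so that event was recorded about 1.3s after the kubelet came up. A small demonstration of where the suffix comes from:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// time.Now() carries a monotonic clock reading; printing it appends "m=+<seconds>",
	// the same suffix visible in the kubelet event dumps above.
	time.Sleep(1300 * time.Millisecond)
	now := time.Now()
	fmt.Println(now) // wall-clock time followed by something like " m=+1.300xxxxxx"

	// Round(0) strips the monotonic reading, so the suffix disappears.
	fmt.Println(now.Round(0))
}
```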
Sep 12 17:10:53.729365 kubelet[3431]: I0912 17:10:53.728877 3431 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 12 17:10:53.765111 kubelet[3431]: I0912 17:10:53.765047 3431 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Sep 12 17:10:53.765111 kubelet[3431]: I0912 17:10:53.765097 3431 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 12 17:10:53.765736 kubelet[3431]: I0912 17:10:53.765620 3431 server.go:954] "Client rotation is on, will bootstrap in background" Sep 12 17:10:53.768505 kubelet[3431]: I0912 17:10:53.768451 3431 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Sep 12 17:10:53.774435 kubelet[3431]: I0912 17:10:53.773649 3431 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 12 17:10:53.782134 kubelet[3431]: E0912 17:10:53.782057 3431 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 12 17:10:53.782134 kubelet[3431]: I0912 17:10:53.782128 3431 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Sep 12 17:10:53.783364 sudo[3445]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Sep 12 17:10:53.784342 sudo[3445]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Sep 12 17:10:53.792112 kubelet[3431]: I0912 17:10:53.792062 3431 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 12 17:10:53.793077 kubelet[3431]: I0912 17:10:53.792520 3431 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 12 17:10:53.793077 kubelet[3431]: I0912 17:10:53.792581 3431 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-21-20","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 12 17:10:53.793077 kubelet[3431]: I0912 17:10:53.792957 3431 topology_manager.go:138] "Creating topology manager with none policy" Sep 12 17:10:53.793077 kubelet[3431]: I0912 17:10:53.792979 3431 container_manager_linux.go:304] "Creating device plugin manager" Sep 12 17:10:53.794033 kubelet[3431]: I0912 17:10:53.793065 3431 state_mem.go:36] "Initialized new in-memory state store" Sep 12 17:10:53.794033 kubelet[3431]: I0912 17:10:53.793296 3431 kubelet.go:446] "Attempting to sync node with API server" Sep 12 17:10:53.794033 kubelet[3431]: I0912 17:10:53.793318 3431 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 12 17:10:53.794033 kubelet[3431]: I0912 17:10:53.793367 3431 kubelet.go:352] "Adding apiserver pod source" Sep 12 17:10:53.794033 kubelet[3431]: I0912 17:10:53.793390 3431 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 12 17:10:53.801485 kubelet[3431]: I0912 17:10:53.799942 3431 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Sep 12 17:10:53.801485 kubelet[3431]: I0912 17:10:53.800719 3431 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 12 17:10:53.801485 kubelet[3431]: I0912 17:10:53.801472 3431 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 12 17:10:53.801774 kubelet[3431]: I0912 17:10:53.801520 3431 server.go:1287] "Started kubelet" Sep 12 17:10:53.813424 kubelet[3431]: I0912 17:10:53.813344 3431 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Sep 12 17:10:53.817229 kubelet[3431]: I0912 17:10:53.817174 3431 
fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 12 17:10:53.819356 kubelet[3431]: I0912 17:10:53.819298 3431 server.go:479] "Adding debug handlers to kubelet server" Sep 12 17:10:53.830709 kubelet[3431]: I0912 17:10:53.829580 3431 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 12 17:10:53.836677 kubelet[3431]: I0912 17:10:53.832568 3431 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 12 17:10:53.836677 kubelet[3431]: I0912 17:10:53.832550 3431 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 12 17:10:53.836677 kubelet[3431]: I0912 17:10:53.832961 3431 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 12 17:10:53.836677 kubelet[3431]: E0912 17:10:53.832976 3431 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-21-20\" not found" Sep 12 17:10:53.836677 kubelet[3431]: I0912 17:10:53.833810 3431 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 12 17:10:53.836677 kubelet[3431]: I0912 17:10:53.834041 3431 reconciler.go:26] "Reconciler: start to sync state" Sep 12 17:10:53.881803 kubelet[3431]: I0912 17:10:53.881102 3431 factory.go:221] Registration of the systemd container factory successfully Sep 12 17:10:53.883824 kubelet[3431]: I0912 17:10:53.883783 3431 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 12 17:10:53.920123 kubelet[3431]: I0912 17:10:53.920077 3431 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 12 17:10:53.926470 kubelet[3431]: I0912 17:10:53.926425 3431 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Sep 12 17:10:53.929522 kubelet[3431]: I0912 17:10:53.929480 3431 status_manager.go:227] "Starting to sync pod status with apiserver" Sep 12 17:10:53.929751 kubelet[3431]: I0912 17:10:53.929728 3431 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
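As on the first start, cAdvisor registers a containerd factory but not a crio one: the check amounts to reaching each runtime's well-known unix socket, and /var/run/crio/crio.sock does not exist on this host while containerd's does. A rough sketch of that distinction; the crio path is the one named in the log line, the containerd path is the usual default and an assumption here, and this is not cAdvisor's actual probing code:

```go
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// crio path as it appears in the factory-registration log line above;
	// containerd path is the conventional default (assumption for this sketch).
	sockets := map[string]string{
		"crio":       "/var/run/crio/crio.sock",
		"containerd": "/run/containerd/containerd.sock",
	}

	for name, path := range sockets {
		conn, err := net.DialTimeout("unix", path, time.Second)
		if err != nil {
			// With no CRI-O installed this reports "no such file or directory",
			// which is what the "Registration of the crio container factory failed" line shows.
			fmt.Printf("%s: not available (%v)\n", name, err)
			continue
		}
		conn.Close()
		fmt.Printf("%s: socket reachable\n", name)
	}
}
```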
Sep 12 17:10:53.929886 kubelet[3431]: I0912 17:10:53.929866 3431 kubelet.go:2382] "Starting kubelet main sync loop" Sep 12 17:10:53.931180 kubelet[3431]: E0912 17:10:53.930055 3431 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 12 17:10:53.931633 kubelet[3431]: I0912 17:10:53.931585 3431 factory.go:221] Registration of the containerd container factory successfully Sep 12 17:10:54.031614 kubelet[3431]: E0912 17:10:54.030126 3431 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 12 17:10:54.086462 kubelet[3431]: I0912 17:10:54.085115 3431 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 12 17:10:54.086462 kubelet[3431]: I0912 17:10:54.085155 3431 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 12 17:10:54.086462 kubelet[3431]: I0912 17:10:54.085191 3431 state_mem.go:36] "Initialized new in-memory state store" Sep 12 17:10:54.086462 kubelet[3431]: I0912 17:10:54.085456 3431 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 12 17:10:54.086462 kubelet[3431]: I0912 17:10:54.085476 3431 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 12 17:10:54.086462 kubelet[3431]: I0912 17:10:54.085523 3431 policy_none.go:49] "None policy: Start" Sep 12 17:10:54.086462 kubelet[3431]: I0912 17:10:54.085540 3431 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 12 17:10:54.086462 kubelet[3431]: I0912 17:10:54.085561 3431 state_mem.go:35] "Initializing new in-memory state store" Sep 12 17:10:54.086462 kubelet[3431]: I0912 17:10:54.086090 3431 state_mem.go:75] "Updated machine memory state" Sep 12 17:10:54.099571 kubelet[3431]: I0912 17:10:54.099519 3431 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 12 17:10:54.104288 kubelet[3431]: I0912 17:10:54.104242 3431 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 12 17:10:54.104405 kubelet[3431]: I0912 17:10:54.104278 3431 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 12 17:10:54.104946 kubelet[3431]: I0912 17:10:54.104757 3431 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 12 17:10:54.116681 kubelet[3431]: E0912 17:10:54.116504 3431 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Sep 12 17:10:54.228306 kubelet[3431]: I0912 17:10:54.228150 3431 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-21-20" Sep 12 17:10:54.231429 kubelet[3431]: I0912 17:10:54.231371 3431 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-21-20" Sep 12 17:10:54.232806 kubelet[3431]: I0912 17:10:54.232759 3431 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-21-20" Sep 12 17:10:54.233797 kubelet[3431]: I0912 17:10:54.233339 3431 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-21-20" Sep 12 17:10:54.238711 kubelet[3431]: I0912 17:10:54.236907 3431 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b33c02339eb707108888b4e0a8aceca7-ca-certs\") pod \"kube-apiserver-ip-172-31-21-20\" (UID: \"b33c02339eb707108888b4e0a8aceca7\") " pod="kube-system/kube-apiserver-ip-172-31-21-20" Sep 12 17:10:54.238711 kubelet[3431]: I0912 17:10:54.236970 3431 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b33c02339eb707108888b4e0a8aceca7-k8s-certs\") pod \"kube-apiserver-ip-172-31-21-20\" (UID: \"b33c02339eb707108888b4e0a8aceca7\") " pod="kube-system/kube-apiserver-ip-172-31-21-20" Sep 12 17:10:54.238711 kubelet[3431]: I0912 17:10:54.237014 3431 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/496e1dc6fa0400455c213fa68ad3ed1c-k8s-certs\") pod \"kube-controller-manager-ip-172-31-21-20\" (UID: \"496e1dc6fa0400455c213fa68ad3ed1c\") " pod="kube-system/kube-controller-manager-ip-172-31-21-20" Sep 12 17:10:54.238711 kubelet[3431]: I0912 17:10:54.237050 3431 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/496e1dc6fa0400455c213fa68ad3ed1c-kubeconfig\") pod \"kube-controller-manager-ip-172-31-21-20\" (UID: \"496e1dc6fa0400455c213fa68ad3ed1c\") " pod="kube-system/kube-controller-manager-ip-172-31-21-20" Sep 12 17:10:54.238711 kubelet[3431]: I0912 17:10:54.237091 3431 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/496e1dc6fa0400455c213fa68ad3ed1c-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-21-20\" (UID: \"496e1dc6fa0400455c213fa68ad3ed1c\") " pod="kube-system/kube-controller-manager-ip-172-31-21-20" Sep 12 17:10:54.239106 kubelet[3431]: I0912 17:10:54.237133 3431 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b33c02339eb707108888b4e0a8aceca7-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-21-20\" (UID: \"b33c02339eb707108888b4e0a8aceca7\") " pod="kube-system/kube-apiserver-ip-172-31-21-20" Sep 12 17:10:54.239106 kubelet[3431]: I0912 17:10:54.237169 3431 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/496e1dc6fa0400455c213fa68ad3ed1c-ca-certs\") pod \"kube-controller-manager-ip-172-31-21-20\" (UID: \"496e1dc6fa0400455c213fa68ad3ed1c\") " 
pod="kube-system/kube-controller-manager-ip-172-31-21-20" Sep 12 17:10:54.239106 kubelet[3431]: I0912 17:10:54.237204 3431 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/496e1dc6fa0400455c213fa68ad3ed1c-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-21-20\" (UID: \"496e1dc6fa0400455c213fa68ad3ed1c\") " pod="kube-system/kube-controller-manager-ip-172-31-21-20" Sep 12 17:10:54.239106 kubelet[3431]: I0912 17:10:54.237239 3431 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1247aaf3e610c3f48883a9ba30ba26a3-kubeconfig\") pod \"kube-scheduler-ip-172-31-21-20\" (UID: \"1247aaf3e610c3f48883a9ba30ba26a3\") " pod="kube-system/kube-scheduler-ip-172-31-21-20" Sep 12 17:10:54.254879 kubelet[3431]: I0912 17:10:54.254836 3431 kubelet_node_status.go:124] "Node was previously registered" node="ip-172-31-21-20" Sep 12 17:10:54.255322 kubelet[3431]: I0912 17:10:54.255162 3431 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-21-20" Sep 12 17:10:54.258729 kubelet[3431]: E0912 17:10:54.256885 3431 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ip-172-31-21-20\" already exists" pod="kube-system/kube-controller-manager-ip-172-31-21-20" Sep 12 17:10:54.701812 sudo[3445]: pam_unix(sudo:session): session closed for user root Sep 12 17:10:54.798189 kubelet[3431]: I0912 17:10:54.797853 3431 apiserver.go:52] "Watching apiserver" Sep 12 17:10:54.834837 kubelet[3431]: I0912 17:10:54.834757 3431 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 12 17:10:55.014944 kubelet[3431]: I0912 17:10:55.013983 3431 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-21-20" Sep 12 17:10:55.014944 kubelet[3431]: I0912 17:10:55.014080 3431 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-21-20" Sep 12 17:10:55.027528 kubelet[3431]: E0912 17:10:55.026390 3431 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-21-20\" already exists" pod="kube-system/kube-apiserver-ip-172-31-21-20" Sep 12 17:10:55.030938 kubelet[3431]: E0912 17:10:55.030900 3431 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ip-172-31-21-20\" already exists" pod="kube-system/kube-controller-manager-ip-172-31-21-20" Sep 12 17:10:55.104104 kubelet[3431]: I0912 17:10:55.103540 3431 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-21-20" podStartSLOduration=1.103515373 podStartE2EDuration="1.103515373s" podCreationTimestamp="2025-09-12 17:10:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:10:55.082068001 +0000 UTC m=+1.458078261" watchObservedRunningTime="2025-09-12 17:10:55.103515373 +0000 UTC m=+1.479525621" Sep 12 17:10:55.142362 kubelet[3431]: I0912 17:10:55.141207 3431 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-21-20" podStartSLOduration=1.141184945 podStartE2EDuration="1.141184945s" podCreationTimestamp="2025-09-12 17:10:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2025-09-12 17:10:55.105578053 +0000 UTC m=+1.481588301" watchObservedRunningTime="2025-09-12 17:10:55.141184945 +0000 UTC m=+1.517195157" Sep 12 17:10:55.168789 kubelet[3431]: I0912 17:10:55.168081 3431 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-21-20" podStartSLOduration=3.168058909 podStartE2EDuration="3.168058909s" podCreationTimestamp="2025-09-12 17:10:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:10:55.141448141 +0000 UTC m=+1.517458377" watchObservedRunningTime="2025-09-12 17:10:55.168058909 +0000 UTC m=+1.544069121" Sep 12 17:10:58.877421 sudo[2369]: pam_unix(sudo:session): session closed for user root Sep 12 17:10:58.900196 sshd[2364]: pam_unix(sshd:session): session closed for user core Sep 12 17:10:58.907648 systemd[1]: sshd@9-172.31.21.20:22-147.75.109.163:34288.service: Deactivated successfully. Sep 12 17:10:58.913411 systemd[1]: session-9.scope: Deactivated successfully. Sep 12 17:10:58.914169 systemd[1]: session-9.scope: Consumed 12.064s CPU time, 150.7M memory peak, 0B memory swap peak. Sep 12 17:10:58.915857 systemd-logind[1992]: Session 9 logged out. Waiting for processes to exit. Sep 12 17:10:58.918196 systemd-logind[1992]: Removed session 9. Sep 12 17:10:59.751956 kubelet[3431]: I0912 17:10:59.751881 3431 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 12 17:10:59.753161 containerd[2010]: time="2025-09-12T17:10:59.752722628Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Sep 12 17:10:59.755423 kubelet[3431]: I0912 17:10:59.753494 3431 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 12 17:11:00.502011 kubelet[3431]: I0912 17:11:00.501216 3431 status_manager.go:890] "Failed to get status for pod" podUID="ec5ce225-2521-45d4-b5cc-b913557b35d8" pod="kube-system/kube-proxy-h9l5m" err="pods \"kube-proxy-h9l5m\" is forbidden: User \"system:node:ip-172-31-21-20\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ip-172-31-21-20' and this object" Sep 12 17:11:00.502011 kubelet[3431]: W0912 17:11:00.501340 3431 reflector.go:569] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ip-172-31-21-20" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-21-20' and this object Sep 12 17:11:00.502011 kubelet[3431]: E0912 17:11:00.501391 3431 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:ip-172-31-21-20\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ip-172-31-21-20' and this object" logger="UnhandledError" Sep 12 17:11:00.503994 kubelet[3431]: W0912 17:11:00.503551 3431 reflector.go:569] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:ip-172-31-21-20" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-21-20' and this 
object Sep 12 17:11:00.503994 kubelet[3431]: E0912 17:11:00.503768 3431 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"kube-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-proxy\" is forbidden: User \"system:node:ip-172-31-21-20\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ip-172-31-21-20' and this object" logger="UnhandledError" Sep 12 17:11:00.515912 systemd[1]: Created slice kubepods-besteffort-podec5ce225_2521_45d4_b5cc_b913557b35d8.slice - libcontainer container kubepods-besteffort-podec5ce225_2521_45d4_b5cc_b913557b35d8.slice. Sep 12 17:11:00.550845 systemd[1]: Created slice kubepods-burstable-podfd95a4b9_bd1d_4c82_b815_853f0badd776.slice - libcontainer container kubepods-burstable-podfd95a4b9_bd1d_4c82_b815_853f0badd776.slice. Sep 12 17:11:00.578221 kubelet[3431]: I0912 17:11:00.577930 3431 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/ec5ce225-2521-45d4-b5cc-b913557b35d8-kube-proxy\") pod \"kube-proxy-h9l5m\" (UID: \"ec5ce225-2521-45d4-b5cc-b913557b35d8\") " pod="kube-system/kube-proxy-h9l5m" Sep 12 17:11:00.578221 kubelet[3431]: I0912 17:11:00.578001 3431 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ec5ce225-2521-45d4-b5cc-b913557b35d8-xtables-lock\") pod \"kube-proxy-h9l5m\" (UID: \"ec5ce225-2521-45d4-b5cc-b913557b35d8\") " pod="kube-system/kube-proxy-h9l5m" Sep 12 17:11:00.578221 kubelet[3431]: I0912 17:11:00.578041 3431 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/fd95a4b9-bd1d-4c82-b815-853f0badd776-cilium-cgroup\") pod \"cilium-w2b9c\" (UID: \"fd95a4b9-bd1d-4c82-b815-853f0badd776\") " pod="kube-system/cilium-w2b9c" Sep 12 17:11:00.578221 kubelet[3431]: I0912 17:11:00.578082 3431 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/fd95a4b9-bd1d-4c82-b815-853f0badd776-etc-cni-netd\") pod \"cilium-w2b9c\" (UID: \"fd95a4b9-bd1d-4c82-b815-853f0badd776\") " pod="kube-system/cilium-w2b9c" Sep 12 17:11:00.578221 kubelet[3431]: I0912 17:11:00.578119 3431 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ec5ce225-2521-45d4-b5cc-b913557b35d8-lib-modules\") pod \"kube-proxy-h9l5m\" (UID: \"ec5ce225-2521-45d4-b5cc-b913557b35d8\") " pod="kube-system/kube-proxy-h9l5m" Sep 12 17:11:00.578644 kubelet[3431]: I0912 17:11:00.578156 3431 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7vnsz\" (UniqueName: \"kubernetes.io/projected/ec5ce225-2521-45d4-b5cc-b913557b35d8-kube-api-access-7vnsz\") pod \"kube-proxy-h9l5m\" (UID: \"ec5ce225-2521-45d4-b5cc-b913557b35d8\") " pod="kube-system/kube-proxy-h9l5m" Sep 12 17:11:00.578644 kubelet[3431]: I0912 17:11:00.578197 3431 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/fd95a4b9-bd1d-4c82-b815-853f0badd776-cilium-run\") pod \"cilium-w2b9c\" (UID: \"fd95a4b9-bd1d-4c82-b815-853f0badd776\") " pod="kube-system/cilium-w2b9c" Sep 12 17:11:00.578644 
kubelet[3431]: I0912 17:11:00.578232 3431 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/fd95a4b9-bd1d-4c82-b815-853f0badd776-bpf-maps\") pod \"cilium-w2b9c\" (UID: \"fd95a4b9-bd1d-4c82-b815-853f0badd776\") " pod="kube-system/cilium-w2b9c" Sep 12 17:11:00.578644 kubelet[3431]: I0912 17:11:00.578265 3431 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/fd95a4b9-bd1d-4c82-b815-853f0badd776-cni-path\") pod \"cilium-w2b9c\" (UID: \"fd95a4b9-bd1d-4c82-b815-853f0badd776\") " pod="kube-system/cilium-w2b9c" Sep 12 17:11:00.578644 kubelet[3431]: I0912 17:11:00.578301 3431 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fd95a4b9-bd1d-4c82-b815-853f0badd776-lib-modules\") pod \"cilium-w2b9c\" (UID: \"fd95a4b9-bd1d-4c82-b815-853f0badd776\") " pod="kube-system/cilium-w2b9c" Sep 12 17:11:00.578644 kubelet[3431]: I0912 17:11:00.578342 3431 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/fd95a4b9-bd1d-4c82-b815-853f0badd776-cilium-config-path\") pod \"cilium-w2b9c\" (UID: \"fd95a4b9-bd1d-4c82-b815-853f0badd776\") " pod="kube-system/cilium-w2b9c" Sep 12 17:11:00.578997 kubelet[3431]: I0912 17:11:00.578377 3431 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/fd95a4b9-bd1d-4c82-b815-853f0badd776-hostproc\") pod \"cilium-w2b9c\" (UID: \"fd95a4b9-bd1d-4c82-b815-853f0badd776\") " pod="kube-system/cilium-w2b9c" Sep 12 17:11:00.578997 kubelet[3431]: I0912 17:11:00.578410 3431 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fd95a4b9-bd1d-4c82-b815-853f0badd776-xtables-lock\") pod \"cilium-w2b9c\" (UID: \"fd95a4b9-bd1d-4c82-b815-853f0badd776\") " pod="kube-system/cilium-w2b9c" Sep 12 17:11:00.578997 kubelet[3431]: I0912 17:11:00.578443 3431 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/fd95a4b9-bd1d-4c82-b815-853f0badd776-clustermesh-secrets\") pod \"cilium-w2b9c\" (UID: \"fd95a4b9-bd1d-4c82-b815-853f0badd776\") " pod="kube-system/cilium-w2b9c" Sep 12 17:11:00.682382 kubelet[3431]: I0912 17:11:00.679237 3431 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/fd95a4b9-bd1d-4c82-b815-853f0badd776-host-proc-sys-kernel\") pod \"cilium-w2b9c\" (UID: \"fd95a4b9-bd1d-4c82-b815-853f0badd776\") " pod="kube-system/cilium-w2b9c" Sep 12 17:11:00.682382 kubelet[3431]: I0912 17:11:00.679434 3431 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/fd95a4b9-bd1d-4c82-b815-853f0badd776-hubble-tls\") pod \"cilium-w2b9c\" (UID: \"fd95a4b9-bd1d-4c82-b815-853f0badd776\") " pod="kube-system/cilium-w2b9c" Sep 12 17:11:00.682382 kubelet[3431]: I0912 17:11:00.679473 3431 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: 
\"kubernetes.io/host-path/fd95a4b9-bd1d-4c82-b815-853f0badd776-host-proc-sys-net\") pod \"cilium-w2b9c\" (UID: \"fd95a4b9-bd1d-4c82-b815-853f0badd776\") " pod="kube-system/cilium-w2b9c" Sep 12 17:11:00.682382 kubelet[3431]: I0912 17:11:00.679511 3431 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8hx68\" (UniqueName: \"kubernetes.io/projected/fd95a4b9-bd1d-4c82-b815-853f0badd776-kube-api-access-8hx68\") pod \"cilium-w2b9c\" (UID: \"fd95a4b9-bd1d-4c82-b815-853f0badd776\") " pod="kube-system/cilium-w2b9c" Sep 12 17:11:00.867314 systemd[1]: Created slice kubepods-besteffort-poda4f1da62_a0ef_4269_be79_dbfb68c0d382.slice - libcontainer container kubepods-besteffort-poda4f1da62_a0ef_4269_be79_dbfb68c0d382.slice. Sep 12 17:11:00.874766 kubelet[3431]: I0912 17:11:00.873094 3431 status_manager.go:890] "Failed to get status for pod" podUID="a4f1da62-a0ef-4269-be79-dbfb68c0d382" pod="kube-system/cilium-operator-6c4d7847fc-zzzkg" err="pods \"cilium-operator-6c4d7847fc-zzzkg\" is forbidden: User \"system:node:ip-172-31-21-20\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ip-172-31-21-20' and this object" Sep 12 17:11:00.981931 kubelet[3431]: I0912 17:11:00.981872 3431 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qdc4x\" (UniqueName: \"kubernetes.io/projected/a4f1da62-a0ef-4269-be79-dbfb68c0d382-kube-api-access-qdc4x\") pod \"cilium-operator-6c4d7847fc-zzzkg\" (UID: \"a4f1da62-a0ef-4269-be79-dbfb68c0d382\") " pod="kube-system/cilium-operator-6c4d7847fc-zzzkg" Sep 12 17:11:00.982452 kubelet[3431]: I0912 17:11:00.982377 3431 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a4f1da62-a0ef-4269-be79-dbfb68c0d382-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-zzzkg\" (UID: \"a4f1da62-a0ef-4269-be79-dbfb68c0d382\") " pod="kube-system/cilium-operator-6c4d7847fc-zzzkg" Sep 12 17:11:01.684758 kubelet[3431]: E0912 17:11:01.684384 3431 configmap.go:193] Couldn't get configMap kube-system/kube-proxy: failed to sync configmap cache: timed out waiting for the condition Sep 12 17:11:01.684758 kubelet[3431]: E0912 17:11:01.684495 3431 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ec5ce225-2521-45d4-b5cc-b913557b35d8-kube-proxy podName:ec5ce225-2521-45d4-b5cc-b913557b35d8 nodeName:}" failed. No retries permitted until 2025-09-12 17:11:02.184464517 +0000 UTC m=+8.560474741 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/ec5ce225-2521-45d4-b5cc-b913557b35d8-kube-proxy") pod "kube-proxy-h9l5m" (UID: "ec5ce225-2521-45d4-b5cc-b913557b35d8") : failed to sync configmap cache: timed out waiting for the condition Sep 12 17:11:01.719828 kubelet[3431]: E0912 17:11:01.719677 3431 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Sep 12 17:11:01.719828 kubelet[3431]: E0912 17:11:01.719728 3431 projected.go:194] Error preparing data for projected volume kube-api-access-7vnsz for pod kube-system/kube-proxy-h9l5m: failed to sync configmap cache: timed out waiting for the condition Sep 12 17:11:01.719828 kubelet[3431]: E0912 17:11:01.719828 3431 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ec5ce225-2521-45d4-b5cc-b913557b35d8-kube-api-access-7vnsz podName:ec5ce225-2521-45d4-b5cc-b913557b35d8 nodeName:}" failed. No retries permitted until 2025-09-12 17:11:02.219797186 +0000 UTC m=+8.595807410 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-7vnsz" (UniqueName: "kubernetes.io/projected/ec5ce225-2521-45d4-b5cc-b913557b35d8-kube-api-access-7vnsz") pod "kube-proxy-h9l5m" (UID: "ec5ce225-2521-45d4-b5cc-b913557b35d8") : failed to sync configmap cache: timed out waiting for the condition Sep 12 17:11:01.825774 kubelet[3431]: E0912 17:11:01.825705 3431 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Sep 12 17:11:01.825774 kubelet[3431]: E0912 17:11:01.825753 3431 projected.go:194] Error preparing data for projected volume kube-api-access-8hx68 for pod kube-system/cilium-w2b9c: failed to sync configmap cache: timed out waiting for the condition Sep 12 17:11:01.826028 kubelet[3431]: E0912 17:11:01.825844 3431 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/fd95a4b9-bd1d-4c82-b815-853f0badd776-kube-api-access-8hx68 podName:fd95a4b9-bd1d-4c82-b815-853f0badd776 nodeName:}" failed. No retries permitted until 2025-09-12 17:11:02.325816406 +0000 UTC m=+8.701826630 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-8hx68" (UniqueName: "kubernetes.io/projected/fd95a4b9-bd1d-4c82-b815-853f0badd776-kube-api-access-8hx68") pod "cilium-w2b9c" (UID: "fd95a4b9-bd1d-4c82-b815-853f0badd776") : failed to sync configmap cache: timed out waiting for the condition Sep 12 17:11:02.077506 containerd[2010]: time="2025-09-12T17:11:02.077425867Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-zzzkg,Uid:a4f1da62-a0ef-4269-be79-dbfb68c0d382,Namespace:kube-system,Attempt:0,}" Sep 12 17:11:02.117259 containerd[2010]: time="2025-09-12T17:11:02.117086660Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:11:02.117259 containerd[2010]: time="2025-09-12T17:11:02.117210572Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:11:02.117642 containerd[2010]: time="2025-09-12T17:11:02.117250652Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:11:02.117642 containerd[2010]: time="2025-09-12T17:11:02.117435248Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:11:02.157994 systemd[1]: Started cri-containerd-74cd62a4e09ebef748cca984b0667e41e7533a27bcc1bcd65ef1cb91f5fe5f97.scope - libcontainer container 74cd62a4e09ebef748cca984b0667e41e7533a27bcc1bcd65ef1cb91f5fe5f97. Sep 12 17:11:02.221824 containerd[2010]: time="2025-09-12T17:11:02.221753000Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-zzzkg,Uid:a4f1da62-a0ef-4269-be79-dbfb68c0d382,Namespace:kube-system,Attempt:0,} returns sandbox id \"74cd62a4e09ebef748cca984b0667e41e7533a27bcc1bcd65ef1cb91f5fe5f97\"" Sep 12 17:11:02.225419 containerd[2010]: time="2025-09-12T17:11:02.225348980Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Sep 12 17:11:02.333049 containerd[2010]: time="2025-09-12T17:11:02.332442129Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-h9l5m,Uid:ec5ce225-2521-45d4-b5cc-b913557b35d8,Namespace:kube-system,Attempt:0,}" Sep 12 17:11:02.369282 containerd[2010]: time="2025-09-12T17:11:02.368748045Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:11:02.369282 containerd[2010]: time="2025-09-12T17:11:02.368874573Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:11:02.369282 containerd[2010]: time="2025-09-12T17:11:02.368911569Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:11:02.369282 containerd[2010]: time="2025-09-12T17:11:02.369133281Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:11:02.404573 systemd[1]: Started cri-containerd-a005b00b7f21bd82d1b2a13141986d12d7e4ecef27799ed601a65b798c7f8b1c.scope - libcontainer container a005b00b7f21bd82d1b2a13141986d12d7e4ecef27799ed601a65b798c7f8b1c. 
Sep 12 17:11:02.446251 containerd[2010]: time="2025-09-12T17:11:02.446193489Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-h9l5m,Uid:ec5ce225-2521-45d4-b5cc-b913557b35d8,Namespace:kube-system,Attempt:0,} returns sandbox id \"a005b00b7f21bd82d1b2a13141986d12d7e4ecef27799ed601a65b798c7f8b1c\"" Sep 12 17:11:02.454843 containerd[2010]: time="2025-09-12T17:11:02.454760025Z" level=info msg="CreateContainer within sandbox \"a005b00b7f21bd82d1b2a13141986d12d7e4ecef27799ed601a65b798c7f8b1c\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 12 17:11:02.475248 containerd[2010]: time="2025-09-12T17:11:02.475034553Z" level=info msg="CreateContainer within sandbox \"a005b00b7f21bd82d1b2a13141986d12d7e4ecef27799ed601a65b798c7f8b1c\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"179b2a80a0e64ce83f80a67db07ee67cb61a94b8ee1f65b4d74b612ffc787f84\"" Sep 12 17:11:02.478434 containerd[2010]: time="2025-09-12T17:11:02.477085317Z" level=info msg="StartContainer for \"179b2a80a0e64ce83f80a67db07ee67cb61a94b8ee1f65b4d74b612ffc787f84\"" Sep 12 17:11:02.536459 systemd[1]: Started cri-containerd-179b2a80a0e64ce83f80a67db07ee67cb61a94b8ee1f65b4d74b612ffc787f84.scope - libcontainer container 179b2a80a0e64ce83f80a67db07ee67cb61a94b8ee1f65b4d74b612ffc787f84. Sep 12 17:11:02.589720 containerd[2010]: time="2025-09-12T17:11:02.589523986Z" level=info msg="StartContainer for \"179b2a80a0e64ce83f80a67db07ee67cb61a94b8ee1f65b4d74b612ffc787f84\" returns successfully" Sep 12 17:11:02.662219 containerd[2010]: time="2025-09-12T17:11:02.662148130Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-w2b9c,Uid:fd95a4b9-bd1d-4c82-b815-853f0badd776,Namespace:kube-system,Attempt:0,}" Sep 12 17:11:02.708686 containerd[2010]: time="2025-09-12T17:11:02.706946219Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:11:02.713209 containerd[2010]: time="2025-09-12T17:11:02.711635399Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:11:02.713695 containerd[2010]: time="2025-09-12T17:11:02.713562491Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:11:02.714595 containerd[2010]: time="2025-09-12T17:11:02.714332579Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:11:02.749979 systemd[1]: Started cri-containerd-16ff01a140949922da85be5955916ce86ee8cd0e240f48641db20471938910fb.scope - libcontainer container 16ff01a140949922da85be5955916ce86ee8cd0e240f48641db20471938910fb. Sep 12 17:11:02.812711 containerd[2010]: time="2025-09-12T17:11:02.812536019Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-w2b9c,Uid:fd95a4b9-bd1d-4c82-b815-853f0badd776,Namespace:kube-system,Attempt:0,} returns sandbox id \"16ff01a140949922da85be5955916ce86ee8cd0e240f48641db20471938910fb\"" Sep 12 17:11:03.411735 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4076656296.mount: Deactivated successfully. 
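The containerd entries above show the kube-proxy-h9l5m and cilium-w2b9c pod sandboxes being created and their containers started under runc shims. As a hedged sketch, assuming the containerd Go client and the default socket path /run/containerd/containerd.sock (the socket path itself is not stated in the log), the same containers could be enumerated like this:

    package main

    import (
        "context"
        "fmt"
        "log"

        "github.com/containerd/containerd"
        "github.com/containerd/containerd/namespaces"
    )

    func main() {
        // Default containerd socket path; assumed, not shown in the log.
        client, err := containerd.New("/run/containerd/containerd.sock")
        if err != nil {
            log.Fatalf("connect to containerd: %v", err)
        }
        defer client.Close()

        // Kubernetes-managed containers live in the "k8s.io" namespace,
        // the same namespace the shim entries in this log report.
        ctx := namespaces.WithNamespace(context.Background(), "k8s.io")
        containers, err := client.Containers(ctx)
        if err != nil {
            log.Fatalf("list containers: %v", err)
        }
        for _, c := range containers {
            info, err := c.Info(ctx)
            if err != nil {
                continue
            }
            fmt.Printf("%s  image=%s\n", c.ID(), info.Image)
        }
    }
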
Sep 12 17:11:04.002012 containerd[2010]: time="2025-09-12T17:11:04.001937625Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:11:04.003522 containerd[2010]: time="2025-09-12T17:11:04.003462861Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306" Sep 12 17:11:04.004958 containerd[2010]: time="2025-09-12T17:11:04.004281045Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:11:04.007461 containerd[2010]: time="2025-09-12T17:11:04.007338429Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 1.781787693s" Sep 12 17:11:04.007461 containerd[2010]: time="2025-09-12T17:11:04.007407525Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Sep 12 17:11:04.011076 containerd[2010]: time="2025-09-12T17:11:04.010554657Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Sep 12 17:11:04.014368 containerd[2010]: time="2025-09-12T17:11:04.014315841Z" level=info msg="CreateContainer within sandbox \"74cd62a4e09ebef748cca984b0667e41e7533a27bcc1bcd65ef1cb91f5fe5f97\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Sep 12 17:11:04.043331 containerd[2010]: time="2025-09-12T17:11:04.043274949Z" level=info msg="CreateContainer within sandbox \"74cd62a4e09ebef748cca984b0667e41e7533a27bcc1bcd65ef1cb91f5fe5f97\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"6bddba0a74f5d41404ebe3f0f7a48b3b6970a022f78c42429de1b96fc2e7a077\"" Sep 12 17:11:04.047107 containerd[2010]: time="2025-09-12T17:11:04.045212685Z" level=info msg="StartContainer for \"6bddba0a74f5d41404ebe3f0f7a48b3b6970a022f78c42429de1b96fc2e7a077\"" Sep 12 17:11:04.108041 systemd[1]: Started cri-containerd-6bddba0a74f5d41404ebe3f0f7a48b3b6970a022f78c42429de1b96fc2e7a077.scope - libcontainer container 6bddba0a74f5d41404ebe3f0f7a48b3b6970a022f78c42429de1b96fc2e7a077. 
Sep 12 17:11:04.158823 containerd[2010]: time="2025-09-12T17:11:04.158765158Z" level=info msg="StartContainer for \"6bddba0a74f5d41404ebe3f0f7a48b3b6970a022f78c42429de1b96fc2e7a077\" returns successfully" Sep 12 17:11:05.170272 kubelet[3431]: I0912 17:11:05.170177 3431 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-h9l5m" podStartSLOduration=5.170152643 podStartE2EDuration="5.170152643s" podCreationTimestamp="2025-09-12 17:11:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:11:03.078551696 +0000 UTC m=+9.454561920" watchObservedRunningTime="2025-09-12 17:11:05.170152643 +0000 UTC m=+11.546162867" Sep 12 17:11:05.170944 kubelet[3431]: I0912 17:11:05.170519 3431 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-zzzkg" podStartSLOduration=3.385290406 podStartE2EDuration="5.170507771s" podCreationTimestamp="2025-09-12 17:11:00 +0000 UTC" firstStartedPulling="2025-09-12 17:11:02.224276336 +0000 UTC m=+8.600286560" lastFinishedPulling="2025-09-12 17:11:04.009493713 +0000 UTC m=+10.385503925" observedRunningTime="2025-09-12 17:11:05.168993323 +0000 UTC m=+11.545003559" watchObservedRunningTime="2025-09-12 17:11:05.170507771 +0000 UTC m=+11.546517995" Sep 12 17:11:09.820096 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1837911821.mount: Deactivated successfully. Sep 12 17:11:12.492834 containerd[2010]: time="2025-09-12T17:11:12.491809819Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:11:12.494050 containerd[2010]: time="2025-09-12T17:11:12.493985059Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710" Sep 12 17:11:12.494396 containerd[2010]: time="2025-09-12T17:11:12.494360611Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:11:12.497958 containerd[2010]: time="2025-09-12T17:11:12.497896567Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 8.485140438s" Sep 12 17:11:12.498176 containerd[2010]: time="2025-09-12T17:11:12.498138859Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Sep 12 17:11:12.503903 containerd[2010]: time="2025-09-12T17:11:12.503847319Z" level=info msg="CreateContainer within sandbox \"16ff01a140949922da85be5955916ce86ee8cd0e240f48641db20471938910fb\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 12 17:11:12.523718 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount69414093.mount: Deactivated successfully. 
Sep 12 17:11:12.529736 containerd[2010]: time="2025-09-12T17:11:12.529626463Z" level=info msg="CreateContainer within sandbox \"16ff01a140949922da85be5955916ce86ee8cd0e240f48641db20471938910fb\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"f3fb432848775ff44fe46cf7d2cf1aaf69a9b8b890d8f3040401646e09a6d1cf\"" Sep 12 17:11:12.530549 containerd[2010]: time="2025-09-12T17:11:12.530496655Z" level=info msg="StartContainer for \"f3fb432848775ff44fe46cf7d2cf1aaf69a9b8b890d8f3040401646e09a6d1cf\"" Sep 12 17:11:12.592048 systemd[1]: Started cri-containerd-f3fb432848775ff44fe46cf7d2cf1aaf69a9b8b890d8f3040401646e09a6d1cf.scope - libcontainer container f3fb432848775ff44fe46cf7d2cf1aaf69a9b8b890d8f3040401646e09a6d1cf. Sep 12 17:11:12.639516 containerd[2010]: time="2025-09-12T17:11:12.639413888Z" level=info msg="StartContainer for \"f3fb432848775ff44fe46cf7d2cf1aaf69a9b8b890d8f3040401646e09a6d1cf\" returns successfully" Sep 12 17:11:12.663467 systemd[1]: cri-containerd-f3fb432848775ff44fe46cf7d2cf1aaf69a9b8b890d8f3040401646e09a6d1cf.scope: Deactivated successfully. Sep 12 17:11:13.250476 containerd[2010]: time="2025-09-12T17:11:13.250333759Z" level=info msg="shim disconnected" id=f3fb432848775ff44fe46cf7d2cf1aaf69a9b8b890d8f3040401646e09a6d1cf namespace=k8s.io Sep 12 17:11:13.250848 containerd[2010]: time="2025-09-12T17:11:13.250816327Z" level=warning msg="cleaning up after shim disconnected" id=f3fb432848775ff44fe46cf7d2cf1aaf69a9b8b890d8f3040401646e09a6d1cf namespace=k8s.io Sep 12 17:11:13.250976 containerd[2010]: time="2025-09-12T17:11:13.250949851Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 12 17:11:13.517171 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f3fb432848775ff44fe46cf7d2cf1aaf69a9b8b890d8f3040401646e09a6d1cf-rootfs.mount: Deactivated successfully. Sep 12 17:11:14.108162 containerd[2010]: time="2025-09-12T17:11:14.108022135Z" level=info msg="CreateContainer within sandbox \"16ff01a140949922da85be5955916ce86ee8cd0e240f48641db20471938910fb\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Sep 12 17:11:14.137354 containerd[2010]: time="2025-09-12T17:11:14.136527295Z" level=info msg="CreateContainer within sandbox \"16ff01a140949922da85be5955916ce86ee8cd0e240f48641db20471938910fb\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"3cadadf2a357ff37e64940999843cf89e7c411d122edcd8a8481d13c8ba0bc2c\"" Sep 12 17:11:14.140413 containerd[2010]: time="2025-09-12T17:11:14.138928447Z" level=info msg="StartContainer for \"3cadadf2a357ff37e64940999843cf89e7c411d122edcd8a8481d13c8ba0bc2c\"" Sep 12 17:11:14.215980 systemd[1]: Started cri-containerd-3cadadf2a357ff37e64940999843cf89e7c411d122edcd8a8481d13c8ba0bc2c.scope - libcontainer container 3cadadf2a357ff37e64940999843cf89e7c411d122edcd8a8481d13c8ba0bc2c. Sep 12 17:11:14.277279 containerd[2010]: time="2025-09-12T17:11:14.277210064Z" level=info msg="StartContainer for \"3cadadf2a357ff37e64940999843cf89e7c411d122edcd8a8481d13c8ba0bc2c\" returns successfully" Sep 12 17:11:14.310283 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 12 17:11:14.310877 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 12 17:11:14.311004 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Sep 12 17:11:14.323388 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... 
Sep 12 17:11:14.329284 systemd[1]: cri-containerd-3cadadf2a357ff37e64940999843cf89e7c411d122edcd8a8481d13c8ba0bc2c.scope: Deactivated successfully. Sep 12 17:11:14.367480 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 12 17:11:14.392326 containerd[2010]: time="2025-09-12T17:11:14.392227005Z" level=info msg="shim disconnected" id=3cadadf2a357ff37e64940999843cf89e7c411d122edcd8a8481d13c8ba0bc2c namespace=k8s.io Sep 12 17:11:14.392326 containerd[2010]: time="2025-09-12T17:11:14.392321301Z" level=warning msg="cleaning up after shim disconnected" id=3cadadf2a357ff37e64940999843cf89e7c411d122edcd8a8481d13c8ba0bc2c namespace=k8s.io Sep 12 17:11:14.392897 containerd[2010]: time="2025-09-12T17:11:14.392343897Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 12 17:11:14.516570 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3cadadf2a357ff37e64940999843cf89e7c411d122edcd8a8481d13c8ba0bc2c-rootfs.mount: Deactivated successfully. Sep 12 17:11:15.115732 containerd[2010]: time="2025-09-12T17:11:15.115301984Z" level=info msg="CreateContainer within sandbox \"16ff01a140949922da85be5955916ce86ee8cd0e240f48641db20471938910fb\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Sep 12 17:11:15.144246 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount292988337.mount: Deactivated successfully. Sep 12 17:11:15.147279 containerd[2010]: time="2025-09-12T17:11:15.147192848Z" level=info msg="CreateContainer within sandbox \"16ff01a140949922da85be5955916ce86ee8cd0e240f48641db20471938910fb\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"a32951e6a89e957a38ee0e92c0f5b02c5f3a5e75b9a30912d08e389216beaab9\"" Sep 12 17:11:15.154003 containerd[2010]: time="2025-09-12T17:11:15.153938912Z" level=info msg="StartContainer for \"a32951e6a89e957a38ee0e92c0f5b02c5f3a5e75b9a30912d08e389216beaab9\"" Sep 12 17:11:15.217985 systemd[1]: Started cri-containerd-a32951e6a89e957a38ee0e92c0f5b02c5f3a5e75b9a30912d08e389216beaab9.scope - libcontainer container a32951e6a89e957a38ee0e92c0f5b02c5f3a5e75b9a30912d08e389216beaab9. Sep 12 17:11:15.284826 containerd[2010]: time="2025-09-12T17:11:15.284016261Z" level=info msg="StartContainer for \"a32951e6a89e957a38ee0e92c0f5b02c5f3a5e75b9a30912d08e389216beaab9\" returns successfully" Sep 12 17:11:15.299095 systemd[1]: cri-containerd-a32951e6a89e957a38ee0e92c0f5b02c5f3a5e75b9a30912d08e389216beaab9.scope: Deactivated successfully. Sep 12 17:11:15.348295 containerd[2010]: time="2025-09-12T17:11:15.348211317Z" level=info msg="shim disconnected" id=a32951e6a89e957a38ee0e92c0f5b02c5f3a5e75b9a30912d08e389216beaab9 namespace=k8s.io Sep 12 17:11:15.348295 containerd[2010]: time="2025-09-12T17:11:15.348288477Z" level=warning msg="cleaning up after shim disconnected" id=a32951e6a89e957a38ee0e92c0f5b02c5f3a5e75b9a30912d08e389216beaab9 namespace=k8s.io Sep 12 17:11:15.348741 containerd[2010]: time="2025-09-12T17:11:15.348323109Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 12 17:11:15.516524 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a32951e6a89e957a38ee0e92c0f5b02c5f3a5e75b9a30912d08e389216beaab9-rootfs.mount: Deactivated successfully. 
Sep 12 17:11:16.122638 containerd[2010]: time="2025-09-12T17:11:16.122418861Z" level=info msg="CreateContainer within sandbox \"16ff01a140949922da85be5955916ce86ee8cd0e240f48641db20471938910fb\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Sep 12 17:11:16.159089 containerd[2010]: time="2025-09-12T17:11:16.156222837Z" level=info msg="CreateContainer within sandbox \"16ff01a140949922da85be5955916ce86ee8cd0e240f48641db20471938910fb\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"c4b9e3839d1896158b9ff5a2ca6e4f35c1c895ac48ec91fff6501bb080770080\"" Sep 12 17:11:16.162375 containerd[2010]: time="2025-09-12T17:11:16.160364505Z" level=info msg="StartContainer for \"c4b9e3839d1896158b9ff5a2ca6e4f35c1c895ac48ec91fff6501bb080770080\"" Sep 12 17:11:16.240041 systemd[1]: Started cri-containerd-c4b9e3839d1896158b9ff5a2ca6e4f35c1c895ac48ec91fff6501bb080770080.scope - libcontainer container c4b9e3839d1896158b9ff5a2ca6e4f35c1c895ac48ec91fff6501bb080770080. Sep 12 17:11:16.304107 systemd[1]: cri-containerd-c4b9e3839d1896158b9ff5a2ca6e4f35c1c895ac48ec91fff6501bb080770080.scope: Deactivated successfully. Sep 12 17:11:16.304926 containerd[2010]: time="2025-09-12T17:11:16.304835518Z" level=info msg="StartContainer for \"c4b9e3839d1896158b9ff5a2ca6e4f35c1c895ac48ec91fff6501bb080770080\" returns successfully" Sep 12 17:11:16.354366 containerd[2010]: time="2025-09-12T17:11:16.354241294Z" level=info msg="shim disconnected" id=c4b9e3839d1896158b9ff5a2ca6e4f35c1c895ac48ec91fff6501bb080770080 namespace=k8s.io Sep 12 17:11:16.354366 containerd[2010]: time="2025-09-12T17:11:16.354351406Z" level=warning msg="cleaning up after shim disconnected" id=c4b9e3839d1896158b9ff5a2ca6e4f35c1c895ac48ec91fff6501bb080770080 namespace=k8s.io Sep 12 17:11:16.354804 containerd[2010]: time="2025-09-12T17:11:16.354375706Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 12 17:11:16.517357 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c4b9e3839d1896158b9ff5a2ca6e4f35c1c895ac48ec91fff6501bb080770080-rootfs.mount: Deactivated successfully. Sep 12 17:11:17.139213 containerd[2010]: time="2025-09-12T17:11:17.138233662Z" level=info msg="CreateContainer within sandbox \"16ff01a140949922da85be5955916ce86ee8cd0e240f48641db20471938910fb\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Sep 12 17:11:17.180262 containerd[2010]: time="2025-09-12T17:11:17.180183406Z" level=info msg="CreateContainer within sandbox \"16ff01a140949922da85be5955916ce86ee8cd0e240f48641db20471938910fb\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"fe769e720b8d4590cba58b221bc10fddd93f9b3801ff5cddc0c40d2e7b81d544\"" Sep 12 17:11:17.182844 containerd[2010]: time="2025-09-12T17:11:17.181999198Z" level=info msg="StartContainer for \"fe769e720b8d4590cba58b221bc10fddd93f9b3801ff5cddc0c40d2e7b81d544\"" Sep 12 17:11:17.252982 systemd[1]: Started cri-containerd-fe769e720b8d4590cba58b221bc10fddd93f9b3801ff5cddc0c40d2e7b81d544.scope - libcontainer container fe769e720b8d4590cba58b221bc10fddd93f9b3801ff5cddc0c40d2e7b81d544. Sep 12 17:11:17.310716 containerd[2010]: time="2025-09-12T17:11:17.309862931Z" level=info msg="StartContainer for \"fe769e720b8d4590cba58b221bc10fddd93f9b3801ff5cddc0c40d2e7b81d544\" returns successfully" Sep 12 17:11:17.518203 systemd[1]: run-containerd-runc-k8s.io-fe769e720b8d4590cba58b221bc10fddd93f9b3801ff5cddc0c40d2e7b81d544-runc.8m3mzG.mount: Deactivated successfully. 
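The sequence above walks through cilium-w2b9c's init containers in order (mount-cgroup, apply-sysctl-overwrites, mount-bpf-fs, clean-cilium-state) before the cilium-agent container itself starts. A speculative client-go sketch for checking how far a pod has progressed through that chain is shown below; the kubeconfig path /etc/kubernetes/admin.conf is an assumption and not something this log confirms:

    package main

    import (
        "context"
        "fmt"
        "log"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Hypothetical kubeconfig location; adjust for the cluster at hand.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/admin.conf")
        if err != nil {
            log.Fatalf("load kubeconfig: %v", err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            log.Fatalf("build clientset: %v", err)
        }

        // Pod name taken from the log above; init containers run strictly in order,
        // so the first one without a Terminated state is the current stage.
        pod, err := cs.CoreV1().Pods("kube-system").Get(context.Background(), "cilium-w2b9c", metav1.GetOptions{})
        if err != nil {
            log.Fatalf("get pod: %v", err)
        }
        for _, st := range pod.Status.InitContainerStatuses {
            fmt.Printf("init %-25s terminated=%v\n", st.Name, st.State.Terminated != nil)
        }
        for _, st := range pod.Status.ContainerStatuses {
            fmt.Printf("main %-25s ready=%v\n", st.Name, st.Ready)
        }
    }
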
Sep 12 17:11:17.536710 kubelet[3431]: I0912 17:11:17.536599 3431 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Sep 12 17:11:17.608869 kubelet[3431]: I0912 17:11:17.607724 3431 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ndmkh\" (UniqueName: \"kubernetes.io/projected/d13552a1-5417-4f25-b799-49c735e18819-kube-api-access-ndmkh\") pod \"coredns-668d6bf9bc-w7lcm\" (UID: \"d13552a1-5417-4f25-b799-49c735e18819\") " pod="kube-system/coredns-668d6bf9bc-w7lcm" Sep 12 17:11:17.608869 kubelet[3431]: I0912 17:11:17.607813 3431 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6b7v8\" (UniqueName: \"kubernetes.io/projected/38b14fc4-3ae1-4190-ad4b-44bf84ff02a3-kube-api-access-6b7v8\") pod \"coredns-668d6bf9bc-z6zng\" (UID: \"38b14fc4-3ae1-4190-ad4b-44bf84ff02a3\") " pod="kube-system/coredns-668d6bf9bc-z6zng" Sep 12 17:11:17.608869 kubelet[3431]: I0912 17:11:17.607860 3431 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d13552a1-5417-4f25-b799-49c735e18819-config-volume\") pod \"coredns-668d6bf9bc-w7lcm\" (UID: \"d13552a1-5417-4f25-b799-49c735e18819\") " pod="kube-system/coredns-668d6bf9bc-w7lcm" Sep 12 17:11:17.608869 kubelet[3431]: I0912 17:11:17.607929 3431 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/38b14fc4-3ae1-4190-ad4b-44bf84ff02a3-config-volume\") pod \"coredns-668d6bf9bc-z6zng\" (UID: \"38b14fc4-3ae1-4190-ad4b-44bf84ff02a3\") " pod="kube-system/coredns-668d6bf9bc-z6zng" Sep 12 17:11:17.608279 systemd[1]: Created slice kubepods-burstable-pod38b14fc4_3ae1_4190_ad4b_44bf84ff02a3.slice - libcontainer container kubepods-burstable-pod38b14fc4_3ae1_4190_ad4b_44bf84ff02a3.slice. Sep 12 17:11:17.624853 systemd[1]: Created slice kubepods-burstable-podd13552a1_5417_4f25_b799_49c735e18819.slice - libcontainer container kubepods-burstable-podd13552a1_5417_4f25_b799_49c735e18819.slice. 
Sep 12 17:11:17.919428 containerd[2010]: time="2025-09-12T17:11:17.919367882Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-z6zng,Uid:38b14fc4-3ae1-4190-ad4b-44bf84ff02a3,Namespace:kube-system,Attempt:0,}" Sep 12 17:11:17.936821 containerd[2010]: time="2025-09-12T17:11:17.934864778Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-w7lcm,Uid:d13552a1-5417-4f25-b799-49c735e18819,Namespace:kube-system,Attempt:0,}" Sep 12 17:11:18.171793 kubelet[3431]: I0912 17:11:18.170443 3431 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-w2b9c" podStartSLOduration=8.486036603 podStartE2EDuration="18.170419739s" podCreationTimestamp="2025-09-12 17:11:00 +0000 UTC" firstStartedPulling="2025-09-12 17:11:02.815062175 +0000 UTC m=+9.191072435" lastFinishedPulling="2025-09-12 17:11:12.499445347 +0000 UTC m=+18.875455571" observedRunningTime="2025-09-12 17:11:18.167392859 +0000 UTC m=+24.543403083" watchObservedRunningTime="2025-09-12 17:11:18.170419739 +0000 UTC m=+24.546429963" Sep 12 17:11:20.355085 systemd-networkd[1932]: cilium_host: Link UP Sep 12 17:11:20.355430 systemd-networkd[1932]: cilium_net: Link UP Sep 12 17:11:20.355438 systemd-networkd[1932]: cilium_net: Gained carrier Sep 12 17:11:20.357316 systemd-networkd[1932]: cilium_host: Gained carrier Sep 12 17:11:20.359520 (udev-worker)[4223]: Network interface NamePolicy= disabled on kernel command line. Sep 12 17:11:20.360843 (udev-worker)[4225]: Network interface NamePolicy= disabled on kernel command line. Sep 12 17:11:20.532325 (udev-worker)[4269]: Network interface NamePolicy= disabled on kernel command line. Sep 12 17:11:20.542410 systemd-networkd[1932]: cilium_vxlan: Link UP Sep 12 17:11:20.542425 systemd-networkd[1932]: cilium_vxlan: Gained carrier Sep 12 17:11:20.845899 systemd-networkd[1932]: cilium_host: Gained IPv6LL Sep 12 17:11:20.909909 systemd-networkd[1932]: cilium_net: Gained IPv6LL Sep 12 17:11:21.108910 kernel: NET: Registered PF_ALG protocol family Sep 12 17:11:22.434632 systemd-networkd[1932]: lxc_health: Link UP Sep 12 17:11:22.463067 systemd-networkd[1932]: lxc_health: Gained carrier Sep 12 17:11:22.511042 systemd-networkd[1932]: cilium_vxlan: Gained IPv6LL Sep 12 17:11:23.044518 systemd-networkd[1932]: lxce2ccf6abf4cb: Link UP Sep 12 17:11:23.053689 kernel: eth0: renamed from tmp2e33f Sep 12 17:11:23.062981 systemd-networkd[1932]: lxce2ccf6abf4cb: Gained carrier Sep 12 17:11:23.075627 systemd-networkd[1932]: lxc074b0f101d80: Link UP Sep 12 17:11:23.095786 kernel: eth0: renamed from tmp59b72 Sep 12 17:11:23.101588 (udev-worker)[4274]: Network interface NamePolicy= disabled on kernel command line. 
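systemd-networkd above reports the cilium_host, cilium_net, cilium_vxlan, lxc_health and per-pod lxc* interfaces coming up as Cilium wires the node datapath. Purely as an illustration, and assuming the third-party github.com/vishvananda/netlink library (which nothing in this log uses), those links could be inspected programmatically like so:

    package main

    import (
        "fmt"
        "log"
        "strings"

        "github.com/vishvananda/netlink"
    )

    func main() {
        links, err := netlink.LinkList()
        if err != nil {
            log.Fatalf("list links: %v", err)
        }
        for _, l := range links {
            attrs := l.Attrs()
            // cilium_* and lxc* are the devices systemd-networkd reports above.
            if strings.HasPrefix(attrs.Name, "cilium_") || strings.HasPrefix(attrs.Name, "lxc") {
                fmt.Printf("%-18s type=%-8s state=%s mtu=%d\n",
                    attrs.Name, l.Type(), attrs.OperState, attrs.MTU)
            }
        }
    }
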
Sep 12 17:11:23.108029 systemd-networkd[1932]: lxc074b0f101d80: Gained carrier Sep 12 17:11:23.598010 systemd-networkd[1932]: lxc_health: Gained IPv6LL Sep 12 17:11:24.942567 systemd-networkd[1932]: lxce2ccf6abf4cb: Gained IPv6LL Sep 12 17:11:24.943046 systemd-networkd[1932]: lxc074b0f101d80: Gained IPv6LL Sep 12 17:11:25.289813 kubelet[3431]: I0912 17:11:25.288825 3431 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 12 17:11:27.744846 ntpd[1987]: Listen normally on 7 cilium_host 192.168.0.242:123 Sep 12 17:11:27.744985 ntpd[1987]: Listen normally on 8 cilium_net [fe80::74e0:10ff:fe7a:ab76%4]:123 Sep 12 17:11:27.745737 ntpd[1987]: 12 Sep 17:11:27 ntpd[1987]: Listen normally on 7 cilium_host 192.168.0.242:123 Sep 12 17:11:27.745737 ntpd[1987]: 12 Sep 17:11:27 ntpd[1987]: Listen normally on 8 cilium_net [fe80::74e0:10ff:fe7a:ab76%4]:123 Sep 12 17:11:27.745737 ntpd[1987]: 12 Sep 17:11:27 ntpd[1987]: Listen normally on 9 cilium_host [fe80::840c:25ff:fe21:29e4%5]:123 Sep 12 17:11:27.745737 ntpd[1987]: 12 Sep 17:11:27 ntpd[1987]: Listen normally on 10 cilium_vxlan [fe80::34ca:bbff:feac:a5b1%6]:123 Sep 12 17:11:27.745737 ntpd[1987]: 12 Sep 17:11:27 ntpd[1987]: Listen normally on 11 lxc_health [fe80::f484:55ff:feec:bb49%8]:123 Sep 12 17:11:27.745737 ntpd[1987]: 12 Sep 17:11:27 ntpd[1987]: Listen normally on 12 lxce2ccf6abf4cb [fe80::c0ca:b2ff:fec5:d735%10]:123 Sep 12 17:11:27.745737 ntpd[1987]: 12 Sep 17:11:27 ntpd[1987]: Listen normally on 13 lxc074b0f101d80 [fe80::c88e:c4ff:fed9:4a01%12]:123 Sep 12 17:11:27.745068 ntpd[1987]: Listen normally on 9 cilium_host [fe80::840c:25ff:fe21:29e4%5]:123 Sep 12 17:11:27.745137 ntpd[1987]: Listen normally on 10 cilium_vxlan [fe80::34ca:bbff:feac:a5b1%6]:123 Sep 12 17:11:27.745206 ntpd[1987]: Listen normally on 11 lxc_health [fe80::f484:55ff:feec:bb49%8]:123 Sep 12 17:11:27.745274 ntpd[1987]: Listen normally on 12 lxce2ccf6abf4cb [fe80::c0ca:b2ff:fec5:d735%10]:123 Sep 12 17:11:27.745341 ntpd[1987]: Listen normally on 13 lxc074b0f101d80 [fe80::c88e:c4ff:fed9:4a01%12]:123 Sep 12 17:11:31.393846 containerd[2010]: time="2025-09-12T17:11:31.392784313Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:11:31.394942 containerd[2010]: time="2025-09-12T17:11:31.394472233Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:11:31.394942 containerd[2010]: time="2025-09-12T17:11:31.394618393Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:11:31.395537 containerd[2010]: time="2025-09-12T17:11:31.395354173Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:11:31.447440 systemd[1]: Started cri-containerd-59b72a645f21d90d3b3ca9a29a594bd196221d96148d538b9672257e6d591695.scope - libcontainer container 59b72a645f21d90d3b3ca9a29a594bd196221d96148d538b9672257e6d591695. Sep 12 17:11:31.486961 containerd[2010]: time="2025-09-12T17:11:31.486785678Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:11:31.489707 containerd[2010]: time="2025-09-12T17:11:31.487793858Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:11:31.489707 containerd[2010]: time="2025-09-12T17:11:31.488692658Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:11:31.489707 containerd[2010]: time="2025-09-12T17:11:31.488895818Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:11:31.547481 systemd[1]: Started cri-containerd-2e33f90fcced8235fbcc3a88310d1d4ca73378f60c71b7d2291f83199266f569.scope - libcontainer container 2e33f90fcced8235fbcc3a88310d1d4ca73378f60c71b7d2291f83199266f569. Sep 12 17:11:31.609043 containerd[2010]: time="2025-09-12T17:11:31.608644526Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-z6zng,Uid:38b14fc4-3ae1-4190-ad4b-44bf84ff02a3,Namespace:kube-system,Attempt:0,} returns sandbox id \"59b72a645f21d90d3b3ca9a29a594bd196221d96148d538b9672257e6d591695\"" Sep 12 17:11:31.620414 containerd[2010]: time="2025-09-12T17:11:31.620321006Z" level=info msg="CreateContainer within sandbox \"59b72a645f21d90d3b3ca9a29a594bd196221d96148d538b9672257e6d591695\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 12 17:11:31.644405 containerd[2010]: time="2025-09-12T17:11:31.644092226Z" level=info msg="CreateContainer within sandbox \"59b72a645f21d90d3b3ca9a29a594bd196221d96148d538b9672257e6d591695\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"efafa6cde8b25cee4c4727e736bb39cf4554e095b45ab5229735b8f8d94db77b\"" Sep 12 17:11:31.648111 containerd[2010]: time="2025-09-12T17:11:31.645824654Z" level=info msg="StartContainer for \"efafa6cde8b25cee4c4727e736bb39cf4554e095b45ab5229735b8f8d94db77b\"" Sep 12 17:11:31.693819 containerd[2010]: time="2025-09-12T17:11:31.693731115Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-w7lcm,Uid:d13552a1-5417-4f25-b799-49c735e18819,Namespace:kube-system,Attempt:0,} returns sandbox id \"2e33f90fcced8235fbcc3a88310d1d4ca73378f60c71b7d2291f83199266f569\"" Sep 12 17:11:31.706050 containerd[2010]: time="2025-09-12T17:11:31.705980367Z" level=info msg="CreateContainer within sandbox \"2e33f90fcced8235fbcc3a88310d1d4ca73378f60c71b7d2291f83199266f569\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 12 17:11:31.736180 systemd[1]: Started cri-containerd-efafa6cde8b25cee4c4727e736bb39cf4554e095b45ab5229735b8f8d94db77b.scope - libcontainer container efafa6cde8b25cee4c4727e736bb39cf4554e095b45ab5229735b8f8d94db77b. Sep 12 17:11:31.742535 containerd[2010]: time="2025-09-12T17:11:31.742454763Z" level=info msg="CreateContainer within sandbox \"2e33f90fcced8235fbcc3a88310d1d4ca73378f60c71b7d2291f83199266f569\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"653e9de607ed4b2f6188c14c4c29184733e59e14968fe81b930fea5f6d400827\"" Sep 12 17:11:31.744367 containerd[2010]: time="2025-09-12T17:11:31.744158499Z" level=info msg="StartContainer for \"653e9de607ed4b2f6188c14c4c29184733e59e14968fe81b930fea5f6d400827\"" Sep 12 17:11:31.828092 systemd[1]: Started cri-containerd-653e9de607ed4b2f6188c14c4c29184733e59e14968fe81b930fea5f6d400827.scope - libcontainer container 653e9de607ed4b2f6188c14c4c29184733e59e14968fe81b930fea5f6d400827. 
Sep 12 17:11:31.839696 containerd[2010]: time="2025-09-12T17:11:31.839621727Z" level=info msg="StartContainer for \"efafa6cde8b25cee4c4727e736bb39cf4554e095b45ab5229735b8f8d94db77b\" returns successfully" Sep 12 17:11:31.914611 containerd[2010]: time="2025-09-12T17:11:31.912578884Z" level=info msg="StartContainer for \"653e9de607ed4b2f6188c14c4c29184733e59e14968fe81b930fea5f6d400827\" returns successfully" Sep 12 17:11:32.221422 kubelet[3431]: I0912 17:11:32.221129 3431 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-w7lcm" podStartSLOduration=32.221108725 podStartE2EDuration="32.221108725s" podCreationTimestamp="2025-09-12 17:11:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:11:32.217785481 +0000 UTC m=+38.593795729" watchObservedRunningTime="2025-09-12 17:11:32.221108725 +0000 UTC m=+38.597118937" Sep 12 17:11:32.247435 kubelet[3431]: I0912 17:11:32.247335 3431 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-z6zng" podStartSLOduration=32.247309957 podStartE2EDuration="32.247309957s" podCreationTimestamp="2025-09-12 17:11:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:11:32.243757201 +0000 UTC m=+38.619767425" watchObservedRunningTime="2025-09-12 17:11:32.247309957 +0000 UTC m=+38.623320181" Sep 12 17:11:32.731176 systemd[1]: Started sshd@10-172.31.21.20:22-147.75.109.163:58264.service - OpenSSH per-connection server daemon (147.75.109.163:58264). Sep 12 17:11:32.905115 sshd[4805]: Accepted publickey for core from 147.75.109.163 port 58264 ssh2: RSA SHA256:MtueCMCElgMFpvQGHABlOh1LdmyEE9d8eacHhUBhK34 Sep 12 17:11:32.907131 sshd[4805]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:11:32.916853 systemd-logind[1992]: New session 10 of user core. Sep 12 17:11:32.925018 systemd[1]: Started session-10.scope - Session 10 of User core. Sep 12 17:11:33.190055 sshd[4805]: pam_unix(sshd:session): session closed for user core Sep 12 17:11:33.196431 systemd-logind[1992]: Session 10 logged out. Waiting for processes to exit. Sep 12 17:11:33.197228 systemd[1]: sshd@10-172.31.21.20:22-147.75.109.163:58264.service: Deactivated successfully. Sep 12 17:11:33.201057 systemd[1]: session-10.scope: Deactivated successfully. Sep 12 17:11:33.206003 systemd-logind[1992]: Removed session 10. Sep 12 17:11:38.230361 systemd[1]: Started sshd@11-172.31.21.20:22-147.75.109.163:58268.service - OpenSSH per-connection server daemon (147.75.109.163:58268). Sep 12 17:11:38.402699 sshd[4821]: Accepted publickey for core from 147.75.109.163 port 58268 ssh2: RSA SHA256:MtueCMCElgMFpvQGHABlOh1LdmyEE9d8eacHhUBhK34 Sep 12 17:11:38.405382 sshd[4821]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:11:38.413835 systemd-logind[1992]: New session 11 of user core. Sep 12 17:11:38.421935 systemd[1]: Started session-11.scope - Session 11 of User core. Sep 12 17:11:38.659609 sshd[4821]: pam_unix(sshd:session): session closed for user core Sep 12 17:11:38.665980 systemd[1]: sshd@11-172.31.21.20:22-147.75.109.163:58268.service: Deactivated successfully. Sep 12 17:11:38.670385 systemd[1]: session-11.scope: Deactivated successfully. Sep 12 17:11:38.671896 systemd-logind[1992]: Session 11 logged out. Waiting for processes to exit. 
Sep 12 17:11:38.675040 systemd-logind[1992]: Removed session 11. Sep 12 17:11:43.709857 systemd[1]: Started sshd@12-172.31.21.20:22-147.75.109.163:41318.service - OpenSSH per-connection server daemon (147.75.109.163:41318). Sep 12 17:11:43.883453 sshd[4834]: Accepted publickey for core from 147.75.109.163 port 41318 ssh2: RSA SHA256:MtueCMCElgMFpvQGHABlOh1LdmyEE9d8eacHhUBhK34 Sep 12 17:11:43.886720 sshd[4834]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:11:43.895112 systemd-logind[1992]: New session 12 of user core. Sep 12 17:11:43.899915 systemd[1]: Started session-12.scope - Session 12 of User core. Sep 12 17:11:44.141294 sshd[4834]: pam_unix(sshd:session): session closed for user core Sep 12 17:11:44.146029 systemd-logind[1992]: Session 12 logged out. Waiting for processes to exit. Sep 12 17:11:44.147033 systemd[1]: sshd@12-172.31.21.20:22-147.75.109.163:41318.service: Deactivated successfully. Sep 12 17:11:44.151011 systemd[1]: session-12.scope: Deactivated successfully. Sep 12 17:11:44.154773 systemd-logind[1992]: Removed session 12. Sep 12 17:11:49.179203 systemd[1]: Started sshd@13-172.31.21.20:22-147.75.109.163:41330.service - OpenSSH per-connection server daemon (147.75.109.163:41330). Sep 12 17:11:49.362146 sshd[4848]: Accepted publickey for core from 147.75.109.163 port 41330 ssh2: RSA SHA256:MtueCMCElgMFpvQGHABlOh1LdmyEE9d8eacHhUBhK34 Sep 12 17:11:49.364785 sshd[4848]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:11:49.373750 systemd-logind[1992]: New session 13 of user core. Sep 12 17:11:49.378920 systemd[1]: Started session-13.scope - Session 13 of User core. Sep 12 17:11:49.616222 sshd[4848]: pam_unix(sshd:session): session closed for user core Sep 12 17:11:49.623403 systemd[1]: sshd@13-172.31.21.20:22-147.75.109.163:41330.service: Deactivated successfully. Sep 12 17:11:49.627264 systemd[1]: session-13.scope: Deactivated successfully. Sep 12 17:11:49.628500 systemd-logind[1992]: Session 13 logged out. Waiting for processes to exit. Sep 12 17:11:49.630488 systemd-logind[1992]: Removed session 13. Sep 12 17:11:49.653190 systemd[1]: Started sshd@14-172.31.21.20:22-147.75.109.163:41338.service - OpenSSH per-connection server daemon (147.75.109.163:41338). Sep 12 17:11:49.840066 sshd[4862]: Accepted publickey for core from 147.75.109.163 port 41338 ssh2: RSA SHA256:MtueCMCElgMFpvQGHABlOh1LdmyEE9d8eacHhUBhK34 Sep 12 17:11:49.841781 sshd[4862]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:11:49.850269 systemd-logind[1992]: New session 14 of user core. Sep 12 17:11:49.863912 systemd[1]: Started session-14.scope - Session 14 of User core. Sep 12 17:11:50.192625 sshd[4862]: pam_unix(sshd:session): session closed for user core Sep 12 17:11:50.202353 systemd[1]: sshd@14-172.31.21.20:22-147.75.109.163:41338.service: Deactivated successfully. Sep 12 17:11:50.207272 systemd[1]: session-14.scope: Deactivated successfully. Sep 12 17:11:50.211260 systemd-logind[1992]: Session 14 logged out. Waiting for processes to exit. Sep 12 17:11:50.236967 systemd[1]: Started sshd@15-172.31.21.20:22-147.75.109.163:50026.service - OpenSSH per-connection server daemon (147.75.109.163:50026). Sep 12 17:11:50.241874 systemd-logind[1992]: Removed session 14. 
Sep 12 17:11:50.413432 sshd[4873]: Accepted publickey for core from 147.75.109.163 port 50026 ssh2: RSA SHA256:MtueCMCElgMFpvQGHABlOh1LdmyEE9d8eacHhUBhK34 Sep 12 17:11:50.416854 sshd[4873]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:11:50.425774 systemd-logind[1992]: New session 15 of user core. Sep 12 17:11:50.431939 systemd[1]: Started session-15.scope - Session 15 of User core. Sep 12 17:11:50.667896 sshd[4873]: pam_unix(sshd:session): session closed for user core Sep 12 17:11:50.674067 systemd[1]: sshd@15-172.31.21.20:22-147.75.109.163:50026.service: Deactivated successfully. Sep 12 17:11:50.678581 systemd[1]: session-15.scope: Deactivated successfully. Sep 12 17:11:50.680439 systemd-logind[1992]: Session 15 logged out. Waiting for processes to exit. Sep 12 17:11:50.683825 systemd-logind[1992]: Removed session 15. Sep 12 17:11:55.712203 systemd[1]: Started sshd@16-172.31.21.20:22-147.75.109.163:50028.service - OpenSSH per-connection server daemon (147.75.109.163:50028). Sep 12 17:11:55.884307 sshd[4887]: Accepted publickey for core from 147.75.109.163 port 50028 ssh2: RSA SHA256:MtueCMCElgMFpvQGHABlOh1LdmyEE9d8eacHhUBhK34 Sep 12 17:11:55.886983 sshd[4887]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:11:55.895175 systemd-logind[1992]: New session 16 of user core. Sep 12 17:11:55.899948 systemd[1]: Started session-16.scope - Session 16 of User core. Sep 12 17:11:56.135885 sshd[4887]: pam_unix(sshd:session): session closed for user core Sep 12 17:11:56.142541 systemd[1]: sshd@16-172.31.21.20:22-147.75.109.163:50028.service: Deactivated successfully. Sep 12 17:11:56.146569 systemd[1]: session-16.scope: Deactivated successfully. Sep 12 17:11:56.148447 systemd-logind[1992]: Session 16 logged out. Waiting for processes to exit. Sep 12 17:11:56.150958 systemd-logind[1992]: Removed session 16. Sep 12 17:12:01.183190 systemd[1]: Started sshd@17-172.31.21.20:22-147.75.109.163:48536.service - OpenSSH per-connection server daemon (147.75.109.163:48536). Sep 12 17:12:01.353371 sshd[4900]: Accepted publickey for core from 147.75.109.163 port 48536 ssh2: RSA SHA256:MtueCMCElgMFpvQGHABlOh1LdmyEE9d8eacHhUBhK34 Sep 12 17:12:01.356371 sshd[4900]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:12:01.364789 systemd-logind[1992]: New session 17 of user core. Sep 12 17:12:01.371941 systemd[1]: Started session-17.scope - Session 17 of User core. Sep 12 17:12:01.609016 sshd[4900]: pam_unix(sshd:session): session closed for user core Sep 12 17:12:01.616069 systemd[1]: sshd@17-172.31.21.20:22-147.75.109.163:48536.service: Deactivated successfully. Sep 12 17:12:01.620365 systemd[1]: session-17.scope: Deactivated successfully. Sep 12 17:12:01.622003 systemd-logind[1992]: Session 17 logged out. Waiting for processes to exit. Sep 12 17:12:01.624195 systemd-logind[1992]: Removed session 17. Sep 12 17:12:06.654320 systemd[1]: Started sshd@18-172.31.21.20:22-147.75.109.163:48538.service - OpenSSH per-connection server daemon (147.75.109.163:48538). Sep 12 17:12:06.821558 sshd[4915]: Accepted publickey for core from 147.75.109.163 port 48538 ssh2: RSA SHA256:MtueCMCElgMFpvQGHABlOh1LdmyEE9d8eacHhUBhK34 Sep 12 17:12:06.824796 sshd[4915]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:12:06.832753 systemd-logind[1992]: New session 18 of user core. Sep 12 17:12:06.839957 systemd[1]: Started session-18.scope - Session 18 of User core. 
Sep 12 17:12:07.081348 sshd[4915]: pam_unix(sshd:session): session closed for user core Sep 12 17:12:07.088071 systemd[1]: sshd@18-172.31.21.20:22-147.75.109.163:48538.service: Deactivated successfully. Sep 12 17:12:07.091800 systemd[1]: session-18.scope: Deactivated successfully. Sep 12 17:12:07.093236 systemd-logind[1992]: Session 18 logged out. Waiting for processes to exit. Sep 12 17:12:07.095280 systemd-logind[1992]: Removed session 18. Sep 12 17:12:12.126909 systemd[1]: Started sshd@19-172.31.21.20:22-147.75.109.163:59366.service - OpenSSH per-connection server daemon (147.75.109.163:59366). Sep 12 17:12:12.295631 sshd[4930]: Accepted publickey for core from 147.75.109.163 port 59366 ssh2: RSA SHA256:MtueCMCElgMFpvQGHABlOh1LdmyEE9d8eacHhUBhK34 Sep 12 17:12:12.298234 sshd[4930]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:12:12.306896 systemd-logind[1992]: New session 19 of user core. Sep 12 17:12:12.313924 systemd[1]: Started session-19.scope - Session 19 of User core. Sep 12 17:12:12.551146 sshd[4930]: pam_unix(sshd:session): session closed for user core Sep 12 17:12:12.558371 systemd[1]: sshd@19-172.31.21.20:22-147.75.109.163:59366.service: Deactivated successfully. Sep 12 17:12:12.562741 systemd[1]: session-19.scope: Deactivated successfully. Sep 12 17:12:12.564129 systemd-logind[1992]: Session 19 logged out. Waiting for processes to exit. Sep 12 17:12:12.566274 systemd-logind[1992]: Removed session 19. Sep 12 17:12:12.587253 systemd[1]: Started sshd@20-172.31.21.20:22-147.75.109.163:59374.service - OpenSSH per-connection server daemon (147.75.109.163:59374). Sep 12 17:12:12.771503 sshd[4942]: Accepted publickey for core from 147.75.109.163 port 59374 ssh2: RSA SHA256:MtueCMCElgMFpvQGHABlOh1LdmyEE9d8eacHhUBhK34 Sep 12 17:12:12.774259 sshd[4942]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:12:12.782272 systemd-logind[1992]: New session 20 of user core. Sep 12 17:12:12.791935 systemd[1]: Started session-20.scope - Session 20 of User core. Sep 12 17:12:13.107855 sshd[4942]: pam_unix(sshd:session): session closed for user core Sep 12 17:12:13.114353 systemd[1]: sshd@20-172.31.21.20:22-147.75.109.163:59374.service: Deactivated successfully. Sep 12 17:12:13.117846 systemd[1]: session-20.scope: Deactivated successfully. Sep 12 17:12:13.119948 systemd-logind[1992]: Session 20 logged out. Waiting for processes to exit. Sep 12 17:12:13.121632 systemd-logind[1992]: Removed session 20. Sep 12 17:12:13.154149 systemd[1]: Started sshd@21-172.31.21.20:22-147.75.109.163:59376.service - OpenSSH per-connection server daemon (147.75.109.163:59376). Sep 12 17:12:13.321936 sshd[4952]: Accepted publickey for core from 147.75.109.163 port 59376 ssh2: RSA SHA256:MtueCMCElgMFpvQGHABlOh1LdmyEE9d8eacHhUBhK34 Sep 12 17:12:13.324922 sshd[4952]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:12:13.334745 systemd-logind[1992]: New session 21 of user core. Sep 12 17:12:13.344793 systemd[1]: Started session-21.scope - Session 21 of User core. Sep 12 17:12:14.383720 sshd[4952]: pam_unix(sshd:session): session closed for user core Sep 12 17:12:14.395010 systemd[1]: sshd@21-172.31.21.20:22-147.75.109.163:59376.service: Deactivated successfully. Sep 12 17:12:14.402264 systemd[1]: session-21.scope: Deactivated successfully. Sep 12 17:12:14.407738 systemd-logind[1992]: Session 21 logged out. Waiting for processes to exit. 
Sep 12 17:12:14.442231 systemd[1]: Started sshd@22-172.31.21.20:22-147.75.109.163:59384.service - OpenSSH per-connection server daemon (147.75.109.163:59384). Sep 12 17:12:14.446861 systemd-logind[1992]: Removed session 21. Sep 12 17:12:14.651998 sshd[4968]: Accepted publickey for core from 147.75.109.163 port 59384 ssh2: RSA SHA256:MtueCMCElgMFpvQGHABlOh1LdmyEE9d8eacHhUBhK34 Sep 12 17:12:14.654998 sshd[4968]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:12:14.663589 systemd-logind[1992]: New session 22 of user core. Sep 12 17:12:14.671943 systemd[1]: Started session-22.scope - Session 22 of User core. Sep 12 17:12:15.155528 sshd[4968]: pam_unix(sshd:session): session closed for user core Sep 12 17:12:15.162473 systemd[1]: sshd@22-172.31.21.20:22-147.75.109.163:59384.service: Deactivated successfully. Sep 12 17:12:15.166323 systemd[1]: session-22.scope: Deactivated successfully. Sep 12 17:12:15.168412 systemd-logind[1992]: Session 22 logged out. Waiting for processes to exit. Sep 12 17:12:15.170571 systemd-logind[1992]: Removed session 22. Sep 12 17:12:15.197351 systemd[1]: Started sshd@23-172.31.21.20:22-147.75.109.163:59388.service - OpenSSH per-connection server daemon (147.75.109.163:59388). Sep 12 17:12:15.367689 sshd[4980]: Accepted publickey for core from 147.75.109.163 port 59388 ssh2: RSA SHA256:MtueCMCElgMFpvQGHABlOh1LdmyEE9d8eacHhUBhK34 Sep 12 17:12:15.370871 sshd[4980]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:12:15.378957 systemd-logind[1992]: New session 23 of user core. Sep 12 17:12:15.388948 systemd[1]: Started session-23.scope - Session 23 of User core. Sep 12 17:12:15.626122 sshd[4980]: pam_unix(sshd:session): session closed for user core Sep 12 17:12:15.631418 systemd[1]: sshd@23-172.31.21.20:22-147.75.109.163:59388.service: Deactivated successfully. Sep 12 17:12:15.635914 systemd[1]: session-23.scope: Deactivated successfully. Sep 12 17:12:15.639276 systemd-logind[1992]: Session 23 logged out. Waiting for processes to exit. Sep 12 17:12:15.641521 systemd-logind[1992]: Removed session 23. Sep 12 17:12:20.665203 systemd[1]: Started sshd@24-172.31.21.20:22-147.75.109.163:54252.service - OpenSSH per-connection server daemon (147.75.109.163:54252). Sep 12 17:12:20.844310 sshd[4994]: Accepted publickey for core from 147.75.109.163 port 54252 ssh2: RSA SHA256:MtueCMCElgMFpvQGHABlOh1LdmyEE9d8eacHhUBhK34 Sep 12 17:12:20.846944 sshd[4994]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:12:20.855081 systemd-logind[1992]: New session 24 of user core. Sep 12 17:12:20.860956 systemd[1]: Started session-24.scope - Session 24 of User core. Sep 12 17:12:21.101097 sshd[4994]: pam_unix(sshd:session): session closed for user core Sep 12 17:12:21.106821 systemd[1]: sshd@24-172.31.21.20:22-147.75.109.163:54252.service: Deactivated successfully. Sep 12 17:12:21.109987 systemd[1]: session-24.scope: Deactivated successfully. Sep 12 17:12:21.111300 systemd-logind[1992]: Session 24 logged out. Waiting for processes to exit. Sep 12 17:12:21.113908 systemd-logind[1992]: Removed session 24. Sep 12 17:12:26.143285 systemd[1]: Started sshd@25-172.31.21.20:22-147.75.109.163:54254.service - OpenSSH per-connection server daemon (147.75.109.163:54254). 
Sep 12 17:12:26.313558 sshd[5010]: Accepted publickey for core from 147.75.109.163 port 54254 ssh2: RSA SHA256:MtueCMCElgMFpvQGHABlOh1LdmyEE9d8eacHhUBhK34 Sep 12 17:12:26.317346 sshd[5010]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:12:26.325883 systemd-logind[1992]: New session 25 of user core. Sep 12 17:12:26.335370 systemd[1]: Started session-25.scope - Session 25 of User core. Sep 12 17:12:26.570018 sshd[5010]: pam_unix(sshd:session): session closed for user core Sep 12 17:12:26.576900 systemd-logind[1992]: Session 25 logged out. Waiting for processes to exit. Sep 12 17:12:26.578318 systemd[1]: sshd@25-172.31.21.20:22-147.75.109.163:54254.service: Deactivated successfully. Sep 12 17:12:26.584357 systemd[1]: session-25.scope: Deactivated successfully. Sep 12 17:12:26.587228 systemd-logind[1992]: Removed session 25. Sep 12 17:12:31.613151 systemd[1]: Started sshd@26-172.31.21.20:22-147.75.109.163:38444.service - OpenSSH per-connection server daemon (147.75.109.163:38444). Sep 12 17:12:31.790878 sshd[5023]: Accepted publickey for core from 147.75.109.163 port 38444 ssh2: RSA SHA256:MtueCMCElgMFpvQGHABlOh1LdmyEE9d8eacHhUBhK34 Sep 12 17:12:31.793539 sshd[5023]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:12:31.801011 systemd-logind[1992]: New session 26 of user core. Sep 12 17:12:31.806918 systemd[1]: Started session-26.scope - Session 26 of User core. Sep 12 17:12:32.044873 sshd[5023]: pam_unix(sshd:session): session closed for user core Sep 12 17:12:32.051094 systemd[1]: sshd@26-172.31.21.20:22-147.75.109.163:38444.service: Deactivated successfully. Sep 12 17:12:32.056083 systemd[1]: session-26.scope: Deactivated successfully. Sep 12 17:12:32.058788 systemd-logind[1992]: Session 26 logged out. Waiting for processes to exit. Sep 12 17:12:32.063156 systemd-logind[1992]: Removed session 26. Sep 12 17:12:37.087178 systemd[1]: Started sshd@27-172.31.21.20:22-147.75.109.163:38454.service - OpenSSH per-connection server daemon (147.75.109.163:38454). Sep 12 17:12:37.262206 sshd[5038]: Accepted publickey for core from 147.75.109.163 port 38454 ssh2: RSA SHA256:MtueCMCElgMFpvQGHABlOh1LdmyEE9d8eacHhUBhK34 Sep 12 17:12:37.264845 sshd[5038]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:12:37.272599 systemd-logind[1992]: New session 27 of user core. Sep 12 17:12:37.286033 systemd[1]: Started session-27.scope - Session 27 of User core. Sep 12 17:12:37.521289 sshd[5038]: pam_unix(sshd:session): session closed for user core Sep 12 17:12:37.529964 systemd[1]: sshd@27-172.31.21.20:22-147.75.109.163:38454.service: Deactivated successfully. Sep 12 17:12:37.535301 systemd[1]: session-27.scope: Deactivated successfully. Sep 12 17:12:37.538968 systemd-logind[1992]: Session 27 logged out. Waiting for processes to exit. Sep 12 17:12:37.541336 systemd-logind[1992]: Removed session 27. Sep 12 17:12:37.564183 systemd[1]: Started sshd@28-172.31.21.20:22-147.75.109.163:38470.service - OpenSSH per-connection server daemon (147.75.109.163:38470). Sep 12 17:12:37.733977 sshd[5051]: Accepted publickey for core from 147.75.109.163 port 38470 ssh2: RSA SHA256:MtueCMCElgMFpvQGHABlOh1LdmyEE9d8eacHhUBhK34 Sep 12 17:12:37.736877 sshd[5051]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:12:37.746118 systemd-logind[1992]: New session 28 of user core. Sep 12 17:12:37.752965 systemd[1]: Started session-28.scope - Session 28 of User core. 
Sep 12 17:12:41.038472 containerd[2010]: time="2025-09-12T17:12:41.036860947Z" level=info msg="StopContainer for \"6bddba0a74f5d41404ebe3f0f7a48b3b6970a022f78c42429de1b96fc2e7a077\" with timeout 30 (s)" Sep 12 17:12:41.041284 containerd[2010]: time="2025-09-12T17:12:41.041163043Z" level=info msg="Stop container \"6bddba0a74f5d41404ebe3f0f7a48b3b6970a022f78c42429de1b96fc2e7a077\" with signal terminated" Sep 12 17:12:41.072113 systemd[1]: cri-containerd-6bddba0a74f5d41404ebe3f0f7a48b3b6970a022f78c42429de1b96fc2e7a077.scope: Deactivated successfully. Sep 12 17:12:41.095983 containerd[2010]: time="2025-09-12T17:12:41.095899963Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 12 17:12:41.113395 containerd[2010]: time="2025-09-12T17:12:41.113279299Z" level=info msg="StopContainer for \"fe769e720b8d4590cba58b221bc10fddd93f9b3801ff5cddc0c40d2e7b81d544\" with timeout 2 (s)" Sep 12 17:12:41.115271 containerd[2010]: time="2025-09-12T17:12:41.115211419Z" level=info msg="Stop container \"fe769e720b8d4590cba58b221bc10fddd93f9b3801ff5cddc0c40d2e7b81d544\" with signal terminated" Sep 12 17:12:41.135756 systemd-networkd[1932]: lxc_health: Link DOWN Sep 12 17:12:41.135780 systemd-networkd[1932]: lxc_health: Lost carrier Sep 12 17:12:41.138189 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6bddba0a74f5d41404ebe3f0f7a48b3b6970a022f78c42429de1b96fc2e7a077-rootfs.mount: Deactivated successfully. Sep 12 17:12:41.150620 containerd[2010]: time="2025-09-12T17:12:41.150089444Z" level=info msg="shim disconnected" id=6bddba0a74f5d41404ebe3f0f7a48b3b6970a022f78c42429de1b96fc2e7a077 namespace=k8s.io Sep 12 17:12:41.150620 containerd[2010]: time="2025-09-12T17:12:41.150218372Z" level=warning msg="cleaning up after shim disconnected" id=6bddba0a74f5d41404ebe3f0f7a48b3b6970a022f78c42429de1b96fc2e7a077 namespace=k8s.io Sep 12 17:12:41.150620 containerd[2010]: time="2025-09-12T17:12:41.150276596Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 12 17:12:41.172778 systemd[1]: cri-containerd-fe769e720b8d4590cba58b221bc10fddd93f9b3801ff5cddc0c40d2e7b81d544.scope: Deactivated successfully. Sep 12 17:12:41.173247 systemd[1]: cri-containerd-fe769e720b8d4590cba58b221bc10fddd93f9b3801ff5cddc0c40d2e7b81d544.scope: Consumed 14.416s CPU time. Sep 12 17:12:41.202106 containerd[2010]: time="2025-09-12T17:12:41.201893144Z" level=info msg="StopContainer for \"6bddba0a74f5d41404ebe3f0f7a48b3b6970a022f78c42429de1b96fc2e7a077\" returns successfully" Sep 12 17:12:41.203857 containerd[2010]: time="2025-09-12T17:12:41.203778464Z" level=info msg="StopPodSandbox for \"74cd62a4e09ebef748cca984b0667e41e7533a27bcc1bcd65ef1cb91f5fe5f97\"" Sep 12 17:12:41.204230 containerd[2010]: time="2025-09-12T17:12:41.204049316Z" level=info msg="Container to stop \"6bddba0a74f5d41404ebe3f0f7a48b3b6970a022f78c42429de1b96fc2e7a077\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 12 17:12:41.208389 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-74cd62a4e09ebef748cca984b0667e41e7533a27bcc1bcd65ef1cb91f5fe5f97-shm.mount: Deactivated successfully. Sep 12 17:12:41.228182 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fe769e720b8d4590cba58b221bc10fddd93f9b3801ff5cddc0c40d2e7b81d544-rootfs.mount: Deactivated successfully. 
Sep 12 17:12:41.232521 systemd[1]: cri-containerd-74cd62a4e09ebef748cca984b0667e41e7533a27bcc1bcd65ef1cb91f5fe5f97.scope: Deactivated successfully. Sep 12 17:12:41.238284 containerd[2010]: time="2025-09-12T17:12:41.237826328Z" level=info msg="shim disconnected" id=fe769e720b8d4590cba58b221bc10fddd93f9b3801ff5cddc0c40d2e7b81d544 namespace=k8s.io Sep 12 17:12:41.238284 containerd[2010]: time="2025-09-12T17:12:41.238035056Z" level=warning msg="cleaning up after shim disconnected" id=fe769e720b8d4590cba58b221bc10fddd93f9b3801ff5cddc0c40d2e7b81d544 namespace=k8s.io Sep 12 17:12:41.238284 containerd[2010]: time="2025-09-12T17:12:41.238057916Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 12 17:12:41.288893 containerd[2010]: time="2025-09-12T17:12:41.288608840Z" level=info msg="StopContainer for \"fe769e720b8d4590cba58b221bc10fddd93f9b3801ff5cddc0c40d2e7b81d544\" returns successfully" Sep 12 17:12:41.290800 containerd[2010]: time="2025-09-12T17:12:41.290717936Z" level=info msg="StopPodSandbox for \"16ff01a140949922da85be5955916ce86ee8cd0e240f48641db20471938910fb\"" Sep 12 17:12:41.290985 containerd[2010]: time="2025-09-12T17:12:41.290800520Z" level=info msg="Container to stop \"a32951e6a89e957a38ee0e92c0f5b02c5f3a5e75b9a30912d08e389216beaab9\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 12 17:12:41.290985 containerd[2010]: time="2025-09-12T17:12:41.290833724Z" level=info msg="Container to stop \"c4b9e3839d1896158b9ff5a2ca6e4f35c1c895ac48ec91fff6501bb080770080\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 12 17:12:41.290985 containerd[2010]: time="2025-09-12T17:12:41.290858396Z" level=info msg="Container to stop \"f3fb432848775ff44fe46cf7d2cf1aaf69a9b8b890d8f3040401646e09a6d1cf\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 12 17:12:41.290985 containerd[2010]: time="2025-09-12T17:12:41.290882612Z" level=info msg="Container to stop \"3cadadf2a357ff37e64940999843cf89e7c411d122edcd8a8481d13c8ba0bc2c\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 12 17:12:41.290985 containerd[2010]: time="2025-09-12T17:12:41.290906012Z" level=info msg="Container to stop \"fe769e720b8d4590cba58b221bc10fddd93f9b3801ff5cddc0c40d2e7b81d544\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 12 17:12:41.299976 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-16ff01a140949922da85be5955916ce86ee8cd0e240f48641db20471938910fb-shm.mount: Deactivated successfully. Sep 12 17:12:41.309106 systemd[1]: cri-containerd-16ff01a140949922da85be5955916ce86ee8cd0e240f48641db20471938910fb.scope: Deactivated successfully. 
Sep 12 17:12:41.321251 containerd[2010]: time="2025-09-12T17:12:41.320935772Z" level=info msg="shim disconnected" id=74cd62a4e09ebef748cca984b0667e41e7533a27bcc1bcd65ef1cb91f5fe5f97 namespace=k8s.io Sep 12 17:12:41.321251 containerd[2010]: time="2025-09-12T17:12:41.321009752Z" level=warning msg="cleaning up after shim disconnected" id=74cd62a4e09ebef748cca984b0667e41e7533a27bcc1bcd65ef1cb91f5fe5f97 namespace=k8s.io Sep 12 17:12:41.321251 containerd[2010]: time="2025-09-12T17:12:41.321030404Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 12 17:12:41.361357 containerd[2010]: time="2025-09-12T17:12:41.361144761Z" level=info msg="TearDown network for sandbox \"74cd62a4e09ebef748cca984b0667e41e7533a27bcc1bcd65ef1cb91f5fe5f97\" successfully" Sep 12 17:12:41.361357 containerd[2010]: time="2025-09-12T17:12:41.361193037Z" level=info msg="StopPodSandbox for \"74cd62a4e09ebef748cca984b0667e41e7533a27bcc1bcd65ef1cb91f5fe5f97\" returns successfully" Sep 12 17:12:41.385987 containerd[2010]: time="2025-09-12T17:12:41.385628145Z" level=info msg="shim disconnected" id=16ff01a140949922da85be5955916ce86ee8cd0e240f48641db20471938910fb namespace=k8s.io Sep 12 17:12:41.385987 containerd[2010]: time="2025-09-12T17:12:41.385744917Z" level=warning msg="cleaning up after shim disconnected" id=16ff01a140949922da85be5955916ce86ee8cd0e240f48641db20471938910fb namespace=k8s.io Sep 12 17:12:41.385987 containerd[2010]: time="2025-09-12T17:12:41.385765761Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 12 17:12:41.397307 kubelet[3431]: I0912 17:12:41.396139 3431 scope.go:117] "RemoveContainer" containerID="6bddba0a74f5d41404ebe3f0f7a48b3b6970a022f78c42429de1b96fc2e7a077" Sep 12 17:12:41.405210 containerd[2010]: time="2025-09-12T17:12:41.404870253Z" level=info msg="RemoveContainer for \"6bddba0a74f5d41404ebe3f0f7a48b3b6970a022f78c42429de1b96fc2e7a077\"" Sep 12 17:12:41.413466 containerd[2010]: time="2025-09-12T17:12:41.413399745Z" level=info msg="RemoveContainer for \"6bddba0a74f5d41404ebe3f0f7a48b3b6970a022f78c42429de1b96fc2e7a077\" returns successfully" Sep 12 17:12:41.414348 kubelet[3431]: I0912 17:12:41.413952 3431 scope.go:117] "RemoveContainer" containerID="6bddba0a74f5d41404ebe3f0f7a48b3b6970a022f78c42429de1b96fc2e7a077" Sep 12 17:12:41.415831 containerd[2010]: time="2025-09-12T17:12:41.415757433Z" level=error msg="ContainerStatus for \"6bddba0a74f5d41404ebe3f0f7a48b3b6970a022f78c42429de1b96fc2e7a077\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"6bddba0a74f5d41404ebe3f0f7a48b3b6970a022f78c42429de1b96fc2e7a077\": not found" Sep 12 17:12:41.416156 kubelet[3431]: E0912 17:12:41.416039 3431 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"6bddba0a74f5d41404ebe3f0f7a48b3b6970a022f78c42429de1b96fc2e7a077\": not found" containerID="6bddba0a74f5d41404ebe3f0f7a48b3b6970a022f78c42429de1b96fc2e7a077" Sep 12 17:12:41.416240 kubelet[3431]: I0912 17:12:41.416092 3431 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"6bddba0a74f5d41404ebe3f0f7a48b3b6970a022f78c42429de1b96fc2e7a077"} err="failed to get container status \"6bddba0a74f5d41404ebe3f0f7a48b3b6970a022f78c42429de1b96fc2e7a077\": rpc error: code = NotFound desc = an error occurred when try to find container \"6bddba0a74f5d41404ebe3f0f7a48b3b6970a022f78c42429de1b96fc2e7a077\": not found" Sep 12 17:12:41.431696 containerd[2010]: 
time="2025-09-12T17:12:41.431442513Z" level=info msg="TearDown network for sandbox \"16ff01a140949922da85be5955916ce86ee8cd0e240f48641db20471938910fb\" successfully" Sep 12 17:12:41.431696 containerd[2010]: time="2025-09-12T17:12:41.431525361Z" level=info msg="StopPodSandbox for \"16ff01a140949922da85be5955916ce86ee8cd0e240f48641db20471938910fb\" returns successfully" Sep 12 17:12:41.464708 kubelet[3431]: I0912 17:12:41.464476 3431 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qdc4x\" (UniqueName: \"kubernetes.io/projected/a4f1da62-a0ef-4269-be79-dbfb68c0d382-kube-api-access-qdc4x\") pod \"a4f1da62-a0ef-4269-be79-dbfb68c0d382\" (UID: \"a4f1da62-a0ef-4269-be79-dbfb68c0d382\") " Sep 12 17:12:41.464708 kubelet[3431]: I0912 17:12:41.464566 3431 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a4f1da62-a0ef-4269-be79-dbfb68c0d382-cilium-config-path\") pod \"a4f1da62-a0ef-4269-be79-dbfb68c0d382\" (UID: \"a4f1da62-a0ef-4269-be79-dbfb68c0d382\") " Sep 12 17:12:41.477165 kubelet[3431]: I0912 17:12:41.476643 3431 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a4f1da62-a0ef-4269-be79-dbfb68c0d382-kube-api-access-qdc4x" (OuterVolumeSpecName: "kube-api-access-qdc4x") pod "a4f1da62-a0ef-4269-be79-dbfb68c0d382" (UID: "a4f1da62-a0ef-4269-be79-dbfb68c0d382"). InnerVolumeSpecName "kube-api-access-qdc4x". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 12 17:12:41.477671 kubelet[3431]: I0912 17:12:41.477597 3431 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a4f1da62-a0ef-4269-be79-dbfb68c0d382-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "a4f1da62-a0ef-4269-be79-dbfb68c0d382" (UID: "a4f1da62-a0ef-4269-be79-dbfb68c0d382"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 12 17:12:41.566220 kubelet[3431]: I0912 17:12:41.564953 3431 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/fd95a4b9-bd1d-4c82-b815-853f0badd776-cni-path\") pod \"fd95a4b9-bd1d-4c82-b815-853f0badd776\" (UID: \"fd95a4b9-bd1d-4c82-b815-853f0badd776\") " Sep 12 17:12:41.566220 kubelet[3431]: I0912 17:12:41.565026 3431 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/fd95a4b9-bd1d-4c82-b815-853f0badd776-cilium-config-path\") pod \"fd95a4b9-bd1d-4c82-b815-853f0badd776\" (UID: \"fd95a4b9-bd1d-4c82-b815-853f0badd776\") " Sep 12 17:12:41.566220 kubelet[3431]: I0912 17:12:41.565061 3431 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/fd95a4b9-bd1d-4c82-b815-853f0badd776-host-proc-sys-kernel\") pod \"fd95a4b9-bd1d-4c82-b815-853f0badd776\" (UID: \"fd95a4b9-bd1d-4c82-b815-853f0badd776\") " Sep 12 17:12:41.566220 kubelet[3431]: I0912 17:12:41.565102 3431 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/fd95a4b9-bd1d-4c82-b815-853f0badd776-etc-cni-netd\") pod \"fd95a4b9-bd1d-4c82-b815-853f0badd776\" (UID: \"fd95a4b9-bd1d-4c82-b815-853f0badd776\") " Sep 12 17:12:41.566220 kubelet[3431]: I0912 17:12:41.565138 3431 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fd95a4b9-bd1d-4c82-b815-853f0badd776-xtables-lock\") pod \"fd95a4b9-bd1d-4c82-b815-853f0badd776\" (UID: \"fd95a4b9-bd1d-4c82-b815-853f0badd776\") " Sep 12 17:12:41.566220 kubelet[3431]: I0912 17:12:41.565173 3431 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/fd95a4b9-bd1d-4c82-b815-853f0badd776-host-proc-sys-net\") pod \"fd95a4b9-bd1d-4c82-b815-853f0badd776\" (UID: \"fd95a4b9-bd1d-4c82-b815-853f0badd776\") " Sep 12 17:12:41.566875 kubelet[3431]: I0912 17:12:41.565207 3431 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fd95a4b9-bd1d-4c82-b815-853f0badd776-lib-modules\") pod \"fd95a4b9-bd1d-4c82-b815-853f0badd776\" (UID: \"fd95a4b9-bd1d-4c82-b815-853f0badd776\") " Sep 12 17:12:41.566875 kubelet[3431]: I0912 17:12:41.565247 3431 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/fd95a4b9-bd1d-4c82-b815-853f0badd776-clustermesh-secrets\") pod \"fd95a4b9-bd1d-4c82-b815-853f0badd776\" (UID: \"fd95a4b9-bd1d-4c82-b815-853f0badd776\") " Sep 12 17:12:41.566875 kubelet[3431]: I0912 17:12:41.565287 3431 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/fd95a4b9-bd1d-4c82-b815-853f0badd776-cilium-cgroup\") pod \"fd95a4b9-bd1d-4c82-b815-853f0badd776\" (UID: \"fd95a4b9-bd1d-4c82-b815-853f0badd776\") " Sep 12 17:12:41.566875 kubelet[3431]: I0912 17:12:41.565320 3431 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/fd95a4b9-bd1d-4c82-b815-853f0badd776-cilium-run\") pod \"fd95a4b9-bd1d-4c82-b815-853f0badd776\" (UID: 
\"fd95a4b9-bd1d-4c82-b815-853f0badd776\") " Sep 12 17:12:41.566875 kubelet[3431]: I0912 17:12:41.565358 3431 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/fd95a4b9-bd1d-4c82-b815-853f0badd776-hubble-tls\") pod \"fd95a4b9-bd1d-4c82-b815-853f0badd776\" (UID: \"fd95a4b9-bd1d-4c82-b815-853f0badd776\") " Sep 12 17:12:41.566875 kubelet[3431]: I0912 17:12:41.565391 3431 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/fd95a4b9-bd1d-4c82-b815-853f0badd776-bpf-maps\") pod \"fd95a4b9-bd1d-4c82-b815-853f0badd776\" (UID: \"fd95a4b9-bd1d-4c82-b815-853f0badd776\") " Sep 12 17:12:41.567188 kubelet[3431]: I0912 17:12:41.565425 3431 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/fd95a4b9-bd1d-4c82-b815-853f0badd776-hostproc\") pod \"fd95a4b9-bd1d-4c82-b815-853f0badd776\" (UID: \"fd95a4b9-bd1d-4c82-b815-853f0badd776\") " Sep 12 17:12:41.567188 kubelet[3431]: I0912 17:12:41.565462 3431 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8hx68\" (UniqueName: \"kubernetes.io/projected/fd95a4b9-bd1d-4c82-b815-853f0badd776-kube-api-access-8hx68\") pod \"fd95a4b9-bd1d-4c82-b815-853f0badd776\" (UID: \"fd95a4b9-bd1d-4c82-b815-853f0badd776\") " Sep 12 17:12:41.567188 kubelet[3431]: I0912 17:12:41.565529 3431 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a4f1da62-a0ef-4269-be79-dbfb68c0d382-cilium-config-path\") on node \"ip-172-31-21-20\" DevicePath \"\"" Sep 12 17:12:41.567188 kubelet[3431]: I0912 17:12:41.565557 3431 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-qdc4x\" (UniqueName: \"kubernetes.io/projected/a4f1da62-a0ef-4269-be79-dbfb68c0d382-kube-api-access-qdc4x\") on node \"ip-172-31-21-20\" DevicePath \"\"" Sep 12 17:12:41.568027 kubelet[3431]: I0912 17:12:41.567976 3431 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fd95a4b9-bd1d-4c82-b815-853f0badd776-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "fd95a4b9-bd1d-4c82-b815-853f0badd776" (UID: "fd95a4b9-bd1d-4c82-b815-853f0badd776"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 12 17:12:41.572684 kubelet[3431]: I0912 17:12:41.569740 3431 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fd95a4b9-bd1d-4c82-b815-853f0badd776-cni-path" (OuterVolumeSpecName: "cni-path") pod "fd95a4b9-bd1d-4c82-b815-853f0badd776" (UID: "fd95a4b9-bd1d-4c82-b815-853f0badd776"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 12 17:12:41.574752 kubelet[3431]: I0912 17:12:41.572481 3431 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fd95a4b9-bd1d-4c82-b815-853f0badd776-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "fd95a4b9-bd1d-4c82-b815-853f0badd776" (UID: "fd95a4b9-bd1d-4c82-b815-853f0badd776"). InnerVolumeSpecName "host-proc-sys-kernel". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 12 17:12:41.574911 kubelet[3431]: I0912 17:12:41.572519 3431 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fd95a4b9-bd1d-4c82-b815-853f0badd776-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "fd95a4b9-bd1d-4c82-b815-853f0badd776" (UID: "fd95a4b9-bd1d-4c82-b815-853f0badd776"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 12 17:12:41.575051 kubelet[3431]: I0912 17:12:41.572545 3431 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fd95a4b9-bd1d-4c82-b815-853f0badd776-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "fd95a4b9-bd1d-4c82-b815-853f0badd776" (UID: "fd95a4b9-bd1d-4c82-b815-853f0badd776"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 12 17:12:41.575252 kubelet[3431]: I0912 17:12:41.572570 3431 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fd95a4b9-bd1d-4c82-b815-853f0badd776-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "fd95a4b9-bd1d-4c82-b815-853f0badd776" (UID: "fd95a4b9-bd1d-4c82-b815-853f0badd776"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 12 17:12:41.575252 kubelet[3431]: I0912 17:12:41.573097 3431 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fd95a4b9-bd1d-4c82-b815-853f0badd776-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "fd95a4b9-bd1d-4c82-b815-853f0badd776" (UID: "fd95a4b9-bd1d-4c82-b815-853f0badd776"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 12 17:12:41.575252 kubelet[3431]: I0912 17:12:41.573156 3431 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fd95a4b9-bd1d-4c82-b815-853f0badd776-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "fd95a4b9-bd1d-4c82-b815-853f0badd776" (UID: "fd95a4b9-bd1d-4c82-b815-853f0badd776"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 12 17:12:41.575252 kubelet[3431]: I0912 17:12:41.574309 3431 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fd95a4b9-bd1d-4c82-b815-853f0badd776-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "fd95a4b9-bd1d-4c82-b815-853f0badd776" (UID: "fd95a4b9-bd1d-4c82-b815-853f0badd776"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 12 17:12:41.575252 kubelet[3431]: I0912 17:12:41.574346 3431 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fd95a4b9-bd1d-4c82-b815-853f0badd776-hostproc" (OuterVolumeSpecName: "hostproc") pod "fd95a4b9-bd1d-4c82-b815-853f0badd776" (UID: "fd95a4b9-bd1d-4c82-b815-853f0badd776"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 12 17:12:41.576769 kubelet[3431]: I0912 17:12:41.576381 3431 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fd95a4b9-bd1d-4c82-b815-853f0badd776-kube-api-access-8hx68" (OuterVolumeSpecName: "kube-api-access-8hx68") pod "fd95a4b9-bd1d-4c82-b815-853f0badd776" (UID: "fd95a4b9-bd1d-4c82-b815-853f0badd776"). InnerVolumeSpecName "kube-api-access-8hx68". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 12 17:12:41.577992 kubelet[3431]: I0912 17:12:41.577949 3431 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fd95a4b9-bd1d-4c82-b815-853f0badd776-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "fd95a4b9-bd1d-4c82-b815-853f0badd776" (UID: "fd95a4b9-bd1d-4c82-b815-853f0badd776"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 12 17:12:41.582105 kubelet[3431]: I0912 17:12:41.582005 3431 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fd95a4b9-bd1d-4c82-b815-853f0badd776-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "fd95a4b9-bd1d-4c82-b815-853f0badd776" (UID: "fd95a4b9-bd1d-4c82-b815-853f0badd776"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 12 17:12:41.582387 kubelet[3431]: I0912 17:12:41.582357 3431 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd95a4b9-bd1d-4c82-b815-853f0badd776-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "fd95a4b9-bd1d-4c82-b815-853f0badd776" (UID: "fd95a4b9-bd1d-4c82-b815-853f0badd776"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Sep 12 17:12:41.666480 kubelet[3431]: I0912 17:12:41.666437 3431 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/fd95a4b9-bd1d-4c82-b815-853f0badd776-clustermesh-secrets\") on node \"ip-172-31-21-20\" DevicePath \"\"" Sep 12 17:12:41.666691 kubelet[3431]: I0912 17:12:41.666640 3431 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/fd95a4b9-bd1d-4c82-b815-853f0badd776-cilium-cgroup\") on node \"ip-172-31-21-20\" DevicePath \"\"" Sep 12 17:12:41.667081 kubelet[3431]: I0912 17:12:41.666808 3431 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/fd95a4b9-bd1d-4c82-b815-853f0badd776-cilium-run\") on node \"ip-172-31-21-20\" DevicePath \"\"" Sep 12 17:12:41.667081 kubelet[3431]: I0912 17:12:41.666837 3431 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/fd95a4b9-bd1d-4c82-b815-853f0badd776-hubble-tls\") on node \"ip-172-31-21-20\" DevicePath \"\"" Sep 12 17:12:41.667081 kubelet[3431]: I0912 17:12:41.666858 3431 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/fd95a4b9-bd1d-4c82-b815-853f0badd776-bpf-maps\") on node \"ip-172-31-21-20\" DevicePath \"\"" Sep 12 17:12:41.667081 kubelet[3431]: I0912 17:12:41.666882 3431 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-8hx68\" (UniqueName: \"kubernetes.io/projected/fd95a4b9-bd1d-4c82-b815-853f0badd776-kube-api-access-8hx68\") on node \"ip-172-31-21-20\" DevicePath \"\"" Sep 12 17:12:41.667081 kubelet[3431]: I0912 17:12:41.666904 3431 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/fd95a4b9-bd1d-4c82-b815-853f0badd776-hostproc\") on node \"ip-172-31-21-20\" DevicePath \"\"" Sep 12 17:12:41.667081 kubelet[3431]: I0912 17:12:41.666928 3431 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/fd95a4b9-bd1d-4c82-b815-853f0badd776-cilium-config-path\") on node 
\"ip-172-31-21-20\" DevicePath \"\"" Sep 12 17:12:41.667081 kubelet[3431]: I0912 17:12:41.666951 3431 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/fd95a4b9-bd1d-4c82-b815-853f0badd776-host-proc-sys-kernel\") on node \"ip-172-31-21-20\" DevicePath \"\"" Sep 12 17:12:41.667081 kubelet[3431]: I0912 17:12:41.666971 3431 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/fd95a4b9-bd1d-4c82-b815-853f0badd776-cni-path\") on node \"ip-172-31-21-20\" DevicePath \"\"" Sep 12 17:12:41.667591 kubelet[3431]: I0912 17:12:41.666991 3431 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/fd95a4b9-bd1d-4c82-b815-853f0badd776-etc-cni-netd\") on node \"ip-172-31-21-20\" DevicePath \"\"" Sep 12 17:12:41.667591 kubelet[3431]: I0912 17:12:41.667011 3431 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fd95a4b9-bd1d-4c82-b815-853f0badd776-xtables-lock\") on node \"ip-172-31-21-20\" DevicePath \"\"" Sep 12 17:12:41.667591 kubelet[3431]: I0912 17:12:41.667032 3431 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/fd95a4b9-bd1d-4c82-b815-853f0badd776-host-proc-sys-net\") on node \"ip-172-31-21-20\" DevicePath \"\"" Sep 12 17:12:41.667591 kubelet[3431]: I0912 17:12:41.667053 3431 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fd95a4b9-bd1d-4c82-b815-853f0badd776-lib-modules\") on node \"ip-172-31-21-20\" DevicePath \"\"" Sep 12 17:12:41.705937 systemd[1]: Removed slice kubepods-besteffort-poda4f1da62_a0ef_4269_be79_dbfb68c0d382.slice - libcontainer container kubepods-besteffort-poda4f1da62_a0ef_4269_be79_dbfb68c0d382.slice. Sep 12 17:12:41.941333 kubelet[3431]: I0912 17:12:41.938131 3431 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a4f1da62-a0ef-4269-be79-dbfb68c0d382" path="/var/lib/kubelet/pods/a4f1da62-a0ef-4269-be79-dbfb68c0d382/volumes" Sep 12 17:12:41.949811 systemd[1]: Removed slice kubepods-burstable-podfd95a4b9_bd1d_4c82_b815_853f0badd776.slice - libcontainer container kubepods-burstable-podfd95a4b9_bd1d_4c82_b815_853f0badd776.slice. Sep 12 17:12:41.950380 systemd[1]: kubepods-burstable-podfd95a4b9_bd1d_4c82_b815_853f0badd776.slice: Consumed 14.574s CPU time. Sep 12 17:12:42.060769 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-16ff01a140949922da85be5955916ce86ee8cd0e240f48641db20471938910fb-rootfs.mount: Deactivated successfully. Sep 12 17:12:42.060959 systemd[1]: var-lib-kubelet-pods-fd95a4b9\x2dbd1d\x2d4c82\x2db815\x2d853f0badd776-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d8hx68.mount: Deactivated successfully. Sep 12 17:12:42.061104 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-74cd62a4e09ebef748cca984b0667e41e7533a27bcc1bcd65ef1cb91f5fe5f97-rootfs.mount: Deactivated successfully. Sep 12 17:12:42.061236 systemd[1]: var-lib-kubelet-pods-a4f1da62\x2da0ef\x2d4269\x2dbe79\x2ddbfb68c0d382-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dqdc4x.mount: Deactivated successfully. Sep 12 17:12:42.061378 systemd[1]: var-lib-kubelet-pods-fd95a4b9\x2dbd1d\x2d4c82\x2db815\x2d853f0badd776-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. 
Sep 12 17:12:42.061518 systemd[1]: var-lib-kubelet-pods-fd95a4b9\x2dbd1d\x2d4c82\x2db815\x2d853f0badd776-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Sep 12 17:12:42.404836 kubelet[3431]: I0912 17:12:42.404802 3431 scope.go:117] "RemoveContainer" containerID="fe769e720b8d4590cba58b221bc10fddd93f9b3801ff5cddc0c40d2e7b81d544" Sep 12 17:12:42.412299 containerd[2010]: time="2025-09-12T17:12:42.412123678Z" level=info msg="RemoveContainer for \"fe769e720b8d4590cba58b221bc10fddd93f9b3801ff5cddc0c40d2e7b81d544\"" Sep 12 17:12:42.419187 containerd[2010]: time="2025-09-12T17:12:42.419124610Z" level=info msg="RemoveContainer for \"fe769e720b8d4590cba58b221bc10fddd93f9b3801ff5cddc0c40d2e7b81d544\" returns successfully" Sep 12 17:12:42.420566 kubelet[3431]: I0912 17:12:42.420385 3431 scope.go:117] "RemoveContainer" containerID="c4b9e3839d1896158b9ff5a2ca6e4f35c1c895ac48ec91fff6501bb080770080" Sep 12 17:12:42.424873 containerd[2010]: time="2025-09-12T17:12:42.423445510Z" level=info msg="RemoveContainer for \"c4b9e3839d1896158b9ff5a2ca6e4f35c1c895ac48ec91fff6501bb080770080\"" Sep 12 17:12:42.427471 containerd[2010]: time="2025-09-12T17:12:42.427372870Z" level=info msg="RemoveContainer for \"c4b9e3839d1896158b9ff5a2ca6e4f35c1c895ac48ec91fff6501bb080770080\" returns successfully" Sep 12 17:12:42.428248 kubelet[3431]: I0912 17:12:42.427789 3431 scope.go:117] "RemoveContainer" containerID="a32951e6a89e957a38ee0e92c0f5b02c5f3a5e75b9a30912d08e389216beaab9" Sep 12 17:12:42.430456 containerd[2010]: time="2025-09-12T17:12:42.430404934Z" level=info msg="RemoveContainer for \"a32951e6a89e957a38ee0e92c0f5b02c5f3a5e75b9a30912d08e389216beaab9\"" Sep 12 17:12:42.437334 containerd[2010]: time="2025-09-12T17:12:42.437258650Z" level=info msg="RemoveContainer for \"a32951e6a89e957a38ee0e92c0f5b02c5f3a5e75b9a30912d08e389216beaab9\" returns successfully" Sep 12 17:12:42.440277 kubelet[3431]: I0912 17:12:42.439147 3431 scope.go:117] "RemoveContainer" containerID="3cadadf2a357ff37e64940999843cf89e7c411d122edcd8a8481d13c8ba0bc2c" Sep 12 17:12:42.442302 containerd[2010]: time="2025-09-12T17:12:42.442242670Z" level=info msg="RemoveContainer for \"3cadadf2a357ff37e64940999843cf89e7c411d122edcd8a8481d13c8ba0bc2c\"" Sep 12 17:12:42.449002 containerd[2010]: time="2025-09-12T17:12:42.448933222Z" level=info msg="RemoveContainer for \"3cadadf2a357ff37e64940999843cf89e7c411d122edcd8a8481d13c8ba0bc2c\" returns successfully" Sep 12 17:12:42.449738 kubelet[3431]: I0912 17:12:42.449638 3431 scope.go:117] "RemoveContainer" containerID="f3fb432848775ff44fe46cf7d2cf1aaf69a9b8b890d8f3040401646e09a6d1cf" Sep 12 17:12:42.452285 containerd[2010]: time="2025-09-12T17:12:42.452230786Z" level=info msg="RemoveContainer for \"f3fb432848775ff44fe46cf7d2cf1aaf69a9b8b890d8f3040401646e09a6d1cf\"" Sep 12 17:12:42.457349 containerd[2010]: time="2025-09-12T17:12:42.457162258Z" level=info msg="RemoveContainer for \"f3fb432848775ff44fe46cf7d2cf1aaf69a9b8b890d8f3040401646e09a6d1cf\" returns successfully" Sep 12 17:12:42.962007 sshd[5051]: pam_unix(sshd:session): session closed for user core Sep 12 17:12:42.966897 systemd[1]: sshd@28-172.31.21.20:22-147.75.109.163:38470.service: Deactivated successfully. Sep 12 17:12:42.970386 systemd[1]: session-28.scope: Deactivated successfully. Sep 12 17:12:42.972738 systemd[1]: session-28.scope: Consumed 2.505s CPU time. Sep 12 17:12:42.975476 systemd-logind[1992]: Session 28 logged out. Waiting for processes to exit. Sep 12 17:12:42.977865 systemd-logind[1992]: Removed session 28. 
Sep 12 17:12:43.000225 systemd[1]: Started sshd@29-172.31.21.20:22-147.75.109.163:46670.service - OpenSSH per-connection server daemon (147.75.109.163:46670). Sep 12 17:12:43.175430 sshd[5213]: Accepted publickey for core from 147.75.109.163 port 46670 ssh2: RSA SHA256:MtueCMCElgMFpvQGHABlOh1LdmyEE9d8eacHhUBhK34 Sep 12 17:12:43.178506 sshd[5213]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:12:43.185915 systemd-logind[1992]: New session 29 of user core. Sep 12 17:12:43.197919 systemd[1]: Started session-29.scope - Session 29 of User core. Sep 12 17:12:43.745544 ntpd[1987]: Deleting interface #11 lxc_health, fe80::f484:55ff:feec:bb49%8#123, interface stats: received=0, sent=0, dropped=0, active_time=76 secs Sep 12 17:12:43.746789 ntpd[1987]: 12 Sep 17:12:43 ntpd[1987]: Deleting interface #11 lxc_health, fe80::f484:55ff:feec:bb49%8#123, interface stats: received=0, sent=0, dropped=0, active_time=76 secs Sep 12 17:12:43.935690 kubelet[3431]: I0912 17:12:43.934979 3431 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fd95a4b9-bd1d-4c82-b815-853f0badd776" path="/var/lib/kubelet/pods/fd95a4b9-bd1d-4c82-b815-853f0badd776/volumes" Sep 12 17:12:44.154037 kubelet[3431]: E0912 17:12:44.153965 3431 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Sep 12 17:12:44.587027 sshd[5213]: pam_unix(sshd:session): session closed for user core Sep 12 17:12:44.601582 systemd[1]: sshd@29-172.31.21.20:22-147.75.109.163:46670.service: Deactivated successfully. Sep 12 17:12:44.608083 systemd[1]: session-29.scope: Deactivated successfully. Sep 12 17:12:44.608403 systemd[1]: session-29.scope: Consumed 1.166s CPU time. Sep 12 17:12:44.612103 systemd-logind[1992]: Session 29 logged out. Waiting for processes to exit. Sep 12 17:12:44.640541 systemd[1]: Started sshd@30-172.31.21.20:22-147.75.109.163:46678.service - OpenSSH per-connection server daemon (147.75.109.163:46678). Sep 12 17:12:44.644772 kubelet[3431]: I0912 17:12:44.641598 3431 memory_manager.go:355] "RemoveStaleState removing state" podUID="fd95a4b9-bd1d-4c82-b815-853f0badd776" containerName="cilium-agent" Sep 12 17:12:44.644772 kubelet[3431]: I0912 17:12:44.641632 3431 memory_manager.go:355] "RemoveStaleState removing state" podUID="a4f1da62-a0ef-4269-be79-dbfb68c0d382" containerName="cilium-operator" Sep 12 17:12:44.642446 systemd-logind[1992]: Removed session 29. Sep 12 17:12:44.698268 systemd[1]: Created slice kubepods-burstable-pod947fa14e_2278_4676_a0e0_433136fb76a1.slice - libcontainer container kubepods-burstable-pod947fa14e_2278_4676_a0e0_433136fb76a1.slice. 
Sep 12 17:12:44.790195 kubelet[3431]: I0912 17:12:44.790081 3431 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/947fa14e-2278-4676-a0e0-433136fb76a1-bpf-maps\") pod \"cilium-r4xkg\" (UID: \"947fa14e-2278-4676-a0e0-433136fb76a1\") " pod="kube-system/cilium-r4xkg"
Sep 12 17:12:44.790195 kubelet[3431]: I0912 17:12:44.790153 3431 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/947fa14e-2278-4676-a0e0-433136fb76a1-host-proc-sys-net\") pod \"cilium-r4xkg\" (UID: \"947fa14e-2278-4676-a0e0-433136fb76a1\") " pod="kube-system/cilium-r4xkg"
Sep 12 17:12:44.790195 kubelet[3431]: I0912 17:12:44.790200 3431 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/947fa14e-2278-4676-a0e0-433136fb76a1-hostproc\") pod \"cilium-r4xkg\" (UID: \"947fa14e-2278-4676-a0e0-433136fb76a1\") " pod="kube-system/cilium-r4xkg"
Sep 12 17:12:44.790535 kubelet[3431]: I0912 17:12:44.790237 3431 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/947fa14e-2278-4676-a0e0-433136fb76a1-lib-modules\") pod \"cilium-r4xkg\" (UID: \"947fa14e-2278-4676-a0e0-433136fb76a1\") " pod="kube-system/cilium-r4xkg"
Sep 12 17:12:44.790535 kubelet[3431]: I0912 17:12:44.790272 3431 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/947fa14e-2278-4676-a0e0-433136fb76a1-cilium-ipsec-secrets\") pod \"cilium-r4xkg\" (UID: \"947fa14e-2278-4676-a0e0-433136fb76a1\") " pod="kube-system/cilium-r4xkg"
Sep 12 17:12:44.790535 kubelet[3431]: I0912 17:12:44.790307 3431 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/947fa14e-2278-4676-a0e0-433136fb76a1-host-proc-sys-kernel\") pod \"cilium-r4xkg\" (UID: \"947fa14e-2278-4676-a0e0-433136fb76a1\") " pod="kube-system/cilium-r4xkg"
Sep 12 17:12:44.790535 kubelet[3431]: I0912 17:12:44.790348 3431 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/947fa14e-2278-4676-a0e0-433136fb76a1-etc-cni-netd\") pod \"cilium-r4xkg\" (UID: \"947fa14e-2278-4676-a0e0-433136fb76a1\") " pod="kube-system/cilium-r4xkg"
Sep 12 17:12:44.790535 kubelet[3431]: I0912 17:12:44.790385 3431 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/947fa14e-2278-4676-a0e0-433136fb76a1-cilium-cgroup\") pod \"cilium-r4xkg\" (UID: \"947fa14e-2278-4676-a0e0-433136fb76a1\") " pod="kube-system/cilium-r4xkg"
Sep 12 17:12:44.790847 kubelet[3431]: I0912 17:12:44.790422 3431 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/947fa14e-2278-4676-a0e0-433136fb76a1-cilium-config-path\") pod \"cilium-r4xkg\" (UID: \"947fa14e-2278-4676-a0e0-433136fb76a1\") " pod="kube-system/cilium-r4xkg"
Sep 12 17:12:44.790847 kubelet[3431]: I0912 17:12:44.790455 3431 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/947fa14e-2278-4676-a0e0-433136fb76a1-hubble-tls\") pod \"cilium-r4xkg\" (UID: \"947fa14e-2278-4676-a0e0-433136fb76a1\") " pod="kube-system/cilium-r4xkg"
Sep 12 17:12:44.790847 kubelet[3431]: I0912 17:12:44.790492 3431 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cglvg\" (UniqueName: \"kubernetes.io/projected/947fa14e-2278-4676-a0e0-433136fb76a1-kube-api-access-cglvg\") pod \"cilium-r4xkg\" (UID: \"947fa14e-2278-4676-a0e0-433136fb76a1\") " pod="kube-system/cilium-r4xkg"
Sep 12 17:12:44.790847 kubelet[3431]: I0912 17:12:44.790529 3431 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/947fa14e-2278-4676-a0e0-433136fb76a1-cilium-run\") pod \"cilium-r4xkg\" (UID: \"947fa14e-2278-4676-a0e0-433136fb76a1\") " pod="kube-system/cilium-r4xkg"
Sep 12 17:12:44.790847 kubelet[3431]: I0912 17:12:44.790561 3431 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/947fa14e-2278-4676-a0e0-433136fb76a1-clustermesh-secrets\") pod \"cilium-r4xkg\" (UID: \"947fa14e-2278-4676-a0e0-433136fb76a1\") " pod="kube-system/cilium-r4xkg"
Sep 12 17:12:44.790847 kubelet[3431]: I0912 17:12:44.790604 3431 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/947fa14e-2278-4676-a0e0-433136fb76a1-xtables-lock\") pod \"cilium-r4xkg\" (UID: \"947fa14e-2278-4676-a0e0-433136fb76a1\") " pod="kube-system/cilium-r4xkg"
Sep 12 17:12:44.791153 kubelet[3431]: I0912 17:12:44.790640 3431 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/947fa14e-2278-4676-a0e0-433136fb76a1-cni-path\") pod \"cilium-r4xkg\" (UID: \"947fa14e-2278-4676-a0e0-433136fb76a1\") " pod="kube-system/cilium-r4xkg"
Sep 12 17:12:44.864919 sshd[5225]: Accepted publickey for core from 147.75.109.163 port 46678 ssh2: RSA SHA256:MtueCMCElgMFpvQGHABlOh1LdmyEE9d8eacHhUBhK34
Sep 12 17:12:44.868565 sshd[5225]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:12:44.877222 systemd-logind[1992]: New session 30 of user core.
Sep 12 17:12:44.883929 systemd[1]: Started session-30.scope - Session 30 of User core.
Sep 12 17:12:44.935108 kubelet[3431]: E0912 17:12:44.932703 3431 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-668d6bf9bc-w7lcm" podUID="d13552a1-5417-4f25-b799-49c735e18819"
Sep 12 17:12:45.003348 sshd[5225]: pam_unix(sshd:session): session closed for user core
Sep 12 17:12:45.008115 systemd[1]: sshd@30-172.31.21.20:22-147.75.109.163:46678.service: Deactivated successfully.
Sep 12 17:12:45.012539 systemd[1]: session-30.scope: Deactivated successfully.
Sep 12 17:12:45.015532 systemd-logind[1992]: Session 30 logged out. Waiting for processes to exit.
Sep 12 17:12:45.018376 systemd-logind[1992]: Removed session 30.
Sep 12 17:12:45.032598 containerd[2010]: time="2025-09-12T17:12:45.032509859Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-r4xkg,Uid:947fa14e-2278-4676-a0e0-433136fb76a1,Namespace:kube-system,Attempt:0,}"
Sep 12 17:12:45.049482 systemd[1]: Started sshd@31-172.31.21.20:22-147.75.109.163:46692.service - OpenSSH per-connection server daemon (147.75.109.163:46692).
Sep 12 17:12:45.082682 containerd[2010]: time="2025-09-12T17:12:45.080578055Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 12 17:12:45.082682 containerd[2010]: time="2025-09-12T17:12:45.080738663Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 12 17:12:45.082682 containerd[2010]: time="2025-09-12T17:12:45.080791955Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 12 17:12:45.082682 containerd[2010]: time="2025-09-12T17:12:45.081331055Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 12 17:12:45.122987 systemd[1]: Started cri-containerd-109057877a997b861d8ca023c73025872bf207f0d7b381e0e5fa4c9424744c70.scope - libcontainer container 109057877a997b861d8ca023c73025872bf207f0d7b381e0e5fa4c9424744c70.
Sep 12 17:12:45.176251 containerd[2010]: time="2025-09-12T17:12:45.176160720Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-r4xkg,Uid:947fa14e-2278-4676-a0e0-433136fb76a1,Namespace:kube-system,Attempt:0,} returns sandbox id \"109057877a997b861d8ca023c73025872bf207f0d7b381e0e5fa4c9424744c70\""
Sep 12 17:12:45.183710 containerd[2010]: time="2025-09-12T17:12:45.183607404Z" level=info msg="CreateContainer within sandbox \"109057877a997b861d8ca023c73025872bf207f0d7b381e0e5fa4c9424744c70\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Sep 12 17:12:45.200838 containerd[2010]: time="2025-09-12T17:12:45.200756328Z" level=info msg="CreateContainer within sandbox \"109057877a997b861d8ca023c73025872bf207f0d7b381e0e5fa4c9424744c70\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"e57ffbbfb016e138bb9dca5603a4c10ef57122aba403921b65d726a856ca4926\""
Sep 12 17:12:45.201892 containerd[2010]: time="2025-09-12T17:12:45.201801348Z" level=info msg="StartContainer for \"e57ffbbfb016e138bb9dca5603a4c10ef57122aba403921b65d726a856ca4926\""
Sep 12 17:12:45.247343 systemd[1]: Started cri-containerd-e57ffbbfb016e138bb9dca5603a4c10ef57122aba403921b65d726a856ca4926.scope - libcontainer container e57ffbbfb016e138bb9dca5603a4c10ef57122aba403921b65d726a856ca4926.
Sep 12 17:12:45.256206 sshd[5237]: Accepted publickey for core from 147.75.109.163 port 46692 ssh2: RSA SHA256:MtueCMCElgMFpvQGHABlOh1LdmyEE9d8eacHhUBhK34
Sep 12 17:12:45.259161 sshd[5237]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:12:45.277081 systemd-logind[1992]: New session 31 of user core.
Sep 12 17:12:45.286910 systemd[1]: Started session-31.scope - Session 31 of User core.
Sep 12 17:12:45.331932 containerd[2010]: time="2025-09-12T17:12:45.331484124Z" level=info msg="StartContainer for \"e57ffbbfb016e138bb9dca5603a4c10ef57122aba403921b65d726a856ca4926\" returns successfully"
Sep 12 17:12:45.354418 systemd[1]: cri-containerd-e57ffbbfb016e138bb9dca5603a4c10ef57122aba403921b65d726a856ca4926.scope: Deactivated successfully.
Sep 12 17:12:45.409255 containerd[2010]: time="2025-09-12T17:12:45.408832765Z" level=info msg="shim disconnected" id=e57ffbbfb016e138bb9dca5603a4c10ef57122aba403921b65d726a856ca4926 namespace=k8s.io
Sep 12 17:12:45.409255 containerd[2010]: time="2025-09-12T17:12:45.408943933Z" level=warning msg="cleaning up after shim disconnected" id=e57ffbbfb016e138bb9dca5603a4c10ef57122aba403921b65d726a856ca4926 namespace=k8s.io
Sep 12 17:12:45.409255 containerd[2010]: time="2025-09-12T17:12:45.409000465Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 12 17:12:45.457937 containerd[2010]: time="2025-09-12T17:12:45.457438285Z" level=info msg="CreateContainer within sandbox \"109057877a997b861d8ca023c73025872bf207f0d7b381e0e5fa4c9424744c70\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Sep 12 17:12:45.487954 containerd[2010]: time="2025-09-12T17:12:45.487858057Z" level=info msg="CreateContainer within sandbox \"109057877a997b861d8ca023c73025872bf207f0d7b381e0e5fa4c9424744c70\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"2e0f1de2a1e349ae734eb49e2e540cbcc9532e02b2fff5edb15ba2efa71e8f0f\""
Sep 12 17:12:45.489973 containerd[2010]: time="2025-09-12T17:12:45.489704137Z" level=info msg="StartContainer for \"2e0f1de2a1e349ae734eb49e2e540cbcc9532e02b2fff5edb15ba2efa71e8f0f\""
Sep 12 17:12:45.574889 systemd[1]: Started cri-containerd-2e0f1de2a1e349ae734eb49e2e540cbcc9532e02b2fff5edb15ba2efa71e8f0f.scope - libcontainer container 2e0f1de2a1e349ae734eb49e2e540cbcc9532e02b2fff5edb15ba2efa71e8f0f.
Sep 12 17:12:45.699806 containerd[2010]: time="2025-09-12T17:12:45.698600786Z" level=info msg="StartContainer for \"2e0f1de2a1e349ae734eb49e2e540cbcc9532e02b2fff5edb15ba2efa71e8f0f\" returns successfully"
Sep 12 17:12:45.743122 systemd[1]: cri-containerd-2e0f1de2a1e349ae734eb49e2e540cbcc9532e02b2fff5edb15ba2efa71e8f0f.scope: Deactivated successfully.
Sep 12 17:12:45.785860 containerd[2010]: time="2025-09-12T17:12:45.785501439Z" level=info msg="shim disconnected" id=2e0f1de2a1e349ae734eb49e2e540cbcc9532e02b2fff5edb15ba2efa71e8f0f namespace=k8s.io
Sep 12 17:12:45.785860 containerd[2010]: time="2025-09-12T17:12:45.785588619Z" level=warning msg="cleaning up after shim disconnected" id=2e0f1de2a1e349ae734eb49e2e540cbcc9532e02b2fff5edb15ba2efa71e8f0f namespace=k8s.io
Sep 12 17:12:45.785860 containerd[2010]: time="2025-09-12T17:12:45.785610015Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 12 17:12:46.459369 containerd[2010]: time="2025-09-12T17:12:46.458314574Z" level=info msg="CreateContainer within sandbox \"109057877a997b861d8ca023c73025872bf207f0d7b381e0e5fa4c9424744c70\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Sep 12 17:12:46.499193 containerd[2010]: time="2025-09-12T17:12:46.499091354Z" level=info msg="CreateContainer within sandbox \"109057877a997b861d8ca023c73025872bf207f0d7b381e0e5fa4c9424744c70\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"780484696aad18f604711fcfc4bcb3b2410c450832e1308d9d5b2db8dd64d160\""
Sep 12 17:12:46.500157 containerd[2010]: time="2025-09-12T17:12:46.500105378Z" level=info msg="StartContainer for \"780484696aad18f604711fcfc4bcb3b2410c450832e1308d9d5b2db8dd64d160\""
Sep 12 17:12:46.561988 systemd[1]: Started cri-containerd-780484696aad18f604711fcfc4bcb3b2410c450832e1308d9d5b2db8dd64d160.scope - libcontainer container 780484696aad18f604711fcfc4bcb3b2410c450832e1308d9d5b2db8dd64d160.
Sep 12 17:12:46.621421 containerd[2010]: time="2025-09-12T17:12:46.620794623Z" level=info msg="StartContainer for \"780484696aad18f604711fcfc4bcb3b2410c450832e1308d9d5b2db8dd64d160\" returns successfully"
Sep 12 17:12:46.625326 systemd[1]: cri-containerd-780484696aad18f604711fcfc4bcb3b2410c450832e1308d9d5b2db8dd64d160.scope: Deactivated successfully.
Sep 12 17:12:46.692054 containerd[2010]: time="2025-09-12T17:12:46.691919895Z" level=info msg="shim disconnected" id=780484696aad18f604711fcfc4bcb3b2410c450832e1308d9d5b2db8dd64d160 namespace=k8s.io
Sep 12 17:12:46.692877 containerd[2010]: time="2025-09-12T17:12:46.692784123Z" level=warning msg="cleaning up after shim disconnected" id=780484696aad18f604711fcfc4bcb3b2410c450832e1308d9d5b2db8dd64d160 namespace=k8s.io
Sep 12 17:12:46.692877 containerd[2010]: time="2025-09-12T17:12:46.692919291Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 12 17:12:46.902358 systemd[1]: run-containerd-runc-k8s.io-780484696aad18f604711fcfc4bcb3b2410c450832e1308d9d5b2db8dd64d160-runc.ZCDYEY.mount: Deactivated successfully.
Sep 12 17:12:46.902525 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-780484696aad18f604711fcfc4bcb3b2410c450832e1308d9d5b2db8dd64d160-rootfs.mount: Deactivated successfully.
Sep 12 17:12:46.930980 kubelet[3431]: E0912 17:12:46.930812 3431 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-668d6bf9bc-w7lcm" podUID="d13552a1-5417-4f25-b799-49c735e18819"
Sep 12 17:12:46.930980 kubelet[3431]: E0912 17:12:46.930914 3431 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-668d6bf9bc-z6zng" podUID="38b14fc4-3ae1-4190-ad4b-44bf84ff02a3"
Sep 12 17:12:47.183604 kubelet[3431]: I0912 17:12:47.181485 3431 setters.go:602] "Node became not ready" node="ip-172-31-21-20" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-12T17:12:47Z","lastTransitionTime":"2025-09-12T17:12:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Sep 12 17:12:47.466105 containerd[2010]: time="2025-09-12T17:12:47.465854559Z" level=info msg="CreateContainer within sandbox \"109057877a997b861d8ca023c73025872bf207f0d7b381e0e5fa4c9424744c70\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Sep 12 17:12:47.496245 containerd[2010]: time="2025-09-12T17:12:47.495936195Z" level=info msg="CreateContainer within sandbox \"109057877a997b861d8ca023c73025872bf207f0d7b381e0e5fa4c9424744c70\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"35baab9054752bec60a031ab7cae4e1594ca761bbf2eda7e4867877b39720359\""
Sep 12 17:12:47.497881 containerd[2010]: time="2025-09-12T17:12:47.496996695Z" level=info msg="StartContainer for \"35baab9054752bec60a031ab7cae4e1594ca761bbf2eda7e4867877b39720359\""
Sep 12 17:12:47.562985 systemd[1]: Started cri-containerd-35baab9054752bec60a031ab7cae4e1594ca761bbf2eda7e4867877b39720359.scope - libcontainer container 35baab9054752bec60a031ab7cae4e1594ca761bbf2eda7e4867877b39720359.
Sep 12 17:12:47.605388 systemd[1]: cri-containerd-35baab9054752bec60a031ab7cae4e1594ca761bbf2eda7e4867877b39720359.scope: Deactivated successfully.
Sep 12 17:12:47.610303 containerd[2010]: time="2025-09-12T17:12:47.610047484Z" level=info msg="StartContainer for \"35baab9054752bec60a031ab7cae4e1594ca761bbf2eda7e4867877b39720359\" returns successfully"
Sep 12 17:12:47.659484 containerd[2010]: time="2025-09-12T17:12:47.659359624Z" level=info msg="shim disconnected" id=35baab9054752bec60a031ab7cae4e1594ca761bbf2eda7e4867877b39720359 namespace=k8s.io
Sep 12 17:12:47.661021 containerd[2010]: time="2025-09-12T17:12:47.660721180Z" level=warning msg="cleaning up after shim disconnected" id=35baab9054752bec60a031ab7cae4e1594ca761bbf2eda7e4867877b39720359 namespace=k8s.io
Sep 12 17:12:47.661021 containerd[2010]: time="2025-09-12T17:12:47.660768556Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 12 17:12:47.902407 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-35baab9054752bec60a031ab7cae4e1594ca761bbf2eda7e4867877b39720359-rootfs.mount: Deactivated successfully.
Sep 12 17:12:48.472786 containerd[2010]: time="2025-09-12T17:12:48.472709248Z" level=info msg="CreateContainer within sandbox \"109057877a997b861d8ca023c73025872bf207f0d7b381e0e5fa4c9424744c70\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Sep 12 17:12:48.511376 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2810082623.mount: Deactivated successfully.
Sep 12 17:12:48.518415 containerd[2010]: time="2025-09-12T17:12:48.518317744Z" level=info msg="CreateContainer within sandbox \"109057877a997b861d8ca023c73025872bf207f0d7b381e0e5fa4c9424744c70\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"8939d8fea85c29d8bd6df0c52ce1d125b358bd64fe9f03cb9eaa130f9decf197\""
Sep 12 17:12:48.521319 containerd[2010]: time="2025-09-12T17:12:48.521242792Z" level=info msg="StartContainer for \"8939d8fea85c29d8bd6df0c52ce1d125b358bd64fe9f03cb9eaa130f9decf197\""
Sep 12 17:12:48.576987 systemd[1]: Started cri-containerd-8939d8fea85c29d8bd6df0c52ce1d125b358bd64fe9f03cb9eaa130f9decf197.scope - libcontainer container 8939d8fea85c29d8bd6df0c52ce1d125b358bd64fe9f03cb9eaa130f9decf197.
Sep 12 17:12:48.636722 containerd[2010]: time="2025-09-12T17:12:48.636632261Z" level=info msg="StartContainer for \"8939d8fea85c29d8bd6df0c52ce1d125b358bd64fe9f03cb9eaa130f9decf197\" returns successfully"
Sep 12 17:12:48.930994 kubelet[3431]: E0912 17:12:48.930799 3431 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-668d6bf9bc-z6zng" podUID="38b14fc4-3ae1-4190-ad4b-44bf84ff02a3"
Sep 12 17:12:48.930994 kubelet[3431]: E0912 17:12:48.930934 3431 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-668d6bf9bc-w7lcm" podUID="d13552a1-5417-4f25-b799-49c735e18819"
Sep 12 17:12:49.441701 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce))
Sep 12 17:12:49.511406 kubelet[3431]: I0912 17:12:49.511304 3431 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-r4xkg" podStartSLOduration=5.511262933 podStartE2EDuration="5.511262933s" podCreationTimestamp="2025-09-12 17:12:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:12:49.510227165 +0000 UTC m=+115.886237401" watchObservedRunningTime="2025-09-12 17:12:49.511262933 +0000 UTC m=+115.887273157"
Sep 12 17:12:49.950032 kubelet[3431]: E0912 17:12:49.949589 3431 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:35734->127.0.0.1:41615: write tcp 127.0.0.1:35734->127.0.0.1:41615: write: connection reset by peer
Sep 12 17:12:52.126579 systemd[1]: run-containerd-runc-k8s.io-8939d8fea85c29d8bd6df0c52ce1d125b358bd64fe9f03cb9eaa130f9decf197-runc.UF4LZC.mount: Deactivated successfully.
Sep 12 17:12:53.663718 systemd-networkd[1932]: lxc_health: Link UP
Sep 12 17:12:53.672091 systemd-networkd[1932]: lxc_health: Gained carrier
Sep 12 17:12:53.679952 (udev-worker)[6083]: Network interface NamePolicy= disabled on kernel command line.
Sep 12 17:12:53.922299 containerd[2010]: time="2025-09-12T17:12:53.922111043Z" level=info msg="StopPodSandbox for \"16ff01a140949922da85be5955916ce86ee8cd0e240f48641db20471938910fb\""
Sep 12 17:12:53.922299 containerd[2010]: time="2025-09-12T17:12:53.922257611Z" level=info msg="TearDown network for sandbox \"16ff01a140949922da85be5955916ce86ee8cd0e240f48641db20471938910fb\" successfully"
Sep 12 17:12:53.924363 containerd[2010]: time="2025-09-12T17:12:53.923045663Z" level=info msg="StopPodSandbox for \"16ff01a140949922da85be5955916ce86ee8cd0e240f48641db20471938910fb\" returns successfully"
Sep 12 17:12:53.926734 containerd[2010]: time="2025-09-12T17:12:53.925993439Z" level=info msg="RemovePodSandbox for \"16ff01a140949922da85be5955916ce86ee8cd0e240f48641db20471938910fb\""
Sep 12 17:12:53.926734 containerd[2010]: time="2025-09-12T17:12:53.926065727Z" level=info msg="Forcibly stopping sandbox \"16ff01a140949922da85be5955916ce86ee8cd0e240f48641db20471938910fb\""
Sep 12 17:12:53.926734 containerd[2010]: time="2025-09-12T17:12:53.926172131Z" level=info msg="TearDown network for sandbox \"16ff01a140949922da85be5955916ce86ee8cd0e240f48641db20471938910fb\" successfully"
Sep 12 17:12:53.940564 containerd[2010]: time="2025-09-12T17:12:53.940495511Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"16ff01a140949922da85be5955916ce86ee8cd0e240f48641db20471938910fb\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Sep 12 17:12:53.940812 containerd[2010]: time="2025-09-12T17:12:53.940601423Z" level=info msg="RemovePodSandbox \"16ff01a140949922da85be5955916ce86ee8cd0e240f48641db20471938910fb\" returns successfully"
Sep 12 17:12:53.943279 containerd[2010]: time="2025-09-12T17:12:53.943205399Z" level=info msg="StopPodSandbox for \"74cd62a4e09ebef748cca984b0667e41e7533a27bcc1bcd65ef1cb91f5fe5f97\""
Sep 12 17:12:53.943446 containerd[2010]: time="2025-09-12T17:12:53.943374887Z" level=info msg="TearDown network for sandbox \"74cd62a4e09ebef748cca984b0667e41e7533a27bcc1bcd65ef1cb91f5fe5f97\" successfully"
Sep 12 17:12:53.943446 containerd[2010]: time="2025-09-12T17:12:53.943400183Z" level=info msg="StopPodSandbox for \"74cd62a4e09ebef748cca984b0667e41e7533a27bcc1bcd65ef1cb91f5fe5f97\" returns successfully"
Sep 12 17:12:53.944979 containerd[2010]: time="2025-09-12T17:12:53.944903591Z" level=info msg="RemovePodSandbox for \"74cd62a4e09ebef748cca984b0667e41e7533a27bcc1bcd65ef1cb91f5fe5f97\""
Sep 12 17:12:53.944979 containerd[2010]: time="2025-09-12T17:12:53.944972651Z" level=info msg="Forcibly stopping sandbox \"74cd62a4e09ebef748cca984b0667e41e7533a27bcc1bcd65ef1cb91f5fe5f97\""
Sep 12 17:12:53.945304 containerd[2010]: time="2025-09-12T17:12:53.945081455Z" level=info msg="TearDown network for sandbox \"74cd62a4e09ebef748cca984b0667e41e7533a27bcc1bcd65ef1cb91f5fe5f97\" successfully"
Sep 12 17:12:53.953767 containerd[2010]: time="2025-09-12T17:12:53.953647511Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"74cd62a4e09ebef748cca984b0667e41e7533a27bcc1bcd65ef1cb91f5fe5f97\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Sep 12 17:12:53.953934 containerd[2010]: time="2025-09-12T17:12:53.953775695Z" level=info msg="RemovePodSandbox \"74cd62a4e09ebef748cca984b0667e41e7533a27bcc1bcd65ef1cb91f5fe5f97\" returns successfully"
Sep 12 17:12:55.439851 systemd-networkd[1932]: lxc_health: Gained IPv6LL
Sep 12 17:12:56.686793 systemd[1]: run-containerd-runc-k8s.io-8939d8fea85c29d8bd6df0c52ce1d125b358bd64fe9f03cb9eaa130f9decf197-runc.G46R1P.mount: Deactivated successfully.
Sep 12 17:12:56.831085 kubelet[3431]: E0912 17:12:56.830991 3431 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:35766->127.0.0.1:41615: write tcp 127.0.0.1:35766->127.0.0.1:41615: write: broken pipe
Sep 12 17:12:57.744782 ntpd[1987]: Listen normally on 14 lxc_health [fe80::c420:70ff:fed9:228c%14]:123
Sep 12 17:12:57.745448 ntpd[1987]: 12 Sep 17:12:57 ntpd[1987]: Listen normally on 14 lxc_health [fe80::c420:70ff:fed9:228c%14]:123
Sep 12 17:12:59.112383 sshd[5237]: pam_unix(sshd:session): session closed for user core
Sep 12 17:12:59.120239 systemd[1]: sshd@31-172.31.21.20:22-147.75.109.163:46692.service: Deactivated successfully.
Sep 12 17:12:59.123961 systemd[1]: session-31.scope: Deactivated successfully.
Sep 12 17:12:59.126736 systemd-logind[1992]: Session 31 logged out. Waiting for processes to exit.
Sep 12 17:12:59.130106 systemd-logind[1992]: Removed session 31.
Sep 12 17:13:13.605835 systemd[1]: cri-containerd-6251e6d418c9087110b91364fe42df4e37fbc780afa7304ef0315b109f4e0804.scope: Deactivated successfully.
Sep 12 17:13:13.606302 systemd[1]: cri-containerd-6251e6d418c9087110b91364fe42df4e37fbc780afa7304ef0315b109f4e0804.scope: Consumed 5.214s CPU time, 20.1M memory peak, 0B memory swap peak.
Sep 12 17:13:13.648351 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6251e6d418c9087110b91364fe42df4e37fbc780afa7304ef0315b109f4e0804-rootfs.mount: Deactivated successfully.
Sep 12 17:13:13.668247 containerd[2010]: time="2025-09-12T17:13:13.668175221Z" level=info msg="shim disconnected" id=6251e6d418c9087110b91364fe42df4e37fbc780afa7304ef0315b109f4e0804 namespace=k8s.io
Sep 12 17:13:13.669330 containerd[2010]: time="2025-09-12T17:13:13.668720573Z" level=warning msg="cleaning up after shim disconnected" id=6251e6d418c9087110b91364fe42df4e37fbc780afa7304ef0315b109f4e0804 namespace=k8s.io
Sep 12 17:13:13.669330 containerd[2010]: time="2025-09-12T17:13:13.668762921Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 12 17:13:14.548155 kubelet[3431]: I0912 17:13:14.548089 3431 scope.go:117] "RemoveContainer" containerID="6251e6d418c9087110b91364fe42df4e37fbc780afa7304ef0315b109f4e0804"
Sep 12 17:13:14.552157 containerd[2010]: time="2025-09-12T17:13:14.551955497Z" level=info msg="CreateContainer within sandbox \"f6a5d068996ef51677e8fd35df3f62efae49be01d15332c39987e9e17acf16c9\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Sep 12 17:13:14.577444 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount183122622.mount: Deactivated successfully.
Sep 12 17:13:14.583698 containerd[2010]: time="2025-09-12T17:13:14.583506606Z" level=info msg="CreateContainer within sandbox \"f6a5d068996ef51677e8fd35df3f62efae49be01d15332c39987e9e17acf16c9\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"368c063a095c28543dd3596f7c26e4998703b60dbe2be04ae7dba8db57b8fdfb\""
Sep 12 17:13:14.584617 containerd[2010]: time="2025-09-12T17:13:14.584383446Z" level=info msg="StartContainer for \"368c063a095c28543dd3596f7c26e4998703b60dbe2be04ae7dba8db57b8fdfb\""
Sep 12 17:13:14.642003 systemd[1]: Started cri-containerd-368c063a095c28543dd3596f7c26e4998703b60dbe2be04ae7dba8db57b8fdfb.scope - libcontainer container 368c063a095c28543dd3596f7c26e4998703b60dbe2be04ae7dba8db57b8fdfb.
Sep 12 17:13:14.709958 containerd[2010]: time="2025-09-12T17:13:14.709611282Z" level=info msg="StartContainer for \"368c063a095c28543dd3596f7c26e4998703b60dbe2be04ae7dba8db57b8fdfb\" returns successfully"
Sep 12 17:13:17.229097 kubelet[3431]: E0912 17:13:17.228760 3431 controller.go:195] "Failed to update lease" err="Put \"https://172.31.21.20:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-21-20?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Sep 12 17:13:18.056173 systemd[1]: cri-containerd-818cf7bfdc7a5d692a8490707713474584465235a177f1e7befbaadb5fc22233.scope: Deactivated successfully.
Sep 12 17:13:18.057646 systemd[1]: cri-containerd-818cf7bfdc7a5d692a8490707713474584465235a177f1e7befbaadb5fc22233.scope: Consumed 3.418s CPU time, 16.2M memory peak, 0B memory swap peak.
Sep 12 17:13:18.096892 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-818cf7bfdc7a5d692a8490707713474584465235a177f1e7befbaadb5fc22233-rootfs.mount: Deactivated successfully.
Sep 12 17:13:18.112081 containerd[2010]: time="2025-09-12T17:13:18.111994255Z" level=info msg="shim disconnected" id=818cf7bfdc7a5d692a8490707713474584465235a177f1e7befbaadb5fc22233 namespace=k8s.io
Sep 12 17:13:18.112081 containerd[2010]: time="2025-09-12T17:13:18.112074151Z" level=warning msg="cleaning up after shim disconnected" id=818cf7bfdc7a5d692a8490707713474584465235a177f1e7befbaadb5fc22233 namespace=k8s.io
Sep 12 17:13:18.113058 containerd[2010]: time="2025-09-12T17:13:18.112096543Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 12 17:13:18.564521 kubelet[3431]: I0912 17:13:18.564460 3431 scope.go:117] "RemoveContainer" containerID="818cf7bfdc7a5d692a8490707713474584465235a177f1e7befbaadb5fc22233"
Sep 12 17:13:18.567994 containerd[2010]: time="2025-09-12T17:13:18.567824121Z" level=info msg="CreateContainer within sandbox \"85e2826a43a2c2a7fe0078ee5ce668dc5ac491c233990aaddad74bf4da51bc13\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Sep 12 17:13:18.594494 containerd[2010]: time="2025-09-12T17:13:18.594413062Z" level=info msg="CreateContainer within sandbox \"85e2826a43a2c2a7fe0078ee5ce668dc5ac491c233990aaddad74bf4da51bc13\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"993e60e238df4ccade888f306f94c3b2df615cefd2753efa4209da9bdb6827f3\""
Sep 12 17:13:18.595185 containerd[2010]: time="2025-09-12T17:13:18.595121578Z" level=info msg="StartContainer for \"993e60e238df4ccade888f306f94c3b2df615cefd2753efa4209da9bdb6827f3\""
Sep 12 17:13:18.644964 systemd[1]: Started cri-containerd-993e60e238df4ccade888f306f94c3b2df615cefd2753efa4209da9bdb6827f3.scope - libcontainer container 993e60e238df4ccade888f306f94c3b2df615cefd2753efa4209da9bdb6827f3.
Sep 12 17:13:18.716025 containerd[2010]: time="2025-09-12T17:13:18.715950742Z" level=info msg="StartContainer for \"993e60e238df4ccade888f306f94c3b2df615cefd2753efa4209da9bdb6827f3\" returns successfully"