Sep 3 23:23:00.153587 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083] Sep 3 23:23:00.153640 kernel: Linux version 6.12.44-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT Wed Sep 3 22:04:24 -00 2025 Sep 3 23:23:00.153666 kernel: KASLR disabled due to lack of seed Sep 3 23:23:00.153683 kernel: efi: EFI v2.7 by EDK II Sep 3 23:23:00.153699 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7a731a98 MEMRESERVE=0x78557598 Sep 3 23:23:00.153714 kernel: secureboot: Secure boot disabled Sep 3 23:23:00.153731 kernel: ACPI: Early table checksum verification disabled Sep 3 23:23:00.153746 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON) Sep 3 23:23:00.153761 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001 01000013) Sep 3 23:23:00.153776 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001) Sep 3 23:23:00.153791 kernel: ACPI: DSDT 0x0000000078640000 00159D (v02 AMAZON AMZNDSDT 00000001 INTL 20160527) Sep 3 23:23:00.153810 kernel: ACPI: FACS 0x0000000078630000 000040 Sep 3 23:23:00.153825 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001) Sep 3 23:23:00.153841 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001) Sep 3 23:23:00.153860 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001) Sep 3 23:23:00.153876 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001) Sep 3 23:23:00.153896 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001) Sep 3 23:23:00.153913 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001) Sep 3 23:23:00.153929 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001) Sep 3 23:23:00.153945 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200 Sep 3 23:23:00.153961 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200') Sep 3 23:23:00.153977 kernel: printk: legacy bootconsole [uart0] enabled Sep 3 23:23:00.153994 kernel: ACPI: Use ACPI SPCR as default console: No Sep 3 23:23:00.154010 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff] Sep 3 23:23:00.154027 kernel: NODE_DATA(0) allocated [mem 0x4b584ca00-0x4b5853fff] Sep 3 23:23:00.154045 kernel: Zone ranges: Sep 3 23:23:00.154064 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff] Sep 3 23:23:00.154111 kernel: DMA32 empty Sep 3 23:23:00.154138 kernel: Normal [mem 0x0000000100000000-0x00000004b5ffffff] Sep 3 23:23:00.154177 kernel: Device empty Sep 3 23:23:00.154216 kernel: Movable zone start for each node Sep 3 23:23:00.154251 kernel: Early memory node ranges Sep 3 23:23:00.154271 kernel: node 0: [mem 0x0000000040000000-0x000000007862ffff] Sep 3 23:23:00.154290 kernel: node 0: [mem 0x0000000078630000-0x000000007863ffff] Sep 3 23:23:00.154306 kernel: node 0: [mem 0x0000000078640000-0x00000000786effff] Sep 3 23:23:00.154322 kernel: node 0: [mem 0x00000000786f0000-0x000000007872ffff] Sep 3 23:23:00.154338 kernel: node 0: [mem 0x0000000078730000-0x000000007bbfffff] Sep 3 23:23:00.154353 kernel: node 0: [mem 0x000000007bc00000-0x000000007bfdffff] Sep 3 23:23:00.154370 kernel: node 0: [mem 0x000000007bfe0000-0x000000007fffffff] Sep 3 23:23:00.154392 kernel: node 0: [mem 
0x0000000400000000-0x00000004b5ffffff] Sep 3 23:23:00.154415 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000004b5ffffff] Sep 3 23:23:00.154432 kernel: On node 0, zone Normal: 8192 pages in unavailable ranges Sep 3 23:23:00.154449 kernel: cma: Reserved 16 MiB at 0x000000007f000000 on node -1 Sep 3 23:23:00.154465 kernel: psci: probing for conduit method from ACPI. Sep 3 23:23:00.157764 kernel: psci: PSCIv1.0 detected in firmware. Sep 3 23:23:00.157791 kernel: psci: Using standard PSCI v0.2 function IDs Sep 3 23:23:00.157808 kernel: psci: Trusted OS migration not required Sep 3 23:23:00.157824 kernel: psci: SMC Calling Convention v1.1 Sep 3 23:23:00.157841 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000001) Sep 3 23:23:00.157858 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168 Sep 3 23:23:00.157874 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096 Sep 3 23:23:00.157891 kernel: pcpu-alloc: [0] 0 [0] 1 Sep 3 23:23:00.157908 kernel: Detected PIPT I-cache on CPU0 Sep 3 23:23:00.157924 kernel: CPU features: detected: GIC system register CPU interface Sep 3 23:23:00.157941 kernel: CPU features: detected: Spectre-v2 Sep 3 23:23:00.157964 kernel: CPU features: detected: Spectre-v3a Sep 3 23:23:00.157981 kernel: CPU features: detected: Spectre-BHB Sep 3 23:23:00.157998 kernel: CPU features: detected: ARM erratum 1742098 Sep 3 23:23:00.158014 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923 Sep 3 23:23:00.158031 kernel: alternatives: applying boot alternatives Sep 3 23:23:00.158049 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=cb633bb0c889435b58a5c40c9c9bc9d5899ece5018569c9fa08f911265d3f18e Sep 3 23:23:00.158067 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Sep 3 23:23:00.158084 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Sep 3 23:23:00.158101 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Sep 3 23:23:00.158117 kernel: Fallback order for Node 0: 0 Sep 3 23:23:00.158138 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1007616 Sep 3 23:23:00.158154 kernel: Policy zone: Normal Sep 3 23:23:00.158171 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Sep 3 23:23:00.158187 kernel: software IO TLB: area num 2. Sep 3 23:23:00.158204 kernel: software IO TLB: mapped [mem 0x000000006c5f0000-0x00000000705f0000] (64MB) Sep 3 23:23:00.158220 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Sep 3 23:23:00.158237 kernel: rcu: Preemptible hierarchical RCU implementation. Sep 3 23:23:00.158254 kernel: rcu: RCU event tracing is enabled. Sep 3 23:23:00.158271 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Sep 3 23:23:00.158288 kernel: Trampoline variant of Tasks RCU enabled. Sep 3 23:23:00.158305 kernel: Tracing variant of Tasks RCU enabled. Sep 3 23:23:00.158323 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Sep 3 23:23:00.158343 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Sep 3 23:23:00.158360 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Sep 3 23:23:00.158378 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Sep 3 23:23:00.158394 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Sep 3 23:23:00.158410 kernel: GICv3: 96 SPIs implemented Sep 3 23:23:00.158427 kernel: GICv3: 0 Extended SPIs implemented Sep 3 23:23:00.158443 kernel: Root IRQ handler: gic_handle_irq Sep 3 23:23:00.158460 kernel: GICv3: GICv3 features: 16 PPIs Sep 3 23:23:00.158515 kernel: GICv3: GICD_CTRL.DS=1, SCR_EL3.FIQ=0 Sep 3 23:23:00.158538 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000 Sep 3 23:23:00.158556 kernel: ITS [mem 0x10080000-0x1009ffff] Sep 3 23:23:00.158573 kernel: ITS@0x0000000010080000: allocated 8192 Devices @4000f0000 (indirect, esz 8, psz 64K, shr 1) Sep 3 23:23:00.158596 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @400100000 (flat, esz 8, psz 64K, shr 1) Sep 3 23:23:00.158613 kernel: GICv3: using LPI property table @0x0000000400110000 Sep 3 23:23:00.158629 kernel: ITS: Using hypervisor restricted LPI range [128] Sep 3 23:23:00.158646 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000400120000 Sep 3 23:23:00.158662 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Sep 3 23:23:00.158678 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt). Sep 3 23:23:00.158695 kernel: clocksource: arch_sys_counter: mask: 0x1ffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns Sep 3 23:23:00.158712 kernel: sched_clock: 57 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns Sep 3 23:23:00.158728 kernel: Console: colour dummy device 80x25 Sep 3 23:23:00.158746 kernel: printk: legacy console [tty1] enabled Sep 3 23:23:00.158763 kernel: ACPI: Core revision 20240827 Sep 3 23:23:00.158784 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 166.66 BogoMIPS (lpj=83333) Sep 3 23:23:00.158801 kernel: pid_max: default: 32768 minimum: 301 Sep 3 23:23:00.158818 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Sep 3 23:23:00.158835 kernel: landlock: Up and running. Sep 3 23:23:00.158852 kernel: SELinux: Initializing. Sep 3 23:23:00.158869 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Sep 3 23:23:00.158886 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Sep 3 23:23:00.158902 kernel: rcu: Hierarchical SRCU implementation. Sep 3 23:23:00.158920 kernel: rcu: Max phase no-delay instances is 400. Sep 3 23:23:00.158941 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Sep 3 23:23:00.158958 kernel: Remapping and enabling EFI services. Sep 3 23:23:00.158975 kernel: smp: Bringing up secondary CPUs ... Sep 3 23:23:00.158992 kernel: Detected PIPT I-cache on CPU1 Sep 3 23:23:00.159009 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000 Sep 3 23:23:00.159026 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000400130000 Sep 3 23:23:00.159043 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083] Sep 3 23:23:00.159059 kernel: smp: Brought up 1 node, 2 CPUs Sep 3 23:23:00.159076 kernel: SMP: Total of 2 processors activated. 
Sep 3 23:23:00.159106 kernel: CPU: All CPU(s) started at EL1 Sep 3 23:23:00.159124 kernel: CPU features: detected: 32-bit EL0 Support Sep 3 23:23:00.159145 kernel: CPU features: detected: 32-bit EL1 Support Sep 3 23:23:00.159163 kernel: CPU features: detected: CRC32 instructions Sep 3 23:23:00.159180 kernel: alternatives: applying system-wide alternatives Sep 3 23:23:00.159198 kernel: Memory: 3797032K/4030464K available (11136K kernel code, 2436K rwdata, 9076K rodata, 38976K init, 1038K bss, 212088K reserved, 16384K cma-reserved) Sep 3 23:23:00.159217 kernel: devtmpfs: initialized Sep 3 23:23:00.159238 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Sep 3 23:23:00.159257 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Sep 3 23:23:00.159274 kernel: 17040 pages in range for non-PLT usage Sep 3 23:23:00.159292 kernel: 508560 pages in range for PLT usage Sep 3 23:23:00.159310 kernel: pinctrl core: initialized pinctrl subsystem Sep 3 23:23:00.159328 kernel: SMBIOS 3.0.0 present. Sep 3 23:23:00.159347 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018 Sep 3 23:23:00.159365 kernel: DMI: Memory slots populated: 0/0 Sep 3 23:23:00.159383 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Sep 3 23:23:00.159405 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Sep 3 23:23:00.159424 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Sep 3 23:23:00.159459 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Sep 3 23:23:00.161892 kernel: audit: initializing netlink subsys (disabled) Sep 3 23:23:00.161917 kernel: audit: type=2000 audit(0.227:1): state=initialized audit_enabled=0 res=1 Sep 3 23:23:00.161936 kernel: thermal_sys: Registered thermal governor 'step_wise' Sep 3 23:23:00.161954 kernel: cpuidle: using governor menu Sep 3 23:23:00.161971 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. 
Sep 3 23:23:00.161989 kernel: ASID allocator initialised with 65536 entries Sep 3 23:23:00.162016 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Sep 3 23:23:00.162034 kernel: Serial: AMBA PL011 UART driver Sep 3 23:23:00.162051 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Sep 3 23:23:00.162069 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Sep 3 23:23:00.162086 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Sep 3 23:23:00.162104 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Sep 3 23:23:00.162122 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Sep 3 23:23:00.162141 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Sep 3 23:23:00.162159 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Sep 3 23:23:00.162182 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Sep 3 23:23:00.162200 kernel: ACPI: Added _OSI(Module Device) Sep 3 23:23:00.162218 kernel: ACPI: Added _OSI(Processor Device) Sep 3 23:23:00.162235 kernel: ACPI: Added _OSI(Processor Aggregator Device) Sep 3 23:23:00.162253 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Sep 3 23:23:00.162271 kernel: ACPI: Interpreter enabled Sep 3 23:23:00.162289 kernel: ACPI: Using GIC for interrupt routing Sep 3 23:23:00.162307 kernel: ACPI: MCFG table detected, 1 entries Sep 3 23:23:00.162325 kernel: ACPI: CPU0 has been hot-added Sep 3 23:23:00.162346 kernel: ACPI: CPU1 has been hot-added Sep 3 23:23:00.162364 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-0f]) Sep 3 23:23:00.162723 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Sep 3 23:23:00.162949 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] Sep 3 23:23:00.163152 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Sep 3 23:23:00.163348 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x20ffffff] reserved by PNP0C02:00 Sep 3 23:23:00.163612 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x20ffffff] for [bus 00-0f] Sep 3 23:23:00.163647 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io 0x0000-0xffff window] Sep 3 23:23:00.163667 kernel: acpiphp: Slot [1] registered Sep 3 23:23:00.163685 kernel: acpiphp: Slot [2] registered Sep 3 23:23:00.163703 kernel: acpiphp: Slot [3] registered Sep 3 23:23:00.163721 kernel: acpiphp: Slot [4] registered Sep 3 23:23:00.163738 kernel: acpiphp: Slot [5] registered Sep 3 23:23:00.163756 kernel: acpiphp: Slot [6] registered Sep 3 23:23:00.163774 kernel: acpiphp: Slot [7] registered Sep 3 23:23:00.163792 kernel: acpiphp: Slot [8] registered Sep 3 23:23:00.163811 kernel: acpiphp: Slot [9] registered Sep 3 23:23:00.163835 kernel: acpiphp: Slot [10] registered Sep 3 23:23:00.163855 kernel: acpiphp: Slot [11] registered Sep 3 23:23:00.163873 kernel: acpiphp: Slot [12] registered Sep 3 23:23:00.163892 kernel: acpiphp: Slot [13] registered Sep 3 23:23:00.163910 kernel: acpiphp: Slot [14] registered Sep 3 23:23:00.163929 kernel: acpiphp: Slot [15] registered Sep 3 23:23:00.163948 kernel: acpiphp: Slot [16] registered Sep 3 23:23:00.163967 kernel: acpiphp: Slot [17] registered Sep 3 23:23:00.163986 kernel: acpiphp: Slot [18] registered Sep 3 23:23:00.164009 kernel: acpiphp: Slot [19] registered Sep 3 23:23:00.164028 kernel: acpiphp: Slot [20] registered Sep 3 23:23:00.164047 kernel: acpiphp: Slot [21] registered Sep 3 23:23:00.164066 kernel: acpiphp: Slot [22] registered Sep 3 
23:23:00.164084 kernel: acpiphp: Slot [23] registered Sep 3 23:23:00.164103 kernel: acpiphp: Slot [24] registered Sep 3 23:23:00.164121 kernel: acpiphp: Slot [25] registered Sep 3 23:23:00.164140 kernel: acpiphp: Slot [26] registered Sep 3 23:23:00.164159 kernel: acpiphp: Slot [27] registered Sep 3 23:23:00.164178 kernel: acpiphp: Slot [28] registered Sep 3 23:23:00.164202 kernel: acpiphp: Slot [29] registered Sep 3 23:23:00.164221 kernel: acpiphp: Slot [30] registered Sep 3 23:23:00.164239 kernel: acpiphp: Slot [31] registered Sep 3 23:23:00.164257 kernel: PCI host bridge to bus 0000:00 Sep 3 23:23:00.170241 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window] Sep 3 23:23:00.170565 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window] Sep 3 23:23:00.170779 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window] Sep 3 23:23:00.170968 kernel: pci_bus 0000:00: root bus resource [bus 00-0f] Sep 3 23:23:00.171199 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000 conventional PCI endpoint Sep 3 23:23:00.171430 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003 conventional PCI endpoint Sep 3 23:23:00.171694 kernel: pci 0000:00:01.0: BAR 0 [mem 0x80118000-0x80118fff] Sep 3 23:23:00.171914 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802 PCIe Root Complex Integrated Endpoint Sep 3 23:23:00.172113 kernel: pci 0000:00:04.0: BAR 0 [mem 0x80114000-0x80117fff] Sep 3 23:23:00.172310 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold Sep 3 23:23:00.172616 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000 PCIe Root Complex Integrated Endpoint Sep 3 23:23:00.172827 kernel: pci 0000:00:05.0: BAR 0 [mem 0x80110000-0x80113fff] Sep 3 23:23:00.173055 kernel: pci 0000:00:05.0: BAR 2 [mem 0x80000000-0x800fffff pref] Sep 3 23:23:00.173254 kernel: pci 0000:00:05.0: BAR 4 [mem 0x80100000-0x8010ffff] Sep 3 23:23:00.173452 kernel: pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold Sep 3 23:23:00.173684 kernel: pci 0000:00:05.0: BAR 2 [mem 0x80000000-0x800fffff pref]: assigned Sep 3 23:23:00.173882 kernel: pci 0000:00:05.0: BAR 4 [mem 0x80100000-0x8010ffff]: assigned Sep 3 23:23:00.174087 kernel: pci 0000:00:04.0: BAR 0 [mem 0x80110000-0x80113fff]: assigned Sep 3 23:23:00.174283 kernel: pci 0000:00:05.0: BAR 0 [mem 0x80114000-0x80117fff]: assigned Sep 3 23:23:00.176529 kernel: pci 0000:00:01.0: BAR 0 [mem 0x80118000-0x80118fff]: assigned Sep 3 23:23:00.176775 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window] Sep 3 23:23:00.176978 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] Sep 3 23:23:00.177161 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window] Sep 3 23:23:00.177196 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 Sep 3 23:23:00.177216 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 Sep 3 23:23:00.177235 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 Sep 3 23:23:00.177253 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 Sep 3 23:23:00.177271 kernel: iommu: Default domain type: Translated Sep 3 23:23:00.177289 kernel: iommu: DMA domain TLB invalidation policy: strict mode Sep 3 23:23:00.177308 kernel: efivars: Registered efivars operations Sep 3 23:23:00.177327 kernel: vgaarb: loaded Sep 3 23:23:00.177346 kernel: clocksource: Switched to clocksource arch_sys_counter Sep 3 23:23:00.177364 kernel: VFS: Disk quotas dquot_6.6.0 Sep 3 23:23:00.177386 kernel: VFS: 
Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Sep 3 23:23:00.177404 kernel: pnp: PnP ACPI init Sep 3 23:23:00.180692 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved Sep 3 23:23:00.180734 kernel: pnp: PnP ACPI: found 1 devices Sep 3 23:23:00.180752 kernel: NET: Registered PF_INET protocol family Sep 3 23:23:00.180771 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Sep 3 23:23:00.180790 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Sep 3 23:23:00.180809 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Sep 3 23:23:00.180837 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Sep 3 23:23:00.180856 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Sep 3 23:23:00.180874 kernel: TCP: Hash tables configured (established 32768 bind 32768) Sep 3 23:23:00.180892 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Sep 3 23:23:00.180910 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Sep 3 23:23:00.180949 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Sep 3 23:23:00.180970 kernel: PCI: CLS 0 bytes, default 64 Sep 3 23:23:00.180989 kernel: kvm [1]: HYP mode not available Sep 3 23:23:00.181006 kernel: Initialise system trusted keyrings Sep 3 23:23:00.181030 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Sep 3 23:23:00.181049 kernel: Key type asymmetric registered Sep 3 23:23:00.181066 kernel: Asymmetric key parser 'x509' registered Sep 3 23:23:00.181084 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Sep 3 23:23:00.181103 kernel: io scheduler mq-deadline registered Sep 3 23:23:00.181121 kernel: io scheduler kyber registered Sep 3 23:23:00.181139 kernel: io scheduler bfq registered Sep 3 23:23:00.181394 kernel: pl061_gpio ARMH0061:00: PL061 GPIO chip registered Sep 3 23:23:00.181431 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Sep 3 23:23:00.181449 kernel: ACPI: button: Power Button [PWRB] Sep 3 23:23:00.181467 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input1 Sep 3 23:23:00.181568 kernel: ACPI: button: Sleep Button [SLPB] Sep 3 23:23:00.181589 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Sep 3 23:23:00.181609 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37 Sep 3 23:23:00.181824 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012) Sep 3 23:23:00.181851 kernel: printk: legacy console [ttyS0] disabled Sep 3 23:23:00.181869 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A Sep 3 23:23:00.181894 kernel: printk: legacy console [ttyS0] enabled Sep 3 23:23:00.181913 kernel: printk: legacy bootconsole [uart0] disabled Sep 3 23:23:00.181931 kernel: thunder_xcv, ver 1.0 Sep 3 23:23:00.181949 kernel: thunder_bgx, ver 1.0 Sep 3 23:23:00.181967 kernel: nicpf, ver 1.0 Sep 3 23:23:00.181984 kernel: nicvf, ver 1.0 Sep 3 23:23:00.182192 kernel: rtc-efi rtc-efi.0: registered as rtc0 Sep 3 23:23:00.182388 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-09-03T23:22:59 UTC (1756941779) Sep 3 23:23:00.182421 kernel: hid: raw HID events driver (C) Jiri Kosina Sep 3 23:23:00.182441 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 (0,80000003) counters available Sep 3 23:23:00.182460 kernel: NET: Registered PF_INET6 protocol family Sep 3 23:23:00.183465 kernel: 
watchdog: NMI not fully supported Sep 3 23:23:00.183524 kernel: watchdog: Hard watchdog permanently disabled Sep 3 23:23:00.183543 kernel: Segment Routing with IPv6 Sep 3 23:23:00.183561 kernel: In-situ OAM (IOAM) with IPv6 Sep 3 23:23:00.183578 kernel: NET: Registered PF_PACKET protocol family Sep 3 23:23:00.183596 kernel: Key type dns_resolver registered Sep 3 23:23:00.183623 kernel: registered taskstats version 1 Sep 3 23:23:00.183641 kernel: Loading compiled-in X.509 certificates Sep 3 23:23:00.183659 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.44-flatcar: 08fc774dab168e64ce30c382a4517d40e72c4744' Sep 3 23:23:00.183677 kernel: Demotion targets for Node 0: null Sep 3 23:23:00.183694 kernel: Key type .fscrypt registered Sep 3 23:23:00.183712 kernel: Key type fscrypt-provisioning registered Sep 3 23:23:00.183729 kernel: ima: No TPM chip found, activating TPM-bypass! Sep 3 23:23:00.183747 kernel: ima: Allocated hash algorithm: sha1 Sep 3 23:23:00.183765 kernel: ima: No architecture policies found Sep 3 23:23:00.183786 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Sep 3 23:23:00.183805 kernel: clk: Disabling unused clocks Sep 3 23:23:00.183823 kernel: PM: genpd: Disabling unused power domains Sep 3 23:23:00.183840 kernel: Warning: unable to open an initial console. Sep 3 23:23:00.183858 kernel: Freeing unused kernel memory: 38976K Sep 3 23:23:00.183876 kernel: Run /init as init process Sep 3 23:23:00.183893 kernel: with arguments: Sep 3 23:23:00.183911 kernel: /init Sep 3 23:23:00.183928 kernel: with environment: Sep 3 23:23:00.183945 kernel: HOME=/ Sep 3 23:23:00.183967 kernel: TERM=linux Sep 3 23:23:00.183985 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Sep 3 23:23:00.184005 systemd[1]: Successfully made /usr/ read-only. Sep 3 23:23:00.184029 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Sep 3 23:23:00.184050 systemd[1]: Detected virtualization amazon. Sep 3 23:23:00.184069 systemd[1]: Detected architecture arm64. Sep 3 23:23:00.184088 systemd[1]: Running in initrd. Sep 3 23:23:00.184111 systemd[1]: No hostname configured, using default hostname. Sep 3 23:23:00.184131 systemd[1]: Hostname set to . Sep 3 23:23:00.184150 systemd[1]: Initializing machine ID from VM UUID. Sep 3 23:23:00.184169 systemd[1]: Queued start job for default target initrd.target. Sep 3 23:23:00.184189 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 3 23:23:00.184208 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 3 23:23:00.184229 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Sep 3 23:23:00.184248 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 3 23:23:00.184272 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Sep 3 23:23:00.184293 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... 
Sep 3 23:23:00.184315 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Sep 3 23:23:00.184336 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Sep 3 23:23:00.184357 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 3 23:23:00.184377 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 3 23:23:00.184397 systemd[1]: Reached target paths.target - Path Units. Sep 3 23:23:00.184421 systemd[1]: Reached target slices.target - Slice Units. Sep 3 23:23:00.184440 systemd[1]: Reached target swap.target - Swaps. Sep 3 23:23:00.184459 systemd[1]: Reached target timers.target - Timer Units. Sep 3 23:23:00.184500 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Sep 3 23:23:00.184526 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 3 23:23:00.185580 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Sep 3 23:23:00.185618 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Sep 3 23:23:00.185639 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 3 23:23:00.185667 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 3 23:23:00.185687 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 3 23:23:00.185707 systemd[1]: Reached target sockets.target - Socket Units. Sep 3 23:23:00.185727 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Sep 3 23:23:00.185746 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 3 23:23:00.185765 systemd[1]: Finished network-cleanup.service - Network Cleanup. Sep 3 23:23:00.185786 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Sep 3 23:23:00.185806 systemd[1]: Starting systemd-fsck-usr.service... Sep 3 23:23:00.185825 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 3 23:23:00.185848 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 3 23:23:00.185868 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 3 23:23:00.185887 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Sep 3 23:23:00.185908 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 3 23:23:00.185933 systemd[1]: Finished systemd-fsck-usr.service. Sep 3 23:23:00.185954 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Sep 3 23:23:00.185974 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 3 23:23:00.185994 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 3 23:23:00.186062 systemd-journald[257]: Collecting audit messages is disabled. Sep 3 23:23:00.186125 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 3 23:23:00.186151 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Sep 3 23:23:00.186171 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... 
Sep 3 23:23:00.186191 kernel: Bridge firewalling registered Sep 3 23:23:00.186211 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 3 23:23:00.186237 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 3 23:23:00.186258 systemd-journald[257]: Journal started Sep 3 23:23:00.186294 systemd-journald[257]: Runtime Journal (/run/log/journal/ec2ded1bd257c4678d31690343c26cd6) is 8M, max 75.3M, 67.3M free. Sep 3 23:23:00.105130 systemd-modules-load[259]: Inserted module 'overlay' Sep 3 23:23:00.162894 systemd-modules-load[259]: Inserted module 'br_netfilter' Sep 3 23:23:00.193670 systemd[1]: Started systemd-journald.service - Journal Service. Sep 3 23:23:00.211915 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 3 23:23:00.218001 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 3 23:23:00.232393 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 3 23:23:00.240969 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Sep 3 23:23:00.256801 systemd-tmpfiles[287]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Sep 3 23:23:00.272464 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 3 23:23:00.278903 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 3 23:23:00.290107 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 3 23:23:00.307953 dracut-cmdline[294]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=cb633bb0c889435b58a5c40c9c9bc9d5899ece5018569c9fa08f911265d3f18e Sep 3 23:23:00.388648 systemd-resolved[303]: Positive Trust Anchors: Sep 3 23:23:00.388676 systemd-resolved[303]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 3 23:23:00.388739 systemd-resolved[303]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 3 23:23:00.490518 kernel: SCSI subsystem initialized Sep 3 23:23:00.498519 kernel: Loading iSCSI transport class v2.0-870. Sep 3 23:23:00.511692 kernel: iscsi: registered transport (tcp) Sep 3 23:23:00.533210 kernel: iscsi: registered transport (qla4xxx) Sep 3 23:23:00.533282 kernel: QLogic iSCSI HBA Driver Sep 3 23:23:00.567678 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Sep 3 23:23:00.593232 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. 
Sep 3 23:23:00.609075 systemd[1]: Reached target network-pre.target - Preparation for Network. Sep 3 23:23:00.677752 kernel: random: crng init done Sep 3 23:23:00.678057 systemd-resolved[303]: Defaulting to hostname 'linux'. Sep 3 23:23:00.683995 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 3 23:23:00.693606 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 3 23:23:00.705420 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Sep 3 23:23:00.714696 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Sep 3 23:23:00.798529 kernel: raid6: neonx8 gen() 6458 MB/s Sep 3 23:23:00.815524 kernel: raid6: neonx4 gen() 6442 MB/s Sep 3 23:23:00.832522 kernel: raid6: neonx2 gen() 5359 MB/s Sep 3 23:23:00.849533 kernel: raid6: neonx1 gen() 3918 MB/s Sep 3 23:23:00.866523 kernel: raid6: int64x8 gen() 3634 MB/s Sep 3 23:23:00.883528 kernel: raid6: int64x4 gen() 3687 MB/s Sep 3 23:23:00.900519 kernel: raid6: int64x2 gen() 3559 MB/s Sep 3 23:23:00.918518 kernel: raid6: int64x1 gen() 2770 MB/s Sep 3 23:23:00.918567 kernel: raid6: using algorithm neonx8 gen() 6458 MB/s Sep 3 23:23:00.937522 kernel: raid6: .... xor() 4751 MB/s, rmw enabled Sep 3 23:23:00.937582 kernel: raid6: using neon recovery algorithm Sep 3 23:23:00.946202 kernel: xor: measuring software checksum speed Sep 3 23:23:00.946264 kernel: 8regs : 12954 MB/sec Sep 3 23:23:00.947372 kernel: 32regs : 13049 MB/sec Sep 3 23:23:00.949756 kernel: arm64_neon : 8819 MB/sec Sep 3 23:23:00.949802 kernel: xor: using function: 32regs (13049 MB/sec) Sep 3 23:23:01.041529 kernel: Btrfs loaded, zoned=no, fsverity=no Sep 3 23:23:01.054550 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Sep 3 23:23:01.062200 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 3 23:23:01.113283 systemd-udevd[506]: Using default interface naming scheme 'v255'. Sep 3 23:23:01.123460 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 3 23:23:01.139223 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Sep 3 23:23:01.183873 dracut-pre-trigger[516]: rd.md=0: removing MD RAID activation Sep 3 23:23:01.228675 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Sep 3 23:23:01.235656 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 3 23:23:01.373498 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 3 23:23:01.383911 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Sep 3 23:23:01.537155 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Sep 3 23:23:01.537220 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012) Sep 3 23:23:01.546512 kernel: ena 0000:00:05.0: ENA device version: 0.10 Sep 3 23:23:01.546831 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1 Sep 3 23:23:01.557549 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80114000, mac addr 06:b6:31:03:bb:25 Sep 3 23:23:01.560602 (udev-worker)[554]: Network interface NamePolicy= disabled on kernel command line. Sep 3 23:23:01.579096 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 3 23:23:01.586120 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35 Sep 3 23:23:01.579352 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. 
Sep 3 23:23:01.590221 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Sep 3 23:23:01.600324 kernel: nvme nvme0: pci function 0000:00:04.0 Sep 3 23:23:01.596647 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 3 23:23:01.609141 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Sep 3 23:23:01.620512 kernel: nvme nvme0: 2/0/0 default/read/poll queues Sep 3 23:23:01.636526 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Sep 3 23:23:01.636602 kernel: GPT:9289727 != 16777215 Sep 3 23:23:01.636628 kernel: GPT:Alternate GPT header not at the end of the disk. Sep 3 23:23:01.638497 kernel: GPT:9289727 != 16777215 Sep 3 23:23:01.638538 kernel: GPT: Use GNU Parted to correct GPT errors. Sep 3 23:23:01.639510 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Sep 3 23:23:01.648671 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 3 23:23:01.689524 kernel: nvme nvme0: using unchecked data buffer Sep 3 23:23:01.799943 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM. Sep 3 23:23:01.858783 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A. Sep 3 23:23:01.871276 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A. Sep 3 23:23:01.895581 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Sep 3 23:23:01.923631 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT. Sep 3 23:23:01.966178 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Sep 3 23:23:01.972126 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Sep 3 23:23:01.975506 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 3 23:23:01.984522 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 3 23:23:01.990584 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Sep 3 23:23:02.001713 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Sep 3 23:23:02.031192 disk-uuid[688]: Primary Header is updated. Sep 3 23:23:02.031192 disk-uuid[688]: Secondary Entries is updated. Sep 3 23:23:02.031192 disk-uuid[688]: Secondary Header is updated. Sep 3 23:23:02.043092 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Sep 3 23:23:02.065549 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Sep 3 23:23:03.088884 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Sep 3 23:23:03.094134 disk-uuid[693]: The operation has completed successfully. Sep 3 23:23:03.304144 systemd[1]: disk-uuid.service: Deactivated successfully. Sep 3 23:23:03.304728 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Sep 3 23:23:03.390798 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Sep 3 23:23:03.414057 sh[871]: Success Sep 3 23:23:03.444190 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Sep 3 23:23:03.444267 kernel: device-mapper: uevent: version 1.0.3 Sep 3 23:23:03.446265 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Sep 3 23:23:03.459522 kernel: device-mapper: verity: sha256 using shash "sha256-ce" Sep 3 23:23:03.574894 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Sep 3 23:23:03.588721 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Sep 3 23:23:03.614758 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Sep 3 23:23:03.638927 kernel: BTRFS: device fsid e8b97e78-d30f-4a41-b431-d82f3afef949 devid 1 transid 39 /dev/mapper/usr (254:0) scanned by mount (894) Sep 3 23:23:03.643620 kernel: BTRFS info (device dm-0): first mount of filesystem e8b97e78-d30f-4a41-b431-d82f3afef949 Sep 3 23:23:03.643754 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Sep 3 23:23:03.674432 kernel: BTRFS info (device dm-0): enabling ssd optimizations Sep 3 23:23:03.674542 kernel: BTRFS info (device dm-0): disabling log replay at mount time Sep 3 23:23:03.674571 kernel: BTRFS info (device dm-0): enabling free space tree Sep 3 23:23:03.692685 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Sep 3 23:23:03.698574 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Sep 3 23:23:03.704737 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Sep 3 23:23:03.706114 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Sep 3 23:23:03.718846 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Sep 3 23:23:03.774548 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (927) Sep 3 23:23:03.780191 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem f1885725-917a-44ef-9d71-3c4c588cc4f4 Sep 3 23:23:03.780265 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Sep 3 23:23:03.800300 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Sep 3 23:23:03.800376 kernel: BTRFS info (device nvme0n1p6): enabling free space tree Sep 3 23:23:03.809561 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem f1885725-917a-44ef-9d71-3c4c588cc4f4 Sep 3 23:23:03.812947 systemd[1]: Finished ignition-setup.service - Ignition (setup). Sep 3 23:23:03.821881 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Sep 3 23:23:03.936246 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 3 23:23:03.949771 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 3 23:23:04.047206 systemd-networkd[1065]: lo: Link UP Sep 3 23:23:04.049822 systemd-networkd[1065]: lo: Gained carrier Sep 3 23:23:04.056294 systemd-networkd[1065]: Enumeration completed Sep 3 23:23:04.057918 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 3 23:23:04.060211 systemd-networkd[1065]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 3 23:23:04.060218 systemd-networkd[1065]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 3 23:23:04.076935 systemd[1]: Reached target network.target - Network. 
Sep 3 23:23:04.083285 systemd-networkd[1065]: eth0: Link UP Sep 3 23:23:04.083301 systemd-networkd[1065]: eth0: Gained carrier Sep 3 23:23:04.083326 systemd-networkd[1065]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 3 23:23:04.101607 systemd-networkd[1065]: eth0: DHCPv4 address 172.31.18.182/20, gateway 172.31.16.1 acquired from 172.31.16.1 Sep 3 23:23:04.321307 ignition[990]: Ignition 2.21.0 Sep 3 23:23:04.321333 ignition[990]: Stage: fetch-offline Sep 3 23:23:04.322286 ignition[990]: no configs at "/usr/lib/ignition/base.d" Sep 3 23:23:04.322312 ignition[990]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Sep 3 23:23:04.324958 ignition[990]: Ignition finished successfully Sep 3 23:23:04.336934 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Sep 3 23:23:04.342335 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Sep 3 23:23:04.404778 ignition[1075]: Ignition 2.21.0 Sep 3 23:23:04.404810 ignition[1075]: Stage: fetch Sep 3 23:23:04.406802 ignition[1075]: no configs at "/usr/lib/ignition/base.d" Sep 3 23:23:04.406828 ignition[1075]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Sep 3 23:23:04.410489 ignition[1075]: PUT http://169.254.169.254/latest/api/token: attempt #1 Sep 3 23:23:04.426292 ignition[1075]: PUT result: OK Sep 3 23:23:04.435465 ignition[1075]: parsed url from cmdline: "" Sep 3 23:23:04.435660 ignition[1075]: no config URL provided Sep 3 23:23:04.435870 ignition[1075]: reading system config file "/usr/lib/ignition/user.ign" Sep 3 23:23:04.435901 ignition[1075]: no config at "/usr/lib/ignition/user.ign" Sep 3 23:23:04.435956 ignition[1075]: PUT http://169.254.169.254/latest/api/token: attempt #1 Sep 3 23:23:04.448797 ignition[1075]: PUT result: OK Sep 3 23:23:04.449139 ignition[1075]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1 Sep 3 23:23:04.454933 ignition[1075]: GET result: OK Sep 3 23:23:04.455388 ignition[1075]: parsing config with SHA512: 769b36206e32b37c467f53a8ff4f4c799618b2d151d21ebdc1636c606b9b0841273014f022398af075646cf184287d9a134fa2893446f9728397a2c823fa7d69 Sep 3 23:23:04.465394 unknown[1075]: fetched base config from "system" Sep 3 23:23:04.465415 unknown[1075]: fetched base config from "system" Sep 3 23:23:04.466373 ignition[1075]: fetch: fetch complete Sep 3 23:23:04.465428 unknown[1075]: fetched user config from "aws" Sep 3 23:23:04.466386 ignition[1075]: fetch: fetch passed Sep 3 23:23:04.466508 ignition[1075]: Ignition finished successfully Sep 3 23:23:04.484717 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Sep 3 23:23:04.494757 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Sep 3 23:23:04.547566 ignition[1082]: Ignition 2.21.0 Sep 3 23:23:04.548197 ignition[1082]: Stage: kargs Sep 3 23:23:04.548878 ignition[1082]: no configs at "/usr/lib/ignition/base.d" Sep 3 23:23:04.548926 ignition[1082]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Sep 3 23:23:04.549913 ignition[1082]: PUT http://169.254.169.254/latest/api/token: attempt #1 Sep 3 23:23:04.561155 ignition[1082]: PUT result: OK Sep 3 23:23:04.573592 ignition[1082]: kargs: kargs passed Sep 3 23:23:04.573954 ignition[1082]: Ignition finished successfully Sep 3 23:23:04.584010 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Sep 3 23:23:04.587979 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
Sep 3 23:23:04.630096 ignition[1089]: Ignition 2.21.0 Sep 3 23:23:04.630138 ignition[1089]: Stage: disks Sep 3 23:23:04.630817 ignition[1089]: no configs at "/usr/lib/ignition/base.d" Sep 3 23:23:04.630846 ignition[1089]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Sep 3 23:23:04.631022 ignition[1089]: PUT http://169.254.169.254/latest/api/token: attempt #1 Sep 3 23:23:04.638766 ignition[1089]: PUT result: OK Sep 3 23:23:04.657348 ignition[1089]: disks: disks passed Sep 3 23:23:04.657793 ignition[1089]: Ignition finished successfully Sep 3 23:23:04.664723 systemd[1]: Finished ignition-disks.service - Ignition (disks). Sep 3 23:23:04.674520 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Sep 3 23:23:04.678509 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Sep 3 23:23:04.690188 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 3 23:23:04.692933 systemd[1]: Reached target sysinit.target - System Initialization. Sep 3 23:23:04.696604 systemd[1]: Reached target basic.target - Basic System. Sep 3 23:23:04.703240 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Sep 3 23:23:04.776388 systemd-fsck[1098]: ROOT: clean, 15/553520 files, 52789/553472 blocks Sep 3 23:23:04.780602 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Sep 3 23:23:04.789958 systemd[1]: Mounting sysroot.mount - /sysroot... Sep 3 23:23:04.937550 kernel: EXT4-fs (nvme0n1p9): mounted filesystem d953e3b7-a0cb-45f7-b3a7-216a9a578dda r/w with ordered data mode. Quota mode: none. Sep 3 23:23:04.938213 systemd[1]: Mounted sysroot.mount - /sysroot. Sep 3 23:23:04.943222 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Sep 3 23:23:04.952675 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 3 23:23:04.957616 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Sep 3 23:23:04.961733 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Sep 3 23:23:04.961836 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Sep 3 23:23:04.961887 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Sep 3 23:23:04.993026 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Sep 3 23:23:05.000623 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Sep 3 23:23:05.014537 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (1117) Sep 3 23:23:05.019594 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem f1885725-917a-44ef-9d71-3c4c588cc4f4 Sep 3 23:23:05.019653 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Sep 3 23:23:05.030762 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Sep 3 23:23:05.030890 kernel: BTRFS info (device nvme0n1p6): enabling free space tree Sep 3 23:23:05.033958 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Sep 3 23:23:05.336337 initrd-setup-root[1142]: cut: /sysroot/etc/passwd: No such file or directory Sep 3 23:23:05.347329 initrd-setup-root[1149]: cut: /sysroot/etc/group: No such file or directory Sep 3 23:23:05.356545 initrd-setup-root[1156]: cut: /sysroot/etc/shadow: No such file or directory Sep 3 23:23:05.364625 initrd-setup-root[1163]: cut: /sysroot/etc/gshadow: No such file or directory Sep 3 23:23:05.585548 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Sep 3 23:23:05.593213 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Sep 3 23:23:05.601081 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Sep 3 23:23:05.640742 systemd[1]: sysroot-oem.mount: Deactivated successfully. Sep 3 23:23:05.648517 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem f1885725-917a-44ef-9d71-3c4c588cc4f4 Sep 3 23:23:05.676930 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Sep 3 23:23:05.697296 ignition[1231]: INFO : Ignition 2.21.0 Sep 3 23:23:05.697296 ignition[1231]: INFO : Stage: mount Sep 3 23:23:05.703407 ignition[1231]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 3 23:23:05.703407 ignition[1231]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Sep 3 23:23:05.703407 ignition[1231]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Sep 3 23:23:05.717291 ignition[1231]: INFO : PUT result: OK Sep 3 23:23:05.727516 ignition[1231]: INFO : mount: mount passed Sep 3 23:23:05.729449 ignition[1231]: INFO : Ignition finished successfully Sep 3 23:23:05.735664 systemd[1]: Finished ignition-mount.service - Ignition (mount). Sep 3 23:23:05.740083 systemd[1]: Starting ignition-files.service - Ignition (files)... Sep 3 23:23:05.831720 systemd-networkd[1065]: eth0: Gained IPv6LL Sep 3 23:23:05.941632 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 3 23:23:05.997534 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (1243) Sep 3 23:23:06.002487 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem f1885725-917a-44ef-9d71-3c4c588cc4f4 Sep 3 23:23:06.002565 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Sep 3 23:23:06.010691 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Sep 3 23:23:06.010813 kernel: BTRFS info (device nvme0n1p6): enabling free space tree Sep 3 23:23:06.014741 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Sep 3 23:23:06.062686 ignition[1260]: INFO : Ignition 2.21.0 Sep 3 23:23:06.062686 ignition[1260]: INFO : Stage: files Sep 3 23:23:06.067747 ignition[1260]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 3 23:23:06.067747 ignition[1260]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Sep 3 23:23:06.067747 ignition[1260]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Sep 3 23:23:06.067747 ignition[1260]: INFO : PUT result: OK Sep 3 23:23:06.083778 ignition[1260]: DEBUG : files: compiled without relabeling support, skipping Sep 3 23:23:06.089646 ignition[1260]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Sep 3 23:23:06.089646 ignition[1260]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Sep 3 23:23:06.100798 ignition[1260]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Sep 3 23:23:06.105286 ignition[1260]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Sep 3 23:23:06.109213 unknown[1260]: wrote ssh authorized keys file for user: core Sep 3 23:23:06.112059 ignition[1260]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Sep 3 23:23:06.116034 ignition[1260]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Sep 3 23:23:06.116034 ignition[1260]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1 Sep 3 23:23:06.184319 ignition[1260]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Sep 3 23:23:06.546017 ignition[1260]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Sep 3 23:23:06.546017 ignition[1260]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Sep 3 23:23:06.546017 ignition[1260]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1 Sep 3 23:23:06.743431 ignition[1260]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Sep 3 23:23:06.867176 ignition[1260]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Sep 3 23:23:06.867176 ignition[1260]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Sep 3 23:23:06.876205 ignition[1260]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Sep 3 23:23:06.876205 ignition[1260]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Sep 3 23:23:06.876205 ignition[1260]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Sep 3 23:23:06.876205 ignition[1260]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 3 23:23:06.876205 ignition[1260]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 3 23:23:06.876205 ignition[1260]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 3 23:23:06.876205 
ignition[1260]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 3 23:23:06.876205 ignition[1260]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Sep 3 23:23:06.876205 ignition[1260]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Sep 3 23:23:06.876205 ignition[1260]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" Sep 3 23:23:06.926265 ignition[1260]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" Sep 3 23:23:06.926265 ignition[1260]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" Sep 3 23:23:06.926265 ignition[1260]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-arm64.raw: attempt #1 Sep 3 23:23:07.309789 ignition[1260]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Sep 3 23:23:07.649105 ignition[1260]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" Sep 3 23:23:07.649105 ignition[1260]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Sep 3 23:23:07.658447 ignition[1260]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 3 23:23:07.658447 ignition[1260]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 3 23:23:07.658447 ignition[1260]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Sep 3 23:23:07.658447 ignition[1260]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service" Sep 3 23:23:07.658447 ignition[1260]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service" Sep 3 23:23:07.658447 ignition[1260]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json" Sep 3 23:23:07.658447 ignition[1260]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json" Sep 3 23:23:07.658447 ignition[1260]: INFO : files: files passed Sep 3 23:23:07.658447 ignition[1260]: INFO : Ignition finished successfully Sep 3 23:23:07.677194 systemd[1]: Finished ignition-files.service - Ignition (files). Sep 3 23:23:07.682340 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Sep 3 23:23:07.695860 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Sep 3 23:23:07.729597 systemd[1]: ignition-quench.service: Deactivated successfully. Sep 3 23:23:07.730691 systemd[1]: Finished ignition-quench.service - Ignition (record completion). 
Sep 3 23:23:07.755455 initrd-setup-root-after-ignition[1290]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 3 23:23:07.759984 initrd-setup-root-after-ignition[1290]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Sep 3 23:23:07.764242 initrd-setup-root-after-ignition[1294]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 3 23:23:07.768829 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 3 23:23:07.772586 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Sep 3 23:23:07.781694 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Sep 3 23:23:07.852407 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Sep 3 23:23:07.852664 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Sep 3 23:23:07.861758 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Sep 3 23:23:07.864614 systemd[1]: Reached target initrd.target - Initrd Default Target. Sep 3 23:23:07.867423 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Sep 3 23:23:07.871554 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Sep 3 23:23:07.917550 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 3 23:23:07.919160 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Sep 3 23:23:07.966471 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Sep 3 23:23:07.972939 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 3 23:23:07.976619 systemd[1]: Stopped target timers.target - Timer Units. Sep 3 23:23:07.981746 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Sep 3 23:23:07.981985 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 3 23:23:07.989200 systemd[1]: Stopped target initrd.target - Initrd Default Target. Sep 3 23:23:07.993970 systemd[1]: Stopped target basic.target - Basic System. Sep 3 23:23:07.996979 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Sep 3 23:23:08.003252 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Sep 3 23:23:08.012349 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Sep 3 23:23:08.017791 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Sep 3 23:23:08.021587 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Sep 3 23:23:08.027308 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Sep 3 23:23:08.031763 systemd[1]: Stopped target sysinit.target - System Initialization. Sep 3 23:23:08.040965 systemd[1]: Stopped target local-fs.target - Local File Systems. Sep 3 23:23:08.045655 systemd[1]: Stopped target swap.target - Swaps. Sep 3 23:23:08.053256 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Sep 3 23:23:08.053632 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Sep 3 23:23:08.060054 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Sep 3 23:23:08.065364 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 3 23:23:08.073414 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. 
Sep 3 23:23:08.079139 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 3 23:23:08.083698 systemd[1]: dracut-initqueue.service: Deactivated successfully. Sep 3 23:23:08.083940 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Sep 3 23:23:08.092807 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Sep 3 23:23:08.093194 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 3 23:23:08.102123 systemd[1]: ignition-files.service: Deactivated successfully. Sep 3 23:23:08.102369 systemd[1]: Stopped ignition-files.service - Ignition (files). Sep 3 23:23:08.112670 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Sep 3 23:23:08.119597 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Sep 3 23:23:08.119984 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Sep 3 23:23:08.133669 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Sep 3 23:23:08.142943 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Sep 3 23:23:08.143947 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Sep 3 23:23:08.148963 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Sep 3 23:23:08.149288 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Sep 3 23:23:08.185203 systemd[1]: initrd-cleanup.service: Deactivated successfully. Sep 3 23:23:08.188377 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Sep 3 23:23:08.204794 systemd[1]: sysroot-boot.mount: Deactivated successfully. Sep 3 23:23:08.212310 ignition[1314]: INFO : Ignition 2.21.0 Sep 3 23:23:08.212310 ignition[1314]: INFO : Stage: umount Sep 3 23:23:08.217268 ignition[1314]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 3 23:23:08.217268 ignition[1314]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Sep 3 23:23:08.217268 ignition[1314]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Sep 3 23:23:08.226832 ignition[1314]: INFO : PUT result: OK Sep 3 23:23:08.236146 ignition[1314]: INFO : umount: umount passed Sep 3 23:23:08.238111 ignition[1314]: INFO : Ignition finished successfully Sep 3 23:23:08.243801 systemd[1]: ignition-mount.service: Deactivated successfully. Sep 3 23:23:08.244072 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Sep 3 23:23:08.250349 systemd[1]: ignition-disks.service: Deactivated successfully. Sep 3 23:23:08.250443 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Sep 3 23:23:08.255173 systemd[1]: ignition-kargs.service: Deactivated successfully. Sep 3 23:23:08.255279 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Sep 3 23:23:08.260216 systemd[1]: ignition-fetch.service: Deactivated successfully. Sep 3 23:23:08.260309 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Sep 3 23:23:08.263134 systemd[1]: Stopped target network.target - Network. Sep 3 23:23:08.267732 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Sep 3 23:23:08.267849 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Sep 3 23:23:08.271782 systemd[1]: Stopped target paths.target - Path Units. Sep 3 23:23:08.275825 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Sep 3 23:23:08.281107 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. 
Sep 3 23:23:08.284266 systemd[1]: Stopped target slices.target - Slice Units. Sep 3 23:23:08.288782 systemd[1]: Stopped target sockets.target - Socket Units. Sep 3 23:23:08.288952 systemd[1]: iscsid.socket: Deactivated successfully. Sep 3 23:23:08.289040 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Sep 3 23:23:08.289164 systemd[1]: iscsiuio.socket: Deactivated successfully. Sep 3 23:23:08.289223 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 3 23:23:08.289313 systemd[1]: ignition-setup.service: Deactivated successfully. Sep 3 23:23:08.289413 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Sep 3 23:23:08.289613 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Sep 3 23:23:08.289710 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Sep 3 23:23:08.304543 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Sep 3 23:23:08.309712 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Sep 3 23:23:08.326138 systemd[1]: sysroot-boot.service: Deactivated successfully. Sep 3 23:23:08.326386 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Sep 3 23:23:08.345452 systemd[1]: systemd-networkd.service: Deactivated successfully. Sep 3 23:23:08.345702 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Sep 3 23:23:08.353428 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Sep 3 23:23:08.353967 systemd[1]: systemd-resolved.service: Deactivated successfully. Sep 3 23:23:08.354175 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Sep 3 23:23:08.371597 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Sep 3 23:23:08.373824 systemd[1]: Stopped target network-pre.target - Preparation for Network. Sep 3 23:23:08.376848 systemd[1]: systemd-networkd.socket: Deactivated successfully. Sep 3 23:23:08.376961 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Sep 3 23:23:08.385877 systemd[1]: initrd-setup-root.service: Deactivated successfully. Sep 3 23:23:08.386015 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Sep 3 23:23:08.411650 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Sep 3 23:23:08.450818 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Sep 3 23:23:08.451605 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 3 23:23:08.460219 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 3 23:23:08.461346 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 3 23:23:08.467884 systemd[1]: systemd-modules-load.service: Deactivated successfully. Sep 3 23:23:08.467994 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Sep 3 23:23:08.471067 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Sep 3 23:23:08.471182 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 3 23:23:08.489657 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 3 23:23:08.503768 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Sep 3 23:23:08.503928 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Sep 3 23:23:08.520513 systemd[1]: systemd-udevd.service: Deactivated successfully. 
Sep 3 23:23:08.524603 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 3 23:23:08.532020 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Sep 3 23:23:08.532139 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Sep 3 23:23:08.542148 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Sep 3 23:23:08.542246 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Sep 3 23:23:08.544999 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Sep 3 23:23:08.545118 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Sep 3 23:23:08.553366 systemd[1]: dracut-cmdline.service: Deactivated successfully. Sep 3 23:23:08.553513 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Sep 3 23:23:08.559195 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 3 23:23:08.559312 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 3 23:23:08.578340 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Sep 3 23:23:08.581552 systemd[1]: systemd-network-generator.service: Deactivated successfully. Sep 3 23:23:08.581691 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Sep 3 23:23:08.597960 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Sep 3 23:23:08.598066 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 3 23:23:08.607768 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 3 23:23:08.607881 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 3 23:23:08.620965 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully. Sep 3 23:23:08.621110 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Sep 3 23:23:08.621204 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Sep 3 23:23:08.622394 systemd[1]: network-cleanup.service: Deactivated successfully. Sep 3 23:23:08.624589 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Sep 3 23:23:08.649821 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Sep 3 23:23:08.650024 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Sep 3 23:23:08.653826 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Sep 3 23:23:08.669791 systemd[1]: Starting initrd-switch-root.service - Switch Root... Sep 3 23:23:08.710155 systemd[1]: Switching root. Sep 3 23:23:08.769779 systemd-journald[257]: Journal stopped Sep 3 23:23:10.949439 systemd-journald[257]: Received SIGTERM from PID 1 (systemd). 
Sep 3 23:23:10.949597 kernel: SELinux: policy capability network_peer_controls=1 Sep 3 23:23:10.949649 kernel: SELinux: policy capability open_perms=1 Sep 3 23:23:10.949680 kernel: SELinux: policy capability extended_socket_class=1 Sep 3 23:23:10.949727 kernel: SELinux: policy capability always_check_network=0 Sep 3 23:23:10.949758 kernel: SELinux: policy capability cgroup_seclabel=1 Sep 3 23:23:10.949789 kernel: SELinux: policy capability nnp_nosuid_transition=1 Sep 3 23:23:10.949818 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Sep 3 23:23:10.949847 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Sep 3 23:23:10.949876 kernel: SELinux: policy capability userspace_initial_context=0 Sep 3 23:23:10.949905 kernel: audit: type=1403 audit(1756941789.057:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Sep 3 23:23:10.949943 systemd[1]: Successfully loaded SELinux policy in 58.689ms. Sep 3 23:23:10.949995 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 26.800ms. Sep 3 23:23:10.950029 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Sep 3 23:23:10.950061 systemd[1]: Detected virtualization amazon. Sep 3 23:23:10.950090 systemd[1]: Detected architecture arm64. Sep 3 23:23:10.950120 systemd[1]: Detected first boot. Sep 3 23:23:10.950160 systemd[1]: Initializing machine ID from VM UUID. Sep 3 23:23:10.950191 zram_generator::config[1357]: No configuration found. Sep 3 23:23:10.950222 kernel: NET: Registered PF_VSOCK protocol family Sep 3 23:23:10.950249 systemd[1]: Populated /etc with preset unit settings. Sep 3 23:23:10.950285 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Sep 3 23:23:10.950313 systemd[1]: initrd-switch-root.service: Deactivated successfully. Sep 3 23:23:10.950346 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Sep 3 23:23:10.950377 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Sep 3 23:23:10.950408 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Sep 3 23:23:10.950440 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Sep 3 23:23:10.950471 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Sep 3 23:23:10.963830 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Sep 3 23:23:10.963879 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Sep 3 23:23:10.963913 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Sep 3 23:23:10.963945 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Sep 3 23:23:10.963974 systemd[1]: Created slice user.slice - User and Session Slice. Sep 3 23:23:10.964007 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 3 23:23:10.964041 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 3 23:23:10.964071 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Sep 3 23:23:10.964104 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. 
Sep 3 23:23:10.964145 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Sep 3 23:23:10.964181 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 3 23:23:10.964209 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Sep 3 23:23:10.964239 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 3 23:23:10.964269 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 3 23:23:10.964298 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Sep 3 23:23:10.964329 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Sep 3 23:23:10.964357 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Sep 3 23:23:10.964387 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Sep 3 23:23:10.964422 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 3 23:23:10.964457 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 3 23:23:10.964517 systemd[1]: Reached target slices.target - Slice Units. Sep 3 23:23:10.964552 systemd[1]: Reached target swap.target - Swaps. Sep 3 23:23:10.964582 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Sep 3 23:23:10.964611 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Sep 3 23:23:10.964643 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Sep 3 23:23:10.964675 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 3 23:23:10.964708 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 3 23:23:10.964744 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 3 23:23:10.964773 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Sep 3 23:23:10.964800 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Sep 3 23:23:10.964828 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Sep 3 23:23:10.964882 systemd[1]: Mounting media.mount - External Media Directory... Sep 3 23:23:10.964916 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Sep 3 23:23:10.964947 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Sep 3 23:23:10.964975 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Sep 3 23:23:10.965004 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Sep 3 23:23:10.965039 systemd[1]: Reached target machines.target - Containers. Sep 3 23:23:10.965070 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Sep 3 23:23:10.965101 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 3 23:23:10.965129 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 3 23:23:10.965158 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Sep 3 23:23:10.965186 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 3 23:23:10.965217 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... 
Sep 3 23:23:10.965244 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 3 23:23:10.965280 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Sep 3 23:23:10.965309 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 3 23:23:10.965338 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Sep 3 23:23:10.965368 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Sep 3 23:23:10.965400 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Sep 3 23:23:10.965428 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Sep 3 23:23:10.965455 systemd[1]: Stopped systemd-fsck-usr.service. Sep 3 23:23:10.966314 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 3 23:23:10.966388 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 3 23:23:10.966420 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 3 23:23:10.966451 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Sep 3 23:23:10.966530 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Sep 3 23:23:10.966565 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Sep 3 23:23:10.966602 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 3 23:23:10.966634 systemd[1]: verity-setup.service: Deactivated successfully. Sep 3 23:23:10.966663 systemd[1]: Stopped verity-setup.service. Sep 3 23:23:10.966693 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Sep 3 23:23:10.966721 kernel: loop: module loaded Sep 3 23:23:10.966754 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Sep 3 23:23:10.966787 systemd[1]: Mounted media.mount - External Media Directory. Sep 3 23:23:10.966815 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Sep 3 23:23:10.966843 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Sep 3 23:23:10.966874 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Sep 3 23:23:10.966903 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 3 23:23:10.966932 systemd[1]: modprobe@configfs.service: Deactivated successfully. Sep 3 23:23:10.966961 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Sep 3 23:23:10.966992 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 3 23:23:10.967021 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 3 23:23:10.967054 kernel: fuse: init (API version 7.41) Sep 3 23:23:10.967082 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 3 23:23:10.967112 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 3 23:23:10.967144 systemd[1]: modprobe@fuse.service: Deactivated successfully. Sep 3 23:23:10.967173 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Sep 3 23:23:10.967204 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 3 23:23:10.967233 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. 
Sep 3 23:23:10.967262 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 3 23:23:10.967291 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Sep 3 23:23:10.967329 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Sep 3 23:23:10.967358 systemd[1]: Reached target network-pre.target - Preparation for Network. Sep 3 23:23:10.967387 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Sep 3 23:23:10.967516 systemd-journald[1443]: Collecting audit messages is disabled. Sep 3 23:23:10.967580 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Sep 3 23:23:10.967614 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Sep 3 23:23:10.967646 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 3 23:23:10.967683 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Sep 3 23:23:10.967711 kernel: ACPI: bus type drm_connector registered Sep 3 23:23:10.967742 systemd-journald[1443]: Journal started Sep 3 23:23:10.967793 systemd-journald[1443]: Runtime Journal (/run/log/journal/ec2ded1bd257c4678d31690343c26cd6) is 8M, max 75.3M, 67.3M free. Sep 3 23:23:10.235083 systemd[1]: Queued start job for default target multi-user.target. Sep 3 23:23:10.251207 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. Sep 3 23:23:10.252042 systemd[1]: systemd-journald.service: Deactivated successfully. Sep 3 23:23:10.978656 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Sep 3 23:23:10.978734 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 3 23:23:10.990696 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Sep 3 23:23:10.996551 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 3 23:23:11.004507 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Sep 3 23:23:11.010596 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 3 23:23:11.017642 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 3 23:23:11.029045 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Sep 3 23:23:11.043958 systemd[1]: Started systemd-journald.service - Journal Service. Sep 3 23:23:11.040612 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Sep 3 23:23:11.046299 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 3 23:23:11.046747 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 3 23:23:11.050763 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Sep 3 23:23:11.057229 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Sep 3 23:23:11.060614 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Sep 3 23:23:11.134702 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... 
Sep 3 23:23:11.142359 kernel: loop0: detected capacity change from 0 to 138376 Sep 3 23:23:11.148610 systemd[1]: Starting systemd-sysusers.service - Create System Users... Sep 3 23:23:11.154199 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Sep 3 23:23:11.169889 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Sep 3 23:23:11.180937 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Sep 3 23:23:11.198279 systemd-journald[1443]: Time spent on flushing to /var/log/journal/ec2ded1bd257c4678d31690343c26cd6 is 129.789ms for 939 entries. Sep 3 23:23:11.198279 systemd-journald[1443]: System Journal (/var/log/journal/ec2ded1bd257c4678d31690343c26cd6) is 8M, max 195.6M, 187.6M free. Sep 3 23:23:11.349649 systemd-journald[1443]: Received client request to flush runtime journal. Sep 3 23:23:11.349734 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Sep 3 23:23:11.349768 kernel: loop1: detected capacity change from 0 to 61240 Sep 3 23:23:11.222847 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 3 23:23:11.291644 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Sep 3 23:23:11.294345 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Sep 3 23:23:11.354558 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Sep 3 23:23:11.384606 systemd[1]: Finished systemd-sysusers.service - Create System Users. Sep 3 23:23:11.391954 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 3 23:23:11.441317 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 3 23:23:11.491009 kernel: loop2: detected capacity change from 0 to 203944 Sep 3 23:23:11.492349 systemd-tmpfiles[1507]: ACLs are not supported, ignoring. Sep 3 23:23:11.492570 systemd-tmpfiles[1507]: ACLs are not supported, ignoring. Sep 3 23:23:11.513626 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 3 23:23:11.619548 kernel: loop3: detected capacity change from 0 to 107312 Sep 3 23:23:11.692523 kernel: loop4: detected capacity change from 0 to 138376 Sep 3 23:23:11.728537 kernel: loop5: detected capacity change from 0 to 61240 Sep 3 23:23:11.761521 kernel: loop6: detected capacity change from 0 to 203944 Sep 3 23:23:11.811530 kernel: loop7: detected capacity change from 0 to 107312 Sep 3 23:23:11.848217 (sd-merge)[1515]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'. Sep 3 23:23:11.850574 (sd-merge)[1515]: Merged extensions into '/usr'. Sep 3 23:23:11.864085 systemd[1]: Reload requested from client PID 1472 ('systemd-sysext') (unit systemd-sysext.service)... Sep 3 23:23:11.864118 systemd[1]: Reloading... Sep 3 23:23:12.101530 zram_generator::config[1541]: No configuration found. Sep 3 23:23:12.120598 ldconfig[1468]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Sep 3 23:23:12.392267 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 3 23:23:12.639058 systemd[1]: Reloading finished in 773 ms. Sep 3 23:23:12.674392 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. 
Sep 3 23:23:12.678307 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Sep 3 23:23:12.683166 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Sep 3 23:23:12.703124 systemd[1]: Starting ensure-sysext.service... Sep 3 23:23:12.709813 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 3 23:23:12.720736 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 3 23:23:12.756953 systemd[1]: Reload requested from client PID 1594 ('systemctl') (unit ensure-sysext.service)... Sep 3 23:23:12.756989 systemd[1]: Reloading... Sep 3 23:23:12.830186 systemd-tmpfiles[1595]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Sep 3 23:23:12.830278 systemd-tmpfiles[1595]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Sep 3 23:23:12.830932 systemd-tmpfiles[1595]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Sep 3 23:23:12.833565 systemd-tmpfiles[1595]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Sep 3 23:23:12.837699 systemd-tmpfiles[1595]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Sep 3 23:23:12.838412 systemd-tmpfiles[1595]: ACLs are not supported, ignoring. Sep 3 23:23:12.838621 systemd-tmpfiles[1595]: ACLs are not supported, ignoring. Sep 3 23:23:12.854027 systemd-udevd[1596]: Using default interface naming scheme 'v255'. Sep 3 23:23:12.855136 systemd-tmpfiles[1595]: Detected autofs mount point /boot during canonicalization of boot. Sep 3 23:23:12.855166 systemd-tmpfiles[1595]: Skipping /boot Sep 3 23:23:12.942536 systemd-tmpfiles[1595]: Detected autofs mount point /boot during canonicalization of boot. Sep 3 23:23:12.944576 systemd-tmpfiles[1595]: Skipping /boot Sep 3 23:23:12.976532 zram_generator::config[1646]: No configuration found. Sep 3 23:23:13.369949 (udev-worker)[1629]: Network interface NamePolicy= disabled on kernel command line. Sep 3 23:23:13.376217 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 3 23:23:13.671873 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Sep 3 23:23:13.672665 systemd[1]: Reloading finished in 914 ms. Sep 3 23:23:13.690777 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 3 23:23:13.696626 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 3 23:23:13.748287 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 3 23:23:13.757891 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Sep 3 23:23:13.768744 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Sep 3 23:23:13.777976 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 3 23:23:13.803326 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 3 23:23:13.812925 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Sep 3 23:23:13.844196 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. 
Sep 3 23:23:13.850293 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 3 23:23:13.860776 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 3 23:23:13.882081 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 3 23:23:13.887198 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 3 23:23:13.887565 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 3 23:23:13.896639 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 3 23:23:13.897056 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 3 23:23:13.897255 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 3 23:23:13.906208 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Sep 3 23:23:13.939600 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Sep 3 23:23:13.948554 systemd[1]: Finished ensure-sysext.service. Sep 3 23:23:13.965723 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 3 23:23:13.970205 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 3 23:23:13.974788 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 3 23:23:13.974878 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 3 23:23:13.974989 systemd[1]: Reached target time-set.target - System Time Set. Sep 3 23:23:13.978890 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 3 23:23:13.981620 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 3 23:23:13.988280 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 3 23:23:13.989556 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 3 23:23:13.996501 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 3 23:23:14.015106 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Sep 3 23:23:14.029313 systemd[1]: Starting systemd-update-done.service - Update is Completed... Sep 3 23:23:14.051947 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 3 23:23:14.058703 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 3 23:23:14.065805 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 3 23:23:14.079644 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 3 23:23:14.081961 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. 
Sep 3 23:23:14.112333 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Sep 3 23:23:14.120073 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 3 23:23:14.131653 systemd[1]: Finished systemd-update-done.service - Update is Completed. Sep 3 23:23:14.148535 augenrules[1818]: No rules Sep 3 23:23:14.153115 systemd[1]: audit-rules.service: Deactivated successfully. Sep 3 23:23:14.154721 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 3 23:23:14.447214 systemd[1]: Started systemd-userdbd.service - User Database Manager. Sep 3 23:23:14.471181 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 3 23:23:14.596733 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Sep 3 23:23:14.600013 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Sep 3 23:23:14.662584 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Sep 3 23:23:14.733407 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 3 23:23:14.818900 systemd-networkd[1749]: lo: Link UP Sep 3 23:23:14.818927 systemd-networkd[1749]: lo: Gained carrier Sep 3 23:23:14.822787 systemd-networkd[1749]: Enumeration completed Sep 3 23:23:14.823046 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 3 23:23:14.827071 systemd-resolved[1751]: Positive Trust Anchors: Sep 3 23:23:14.827106 systemd-resolved[1751]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 3 23:23:14.827172 systemd-resolved[1751]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 3 23:23:14.829678 systemd-networkd[1749]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 3 23:23:14.829686 systemd-networkd[1749]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 3 23:23:14.832981 systemd-networkd[1749]: eth0: Link UP Sep 3 23:23:14.833358 systemd-networkd[1749]: eth0: Gained carrier Sep 3 23:23:14.833400 systemd-networkd[1749]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 3 23:23:14.834230 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Sep 3 23:23:14.842999 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Sep 3 23:23:14.859611 systemd-networkd[1749]: eth0: DHCPv4 address 172.31.18.182/20, gateway 172.31.16.1 acquired from 172.31.16.1 Sep 3 23:23:14.860074 systemd-resolved[1751]: Defaulting to hostname 'linux'. Sep 3 23:23:14.867193 systemd[1]: Started systemd-resolved.service - Network Name Resolution. 
Sep 3 23:23:14.870828 systemd[1]: Reached target network.target - Network. Sep 3 23:23:14.873276 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 3 23:23:14.876325 systemd[1]: Reached target sysinit.target - System Initialization. Sep 3 23:23:14.880068 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Sep 3 23:23:14.883448 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Sep 3 23:23:14.889705 systemd[1]: Started logrotate.timer - Daily rotation of log files. Sep 3 23:23:14.894001 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Sep 3 23:23:14.897946 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Sep 3 23:23:14.905137 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Sep 3 23:23:14.905231 systemd[1]: Reached target paths.target - Path Units. Sep 3 23:23:14.908152 systemd[1]: Reached target timers.target - Timer Units. Sep 3 23:23:14.915631 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Sep 3 23:23:14.923415 systemd[1]: Starting docker.socket - Docker Socket for the API... Sep 3 23:23:14.932962 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Sep 3 23:23:14.936742 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Sep 3 23:23:14.939981 systemd[1]: Reached target ssh-access.target - SSH Access Available. Sep 3 23:23:14.947309 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Sep 3 23:23:14.953569 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Sep 3 23:23:14.958229 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Sep 3 23:23:14.962053 systemd[1]: Listening on docker.socket - Docker Socket for the API. Sep 3 23:23:14.966180 systemd[1]: Reached target sockets.target - Socket Units. Sep 3 23:23:14.969066 systemd[1]: Reached target basic.target - Basic System. Sep 3 23:23:14.972082 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Sep 3 23:23:14.972369 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Sep 3 23:23:14.975258 systemd[1]: Starting containerd.service - containerd container runtime... Sep 3 23:23:14.981823 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Sep 3 23:23:14.998805 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Sep 3 23:23:15.005752 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Sep 3 23:23:15.013275 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Sep 3 23:23:15.023988 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Sep 3 23:23:15.028246 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Sep 3 23:23:15.035006 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Sep 3 23:23:15.045381 systemd[1]: Started ntpd.service - Network Time Service. 
Sep 3 23:23:15.067643 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Sep 3 23:23:15.079409 systemd[1]: Starting setup-oem.service - Setup OEM... Sep 3 23:23:15.090072 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Sep 3 23:23:15.100419 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Sep 3 23:23:15.120928 systemd[1]: Starting systemd-logind.service - User Login Management... Sep 3 23:23:15.123619 jq[1880]: false Sep 3 23:23:15.128187 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Sep 3 23:23:15.129453 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Sep 3 23:23:15.134039 systemd[1]: Starting update-engine.service - Update Engine... Sep 3 23:23:15.145686 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Sep 3 23:23:15.163522 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Sep 3 23:23:15.171372 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Sep 3 23:23:15.173551 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Sep 3 23:23:15.262663 extend-filesystems[1881]: Found /dev/nvme0n1p6 Sep 3 23:23:15.292528 update_engine[1893]: I20250903 23:23:15.271257 1893 main.cc:92] Flatcar Update Engine starting Sep 3 23:23:15.301304 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Sep 3 23:23:15.302007 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Sep 3 23:23:15.314531 jq[1894]: true Sep 3 23:23:15.331610 ntpd[1883]: ntpd 4.2.8p17@1.4004-o Wed Sep 3 21:32:01 UTC 2025 (1): Starting Sep 3 23:23:15.333632 ntpd[1883]: 3 Sep 23:23:15 ntpd[1883]: ntpd 4.2.8p17@1.4004-o Wed Sep 3 21:32:01 UTC 2025 (1): Starting Sep 3 23:23:15.333632 ntpd[1883]: 3 Sep 23:23:15 ntpd[1883]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Sep 3 23:23:15.333632 ntpd[1883]: 3 Sep 23:23:15 ntpd[1883]: ---------------------------------------------------- Sep 3 23:23:15.333632 ntpd[1883]: 3 Sep 23:23:15 ntpd[1883]: ntp-4 is maintained by Network Time Foundation, Sep 3 23:23:15.333632 ntpd[1883]: 3 Sep 23:23:15 ntpd[1883]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Sep 3 23:23:15.333632 ntpd[1883]: 3 Sep 23:23:15 ntpd[1883]: corporation. Support and training for ntp-4 are Sep 3 23:23:15.333632 ntpd[1883]: 3 Sep 23:23:15 ntpd[1883]: available at https://www.nwtime.org/support Sep 3 23:23:15.333632 ntpd[1883]: 3 Sep 23:23:15 ntpd[1883]: ---------------------------------------------------- Sep 3 23:23:15.332240 ntpd[1883]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Sep 3 23:23:15.332261 ntpd[1883]: ---------------------------------------------------- Sep 3 23:23:15.332279 ntpd[1883]: ntp-4 is maintained by Network Time Foundation, Sep 3 23:23:15.332297 ntpd[1883]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Sep 3 23:23:15.332313 ntpd[1883]: corporation. Support and training for ntp-4 are Sep 3 23:23:15.332336 ntpd[1883]: available at https://www.nwtime.org/support Sep 3 23:23:15.332353 ntpd[1883]: ---------------------------------------------------- Sep 3 23:23:15.342255 systemd[1]: motdgen.service: Deactivated successfully. Sep 3 23:23:15.342931 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. 
Sep 3 23:23:15.355077 ntpd[1883]: proto: precision = 0.096 usec (-23) Sep 3 23:23:15.361074 ntpd[1883]: 3 Sep 23:23:15 ntpd[1883]: proto: precision = 0.096 usec (-23) Sep 3 23:23:15.361347 extend-filesystems[1881]: Found /dev/nvme0n1p9 Sep 3 23:23:15.373724 extend-filesystems[1881]: Checking size of /dev/nvme0n1p9 Sep 3 23:23:15.379448 ntpd[1883]: 3 Sep 23:23:15 ntpd[1883]: basedate set to 2025-08-22 Sep 3 23:23:15.379448 ntpd[1883]: 3 Sep 23:23:15 ntpd[1883]: gps base set to 2025-08-24 (week 2381) Sep 3 23:23:15.359506 ntpd[1883]: basedate set to 2025-08-22 Sep 3 23:23:15.375397 (ntainerd)[1916]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Sep 3 23:23:15.367597 ntpd[1883]: gps base set to 2025-08-24 (week 2381) Sep 3 23:23:15.386612 ntpd[1883]: 3 Sep 23:23:15 ntpd[1883]: Listen and drop on 0 v6wildcard [::]:123 Sep 3 23:23:15.386612 ntpd[1883]: 3 Sep 23:23:15 ntpd[1883]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Sep 3 23:23:15.386612 ntpd[1883]: 3 Sep 23:23:15 ntpd[1883]: Listen normally on 2 lo 127.0.0.1:123 Sep 3 23:23:15.386612 ntpd[1883]: 3 Sep 23:23:15 ntpd[1883]: Listen normally on 3 eth0 172.31.18.182:123 Sep 3 23:23:15.386612 ntpd[1883]: 3 Sep 23:23:15 ntpd[1883]: Listen normally on 4 lo [::1]:123 Sep 3 23:23:15.386612 ntpd[1883]: 3 Sep 23:23:15 ntpd[1883]: bind(21) AF_INET6 fe80::4b6:31ff:fe03:bb25%2#123 flags 0x11 failed: Cannot assign requested address Sep 3 23:23:15.386612 ntpd[1883]: 3 Sep 23:23:15 ntpd[1883]: unable to create socket on eth0 (5) for fe80::4b6:31ff:fe03:bb25%2#123 Sep 3 23:23:15.386612 ntpd[1883]: 3 Sep 23:23:15 ntpd[1883]: failed to init interface for address fe80::4b6:31ff:fe03:bb25%2 Sep 3 23:23:15.386612 ntpd[1883]: 3 Sep 23:23:15 ntpd[1883]: Listening on routing socket on fd #21 for interface updates Sep 3 23:23:15.383559 ntpd[1883]: Listen and drop on 0 v6wildcard [::]:123 Sep 3 23:23:15.383654 ntpd[1883]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Sep 3 23:23:15.385604 ntpd[1883]: Listen normally on 2 lo 127.0.0.1:123 Sep 3 23:23:15.385685 ntpd[1883]: Listen normally on 3 eth0 172.31.18.182:123 Sep 3 23:23:15.385755 ntpd[1883]: Listen normally on 4 lo [::1]:123 Sep 3 23:23:15.385838 ntpd[1883]: bind(21) AF_INET6 fe80::4b6:31ff:fe03:bb25%2#123 flags 0x11 failed: Cannot assign requested address Sep 3 23:23:15.385875 ntpd[1883]: unable to create socket on eth0 (5) for fe80::4b6:31ff:fe03:bb25%2#123 Sep 3 23:23:15.385902 ntpd[1883]: failed to init interface for address fe80::4b6:31ff:fe03:bb25%2 Sep 3 23:23:15.406871 tar[1902]: linux-arm64/helm Sep 3 23:23:15.385961 ntpd[1883]: Listening on routing socket on fd #21 for interface updates Sep 3 23:23:15.455058 systemd[1]: Finished setup-oem.service - Setup OEM. Sep 3 23:23:15.462913 ntpd[1883]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Sep 3 23:23:15.464348 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Sep 3 23:23:15.469756 ntpd[1883]: 3 Sep 23:23:15 ntpd[1883]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Sep 3 23:23:15.469756 ntpd[1883]: 3 Sep 23:23:15 ntpd[1883]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Sep 3 23:23:15.469876 jq[1920]: true Sep 3 23:23:15.462979 ntpd[1883]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Sep 3 23:23:15.483252 dbus-daemon[1878]: [system] SELinux support is enabled Sep 3 23:23:15.491203 systemd[1]: Started dbus.service - D-Bus System Message Bus. 
Sep 3 23:23:15.503745 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Sep 3 23:23:15.503837 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Sep 3 23:23:15.507144 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Sep 3 23:23:15.507189 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Sep 3 23:23:15.517142 dbus-daemon[1878]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1749 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Sep 3 23:23:15.523821 extend-filesystems[1881]: Resized partition /dev/nvme0n1p9 Sep 3 23:23:15.535536 extend-filesystems[1939]: resize2fs 1.47.2 (1-Jan-2025) Sep 3 23:23:15.559701 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks Sep 3 23:23:15.547346 dbus-daemon[1878]: [system] Successfully activated service 'org.freedesktop.systemd1' Sep 3 23:23:15.561522 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Sep 3 23:23:15.566839 systemd[1]: Started update-engine.service - Update Engine. Sep 3 23:23:15.567875 update_engine[1893]: I20250903 23:23:15.566867 1893 update_check_scheduler.cc:74] Next update check in 7m8s Sep 3 23:23:15.630915 systemd[1]: Started locksmithd.service - Cluster reboot manager. Sep 3 23:23:15.671543 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915 Sep 3 23:23:15.689536 extend-filesystems[1939]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Sep 3 23:23:15.689536 extend-filesystems[1939]: old_desc_blocks = 1, new_desc_blocks = 1 Sep 3 23:23:15.689536 extend-filesystems[1939]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long. Sep 3 23:23:15.720948 extend-filesystems[1881]: Resized filesystem in /dev/nvme0n1p9 Sep 3 23:23:15.726717 coreos-metadata[1877]: Sep 03 23:23:15.711 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Sep 3 23:23:15.726717 coreos-metadata[1877]: Sep 03 23:23:15.720 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Sep 3 23:23:15.701138 systemd[1]: extend-filesystems.service: Deactivated successfully. Sep 3 23:23:15.702045 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. 
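The EXT4/resize2fs lines above grow the root filesystem on /dev/nvme0n1p9 from 553472 to 1489915 blocks of 4 KiB while it is mounted on /. A quick check of what those figures amount to in bytes, using only the numbers printed in the log:

# Figures taken from the EXT4/resize2fs messages above: 4 KiB blocks.
BLOCK = 4096
old_blocks, new_blocks = 553_472, 1_489_915

old_bytes = old_blocks * BLOCK   # 2_267_021_312 bytes, about 2.11 GiB
new_bytes = new_blocks * BLOCK   # 6_102_691_840 bytes, about 5.68 GiB

for label, size in (("before", old_bytes), ("after", new_bytes)):
    print(f"{label}: {size} bytes = {size / 2**30:.2f} GiB")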
Sep 3 23:23:15.730692 coreos-metadata[1877]: Sep 03 23:23:15.729 INFO Fetch successful Sep 3 23:23:15.730692 coreos-metadata[1877]: Sep 03 23:23:15.729 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Sep 3 23:23:15.738568 coreos-metadata[1877]: Sep 03 23:23:15.733 INFO Fetch successful Sep 3 23:23:15.738568 coreos-metadata[1877]: Sep 03 23:23:15.733 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Sep 3 23:23:15.739272 coreos-metadata[1877]: Sep 03 23:23:15.738 INFO Fetch successful Sep 3 23:23:15.739272 coreos-metadata[1877]: Sep 03 23:23:15.739 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Sep 3 23:23:15.744628 coreos-metadata[1877]: Sep 03 23:23:15.742 INFO Fetch successful Sep 3 23:23:15.744628 coreos-metadata[1877]: Sep 03 23:23:15.743 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Sep 3 23:23:15.749934 coreos-metadata[1877]: Sep 03 23:23:15.749 INFO Fetch failed with 404: resource not found Sep 3 23:23:15.749934 coreos-metadata[1877]: Sep 03 23:23:15.749 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Sep 3 23:23:15.760664 coreos-metadata[1877]: Sep 03 23:23:15.758 INFO Fetch successful Sep 3 23:23:15.760664 coreos-metadata[1877]: Sep 03 23:23:15.758 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Sep 3 23:23:15.762555 coreos-metadata[1877]: Sep 03 23:23:15.762 INFO Fetch successful Sep 3 23:23:15.766237 coreos-metadata[1877]: Sep 03 23:23:15.764 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Sep 3 23:23:15.769633 coreos-metadata[1877]: Sep 03 23:23:15.768 INFO Fetch successful Sep 3 23:23:15.769633 coreos-metadata[1877]: Sep 03 23:23:15.768 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Sep 3 23:23:15.774353 coreos-metadata[1877]: Sep 03 23:23:15.774 INFO Fetch successful Sep 3 23:23:15.774353 coreos-metadata[1877]: Sep 03 23:23:15.774 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Sep 3 23:23:15.777867 systemd-logind[1891]: Watching system buttons on /dev/input/event0 (Power Button) Sep 3 23:23:15.786312 coreos-metadata[1877]: Sep 03 23:23:15.780 INFO Fetch successful Sep 3 23:23:15.777928 systemd-logind[1891]: Watching system buttons on /dev/input/event1 (Sleep Button) Sep 3 23:23:15.802698 bash[1963]: Updated "/home/core/.ssh/authorized_keys" Sep 3 23:23:15.812236 systemd-logind[1891]: New seat seat0. Sep 3 23:23:15.813116 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Sep 3 23:23:15.829906 systemd[1]: Starting sshkeys.service... Sep 3 23:23:15.831216 systemd[1]: Started systemd-logind.service - User Login Management. Sep 3 23:23:15.981393 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Sep 3 23:23:15.989623 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Sep 3 23:23:16.038963 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Sep 3 23:23:16.043718 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Sep 3 23:23:16.327426 systemd[1]: Started systemd-hostnamed.service - Hostname Service. 
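The coreos-metadata fetches above follow the IMDSv2 pattern also used by Ignition earlier in the log: a PUT to http://169.254.169.254/latest/api/token to obtain a session token, then GETs against the 2021-01-03 metadata tree, with a 404 for the absent ipv6 leaf. A minimal standard-library sketch of that token-then-fetch flow; the two X-aws-ec2-metadata-token headers are the documented IMDSv2 ones, while the chosen paths and TTL simply mirror the log rather than any Flatcar source.

import urllib.request

IMDS = "http://169.254.169.254"

def imds_token(ttl_seconds: int = 21600) -> str:
    # IMDSv2: obtain a session token with a PUT, as in the "PUT ... attempt #1" lines above.
    req = urllib.request.Request(
        f"{IMDS}/latest/api/token",
        method="PUT",
        headers={"X-aws-ec2-metadata-token-ttl-seconds": str(ttl_seconds)},
    )
    with urllib.request.urlopen(req, timeout=2) as resp:
        return resp.read().decode()

def imds_get(path: str, token: str) -> str:
    # GET a metadata leaf, e.g. "2021-01-03/meta-data/instance-id"; a missing
    # leaf (like meta-data/ipv6 in the log) raises an HTTPError with code 404.
    req = urllib.request.Request(
        f"{IMDS}/{path}",
        headers={"X-aws-ec2-metadata-token": token},
    )
    with urllib.request.urlopen(req, timeout=2) as resp:
        return resp.read().decode()

if __name__ == "__main__":
    tok = imds_token()
    for leaf in ("instance-id", "instance-type", "local-ipv4"):
        print(leaf, "=", imds_get(f"2021-01-03/meta-data/{leaf}", tok))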
Sep 3 23:23:16.331634 dbus-daemon[1878]: [system] Successfully activated service 'org.freedesktop.hostname1' Sep 3 23:23:16.336534 ntpd[1883]: bind(24) AF_INET6 fe80::4b6:31ff:fe03:bb25%2#123 flags 0x11 failed: Cannot assign requested address Sep 3 23:23:16.338867 ntpd[1883]: 3 Sep 23:23:16 ntpd[1883]: bind(24) AF_INET6 fe80::4b6:31ff:fe03:bb25%2#123 flags 0x11 failed: Cannot assign requested address Sep 3 23:23:16.338867 ntpd[1883]: 3 Sep 23:23:16 ntpd[1883]: unable to create socket on eth0 (6) for fe80::4b6:31ff:fe03:bb25%2#123 Sep 3 23:23:16.338867 ntpd[1883]: 3 Sep 23:23:16 ntpd[1883]: failed to init interface for address fe80::4b6:31ff:fe03:bb25%2 Sep 3 23:23:16.336620 ntpd[1883]: unable to create socket on eth0 (6) for fe80::4b6:31ff:fe03:bb25%2#123 Sep 3 23:23:16.336647 ntpd[1883]: failed to init interface for address fe80::4b6:31ff:fe03:bb25%2 Sep 3 23:23:16.350000 dbus-daemon[1878]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.5' (uid=0 pid=1940 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Sep 3 23:23:16.366675 systemd[1]: Starting polkit.service - Authorization Manager... Sep 3 23:23:16.475976 coreos-metadata[1987]: Sep 03 23:23:16.475 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Sep 3 23:23:16.485169 coreos-metadata[1987]: Sep 03 23:23:16.484 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Sep 3 23:23:16.491804 coreos-metadata[1987]: Sep 03 23:23:16.487 INFO Fetch successful Sep 3 23:23:16.491804 coreos-metadata[1987]: Sep 03 23:23:16.487 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Sep 3 23:23:16.493644 coreos-metadata[1987]: Sep 03 23:23:16.493 INFO Fetch successful Sep 3 23:23:16.497410 unknown[1987]: wrote ssh authorized keys file for user: core Sep 3 23:23:16.576210 containerd[1916]: time="2025-09-03T23:23:16Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Sep 3 23:23:16.582629 containerd[1916]: time="2025-09-03T23:23:16.580422144Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4 Sep 3 23:23:16.581515 locksmithd[1941]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 3 23:23:16.621532 update-ssh-keys[2062]: Updated "/home/core/.ssh/authorized_keys" Sep 3 23:23:16.622893 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Sep 3 23:23:16.645331 systemd[1]: Finished sshkeys.service. Sep 3 23:23:16.650616 systemd-networkd[1749]: eth0: Gained IPv6LL Sep 3 23:23:16.665644 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Sep 3 23:23:16.672968 systemd[1]: Reached target network-online.target - Network is Online. Sep 3 23:23:16.685377 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. Sep 3 23:23:16.697166 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 3 23:23:16.709704 containerd[1916]: time="2025-09-03T23:23:16.708970813Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="15.852µs" Sep 3 23:23:16.709520 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... 
Sep 3 23:23:16.712237 containerd[1916]: time="2025-09-03T23:23:16.709031221Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Sep 3 23:23:16.712237 containerd[1916]: time="2025-09-03T23:23:16.710057317Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Sep 3 23:23:16.712237 containerd[1916]: time="2025-09-03T23:23:16.710387161Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Sep 3 23:23:16.712237 containerd[1916]: time="2025-09-03T23:23:16.710438989Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Sep 3 23:23:16.712237 containerd[1916]: time="2025-09-03T23:23:16.710527969Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Sep 3 23:23:16.712237 containerd[1916]: time="2025-09-03T23:23:16.710705149Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Sep 3 23:23:16.712237 containerd[1916]: time="2025-09-03T23:23:16.710737129Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Sep 3 23:23:16.712237 containerd[1916]: time="2025-09-03T23:23:16.711182653Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Sep 3 23:23:16.712237 containerd[1916]: time="2025-09-03T23:23:16.711229585Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Sep 3 23:23:16.712237 containerd[1916]: time="2025-09-03T23:23:16.711260533Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Sep 3 23:23:16.712237 containerd[1916]: time="2025-09-03T23:23:16.711284605Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Sep 3 23:23:16.728288 containerd[1916]: time="2025-09-03T23:23:16.726598117Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Sep 3 23:23:16.728288 containerd[1916]: time="2025-09-03T23:23:16.727146433Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Sep 3 23:23:16.728288 containerd[1916]: time="2025-09-03T23:23:16.727219753Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Sep 3 23:23:16.728288 containerd[1916]: time="2025-09-03T23:23:16.727248181Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Sep 3 23:23:16.728288 containerd[1916]: time="2025-09-03T23:23:16.727323973Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Sep 3 23:23:16.728949 containerd[1916]: time="2025-09-03T23:23:16.728865781Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Sep 3 23:23:16.741511 containerd[1916]: time="2025-09-03T23:23:16.738110821Z" level=info 
msg="metadata content store policy set" policy=shared Sep 3 23:23:16.759590 containerd[1916]: time="2025-09-03T23:23:16.758756821Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Sep 3 23:23:16.759590 containerd[1916]: time="2025-09-03T23:23:16.758945449Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Sep 3 23:23:16.759590 containerd[1916]: time="2025-09-03T23:23:16.759013717Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Sep 3 23:23:16.759590 containerd[1916]: time="2025-09-03T23:23:16.759075625Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Sep 3 23:23:16.759590 containerd[1916]: time="2025-09-03T23:23:16.759172753Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Sep 3 23:23:16.759590 containerd[1916]: time="2025-09-03T23:23:16.759207397Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Sep 3 23:23:16.759590 containerd[1916]: time="2025-09-03T23:23:16.759262753Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Sep 3 23:23:16.759590 containerd[1916]: time="2025-09-03T23:23:16.759297517Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Sep 3 23:23:16.759590 containerd[1916]: time="2025-09-03T23:23:16.759359785Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Sep 3 23:23:16.759590 containerd[1916]: time="2025-09-03T23:23:16.759393025Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Sep 3 23:23:16.760108 containerd[1916]: time="2025-09-03T23:23:16.759647965Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Sep 3 23:23:16.761683 containerd[1916]: time="2025-09-03T23:23:16.761608789Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Sep 3 23:23:16.761959 containerd[1916]: time="2025-09-03T23:23:16.761904301Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Sep 3 23:23:16.762030 containerd[1916]: time="2025-09-03T23:23:16.761964565Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Sep 3 23:23:16.762030 containerd[1916]: time="2025-09-03T23:23:16.762013021Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Sep 3 23:23:16.762144 containerd[1916]: time="2025-09-03T23:23:16.762042793Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Sep 3 23:23:16.762144 containerd[1916]: time="2025-09-03T23:23:16.762071557Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Sep 3 23:23:16.762144 containerd[1916]: time="2025-09-03T23:23:16.762113677Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Sep 3 23:23:16.762274 containerd[1916]: time="2025-09-03T23:23:16.762143029Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Sep 3 23:23:16.762274 containerd[1916]: time="2025-09-03T23:23:16.762170869Z" level=info msg="loading plugin" 
id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Sep 3 23:23:16.762274 containerd[1916]: time="2025-09-03T23:23:16.762199237Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Sep 3 23:23:16.762274 containerd[1916]: time="2025-09-03T23:23:16.762226357Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Sep 3 23:23:16.762448 containerd[1916]: time="2025-09-03T23:23:16.762281797Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Sep 3 23:23:16.774539 containerd[1916]: time="2025-09-03T23:23:16.772825069Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Sep 3 23:23:16.774539 containerd[1916]: time="2025-09-03T23:23:16.772954681Z" level=info msg="Start snapshots syncer" Sep 3 23:23:16.774539 containerd[1916]: time="2025-09-03T23:23:16.773049157Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Sep 3 23:23:16.777517 containerd[1916]: time="2025-09-03T23:23:16.776143501Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Sep 3 23:23:16.782470 containerd[1916]: time="2025-09-03T23:23:16.782295937Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Sep 3 23:23:16.789538 containerd[1916]: time="2025-09-03T23:23:16.786349981Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Sep 3 23:23:16.789538 containerd[1916]: time="2025-09-03T23:23:16.786856381Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Sep 3 23:23:16.789538 containerd[1916]: 
time="2025-09-03T23:23:16.786949465Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Sep 3 23:23:16.789538 containerd[1916]: time="2025-09-03T23:23:16.787000309Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Sep 3 23:23:16.789538 containerd[1916]: time="2025-09-03T23:23:16.787042597Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Sep 3 23:23:16.789538 containerd[1916]: time="2025-09-03T23:23:16.787094269Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Sep 3 23:23:16.789538 containerd[1916]: time="2025-09-03T23:23:16.787132825Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Sep 3 23:23:16.789538 containerd[1916]: time="2025-09-03T23:23:16.787175929Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Sep 3 23:23:16.789538 containerd[1916]: time="2025-09-03T23:23:16.787260805Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Sep 3 23:23:16.789538 containerd[1916]: time="2025-09-03T23:23:16.787309357Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Sep 3 23:23:16.789538 containerd[1916]: time="2025-09-03T23:23:16.787358293Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Sep 3 23:23:16.801536 containerd[1916]: time="2025-09-03T23:23:16.787459309Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Sep 3 23:23:16.801536 containerd[1916]: time="2025-09-03T23:23:16.801055825Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Sep 3 23:23:16.801536 containerd[1916]: time="2025-09-03T23:23:16.801116881Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Sep 3 23:23:16.801536 containerd[1916]: time="2025-09-03T23:23:16.801152053Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Sep 3 23:23:16.801536 containerd[1916]: time="2025-09-03T23:23:16.801187009Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Sep 3 23:23:16.801536 containerd[1916]: time="2025-09-03T23:23:16.801226273Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Sep 3 23:23:16.801536 containerd[1916]: time="2025-09-03T23:23:16.801266341Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Sep 3 23:23:16.801536 containerd[1916]: time="2025-09-03T23:23:16.801444709Z" level=info msg="runtime interface created" Sep 3 23:23:16.804836 containerd[1916]: time="2025-09-03T23:23:16.801464761Z" level=info msg="created NRI interface" Sep 3 23:23:16.804836 containerd[1916]: time="2025-09-03T23:23:16.804712609Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Sep 3 23:23:16.809556 containerd[1916]: time="2025-09-03T23:23:16.804772621Z" level=info msg="Connect containerd service" Sep 3 23:23:16.809556 containerd[1916]: time="2025-09-03T23:23:16.808741261Z" level=info msg="using experimental NRI 
integration - disable nri plugin to prevent this" Sep 3 23:23:16.815391 containerd[1916]: time="2025-09-03T23:23:16.815323286Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 3 23:23:16.938933 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Sep 3 23:23:17.094336 polkitd[2049]: Started polkitd version 126 Sep 3 23:23:17.147807 amazon-ssm-agent[2071]: Initializing new seelog logger Sep 3 23:23:17.148312 amazon-ssm-agent[2071]: New Seelog Logger Creation Complete Sep 3 23:23:17.148312 amazon-ssm-agent[2071]: 2025/09/03 23:23:17 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Sep 3 23:23:17.148312 amazon-ssm-agent[2071]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Sep 3 23:23:17.152652 amazon-ssm-agent[2071]: 2025/09/03 23:23:17 processing appconfig overrides Sep 3 23:23:17.156518 amazon-ssm-agent[2071]: 2025/09/03 23:23:17 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Sep 3 23:23:17.156518 amazon-ssm-agent[2071]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Sep 3 23:23:17.156518 amazon-ssm-agent[2071]: 2025/09/03 23:23:17 processing appconfig overrides Sep 3 23:23:17.156518 amazon-ssm-agent[2071]: 2025/09/03 23:23:17 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Sep 3 23:23:17.156518 amazon-ssm-agent[2071]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Sep 3 23:23:17.156518 amazon-ssm-agent[2071]: 2025/09/03 23:23:17 processing appconfig overrides Sep 3 23:23:17.157960 amazon-ssm-agent[2071]: 2025-09-03 23:23:17.1550 INFO Proxy environment variables: Sep 3 23:23:17.161866 polkitd[2049]: Loading rules from directory /etc/polkit-1/rules.d Sep 3 23:23:17.165126 polkitd[2049]: Loading rules from directory /run/polkit-1/rules.d Sep 3 23:23:17.166669 amazon-ssm-agent[2071]: 2025/09/03 23:23:17 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Sep 3 23:23:17.166669 amazon-ssm-agent[2071]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Sep 3 23:23:17.166669 amazon-ssm-agent[2071]: 2025/09/03 23:23:17 processing appconfig overrides Sep 3 23:23:17.165226 polkitd[2049]: Error opening rules directory: Error opening directory “/run/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Sep 3 23:23:17.171005 polkitd[2049]: Loading rules from directory /usr/local/share/polkit-1/rules.d Sep 3 23:23:17.171736 polkitd[2049]: Error opening rules directory: Error opening directory “/usr/local/share/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Sep 3 23:23:17.171926 polkitd[2049]: Loading rules from directory /usr/share/polkit-1/rules.d Sep 3 23:23:17.178685 polkitd[2049]: Finished loading, compiling and executing 2 rules Sep 3 23:23:17.183404 systemd[1]: Started polkit.service - Authorization Manager. Sep 3 23:23:17.194348 dbus-daemon[1878]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Sep 3 23:23:17.202919 polkitd[2049]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Sep 3 23:23:17.261068 amazon-ssm-agent[2071]: 2025-09-03 23:23:17.1550 INFO http_proxy: Sep 3 23:23:17.279876 systemd-hostnamed[1940]: Hostname set to (transient) Sep 3 23:23:17.282633 systemd-resolved[1751]: System hostname changed to 'ip-172-31-18-182'. 
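[editor note] The containerd error above, "no network config found in /etc/cni/net.d", is expected this early in boot: no pod network add-on has installed a CNI configuration yet, and the CRI plugin will pick one up once it appears. Purely to illustrate the shape of file it is waiting for, a sketch that writes a minimal bridge/host-local conflist; the plugin choice, network name, and subnet here are assumptions for illustration, not what this node actually ends up using:

# Illustrative only: a minimal CNI conflist of the shape containerd expects
# under /etc/cni/net.d. A real cluster's network add-on installs its own.
import json
import pathlib

conflist = {
    "cniVersion": "1.0.0",
    "name": "example-net",                      # assumed name
    "plugins": [
        {
            "type": "bridge",
            "bridge": "cni0",
            "isGateway": True,
            "ipMasq": True,
            "ipam": {
                "type": "host-local",
                "subnet": "10.88.0.0/16",       # assumed pod subnet
                "routes": [{"dst": "0.0.0.0/0"}],
            },
        },
        {"type": "portmap", "capabilities": {"portMappings": True}},
    ],
}

path = pathlib.Path("/etc/cni/net.d/10-example.conflist")
path.parent.mkdir(parents=True, exist_ok=True)
path.write_text(json.dumps(conflist, indent=2))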
Sep 3 23:23:17.290686 containerd[1916]: time="2025-09-03T23:23:17.290463444Z" level=info msg="Start subscribing containerd event" Sep 3 23:23:17.290686 containerd[1916]: time="2025-09-03T23:23:17.290641884Z" level=info msg="Start recovering state" Sep 3 23:23:17.290907 containerd[1916]: time="2025-09-03T23:23:17.290873856Z" level=info msg="Start event monitor" Sep 3 23:23:17.290962 containerd[1916]: time="2025-09-03T23:23:17.290922456Z" level=info msg="Start cni network conf syncer for default" Sep 3 23:23:17.291009 containerd[1916]: time="2025-09-03T23:23:17.290969928Z" level=info msg="Start streaming server" Sep 3 23:23:17.291009 containerd[1916]: time="2025-09-03T23:23:17.290995368Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Sep 3 23:23:17.291090 containerd[1916]: time="2025-09-03T23:23:17.291014976Z" level=info msg="runtime interface starting up..." Sep 3 23:23:17.291090 containerd[1916]: time="2025-09-03T23:23:17.291056292Z" level=info msg="starting plugins..." Sep 3 23:23:17.291192 containerd[1916]: time="2025-09-03T23:23:17.291097908Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Sep 3 23:23:17.294421 containerd[1916]: time="2025-09-03T23:23:17.293007996Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 3 23:23:17.294421 containerd[1916]: time="2025-09-03T23:23:17.293130792Z" level=info msg=serving... address=/run/containerd/containerd.sock Sep 3 23:23:17.293371 systemd[1]: Started containerd.service - containerd container runtime. Sep 3 23:23:17.307296 containerd[1916]: time="2025-09-03T23:23:17.307092492Z" level=info msg="containerd successfully booted in 0.734698s" Sep 3 23:23:17.361511 amazon-ssm-agent[2071]: 2025-09-03 23:23:17.1550 INFO no_proxy: Sep 3 23:23:17.462576 amazon-ssm-agent[2071]: 2025-09-03 23:23:17.1550 INFO https_proxy: Sep 3 23:23:17.562973 amazon-ssm-agent[2071]: 2025-09-03 23:23:17.1553 INFO Checking if agent identity type OnPrem can be assumed Sep 3 23:23:17.659607 amazon-ssm-agent[2071]: 2025-09-03 23:23:17.1554 INFO Checking if agent identity type EC2 can be assumed Sep 3 23:23:17.758986 amazon-ssm-agent[2071]: 2025-09-03 23:23:17.3771 INFO Agent will take identity from EC2 Sep 3 23:23:17.858375 amazon-ssm-agent[2071]: 2025-09-03 23:23:17.3791 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.3.0.0 Sep 3 23:23:17.946280 sshd_keygen[1923]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 3 23:23:17.958095 amazon-ssm-agent[2071]: 2025-09-03 23:23:17.3791 INFO [amazon-ssm-agent] OS: linux, Arch: arm64 Sep 3 23:23:18.014716 tar[1902]: linux-arm64/LICENSE Sep 3 23:23:18.014716 tar[1902]: linux-arm64/README.md Sep 3 23:23:18.046251 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Sep 3 23:23:18.057012 amazon-ssm-agent[2071]: 2025-09-03 23:23:17.3791 INFO [amazon-ssm-agent] Starting Core Agent Sep 3 23:23:18.059562 systemd[1]: Starting issuegen.service - Generate /run/issue... Sep 3 23:23:18.070312 systemd[1]: Started sshd@0-172.31.18.182:22-139.178.89.65:36428.service - OpenSSH per-connection server daemon (139.178.89.65:36428). Sep 3 23:23:18.096076 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Sep 3 23:23:18.125397 systemd[1]: issuegen.service: Deactivated successfully. Sep 3 23:23:18.126805 systemd[1]: Finished issuegen.service - Generate /run/issue. Sep 3 23:23:18.142052 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... 
Sep 3 23:23:18.159609 amazon-ssm-agent[2071]: 2025-09-03 23:23:17.3791 INFO [amazon-ssm-agent] Registrar detected. Attempting registration Sep 3 23:23:18.187562 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Sep 3 23:23:18.202031 systemd[1]: Started getty@tty1.service - Getty on tty1. Sep 3 23:23:18.211214 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Sep 3 23:23:18.217114 systemd[1]: Reached target getty.target - Login Prompts. Sep 3 23:23:18.260599 amazon-ssm-agent[2071]: 2025-09-03 23:23:17.3791 INFO [Registrar] Starting registrar module Sep 3 23:23:18.361035 amazon-ssm-agent[2071]: 2025-09-03 23:23:17.3807 INFO [EC2Identity] Checking disk for registration info Sep 3 23:23:18.385437 sshd[2132]: Accepted publickey for core from 139.178.89.65 port 36428 ssh2: RSA SHA256:8eQAyPE99YHHVtDm+V4mP5sHyPbVNBHa6xDGC+ww79Y Sep 3 23:23:18.388161 sshd-session[2132]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 3 23:23:18.407100 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Sep 3 23:23:18.416936 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Sep 3 23:23:18.451161 systemd-logind[1891]: New session 1 of user core. Sep 3 23:23:18.461593 amazon-ssm-agent[2071]: 2025-09-03 23:23:17.3808 INFO [EC2Identity] No registration info found for ec2 instance, attempting registration Sep 3 23:23:18.484657 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Sep 3 23:23:18.504744 systemd[1]: Starting user@500.service - User Manager for UID 500... Sep 3 23:23:18.524919 (systemd)[2144]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 3 23:23:18.532017 systemd-logind[1891]: New session c1 of user core. Sep 3 23:23:18.562580 amazon-ssm-agent[2071]: 2025-09-03 23:23:17.3808 INFO [EC2Identity] Generating registration keypair Sep 3 23:23:18.620248 amazon-ssm-agent[2071]: 2025/09/03 23:23:18 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Sep 3 23:23:18.620461 amazon-ssm-agent[2071]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Sep 3 23:23:18.620734 amazon-ssm-agent[2071]: 2025/09/03 23:23:18 processing appconfig overrides Sep 3 23:23:18.652412 amazon-ssm-agent[2071]: 2025-09-03 23:23:18.5748 INFO [EC2Identity] Checking write access before registering Sep 3 23:23:18.652701 amazon-ssm-agent[2071]: 2025-09-03 23:23:18.5756 INFO [EC2Identity] Registering EC2 instance with Systems Manager Sep 3 23:23:18.652701 amazon-ssm-agent[2071]: 2025-09-03 23:23:18.6199 INFO [EC2Identity] EC2 registration was successful. Sep 3 23:23:18.652988 amazon-ssm-agent[2071]: 2025-09-03 23:23:18.6200 INFO [amazon-ssm-agent] Registration attempted. Resuming core agent startup. 
Sep 3 23:23:18.652988 amazon-ssm-agent[2071]: 2025-09-03 23:23:18.6200 INFO [CredentialRefresher] credentialRefresher has started Sep 3 23:23:18.652988 amazon-ssm-agent[2071]: 2025-09-03 23:23:18.6201 INFO [CredentialRefresher] Starting credentials refresher loop Sep 3 23:23:18.652988 amazon-ssm-agent[2071]: 2025-09-03 23:23:18.6518 INFO EC2RoleProvider Successfully connected with instance profile role credentials Sep 3 23:23:18.652988 amazon-ssm-agent[2071]: 2025-09-03 23:23:18.6522 INFO [CredentialRefresher] Credentials ready Sep 3 23:23:18.664653 amazon-ssm-agent[2071]: 2025-09-03 23:23:18.6530 INFO [CredentialRefresher] Next credential rotation will be in 29.9999813102 minutes Sep 3 23:23:18.860205 systemd[2144]: Queued start job for default target default.target. Sep 3 23:23:18.871871 systemd[2144]: Created slice app.slice - User Application Slice. Sep 3 23:23:18.871945 systemd[2144]: Reached target paths.target - Paths. Sep 3 23:23:18.872028 systemd[2144]: Reached target timers.target - Timers. Sep 3 23:23:18.876723 systemd[2144]: Starting dbus.socket - D-Bus User Message Bus Socket... Sep 3 23:23:18.903889 systemd[2144]: Listening on dbus.socket - D-Bus User Message Bus Socket. Sep 3 23:23:18.904515 systemd[2144]: Reached target sockets.target - Sockets. Sep 3 23:23:18.904755 systemd[2144]: Reached target basic.target - Basic System. Sep 3 23:23:18.904916 systemd[1]: Started user@500.service - User Manager for UID 500. Sep 3 23:23:18.907448 systemd[2144]: Reached target default.target - Main User Target. Sep 3 23:23:18.907562 systemd[2144]: Startup finished in 357ms. Sep 3 23:23:18.923806 systemd[1]: Started session-1.scope - Session 1 of User core. Sep 3 23:23:19.083938 systemd[1]: Started sshd@1-172.31.18.182:22-139.178.89.65:36434.service - OpenSSH per-connection server daemon (139.178.89.65:36434). Sep 3 23:23:19.290773 sshd[2155]: Accepted publickey for core from 139.178.89.65 port 36434 ssh2: RSA SHA256:8eQAyPE99YHHVtDm+V4mP5sHyPbVNBHa6xDGC+ww79Y Sep 3 23:23:19.293843 sshd-session[2155]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 3 23:23:19.305579 systemd-logind[1891]: New session 2 of user core. Sep 3 23:23:19.313788 systemd[1]: Started session-2.scope - Session 2 of User core. Sep 3 23:23:19.336471 ntpd[1883]: Listen normally on 7 eth0 [fe80::4b6:31ff:fe03:bb25%2]:123 Sep 3 23:23:19.338022 ntpd[1883]: 3 Sep 23:23:19 ntpd[1883]: Listen normally on 7 eth0 [fe80::4b6:31ff:fe03:bb25%2]:123 Sep 3 23:23:19.443204 sshd[2157]: Connection closed by 139.178.89.65 port 36434 Sep 3 23:23:19.443060 sshd-session[2155]: pam_unix(sshd:session): session closed for user core Sep 3 23:23:19.449565 systemd[1]: sshd@1-172.31.18.182:22-139.178.89.65:36434.service: Deactivated successfully. Sep 3 23:23:19.454227 systemd[1]: session-2.scope: Deactivated successfully. Sep 3 23:23:19.460953 systemd-logind[1891]: Session 2 logged out. Waiting for processes to exit. Sep 3 23:23:19.486905 systemd[1]: Started sshd@2-172.31.18.182:22-139.178.89.65:36440.service - OpenSSH per-connection server daemon (139.178.89.65:36440). Sep 3 23:23:19.488982 systemd-logind[1891]: Removed session 2. 
Sep 3 23:23:19.683586 amazon-ssm-agent[2071]: 2025-09-03 23:23:19.6834 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Sep 3 23:23:19.685266 sshd[2163]: Accepted publickey for core from 139.178.89.65 port 36440 ssh2: RSA SHA256:8eQAyPE99YHHVtDm+V4mP5sHyPbVNBHa6xDGC+ww79Y Sep 3 23:23:19.688870 sshd-session[2163]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 3 23:23:19.705955 systemd-logind[1891]: New session 3 of user core. Sep 3 23:23:19.713817 systemd[1]: Started session-3.scope - Session 3 of User core. Sep 3 23:23:19.774801 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 3 23:23:19.787324 amazon-ssm-agent[2071]: 2025-09-03 23:23:19.6921 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2167) started Sep 3 23:23:19.782229 systemd[1]: Reached target multi-user.target - Multi-User System. Sep 3 23:23:19.786943 systemd[1]: Startup finished in 3.736s (kernel) + 9.382s (initrd) + 10.788s (userspace) = 23.907s. Sep 3 23:23:19.874344 (kubelet)[2177]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 3 23:23:19.890281 amazon-ssm-agent[2071]: 2025-09-03 23:23:19.6921 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Sep 3 23:23:19.921693 sshd[2170]: Connection closed by 139.178.89.65 port 36440 Sep 3 23:23:19.922142 sshd-session[2163]: pam_unix(sshd:session): session closed for user core Sep 3 23:23:19.932052 systemd[1]: sshd@2-172.31.18.182:22-139.178.89.65:36440.service: Deactivated successfully. Sep 3 23:23:19.937579 systemd[1]: session-3.scope: Deactivated successfully. Sep 3 23:23:19.941437 systemd-logind[1891]: Session 3 logged out. Waiting for processes to exit. Sep 3 23:23:19.945950 systemd-logind[1891]: Removed session 3. Sep 3 23:23:21.310389 kubelet[2177]: E0903 23:23:21.310274 2177 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 3 23:23:21.314815 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 3 23:23:21.315126 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 3 23:23:21.316035 systemd[1]: kubelet.service: Consumed 1.490s CPU time, 255.1M memory peak. Sep 3 23:23:22.013734 systemd-resolved[1751]: Clock change detected. Flushing caches. Sep 3 23:23:29.639852 systemd[1]: Started sshd@3-172.31.18.182:22-139.178.89.65:41880.service - OpenSSH per-connection server daemon (139.178.89.65:41880). Sep 3 23:23:29.838383 sshd[2200]: Accepted publickey for core from 139.178.89.65 port 41880 ssh2: RSA SHA256:8eQAyPE99YHHVtDm+V4mP5sHyPbVNBHa6xDGC+ww79Y Sep 3 23:23:29.841788 sshd-session[2200]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 3 23:23:29.851176 systemd-logind[1891]: New session 4 of user core. Sep 3 23:23:29.868040 systemd[1]: Started session-4.scope - Session 4 of User core. 
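[editor note] The kubelet failure above (and the later scheduled restarts) all have the same cause: /var/lib/kubelet/config.yaml does not exist yet, so systemd keeps restarting the unit until something bootstraps the node. On a kubeadm-managed node that file is generated during kubeadm init/join, so the crash-loop is normal until then. For illustration only, a sketch of a minimal KubeletConfiguration of the kind expected at that path; the cgroupDriver value mirrors the systemd driver the runtime reports later in the log, and the rest is a hand-written assumption:

# Illustrative sketch of the file the kubelet is failing to find at
# /var/lib/kubelet/config.yaml. kubeadm normally writes this for you.
import pathlib

KUBELET_CONFIG = """\
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
"""

path = pathlib.Path("/var/lib/kubelet/config.yaml")
path.parent.mkdir(parents=True, exist_ok=True)
path.write_text(KUBELET_CONFIG)
print(f"wrote {len(KUBELET_CONFIG)} bytes to {path}")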
Sep 3 23:23:29.994837 sshd[2202]: Connection closed by 139.178.89.65 port 41880 Sep 3 23:23:29.996258 sshd-session[2200]: pam_unix(sshd:session): session closed for user core Sep 3 23:23:30.002603 systemd-logind[1891]: Session 4 logged out. Waiting for processes to exit. Sep 3 23:23:30.004785 systemd[1]: sshd@3-172.31.18.182:22-139.178.89.65:41880.service: Deactivated successfully. Sep 3 23:23:30.009021 systemd[1]: session-4.scope: Deactivated successfully. Sep 3 23:23:30.013433 systemd-logind[1891]: Removed session 4. Sep 3 23:23:30.032397 systemd[1]: Started sshd@4-172.31.18.182:22-139.178.89.65:36828.service - OpenSSH per-connection server daemon (139.178.89.65:36828). Sep 3 23:23:30.231026 sshd[2208]: Accepted publickey for core from 139.178.89.65 port 36828 ssh2: RSA SHA256:8eQAyPE99YHHVtDm+V4mP5sHyPbVNBHa6xDGC+ww79Y Sep 3 23:23:30.233493 sshd-session[2208]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 3 23:23:30.241794 systemd-logind[1891]: New session 5 of user core. Sep 3 23:23:30.248988 systemd[1]: Started session-5.scope - Session 5 of User core. Sep 3 23:23:30.367397 sshd[2210]: Connection closed by 139.178.89.65 port 36828 Sep 3 23:23:30.367976 sshd-session[2208]: pam_unix(sshd:session): session closed for user core Sep 3 23:23:30.375219 systemd[1]: sshd@4-172.31.18.182:22-139.178.89.65:36828.service: Deactivated successfully. Sep 3 23:23:30.379818 systemd[1]: session-5.scope: Deactivated successfully. Sep 3 23:23:30.383078 systemd-logind[1891]: Session 5 logged out. Waiting for processes to exit. Sep 3 23:23:30.387072 systemd-logind[1891]: Removed session 5. Sep 3 23:23:30.406110 systemd[1]: Started sshd@5-172.31.18.182:22-139.178.89.65:36838.service - OpenSSH per-connection server daemon (139.178.89.65:36838). Sep 3 23:23:30.616246 sshd[2216]: Accepted publickey for core from 139.178.89.65 port 36838 ssh2: RSA SHA256:8eQAyPE99YHHVtDm+V4mP5sHyPbVNBHa6xDGC+ww79Y Sep 3 23:23:30.618885 sshd-session[2216]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 3 23:23:30.628794 systemd-logind[1891]: New session 6 of user core. Sep 3 23:23:30.634969 systemd[1]: Started session-6.scope - Session 6 of User core. Sep 3 23:23:30.761912 sshd[2218]: Connection closed by 139.178.89.65 port 36838 Sep 3 23:23:30.761684 sshd-session[2216]: pam_unix(sshd:session): session closed for user core Sep 3 23:23:30.769486 systemd[1]: sshd@5-172.31.18.182:22-139.178.89.65:36838.service: Deactivated successfully. Sep 3 23:23:30.773197 systemd[1]: session-6.scope: Deactivated successfully. Sep 3 23:23:30.776097 systemd-logind[1891]: Session 6 logged out. Waiting for processes to exit. Sep 3 23:23:30.778963 systemd-logind[1891]: Removed session 6. Sep 3 23:23:30.802793 systemd[1]: Started sshd@6-172.31.18.182:22-139.178.89.65:36852.service - OpenSSH per-connection server daemon (139.178.89.65:36852). Sep 3 23:23:31.014484 sshd[2224]: Accepted publickey for core from 139.178.89.65 port 36852 ssh2: RSA SHA256:8eQAyPE99YHHVtDm+V4mP5sHyPbVNBHa6xDGC+ww79Y Sep 3 23:23:31.017271 sshd-session[2224]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 3 23:23:31.019043 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Sep 3 23:23:31.024170 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 3 23:23:31.032132 systemd-logind[1891]: New session 7 of user core. Sep 3 23:23:31.037234 systemd[1]: Started session-7.scope - Session 7 of User core. 
Sep 3 23:23:31.167337 sudo[2230]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Sep 3 23:23:31.168046 sudo[2230]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 3 23:23:31.188408 sudo[2230]: pam_unix(sudo:session): session closed for user root Sep 3 23:23:31.214735 sshd[2229]: Connection closed by 139.178.89.65 port 36852 Sep 3 23:23:31.215068 sshd-session[2224]: pam_unix(sshd:session): session closed for user core Sep 3 23:23:31.223011 systemd-logind[1891]: Session 7 logged out. Waiting for processes to exit. Sep 3 23:23:31.223928 systemd[1]: sshd@6-172.31.18.182:22-139.178.89.65:36852.service: Deactivated successfully. Sep 3 23:23:31.227484 systemd[1]: session-7.scope: Deactivated successfully. Sep 3 23:23:31.234836 systemd-logind[1891]: Removed session 7. Sep 3 23:23:31.255157 systemd[1]: Started sshd@7-172.31.18.182:22-139.178.89.65:36854.service - OpenSSH per-connection server daemon (139.178.89.65:36854). Sep 3 23:23:31.410287 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 3 23:23:31.426440 (kubelet)[2243]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 3 23:23:31.460973 sshd[2236]: Accepted publickey for core from 139.178.89.65 port 36854 ssh2: RSA SHA256:8eQAyPE99YHHVtDm+V4mP5sHyPbVNBHa6xDGC+ww79Y Sep 3 23:23:31.465666 sshd-session[2236]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 3 23:23:31.478108 systemd-logind[1891]: New session 8 of user core. Sep 3 23:23:31.485525 systemd[1]: Started session-8.scope - Session 8 of User core. Sep 3 23:23:31.518549 kubelet[2243]: E0903 23:23:31.518489 2243 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 3 23:23:31.526293 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 3 23:23:31.526824 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 3 23:23:31.528822 systemd[1]: kubelet.service: Consumed 326ms CPU time, 106M memory peak. Sep 3 23:23:31.591021 sudo[2252]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Sep 3 23:23:31.592107 sudo[2252]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 3 23:23:31.599789 sudo[2252]: pam_unix(sudo:session): session closed for user root Sep 3 23:23:31.609428 sudo[2251]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Sep 3 23:23:31.610152 sudo[2251]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 3 23:23:31.626478 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 3 23:23:31.689958 augenrules[2274]: No rules Sep 3 23:23:31.692573 systemd[1]: audit-rules.service: Deactivated successfully. Sep 3 23:23:31.693811 systemd[1]: Finished audit-rules.service - Load Audit Rules. 
Sep 3 23:23:31.696075 sudo[2251]: pam_unix(sudo:session): session closed for user root Sep 3 23:23:31.719496 sshd[2249]: Connection closed by 139.178.89.65 port 36854 Sep 3 23:23:31.720452 sshd-session[2236]: pam_unix(sshd:session): session closed for user core Sep 3 23:23:31.726136 systemd-logind[1891]: Session 8 logged out. Waiting for processes to exit. Sep 3 23:23:31.726744 systemd[1]: sshd@7-172.31.18.182:22-139.178.89.65:36854.service: Deactivated successfully. Sep 3 23:23:31.729635 systemd[1]: session-8.scope: Deactivated successfully. Sep 3 23:23:31.733642 systemd-logind[1891]: Removed session 8. Sep 3 23:23:31.755792 systemd[1]: Started sshd@8-172.31.18.182:22-139.178.89.65:36868.service - OpenSSH per-connection server daemon (139.178.89.65:36868). Sep 3 23:23:31.952557 sshd[2283]: Accepted publickey for core from 139.178.89.65 port 36868 ssh2: RSA SHA256:8eQAyPE99YHHVtDm+V4mP5sHyPbVNBHa6xDGC+ww79Y Sep 3 23:23:31.955378 sshd-session[2283]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 3 23:23:31.964802 systemd-logind[1891]: New session 9 of user core. Sep 3 23:23:31.969094 systemd[1]: Started session-9.scope - Session 9 of User core. Sep 3 23:23:32.075510 sudo[2286]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 3 23:23:32.076633 sudo[2286]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 3 23:23:32.660935 systemd[1]: Starting docker.service - Docker Application Container Engine... Sep 3 23:23:32.676183 (dockerd)[2303]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Sep 3 23:23:33.087251 dockerd[2303]: time="2025-09-03T23:23:33.086819327Z" level=info msg="Starting up" Sep 3 23:23:33.088605 dockerd[2303]: time="2025-09-03T23:23:33.088555055Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Sep 3 23:23:33.136225 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport4012740127-merged.mount: Deactivated successfully. Sep 3 23:23:33.178829 dockerd[2303]: time="2025-09-03T23:23:33.178764707Z" level=info msg="Loading containers: start." Sep 3 23:23:33.195748 kernel: Initializing XFRM netlink socket Sep 3 23:23:33.512657 (udev-worker)[2327]: Network interface NamePolicy= disabled on kernel command line. Sep 3 23:23:33.590659 systemd-networkd[1749]: docker0: Link UP Sep 3 23:23:33.601000 dockerd[2303]: time="2025-09-03T23:23:33.600938065Z" level=info msg="Loading containers: done." 
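[editor note] Once dockerd reports "API listen on /run/docker.sock" above, the Engine API is reachable over that Unix socket. A small sketch of talking to it with only the standard library; GET /version is the stock Engine API endpoint, while the connection helper is an ad hoc convenience for this example:

# Query the Docker Engine API over the Unix socket the daemon logged above.
import http.client
import socket

class UnixHTTPConnection(http.client.HTTPConnection):
    """http.client connection that dials a Unix socket instead of TCP."""
    def __init__(self, path):
        super().__init__("localhost")
        self._path = path

    def connect(self):
        sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        sock.connect(self._path)
        self.sock = sock

conn = UnixHTTPConnection("/run/docker.sock")
conn.request("GET", "/version")
print(conn.getresponse().read().decode())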
Sep 3 23:23:33.629774 dockerd[2303]: time="2025-09-03T23:23:33.629669821Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Sep 3 23:23:33.630060 dockerd[2303]: time="2025-09-03T23:23:33.629833693Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1 Sep 3 23:23:33.630060 dockerd[2303]: time="2025-09-03T23:23:33.630025081Z" level=info msg="Initializing buildkit" Sep 3 23:23:33.680748 dockerd[2303]: time="2025-09-03T23:23:33.680663053Z" level=info msg="Completed buildkit initialization" Sep 3 23:23:33.698220 dockerd[2303]: time="2025-09-03T23:23:33.698123066Z" level=info msg="Daemon has completed initialization" Sep 3 23:23:33.698603 dockerd[2303]: time="2025-09-03T23:23:33.698419154Z" level=info msg="API listen on /run/docker.sock" Sep 3 23:23:33.699869 systemd[1]: Started docker.service - Docker Application Container Engine. Sep 3 23:23:34.129236 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2706228525-merged.mount: Deactivated successfully. Sep 3 23:23:35.031249 containerd[1916]: time="2025-09-03T23:23:35.031115844Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.12\"" Sep 3 23:23:35.654786 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount692299902.mount: Deactivated successfully. Sep 3 23:23:37.031723 containerd[1916]: time="2025-09-03T23:23:37.031058522Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 3 23:23:37.033389 containerd[1916]: time="2025-09-03T23:23:37.033342050Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.12: active requests=0, bytes read=25652441" Sep 3 23:23:37.034465 containerd[1916]: time="2025-09-03T23:23:37.034430042Z" level=info msg="ImageCreate event name:\"sha256:25d00c9505e8a4a7a6c827030f878b50e58bbf63322e01a7d92807bcb4db6b3d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 3 23:23:37.038901 containerd[1916]: time="2025-09-03T23:23:37.038852114Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:e9011c3bee8c06ecabd7816e119dca4e448c92f7a78acd891de3d2db1dc6c234\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 3 23:23:37.041844 containerd[1916]: time="2025-09-03T23:23:37.040805426Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.12\" with image id \"sha256:25d00c9505e8a4a7a6c827030f878b50e58bbf63322e01a7d92807bcb4db6b3d\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.12\", repo digest \"registry.k8s.io/kube-apiserver@sha256:e9011c3bee8c06ecabd7816e119dca4e448c92f7a78acd891de3d2db1dc6c234\", size \"25649241\" in 2.009614954s" Sep 3 23:23:37.041844 containerd[1916]: time="2025-09-03T23:23:37.040864634Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.12\" returns image reference \"sha256:25d00c9505e8a4a7a6c827030f878b50e58bbf63322e01a7d92807bcb4db6b3d\"" Sep 3 23:23:37.044657 containerd[1916]: time="2025-09-03T23:23:37.044610842Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.12\"" Sep 3 23:23:38.435740 containerd[1916]: time="2025-09-03T23:23:38.435290705Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 3 23:23:38.437078 
containerd[1916]: time="2025-09-03T23:23:38.437022629Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.12: active requests=0, bytes read=22460309" Sep 3 23:23:38.438100 containerd[1916]: time="2025-09-03T23:23:38.438014225Z" level=info msg="ImageCreate event name:\"sha256:04df324666956d4cb57096c0edff6bfe1d75e71fb8f508dec8818f2842f821e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 3 23:23:38.443737 containerd[1916]: time="2025-09-03T23:23:38.442850945Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:d2862f94d87320267fddbd55db26556a267aa802e51d6b60f25786b4c428afc8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 3 23:23:38.445513 containerd[1916]: time="2025-09-03T23:23:38.444798269Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.12\" with image id \"sha256:04df324666956d4cb57096c0edff6bfe1d75e71fb8f508dec8818f2842f821e1\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.12\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:d2862f94d87320267fddbd55db26556a267aa802e51d6b60f25786b4c428afc8\", size \"23997423\" in 1.399933243s" Sep 3 23:23:38.445513 containerd[1916]: time="2025-09-03T23:23:38.444855377Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.12\" returns image reference \"sha256:04df324666956d4cb57096c0edff6bfe1d75e71fb8f508dec8818f2842f821e1\"" Sep 3 23:23:38.445967 containerd[1916]: time="2025-09-03T23:23:38.445932149Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.12\"" Sep 3 23:23:39.673943 containerd[1916]: time="2025-09-03T23:23:39.673878703Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 3 23:23:39.675464 containerd[1916]: time="2025-09-03T23:23:39.675392947Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.12: active requests=0, bytes read=17125903" Sep 3 23:23:39.676723 containerd[1916]: time="2025-09-03T23:23:39.676371007Z" level=info msg="ImageCreate event name:\"sha256:00b0619122c2d4fd3b5e102e9850d8c732e08a386b9c172c409b3a5cd552e07d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 3 23:23:39.682767 containerd[1916]: time="2025-09-03T23:23:39.682718479Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:152943b7e30244f4415fd0a5860a2dccd91660fe983d30a28a10edb0cc8f6756\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 3 23:23:39.684786 containerd[1916]: time="2025-09-03T23:23:39.684746827Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.12\" with image id \"sha256:00b0619122c2d4fd3b5e102e9850d8c732e08a386b9c172c409b3a5cd552e07d\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.12\", repo digest \"registry.k8s.io/kube-scheduler@sha256:152943b7e30244f4415fd0a5860a2dccd91660fe983d30a28a10edb0cc8f6756\", size \"18663035\" in 1.238657934s" Sep 3 23:23:39.684993 containerd[1916]: time="2025-09-03T23:23:39.684965059Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.12\" returns image reference \"sha256:00b0619122c2d4fd3b5e102e9850d8c732e08a386b9c172c409b3a5cd552e07d\"" Sep 3 23:23:39.685901 containerd[1916]: time="2025-09-03T23:23:39.685858423Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.12\"" Sep 3 23:23:40.987929 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount317122452.mount: Deactivated successfully. 
Sep 3 23:23:41.544465 containerd[1916]: time="2025-09-03T23:23:41.544349793Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 3 23:23:41.550869 containerd[1916]: time="2025-09-03T23:23:41.550775265Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.12: active requests=0, bytes read=26916095" Sep 3 23:23:41.551015 containerd[1916]: time="2025-09-03T23:23:41.550951185Z" level=info msg="ImageCreate event name:\"sha256:25c7652bd0d893b147dce9135dc6a68c37da76f9a20dceec1d520782031b2f36\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 3 23:23:41.556265 containerd[1916]: time="2025-09-03T23:23:41.556186137Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:90aa6b5f4065937521ff8438bc705317485d0be3f8b00a07145e697d92cc2cc6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 3 23:23:41.559639 containerd[1916]: time="2025-09-03T23:23:41.559574769Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.12\" with image id \"sha256:25c7652bd0d893b147dce9135dc6a68c37da76f9a20dceec1d520782031b2f36\", repo tag \"registry.k8s.io/kube-proxy:v1.31.12\", repo digest \"registry.k8s.io/kube-proxy@sha256:90aa6b5f4065937521ff8438bc705317485d0be3f8b00a07145e697d92cc2cc6\", size \"26915114\" in 1.873501918s" Sep 3 23:23:41.559639 containerd[1916]: time="2025-09-03T23:23:41.559639401Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.12\" returns image reference \"sha256:25c7652bd0d893b147dce9135dc6a68c37da76f9a20dceec1d520782031b2f36\"" Sep 3 23:23:41.560427 containerd[1916]: time="2025-09-03T23:23:41.560384133Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Sep 3 23:23:41.597450 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Sep 3 23:23:41.599959 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 3 23:23:41.933354 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 3 23:23:41.949216 (kubelet)[2587]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 3 23:23:42.034050 kubelet[2587]: E0903 23:23:42.033962 2587 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 3 23:23:42.039251 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 3 23:23:42.039986 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 3 23:23:42.041067 systemd[1]: kubelet.service: Consumed 306ms CPU time, 105.4M memory peak. Sep 3 23:23:42.152795 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1812067921.mount: Deactivated successfully. 
Sep 3 23:23:43.444902 containerd[1916]: time="2025-09-03T23:23:43.444810502Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 3 23:23:43.446878 containerd[1916]: time="2025-09-03T23:23:43.446808286Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951622" Sep 3 23:23:43.449360 containerd[1916]: time="2025-09-03T23:23:43.449285818Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 3 23:23:43.455545 containerd[1916]: time="2025-09-03T23:23:43.454684390Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 3 23:23:43.456736 containerd[1916]: time="2025-09-03T23:23:43.456662758Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.896223269s" Sep 3 23:23:43.456861 containerd[1916]: time="2025-09-03T23:23:43.456735598Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" Sep 3 23:23:43.457454 containerd[1916]: time="2025-09-03T23:23:43.457314286Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Sep 3 23:23:43.977158 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1811814643.mount: Deactivated successfully. 
Sep 3 23:23:43.991724 containerd[1916]: time="2025-09-03T23:23:43.990877117Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 3 23:23:43.994258 containerd[1916]: time="2025-09-03T23:23:43.994219681Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268703" Sep 3 23:23:43.996412 containerd[1916]: time="2025-09-03T23:23:43.996375073Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 3 23:23:44.001491 containerd[1916]: time="2025-09-03T23:23:44.001440249Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 3 23:23:44.002824 containerd[1916]: time="2025-09-03T23:23:44.002769657Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 544.026447ms" Sep 3 23:23:44.002947 containerd[1916]: time="2025-09-03T23:23:44.002823789Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Sep 3 23:23:44.003940 containerd[1916]: time="2025-09-03T23:23:44.003897705Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Sep 3 23:23:44.582665 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1950295832.mount: Deactivated successfully. 
Sep 3 23:23:46.573571 containerd[1916]: time="2025-09-03T23:23:46.573481357Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 3 23:23:46.576727 containerd[1916]: time="2025-09-03T23:23:46.576225014Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=66537161" Sep 3 23:23:46.578941 containerd[1916]: time="2025-09-03T23:23:46.578881718Z" level=info msg="ImageCreate event name:\"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 3 23:23:46.584778 containerd[1916]: time="2025-09-03T23:23:46.584727014Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 3 23:23:46.586806 containerd[1916]: time="2025-09-03T23:23:46.586760114Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"66535646\" in 2.582660137s" Sep 3 23:23:46.586977 containerd[1916]: time="2025-09-03T23:23:46.586948526Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\"" Sep 3 23:23:46.992376 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Sep 3 23:23:52.098858 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Sep 3 23:23:52.104015 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 3 23:23:52.424948 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 3 23:23:52.438338 (kubelet)[2736]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 3 23:23:52.519714 kubelet[2736]: E0903 23:23:52.519388 2736 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 3 23:23:52.523588 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 3 23:23:52.524835 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 3 23:23:52.526823 systemd[1]: kubelet.service: Consumed 298ms CPU time, 107.1M memory peak. Sep 3 23:23:53.982107 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 3 23:23:53.982687 systemd[1]: kubelet.service: Consumed 298ms CPU time, 107.1M memory peak. Sep 3 23:23:53.986800 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 3 23:23:54.041589 systemd[1]: Reload requested from client PID 2750 ('systemctl') (unit session-9.scope)... Sep 3 23:23:54.041631 systemd[1]: Reloading... Sep 3 23:23:54.302789 zram_generator::config[2797]: No configuration found. 
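The kubelet exit above (run.go:72) is the standard failure when /var/lib/kubelet/config.yaml has not been written yet; kubeadm normally drops that file during init/join. Below is a minimal sketch of generating such a file from the versioned KubeletConfiguration type; the two fields shown match what this kubelet later reports (systemd cgroup driver, /etc/kubernetes/manifests static pod path), everything else is left at defaults, and the actual file contents on this host are not known from the log.

package main

import (
	"fmt"
	"log"

	kubeletconfig "k8s.io/kubelet/config/v1beta1"
	"sigs.k8s.io/yaml"
)

func main() {
	cfg := kubeletconfig.KubeletConfiguration{
		CgroupDriver:  "systemd",                   // matches "Using cgroup driver ... systemd" below
		StaticPodPath: "/etc/kubernetes/manifests", // matches "Adding static pod path" below
	}
	cfg.APIVersion = "kubelet.config.k8s.io/v1beta1"
	cfg.Kind = "KubeletConfiguration"

	out, err := yaml.Marshal(&cfg)
	if err != nil {
		log.Fatal(err)
	}
	// Writing this output to /var/lib/kubelet/config.yaml is what the failed
	// start above was missing.
	fmt.Print(string(out))
}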
Sep 3 23:23:54.495094 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 3 23:23:54.753154 systemd[1]: Reloading finished in 710 ms. Sep 3 23:23:54.859034 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Sep 3 23:23:54.859388 systemd[1]: kubelet.service: Failed with result 'signal'. Sep 3 23:23:54.860073 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 3 23:23:54.860147 systemd[1]: kubelet.service: Consumed 223ms CPU time, 95M memory peak. Sep 3 23:23:54.865068 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 3 23:23:55.211590 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 3 23:23:55.229277 (kubelet)[2857]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 3 23:23:55.304872 kubelet[2857]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 3 23:23:55.304872 kubelet[2857]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Sep 3 23:23:55.304872 kubelet[2857]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 3 23:23:55.305415 kubelet[2857]: I0903 23:23:55.304961 2857 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 3 23:23:56.444176 kubelet[2857]: I0903 23:23:56.444110 2857 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Sep 3 23:23:56.444176 kubelet[2857]: I0903 23:23:56.444158 2857 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 3 23:23:56.444894 kubelet[2857]: I0903 23:23:56.444572 2857 server.go:934] "Client rotation is on, will bootstrap in background" Sep 3 23:23:56.516942 kubelet[2857]: E0903 23:23:56.516853 2857 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.31.18.182:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.18.182:6443: connect: connection refused" logger="UnhandledError" Sep 3 23:23:56.520727 kubelet[2857]: I0903 23:23:56.519982 2857 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 3 23:23:56.535579 kubelet[2857]: I0903 23:23:56.535546 2857 server.go:1431] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Sep 3 23:23:56.544377 kubelet[2857]: I0903 23:23:56.544342 2857 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 3 23:23:56.545272 kubelet[2857]: I0903 23:23:56.545249 2857 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Sep 3 23:23:56.545675 kubelet[2857]: I0903 23:23:56.545631 2857 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 3 23:23:56.546110 kubelet[2857]: I0903 23:23:56.545800 2857 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-18-182","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 3 23:23:56.546331 kubelet[2857]: I0903 23:23:56.546311 2857 topology_manager.go:138] "Creating topology manager with none policy" Sep 3 23:23:56.546458 kubelet[2857]: I0903 23:23:56.546440 2857 container_manager_linux.go:300] "Creating device plugin manager" Sep 3 23:23:56.546867 kubelet[2857]: I0903 23:23:56.546848 2857 state_mem.go:36] "Initialized new in-memory state store" Sep 3 23:23:56.552825 kubelet[2857]: I0903 23:23:56.552787 2857 kubelet.go:408] "Attempting to sync node with API server" Sep 3 23:23:56.553015 kubelet[2857]: I0903 23:23:56.552996 2857 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 3 23:23:56.553134 kubelet[2857]: I0903 23:23:56.553115 2857 kubelet.go:314] "Adding apiserver pod source" Sep 3 23:23:56.553378 kubelet[2857]: I0903 23:23:56.553359 2857 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 3 23:23:56.562056 kubelet[2857]: W0903 23:23:56.560666 2857 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.18.182:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-18-182&limit=500&resourceVersion=0": dial tcp 172.31.18.182:6443: connect: connection refused Sep 3 23:23:56.562056 kubelet[2857]: E0903 23:23:56.560859 2857 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
\"https://172.31.18.182:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-18-182&limit=500&resourceVersion=0\": dial tcp 172.31.18.182:6443: connect: connection refused" logger="UnhandledError" Sep 3 23:23:56.562360 kubelet[2857]: W0903 23:23:56.562182 2857 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.18.182:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.18.182:6443: connect: connection refused Sep 3 23:23:56.562360 kubelet[2857]: E0903 23:23:56.562269 2857 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.18.182:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.18.182:6443: connect: connection refused" logger="UnhandledError" Sep 3 23:23:56.562733 kubelet[2857]: I0903 23:23:56.562674 2857 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Sep 3 23:23:56.563992 kubelet[2857]: I0903 23:23:56.563948 2857 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 3 23:23:56.564188 kubelet[2857]: W0903 23:23:56.564155 2857 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Sep 3 23:23:56.567998 kubelet[2857]: I0903 23:23:56.567817 2857 server.go:1274] "Started kubelet" Sep 3 23:23:56.573934 kubelet[2857]: I0903 23:23:56.573876 2857 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Sep 3 23:23:56.576319 kubelet[2857]: I0903 23:23:56.576283 2857 server.go:449] "Adding debug handlers to kubelet server" Sep 3 23:23:56.576567 kubelet[2857]: E0903 23:23:56.574237 2857 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.18.182:6443/api/v1/namespaces/default/events\": dial tcp 172.31.18.182:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-18-182.1861e94aaba5203f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-18-182,UID:ip-172-31-18-182,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-18-182,},FirstTimestamp:2025-09-03 23:23:56.567781439 +0000 UTC m=+1.331712691,LastTimestamp:2025-09-03 23:23:56.567781439 +0000 UTC m=+1.331712691,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-18-182,}" Sep 3 23:23:56.582729 kubelet[2857]: I0903 23:23:56.573889 2857 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 3 23:23:56.582729 kubelet[2857]: I0903 23:23:56.581529 2857 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 3 23:23:56.584911 kubelet[2857]: I0903 23:23:56.584878 2857 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 3 23:23:56.586555 kubelet[2857]: I0903 23:23:56.586489 2857 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 3 23:23:56.596938 kubelet[2857]: E0903 23:23:56.596871 2857 kubelet.go:1478] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 3 23:23:56.597620 kubelet[2857]: I0903 23:23:56.597573 2857 volume_manager.go:289] "Starting Kubelet Volume Manager" Sep 3 23:23:56.597860 kubelet[2857]: I0903 23:23:56.597827 2857 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Sep 3 23:23:56.597963 kubelet[2857]: I0903 23:23:56.597934 2857 reconciler.go:26] "Reconciler: start to sync state" Sep 3 23:23:56.599053 kubelet[2857]: W0903 23:23:56.598905 2857 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.18.182:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.18.182:6443: connect: connection refused Sep 3 23:23:56.599168 kubelet[2857]: E0903 23:23:56.599090 2857 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.18.182:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.18.182:6443: connect: connection refused" logger="UnhandledError" Sep 3 23:23:56.599806 kubelet[2857]: E0903 23:23:56.599586 2857 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ip-172-31-18-182\" not found" Sep 3 23:23:56.599910 kubelet[2857]: I0903 23:23:56.599837 2857 factory.go:221] Registration of the systemd container factory successfully Sep 3 23:23:56.600549 kubelet[2857]: I0903 23:23:56.600492 2857 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 3 23:23:56.604207 kubelet[2857]: E0903 23:23:56.604091 2857 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.18.182:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-18-182?timeout=10s\": dial tcp 172.31.18.182:6443: connect: connection refused" interval="200ms" Sep 3 23:23:56.604509 kubelet[2857]: I0903 23:23:56.604476 2857 factory.go:221] Registration of the containerd container factory successfully Sep 3 23:23:56.624414 kubelet[2857]: I0903 23:23:56.624355 2857 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 3 23:23:56.626655 kubelet[2857]: I0903 23:23:56.626613 2857 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Sep 3 23:23:56.627323 kubelet[2857]: I0903 23:23:56.626848 2857 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 3 23:23:56.627323 kubelet[2857]: I0903 23:23:56.626890 2857 kubelet.go:2321] "Starting kubelet main sync loop" Sep 3 23:23:56.627323 kubelet[2857]: E0903 23:23:56.626961 2857 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 3 23:23:56.644323 kubelet[2857]: W0903 23:23:56.644249 2857 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.18.182:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.18.182:6443: connect: connection refused Sep 3 23:23:56.644836 kubelet[2857]: E0903 23:23:56.644796 2857 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.18.182:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.18.182:6443: connect: connection refused" logger="UnhandledError" Sep 3 23:23:56.652799 kubelet[2857]: I0903 23:23:56.652390 2857 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 3 23:23:56.652799 kubelet[2857]: I0903 23:23:56.652419 2857 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 3 23:23:56.652799 kubelet[2857]: I0903 23:23:56.652449 2857 state_mem.go:36] "Initialized new in-memory state store" Sep 3 23:23:56.656480 kubelet[2857]: I0903 23:23:56.656449 2857 policy_none.go:49] "None policy: Start" Sep 3 23:23:56.657931 kubelet[2857]: I0903 23:23:56.657906 2857 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 3 23:23:56.658160 kubelet[2857]: I0903 23:23:56.658143 2857 state_mem.go:35] "Initializing new in-memory state store" Sep 3 23:23:56.669969 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Sep 3 23:23:56.688985 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Sep 3 23:23:56.697229 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Sep 3 23:23:56.700270 kubelet[2857]: E0903 23:23:56.700206 2857 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ip-172-31-18-182\" not found" Sep 3 23:23:56.708609 kubelet[2857]: I0903 23:23:56.708535 2857 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 3 23:23:56.709819 kubelet[2857]: I0903 23:23:56.709337 2857 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 3 23:23:56.709819 kubelet[2857]: I0903 23:23:56.709369 2857 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 3 23:23:56.710006 kubelet[2857]: I0903 23:23:56.709879 2857 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 3 23:23:56.714502 kubelet[2857]: E0903 23:23:56.714136 2857 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-18-182\" not found" Sep 3 23:23:56.749681 systemd[1]: Created slice kubepods-burstable-pod3838b41b307dad59fa4f8f369adc4063.slice - libcontainer container kubepods-burstable-pod3838b41b307dad59fa4f8f369adc4063.slice. 
Sep 3 23:23:56.771271 systemd[1]: Created slice kubepods-burstable-pod291e1e8f01b6da22f8885c9ff2bfcb4a.slice - libcontainer container kubepods-burstable-pod291e1e8f01b6da22f8885c9ff2bfcb4a.slice. Sep 3 23:23:56.782199 systemd[1]: Created slice kubepods-burstable-poddeb25ed6f452b689815fcdf9281252a1.slice - libcontainer container kubepods-burstable-poddeb25ed6f452b689815fcdf9281252a1.slice. Sep 3 23:23:56.805344 kubelet[2857]: E0903 23:23:56.805276 2857 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.18.182:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-18-182?timeout=10s\": dial tcp 172.31.18.182:6443: connect: connection refused" interval="400ms" Sep 3 23:23:56.812629 kubelet[2857]: I0903 23:23:56.811911 2857 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-18-182" Sep 3 23:23:56.813019 kubelet[2857]: E0903 23:23:56.812960 2857 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.18.182:6443/api/v1/nodes\": dial tcp 172.31.18.182:6443: connect: connection refused" node="ip-172-31-18-182" Sep 3 23:23:56.899441 kubelet[2857]: I0903 23:23:56.899399 2857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3838b41b307dad59fa4f8f369adc4063-ca-certs\") pod \"kube-apiserver-ip-172-31-18-182\" (UID: \"3838b41b307dad59fa4f8f369adc4063\") " pod="kube-system/kube-apiserver-ip-172-31-18-182" Sep 3 23:23:56.899635 kubelet[2857]: I0903 23:23:56.899471 2857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/291e1e8f01b6da22f8885c9ff2bfcb4a-ca-certs\") pod \"kube-controller-manager-ip-172-31-18-182\" (UID: \"291e1e8f01b6da22f8885c9ff2bfcb4a\") " pod="kube-system/kube-controller-manager-ip-172-31-18-182" Sep 3 23:23:56.899635 kubelet[2857]: I0903 23:23:56.899511 2857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/291e1e8f01b6da22f8885c9ff2bfcb4a-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-18-182\" (UID: \"291e1e8f01b6da22f8885c9ff2bfcb4a\") " pod="kube-system/kube-controller-manager-ip-172-31-18-182" Sep 3 23:23:56.899635 kubelet[2857]: I0903 23:23:56.899556 2857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/291e1e8f01b6da22f8885c9ff2bfcb4a-kubeconfig\") pod \"kube-controller-manager-ip-172-31-18-182\" (UID: \"291e1e8f01b6da22f8885c9ff2bfcb4a\") " pod="kube-system/kube-controller-manager-ip-172-31-18-182" Sep 3 23:23:56.899635 kubelet[2857]: I0903 23:23:56.899592 2857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/deb25ed6f452b689815fcdf9281252a1-kubeconfig\") pod \"kube-scheduler-ip-172-31-18-182\" (UID: \"deb25ed6f452b689815fcdf9281252a1\") " pod="kube-system/kube-scheduler-ip-172-31-18-182" Sep 3 23:23:56.899635 kubelet[2857]: I0903 23:23:56.899624 2857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3838b41b307dad59fa4f8f369adc4063-k8s-certs\") pod \"kube-apiserver-ip-172-31-18-182\" (UID: \"3838b41b307dad59fa4f8f369adc4063\") " 
pod="kube-system/kube-apiserver-ip-172-31-18-182" Sep 3 23:23:56.900098 kubelet[2857]: I0903 23:23:56.899659 2857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3838b41b307dad59fa4f8f369adc4063-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-18-182\" (UID: \"3838b41b307dad59fa4f8f369adc4063\") " pod="kube-system/kube-apiserver-ip-172-31-18-182" Sep 3 23:23:56.900098 kubelet[2857]: I0903 23:23:56.899737 2857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/291e1e8f01b6da22f8885c9ff2bfcb4a-k8s-certs\") pod \"kube-controller-manager-ip-172-31-18-182\" (UID: \"291e1e8f01b6da22f8885c9ff2bfcb4a\") " pod="kube-system/kube-controller-manager-ip-172-31-18-182" Sep 3 23:23:56.900098 kubelet[2857]: I0903 23:23:56.899780 2857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/291e1e8f01b6da22f8885c9ff2bfcb4a-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-18-182\" (UID: \"291e1e8f01b6da22f8885c9ff2bfcb4a\") " pod="kube-system/kube-controller-manager-ip-172-31-18-182" Sep 3 23:23:57.018088 kubelet[2857]: I0903 23:23:57.016906 2857 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-18-182" Sep 3 23:23:57.018088 kubelet[2857]: E0903 23:23:57.017880 2857 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.18.182:6443/api/v1/nodes\": dial tcp 172.31.18.182:6443: connect: connection refused" node="ip-172-31-18-182" Sep 3 23:23:57.068095 containerd[1916]: time="2025-09-03T23:23:57.068007082Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-18-182,Uid:3838b41b307dad59fa4f8f369adc4063,Namespace:kube-system,Attempt:0,}" Sep 3 23:23:57.078984 containerd[1916]: time="2025-09-03T23:23:57.078936178Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-18-182,Uid:291e1e8f01b6da22f8885c9ff2bfcb4a,Namespace:kube-system,Attempt:0,}" Sep 3 23:23:57.088184 containerd[1916]: time="2025-09-03T23:23:57.088121554Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-18-182,Uid:deb25ed6f452b689815fcdf9281252a1,Namespace:kube-system,Attempt:0,}" Sep 3 23:23:57.195141 containerd[1916]: time="2025-09-03T23:23:57.195062062Z" level=info msg="connecting to shim 26a1c398005f277f3a53369e1e3854a5c725f2928657b42521d9b33d8a67cf56" address="unix:///run/containerd/s/2478ef8e7a94313e6ab3a8d20fc4104131d6e4a1ac9f1c80aefdd664c0509ee2" namespace=k8s.io protocol=ttrpc version=3 Sep 3 23:23:57.207566 kubelet[2857]: E0903 23:23:57.207485 2857 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.18.182:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-18-182?timeout=10s\": dial tcp 172.31.18.182:6443: connect: connection refused" interval="800ms" Sep 3 23:23:57.221772 containerd[1916]: time="2025-09-03T23:23:57.221329990Z" level=info msg="connecting to shim b8a1539191c4cc58e6ab5bf7844ef9f469ff6457088cd161a08ee0e24de8f39d" address="unix:///run/containerd/s/d5cba1dc243dcdf69af0d63c63908288390cca07bc11b489fb6e4f7e2cddff71" namespace=k8s.io protocol=ttrpc version=3 Sep 3 23:23:57.273325 systemd[1]: Started 
cri-containerd-b8a1539191c4cc58e6ab5bf7844ef9f469ff6457088cd161a08ee0e24de8f39d.scope - libcontainer container b8a1539191c4cc58e6ab5bf7844ef9f469ff6457088cd161a08ee0e24de8f39d. Sep 3 23:23:57.284109 containerd[1916]: time="2025-09-03T23:23:57.283887635Z" level=info msg="connecting to shim 66902b48468792864fef6f0cd8583f0acb192cc3b7fb2c5ab14fcaca2f3cd0b2" address="unix:///run/containerd/s/bf4ac10640d90be42ea6155531503b187fe47f2d7221ac1e3216fa07cacd5e7e" namespace=k8s.io protocol=ttrpc version=3 Sep 3 23:23:57.323987 systemd[1]: Started cri-containerd-26a1c398005f277f3a53369e1e3854a5c725f2928657b42521d9b33d8a67cf56.scope - libcontainer container 26a1c398005f277f3a53369e1e3854a5c725f2928657b42521d9b33d8a67cf56. Sep 3 23:23:57.342584 systemd[1]: Started cri-containerd-66902b48468792864fef6f0cd8583f0acb192cc3b7fb2c5ab14fcaca2f3cd0b2.scope - libcontainer container 66902b48468792864fef6f0cd8583f0acb192cc3b7fb2c5ab14fcaca2f3cd0b2. Sep 3 23:23:57.425846 kubelet[2857]: I0903 23:23:57.425780 2857 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-18-182" Sep 3 23:23:57.427645 kubelet[2857]: E0903 23:23:57.427407 2857 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.18.182:6443/api/v1/nodes\": dial tcp 172.31.18.182:6443: connect: connection refused" node="ip-172-31-18-182" Sep 3 23:23:57.466098 containerd[1916]: time="2025-09-03T23:23:57.465875520Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-18-182,Uid:291e1e8f01b6da22f8885c9ff2bfcb4a,Namespace:kube-system,Attempt:0,} returns sandbox id \"b8a1539191c4cc58e6ab5bf7844ef9f469ff6457088cd161a08ee0e24de8f39d\"" Sep 3 23:23:57.476686 containerd[1916]: time="2025-09-03T23:23:57.476557752Z" level=info msg="CreateContainer within sandbox \"b8a1539191c4cc58e6ab5bf7844ef9f469ff6457088cd161a08ee0e24de8f39d\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 3 23:23:57.477796 containerd[1916]: time="2025-09-03T23:23:57.477281700Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-18-182,Uid:3838b41b307dad59fa4f8f369adc4063,Namespace:kube-system,Attempt:0,} returns sandbox id \"26a1c398005f277f3a53369e1e3854a5c725f2928657b42521d9b33d8a67cf56\"" Sep 3 23:23:57.489279 containerd[1916]: time="2025-09-03T23:23:57.488820372Z" level=info msg="CreateContainer within sandbox \"26a1c398005f277f3a53369e1e3854a5c725f2928657b42521d9b33d8a67cf56\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 3 23:23:57.504598 containerd[1916]: time="2025-09-03T23:23:57.504438444Z" level=info msg="Container f4dede76b5096a8373d1416b8663aadc2c15ef937777e729fc9630e93b80392b: CDI devices from CRI Config.CDIDevices: []" Sep 3 23:23:57.510638 containerd[1916]: time="2025-09-03T23:23:57.510510468Z" level=info msg="Container f349d98da8556531053818d0d435f91e69e9af2f154cec14350aedc9bbadaa57: CDI devices from CRI Config.CDIDevices: []" Sep 3 23:23:57.511089 containerd[1916]: time="2025-09-03T23:23:57.511002624Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-18-182,Uid:deb25ed6f452b689815fcdf9281252a1,Namespace:kube-system,Attempt:0,} returns sandbox id \"66902b48468792864fef6f0cd8583f0acb192cc3b7fb2c5ab14fcaca2f3cd0b2\"" Sep 3 23:23:57.516773 containerd[1916]: time="2025-09-03T23:23:57.516332376Z" level=info msg="CreateContainer within sandbox \"66902b48468792864fef6f0cd8583f0acb192cc3b7fb2c5ab14fcaca2f3cd0b2\" for container 
&ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 3 23:23:57.526466 containerd[1916]: time="2025-09-03T23:23:57.525791556Z" level=info msg="CreateContainer within sandbox \"b8a1539191c4cc58e6ab5bf7844ef9f469ff6457088cd161a08ee0e24de8f39d\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"f4dede76b5096a8373d1416b8663aadc2c15ef937777e729fc9630e93b80392b\"" Sep 3 23:23:57.527871 containerd[1916]: time="2025-09-03T23:23:57.527810916Z" level=info msg="StartContainer for \"f4dede76b5096a8373d1416b8663aadc2c15ef937777e729fc9630e93b80392b\"" Sep 3 23:23:57.530008 containerd[1916]: time="2025-09-03T23:23:57.529947312Z" level=info msg="connecting to shim f4dede76b5096a8373d1416b8663aadc2c15ef937777e729fc9630e93b80392b" address="unix:///run/containerd/s/d5cba1dc243dcdf69af0d63c63908288390cca07bc11b489fb6e4f7e2cddff71" protocol=ttrpc version=3 Sep 3 23:23:57.543131 containerd[1916]: time="2025-09-03T23:23:57.543042744Z" level=info msg="CreateContainer within sandbox \"26a1c398005f277f3a53369e1e3854a5c725f2928657b42521d9b33d8a67cf56\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"f349d98da8556531053818d0d435f91e69e9af2f154cec14350aedc9bbadaa57\"" Sep 3 23:23:57.544204 containerd[1916]: time="2025-09-03T23:23:57.544086924Z" level=info msg="StartContainer for \"f349d98da8556531053818d0d435f91e69e9af2f154cec14350aedc9bbadaa57\"" Sep 3 23:23:57.548557 containerd[1916]: time="2025-09-03T23:23:57.548372232Z" level=info msg="connecting to shim f349d98da8556531053818d0d435f91e69e9af2f154cec14350aedc9bbadaa57" address="unix:///run/containerd/s/2478ef8e7a94313e6ab3a8d20fc4104131d6e4a1ac9f1c80aefdd664c0509ee2" protocol=ttrpc version=3 Sep 3 23:23:57.551424 containerd[1916]: time="2025-09-03T23:23:57.551320812Z" level=info msg="Container cbe3573b6da1e02d648a60852292dd689c87f3e18f0261f454bc86e85fb5ae66: CDI devices from CRI Config.CDIDevices: []" Sep 3 23:23:57.577315 systemd[1]: Started cri-containerd-f4dede76b5096a8373d1416b8663aadc2c15ef937777e729fc9630e93b80392b.scope - libcontainer container f4dede76b5096a8373d1416b8663aadc2c15ef937777e729fc9630e93b80392b. 
Sep 3 23:23:57.579823 containerd[1916]: time="2025-09-03T23:23:57.579760008Z" level=info msg="CreateContainer within sandbox \"66902b48468792864fef6f0cd8583f0acb192cc3b7fb2c5ab14fcaca2f3cd0b2\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"cbe3573b6da1e02d648a60852292dd689c87f3e18f0261f454bc86e85fb5ae66\"" Sep 3 23:23:57.584389 containerd[1916]: time="2025-09-03T23:23:57.583927524Z" level=info msg="StartContainer for \"cbe3573b6da1e02d648a60852292dd689c87f3e18f0261f454bc86e85fb5ae66\"" Sep 3 23:23:57.588774 containerd[1916]: time="2025-09-03T23:23:57.588342672Z" level=info msg="connecting to shim cbe3573b6da1e02d648a60852292dd689c87f3e18f0261f454bc86e85fb5ae66" address="unix:///run/containerd/s/bf4ac10640d90be42ea6155531503b187fe47f2d7221ac1e3216fa07cacd5e7e" protocol=ttrpc version=3 Sep 3 23:23:57.600729 kubelet[2857]: W0903 23:23:57.598969 2857 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.18.182:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.18.182:6443: connect: connection refused Sep 3 23:23:57.600729 kubelet[2857]: E0903 23:23:57.599219 2857 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.18.182:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.18.182:6443: connect: connection refused" logger="UnhandledError" Sep 3 23:23:57.606291 systemd[1]: Started cri-containerd-f349d98da8556531053818d0d435f91e69e9af2f154cec14350aedc9bbadaa57.scope - libcontainer container f349d98da8556531053818d0d435f91e69e9af2f154cec14350aedc9bbadaa57. Sep 3 23:23:57.660117 systemd[1]: Started cri-containerd-cbe3573b6da1e02d648a60852292dd689c87f3e18f0261f454bc86e85fb5ae66.scope - libcontainer container cbe3573b6da1e02d648a60852292dd689c87f3e18f0261f454bc86e85fb5ae66. 
Sep 3 23:23:57.664649 kubelet[2857]: W0903 23:23:57.664572 2857 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.18.182:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-18-182&limit=500&resourceVersion=0": dial tcp 172.31.18.182:6443: connect: connection refused Sep 3 23:23:57.664954 kubelet[2857]: E0903 23:23:57.664888 2857 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.18.182:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-18-182&limit=500&resourceVersion=0\": dial tcp 172.31.18.182:6443: connect: connection refused" logger="UnhandledError" Sep 3 23:23:57.768196 containerd[1916]: time="2025-09-03T23:23:57.767916949Z" level=info msg="StartContainer for \"f349d98da8556531053818d0d435f91e69e9af2f154cec14350aedc9bbadaa57\" returns successfully" Sep 3 23:23:57.781608 containerd[1916]: time="2025-09-03T23:23:57.781439593Z" level=info msg="StartContainer for \"f4dede76b5096a8373d1416b8663aadc2c15ef937777e729fc9630e93b80392b\" returns successfully" Sep 3 23:23:57.850787 containerd[1916]: time="2025-09-03T23:23:57.850719266Z" level=info msg="StartContainer for \"cbe3573b6da1e02d648a60852292dd689c87f3e18f0261f454bc86e85fb5ae66\" returns successfully" Sep 3 23:23:58.008986 kubelet[2857]: E0903 23:23:58.008904 2857 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.18.182:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-18-182?timeout=10s\": dial tcp 172.31.18.182:6443: connect: connection refused" interval="1.6s" Sep 3 23:23:58.230399 kubelet[2857]: I0903 23:23:58.230338 2857 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-18-182" Sep 3 23:24:00.558546 update_engine[1893]: I20250903 23:24:00.557750 1893 update_attempter.cc:509] Updating boot flags... 
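The "Failed to ensure lease exists, will retry" errors are the kubelet's node-lease heartbeat hitting the same still-unreachable API server; the object it keeps retrying can be read back with client-go once the control plane is up. A minimal sketch, with the kubeconfig path again an assumption:

package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/admin.conf")
	if err != nil {
		log.Fatal(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		log.Fatal(err)
	}

	// GET /apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-18-182,
	// the exact URL in the retry message above.
	lease, err := clientset.CoordinationV1().Leases("kube-node-lease").
		Get(context.TODO(), "ip-172-31-18-182", metav1.GetOptions{})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("held by:", *lease.Spec.HolderIdentity, "renewed:", lease.Spec.RenewTime)
}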
Sep 3 23:24:01.564891 kubelet[2857]: I0903 23:24:01.564520 2857 apiserver.go:52] "Watching apiserver" Sep 3 23:24:01.800249 kubelet[2857]: I0903 23:24:01.800154 2857 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Sep 3 23:24:01.884158 kubelet[2857]: E0903 23:24:01.884097 2857 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-18-182\" not found" node="ip-172-31-18-182" Sep 3 23:24:02.002969 kubelet[2857]: E0903 23:24:02.001660 2857 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ip-172-31-18-182.1861e94aaba5203f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-18-182,UID:ip-172-31-18-182,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-18-182,},FirstTimestamp:2025-09-03 23:23:56.567781439 +0000 UTC m=+1.331712691,LastTimestamp:2025-09-03 23:23:56.567781439 +0000 UTC m=+1.331712691,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-18-182,}" Sep 3 23:24:02.070729 kubelet[2857]: I0903 23:24:02.067016 2857 kubelet_node_status.go:75] "Successfully registered node" node="ip-172-31-18-182" Sep 3 23:24:02.070729 kubelet[2857]: E0903 23:24:02.067071 2857 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"ip-172-31-18-182\": node \"ip-172-31-18-182\" not found" Sep 3 23:24:02.093814 kubelet[2857]: E0903 23:24:02.093505 2857 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ip-172-31-18-182.1861e94aad60a5b7 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-18-182,UID:ip-172-31-18-182,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:ip-172-31-18-182,},FirstTimestamp:2025-09-03 23:23:56.596848055 +0000 UTC m=+1.360779319,LastTimestamp:2025-09-03 23:23:56.596848055 +0000 UTC m=+1.360779319,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-18-182,}" Sep 3 23:24:04.211597 systemd[1]: Reload requested from client PID 3399 ('systemctl') (unit session-9.scope)... Sep 3 23:24:04.211628 systemd[1]: Reloading... Sep 3 23:24:04.424739 zram_generator::config[3446]: No configuration found. Sep 3 23:24:04.610038 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 3 23:24:04.899044 systemd[1]: Reloading finished in 686 ms. Sep 3 23:24:04.966863 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 3 23:24:04.982284 systemd[1]: kubelet.service: Deactivated successfully. Sep 3 23:24:04.982940 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 3 23:24:04.983135 systemd[1]: kubelet.service: Consumed 2.135s CPU time, 127.6M memory peak. Sep 3 23:24:04.987912 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Sep 3 23:24:05.387180 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 3 23:24:05.406318 (kubelet)[3503]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 3 23:24:05.499471 kubelet[3503]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 3 23:24:05.499471 kubelet[3503]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Sep 3 23:24:05.499471 kubelet[3503]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 3 23:24:05.500048 kubelet[3503]: I0903 23:24:05.499575 3503 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 3 23:24:05.512465 kubelet[3503]: I0903 23:24:05.512402 3503 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Sep 3 23:24:05.512465 kubelet[3503]: I0903 23:24:05.512449 3503 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 3 23:24:05.513503 kubelet[3503]: I0903 23:24:05.513051 3503 server.go:934] "Client rotation is on, will bootstrap in background" Sep 3 23:24:05.520404 kubelet[3503]: I0903 23:24:05.519442 3503 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Sep 3 23:24:05.526649 kubelet[3503]: I0903 23:24:05.526374 3503 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 3 23:24:05.545682 kubelet[3503]: I0903 23:24:05.545620 3503 server.go:1431] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Sep 3 23:24:05.550882 kubelet[3503]: I0903 23:24:05.550814 3503 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 3 23:24:05.552793 kubelet[3503]: I0903 23:24:05.551772 3503 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Sep 3 23:24:05.552793 kubelet[3503]: I0903 23:24:05.552058 3503 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 3 23:24:05.552793 kubelet[3503]: I0903 23:24:05.552100 3503 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-18-182","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 3 23:24:05.552793 kubelet[3503]: I0903 23:24:05.552385 3503 topology_manager.go:138] "Creating topology manager with none policy" Sep 3 23:24:05.553146 kubelet[3503]: I0903 23:24:05.552404 3503 container_manager_linux.go:300] "Creating device plugin manager" Sep 3 23:24:05.553146 kubelet[3503]: I0903 23:24:05.552467 3503 state_mem.go:36] "Initialized new in-memory state store" Sep 3 23:24:05.554538 kubelet[3503]: I0903 23:24:05.554505 3503 kubelet.go:408] "Attempting to sync node with API server" Sep 3 23:24:05.554778 kubelet[3503]: I0903 23:24:05.554755 3503 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 3 23:24:05.555614 kubelet[3503]: I0903 23:24:05.554935 3503 kubelet.go:314] "Adding apiserver pod source" Sep 3 23:24:05.555614 kubelet[3503]: I0903 23:24:05.554980 3503 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 3 23:24:05.559104 kubelet[3503]: I0903 23:24:05.559068 3503 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Sep 3 23:24:05.564744 kubelet[3503]: I0903 23:24:05.564199 3503 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 3 23:24:05.566815 kubelet[3503]: I0903 23:24:05.566782 3503 server.go:1274] "Started kubelet" Sep 3 23:24:05.575108 kubelet[3503]: I0903 23:24:05.575073 3503 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 3 23:24:05.577151 sudo[3517]: root : PWD=/home/core ; 
USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Sep 3 23:24:05.577880 sudo[3517]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Sep 3 23:24:05.588120 kubelet[3503]: I0903 23:24:05.588051 3503 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Sep 3 23:24:05.605334 kubelet[3503]: I0903 23:24:05.605105 3503 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 3 23:24:05.632714 kubelet[3503]: I0903 23:24:05.630943 3503 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 3 23:24:05.646127 kubelet[3503]: I0903 23:24:05.646017 3503 volume_manager.go:289] "Starting Kubelet Volume Manager" Sep 3 23:24:05.647770 kubelet[3503]: E0903 23:24:05.647728 3503 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ip-172-31-18-182\" not found" Sep 3 23:24:05.649117 kubelet[3503]: I0903 23:24:05.649086 3503 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Sep 3 23:24:05.649497 kubelet[3503]: I0903 23:24:05.649478 3503 reconciler.go:26] "Reconciler: start to sync state" Sep 3 23:24:05.652077 kubelet[3503]: I0903 23:24:05.652040 3503 server.go:449] "Adding debug handlers to kubelet server" Sep 3 23:24:05.676252 kubelet[3503]: I0903 23:24:05.655187 3503 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 3 23:24:05.695570 kubelet[3503]: I0903 23:24:05.695483 3503 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 3 23:24:05.709875 kubelet[3503]: I0903 23:24:05.709841 3503 factory.go:221] Registration of the containerd container factory successfully Sep 3 23:24:05.711713 kubelet[3503]: I0903 23:24:05.710049 3503 factory.go:221] Registration of the systemd container factory successfully Sep 3 23:24:05.711713 kubelet[3503]: I0903 23:24:05.710204 3503 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 3 23:24:05.736811 kubelet[3503]: E0903 23:24:05.736768 3503 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 3 23:24:05.743136 kubelet[3503]: I0903 23:24:05.743092 3503 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Sep 3 23:24:05.743569 kubelet[3503]: I0903 23:24:05.743548 3503 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 3 23:24:05.744356 kubelet[3503]: I0903 23:24:05.744327 3503 kubelet.go:2321] "Starting kubelet main sync loop" Sep 3 23:24:05.744575 kubelet[3503]: E0903 23:24:05.744542 3503 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 3 23:24:05.846372 kubelet[3503]: E0903 23:24:05.846309 3503 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 3 23:24:05.903030 kubelet[3503]: I0903 23:24:05.902104 3503 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 3 23:24:05.903030 kubelet[3503]: I0903 23:24:05.902134 3503 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 3 23:24:05.903030 kubelet[3503]: I0903 23:24:05.902167 3503 state_mem.go:36] "Initialized new in-memory state store" Sep 3 23:24:05.903030 kubelet[3503]: I0903 23:24:05.902409 3503 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 3 23:24:05.903030 kubelet[3503]: I0903 23:24:05.902431 3503 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 3 23:24:05.903030 kubelet[3503]: I0903 23:24:05.902465 3503 policy_none.go:49] "None policy: Start" Sep 3 23:24:05.905981 kubelet[3503]: I0903 23:24:05.905476 3503 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 3 23:24:05.905981 kubelet[3503]: I0903 23:24:05.905523 3503 state_mem.go:35] "Initializing new in-memory state store" Sep 3 23:24:05.907167 kubelet[3503]: I0903 23:24:05.906922 3503 state_mem.go:75] "Updated machine memory state" Sep 3 23:24:05.919151 kubelet[3503]: I0903 23:24:05.919116 3503 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 3 23:24:05.919596 kubelet[3503]: I0903 23:24:05.919574 3503 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 3 23:24:05.920719 kubelet[3503]: I0903 23:24:05.920121 3503 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 3 23:24:05.920719 kubelet[3503]: I0903 23:24:05.920636 3503 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 3 23:24:06.052801 kubelet[3503]: I0903 23:24:06.052552 3503 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/291e1e8f01b6da22f8885c9ff2bfcb4a-k8s-certs\") pod \"kube-controller-manager-ip-172-31-18-182\" (UID: \"291e1e8f01b6da22f8885c9ff2bfcb4a\") " pod="kube-system/kube-controller-manager-ip-172-31-18-182" Sep 3 23:24:06.053824 kubelet[3503]: I0903 23:24:06.053780 3503 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/291e1e8f01b6da22f8885c9ff2bfcb4a-kubeconfig\") pod \"kube-controller-manager-ip-172-31-18-182\" (UID: \"291e1e8f01b6da22f8885c9ff2bfcb4a\") " pod="kube-system/kube-controller-manager-ip-172-31-18-182" Sep 3 23:24:06.056772 kubelet[3503]: I0903 23:24:06.056703 3503 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/291e1e8f01b6da22f8885c9ff2bfcb4a-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-18-182\" (UID: \"291e1e8f01b6da22f8885c9ff2bfcb4a\") 
" pod="kube-system/kube-controller-manager-ip-172-31-18-182" Sep 3 23:24:06.057527 kubelet[3503]: I0903 23:24:06.057178 3503 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/deb25ed6f452b689815fcdf9281252a1-kubeconfig\") pod \"kube-scheduler-ip-172-31-18-182\" (UID: \"deb25ed6f452b689815fcdf9281252a1\") " pod="kube-system/kube-scheduler-ip-172-31-18-182" Sep 3 23:24:06.057527 kubelet[3503]: I0903 23:24:06.057236 3503 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3838b41b307dad59fa4f8f369adc4063-k8s-certs\") pod \"kube-apiserver-ip-172-31-18-182\" (UID: \"3838b41b307dad59fa4f8f369adc4063\") " pod="kube-system/kube-apiserver-ip-172-31-18-182" Sep 3 23:24:06.057527 kubelet[3503]: I0903 23:24:06.057276 3503 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3838b41b307dad59fa4f8f369adc4063-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-18-182\" (UID: \"3838b41b307dad59fa4f8f369adc4063\") " pod="kube-system/kube-apiserver-ip-172-31-18-182" Sep 3 23:24:06.057527 kubelet[3503]: I0903 23:24:06.057316 3503 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/291e1e8f01b6da22f8885c9ff2bfcb4a-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-18-182\" (UID: \"291e1e8f01b6da22f8885c9ff2bfcb4a\") " pod="kube-system/kube-controller-manager-ip-172-31-18-182" Sep 3 23:24:06.057527 kubelet[3503]: I0903 23:24:06.057352 3503 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3838b41b307dad59fa4f8f369adc4063-ca-certs\") pod \"kube-apiserver-ip-172-31-18-182\" (UID: \"3838b41b307dad59fa4f8f369adc4063\") " pod="kube-system/kube-apiserver-ip-172-31-18-182" Sep 3 23:24:06.057883 kubelet[3503]: I0903 23:24:06.057390 3503 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/291e1e8f01b6da22f8885c9ff2bfcb4a-ca-certs\") pod \"kube-controller-manager-ip-172-31-18-182\" (UID: \"291e1e8f01b6da22f8885c9ff2bfcb4a\") " pod="kube-system/kube-controller-manager-ip-172-31-18-182" Sep 3 23:24:06.059354 kubelet[3503]: I0903 23:24:06.059220 3503 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-18-182" Sep 3 23:24:06.092282 kubelet[3503]: I0903 23:24:06.092215 3503 kubelet_node_status.go:111] "Node was previously registered" node="ip-172-31-18-182" Sep 3 23:24:06.092775 kubelet[3503]: I0903 23:24:06.092647 3503 kubelet_node_status.go:75] "Successfully registered node" node="ip-172-31-18-182" Sep 3 23:24:06.518420 sudo[3517]: pam_unix(sudo:session): session closed for user root Sep 3 23:24:06.577754 kubelet[3503]: I0903 23:24:06.577675 3503 apiserver.go:52] "Watching apiserver" Sep 3 23:24:06.650071 kubelet[3503]: I0903 23:24:06.649984 3503 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Sep 3 23:24:06.839251 kubelet[3503]: E0903 23:24:06.838762 3503 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ip-172-31-18-182\" already exists" pod="kube-system/kube-apiserver-ip-172-31-18-182" Sep 3 23:24:06.874333 kubelet[3503]: 
I0903 23:24:06.874207 3503 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-18-182" podStartSLOduration=0.874166998 podStartE2EDuration="874.166998ms" podCreationTimestamp="2025-09-03 23:24:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-03 23:24:06.86864119 +0000 UTC m=+1.454579540" watchObservedRunningTime="2025-09-03 23:24:06.874166998 +0000 UTC m=+1.460105324" Sep 3 23:24:06.890686 kubelet[3503]: I0903 23:24:06.890608 3503 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-18-182" podStartSLOduration=0.890564614 podStartE2EDuration="890.564614ms" podCreationTimestamp="2025-09-03 23:24:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-03 23:24:06.88856911 +0000 UTC m=+1.474507448" watchObservedRunningTime="2025-09-03 23:24:06.890564614 +0000 UTC m=+1.476502940" Sep 3 23:24:06.933460 kubelet[3503]: I0903 23:24:06.933196 3503 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-18-182" podStartSLOduration=0.933177275 podStartE2EDuration="933.177275ms" podCreationTimestamp="2025-09-03 23:24:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-03 23:24:06.912457355 +0000 UTC m=+1.498395693" watchObservedRunningTime="2025-09-03 23:24:06.933177275 +0000 UTC m=+1.519115589" Sep 3 23:24:08.628090 sudo[2286]: pam_unix(sudo:session): session closed for user root Sep 3 23:24:08.652022 sshd[2285]: Connection closed by 139.178.89.65 port 36868 Sep 3 23:24:08.652885 sshd-session[2283]: pam_unix(sshd:session): session closed for user core Sep 3 23:24:08.661424 systemd[1]: sshd@8-172.31.18.182:22-139.178.89.65:36868.service: Deactivated successfully. Sep 3 23:24:08.668126 systemd[1]: session-9.scope: Deactivated successfully. Sep 3 23:24:08.669844 systemd[1]: session-9.scope: Consumed 10.521s CPU time, 270.5M memory peak. Sep 3 23:24:08.673109 systemd-logind[1891]: Session 9 logged out. Waiting for processes to exit. Sep 3 23:24:08.677176 systemd-logind[1891]: Removed session 9. Sep 3 23:24:09.822033 kubelet[3503]: I0903 23:24:09.821984 3503 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 3 23:24:09.822943 containerd[1916]: time="2025-09-03T23:24:09.822425617Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Sep 3 23:24:09.823794 kubelet[3503]: I0903 23:24:09.823645 3503 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 3 23:24:10.694385 systemd[1]: Created slice kubepods-besteffort-pod6be4f95b_9b4e_450a_a2c4_090fd855e56f.slice - libcontainer container kubepods-besteffort-pod6be4f95b_9b4e_450a_a2c4_090fd855e56f.slice. Sep 3 23:24:10.745120 systemd[1]: Created slice kubepods-burstable-pod7c0ffc4e_96cd_44d5_8f74_18d21628d404.slice - libcontainer container kubepods-burstable-pod7c0ffc4e_96cd_44d5_8f74_18d21628d404.slice. 
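The "Updating runtime config through cri with podcidr" entry corresponds to the CRI UpdateRuntimeConfig RPC: the kubelet forwards the node's pod CIDR to the runtime, and containerd answers that no CNI config template is set, leaving network configuration to Cilium once its pod (created below) comes up. A minimal sketch of that RPC, with the endpoint an assumption as before:

package main

import (
	"context"
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	// Hand the node's pod CIDR to the runtime, as the kubelet does above.
	_, err = runtimeapi.NewRuntimeServiceClient(conn).UpdateRuntimeConfig(context.TODO(),
		&runtimeapi.UpdateRuntimeConfigRequest{
			RuntimeConfig: &runtimeapi.RuntimeConfig{
				NetworkConfig: &runtimeapi.NetworkConfig{PodCidr: "192.168.0.0/24"},
			},
		})
	if err != nil {
		log.Fatal(err)
	}
}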
Sep 3 23:24:10.786317 kubelet[3503]: I0903 23:24:10.786258 3503 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/7c0ffc4e-96cd-44d5-8f74-18d21628d404-cilium-cgroup\") pod \"cilium-9tv7r\" (UID: \"7c0ffc4e-96cd-44d5-8f74-18d21628d404\") " pod="kube-system/cilium-9tv7r" Sep 3 23:24:10.786317 kubelet[3503]: I0903 23:24:10.786336 3503 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7c0ffc4e-96cd-44d5-8f74-18d21628d404-cilium-config-path\") pod \"cilium-9tv7r\" (UID: \"7c0ffc4e-96cd-44d5-8f74-18d21628d404\") " pod="kube-system/cilium-9tv7r" Sep 3 23:24:10.786924 kubelet[3503]: I0903 23:24:10.786377 3503 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7c0ffc4e-96cd-44d5-8f74-18d21628d404-lib-modules\") pod \"cilium-9tv7r\" (UID: \"7c0ffc4e-96cd-44d5-8f74-18d21628d404\") " pod="kube-system/cilium-9tv7r" Sep 3 23:24:10.786924 kubelet[3503]: I0903 23:24:10.786420 3503 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7c0ffc4e-96cd-44d5-8f74-18d21628d404-xtables-lock\") pod \"cilium-9tv7r\" (UID: \"7c0ffc4e-96cd-44d5-8f74-18d21628d404\") " pod="kube-system/cilium-9tv7r" Sep 3 23:24:10.786924 kubelet[3503]: I0903 23:24:10.786475 3503 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/7c0ffc4e-96cd-44d5-8f74-18d21628d404-hostproc\") pod \"cilium-9tv7r\" (UID: \"7c0ffc4e-96cd-44d5-8f74-18d21628d404\") " pod="kube-system/cilium-9tv7r" Sep 3 23:24:10.786924 kubelet[3503]: I0903 23:24:10.786511 3503 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wj4pm\" (UniqueName: \"kubernetes.io/projected/6be4f95b-9b4e-450a-a2c4-090fd855e56f-kube-api-access-wj4pm\") pod \"kube-proxy-7fghz\" (UID: \"6be4f95b-9b4e-450a-a2c4-090fd855e56f\") " pod="kube-system/kube-proxy-7fghz" Sep 3 23:24:10.786924 kubelet[3503]: I0903 23:24:10.786550 3503 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/7c0ffc4e-96cd-44d5-8f74-18d21628d404-host-proc-sys-net\") pod \"cilium-9tv7r\" (UID: \"7c0ffc4e-96cd-44d5-8f74-18d21628d404\") " pod="kube-system/cilium-9tv7r" Sep 3 23:24:10.786924 kubelet[3503]: I0903 23:24:10.786587 3503 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6be4f95b-9b4e-450a-a2c4-090fd855e56f-lib-modules\") pod \"kube-proxy-7fghz\" (UID: \"6be4f95b-9b4e-450a-a2c4-090fd855e56f\") " pod="kube-system/kube-proxy-7fghz" Sep 3 23:24:10.787363 kubelet[3503]: I0903 23:24:10.786619 3503 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/7c0ffc4e-96cd-44d5-8f74-18d21628d404-cilium-run\") pod \"cilium-9tv7r\" (UID: \"7c0ffc4e-96cd-44d5-8f74-18d21628d404\") " pod="kube-system/cilium-9tv7r" Sep 3 23:24:10.787363 kubelet[3503]: I0903 23:24:10.786653 3503 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" 
(UniqueName: \"kubernetes.io/host-path/7c0ffc4e-96cd-44d5-8f74-18d21628d404-bpf-maps\") pod \"cilium-9tv7r\" (UID: \"7c0ffc4e-96cd-44d5-8f74-18d21628d404\") " pod="kube-system/cilium-9tv7r" Sep 3 23:24:10.787363 kubelet[3503]: I0903 23:24:10.786719 3503 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/7c0ffc4e-96cd-44d5-8f74-18d21628d404-cni-path\") pod \"cilium-9tv7r\" (UID: \"7c0ffc4e-96cd-44d5-8f74-18d21628d404\") " pod="kube-system/cilium-9tv7r" Sep 3 23:24:10.787363 kubelet[3503]: I0903 23:24:10.786773 3503 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7c0ffc4e-96cd-44d5-8f74-18d21628d404-etc-cni-netd\") pod \"cilium-9tv7r\" (UID: \"7c0ffc4e-96cd-44d5-8f74-18d21628d404\") " pod="kube-system/cilium-9tv7r" Sep 3 23:24:10.787363 kubelet[3503]: I0903 23:24:10.786812 3503 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6be4f95b-9b4e-450a-a2c4-090fd855e56f-xtables-lock\") pod \"kube-proxy-7fghz\" (UID: \"6be4f95b-9b4e-450a-a2c4-090fd855e56f\") " pod="kube-system/kube-proxy-7fghz" Sep 3 23:24:10.787363 kubelet[3503]: I0903 23:24:10.786850 3503 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/7c0ffc4e-96cd-44d5-8f74-18d21628d404-host-proc-sys-kernel\") pod \"cilium-9tv7r\" (UID: \"7c0ffc4e-96cd-44d5-8f74-18d21628d404\") " pod="kube-system/cilium-9tv7r" Sep 3 23:24:10.787911 kubelet[3503]: I0903 23:24:10.786892 3503 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xtwst\" (UniqueName: \"kubernetes.io/projected/7c0ffc4e-96cd-44d5-8f74-18d21628d404-kube-api-access-xtwst\") pod \"cilium-9tv7r\" (UID: \"7c0ffc4e-96cd-44d5-8f74-18d21628d404\") " pod="kube-system/cilium-9tv7r" Sep 3 23:24:10.787911 kubelet[3503]: I0903 23:24:10.786939 3503 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/6be4f95b-9b4e-450a-a2c4-090fd855e56f-kube-proxy\") pod \"kube-proxy-7fghz\" (UID: \"6be4f95b-9b4e-450a-a2c4-090fd855e56f\") " pod="kube-system/kube-proxy-7fghz" Sep 3 23:24:10.787911 kubelet[3503]: I0903 23:24:10.786975 3503 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/7c0ffc4e-96cd-44d5-8f74-18d21628d404-clustermesh-secrets\") pod \"cilium-9tv7r\" (UID: \"7c0ffc4e-96cd-44d5-8f74-18d21628d404\") " pod="kube-system/cilium-9tv7r" Sep 3 23:24:10.787911 kubelet[3503]: I0903 23:24:10.787008 3503 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/7c0ffc4e-96cd-44d5-8f74-18d21628d404-hubble-tls\") pod \"cilium-9tv7r\" (UID: \"7c0ffc4e-96cd-44d5-8f74-18d21628d404\") " pod="kube-system/cilium-9tv7r" Sep 3 23:24:10.939646 systemd[1]: Created slice kubepods-besteffort-pod985e0fb2_dd67_41eb_a86d_46b6ce869cca.slice - libcontainer container kubepods-besteffort-pod985e0fb2_dd67_41eb_a86d_46b6ce869cca.slice. 
Sep 3 23:24:10.989795 kubelet[3503]: I0903 23:24:10.988020 3503 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/985e0fb2-dd67-41eb-a86d-46b6ce869cca-cilium-config-path\") pod \"cilium-operator-5d85765b45-2r49j\" (UID: \"985e0fb2-dd67-41eb-a86d-46b6ce869cca\") " pod="kube-system/cilium-operator-5d85765b45-2r49j" Sep 3 23:24:10.989795 kubelet[3503]: I0903 23:24:10.988085 3503 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-khvlf\" (UniqueName: \"kubernetes.io/projected/985e0fb2-dd67-41eb-a86d-46b6ce869cca-kube-api-access-khvlf\") pod \"cilium-operator-5d85765b45-2r49j\" (UID: \"985e0fb2-dd67-41eb-a86d-46b6ce869cca\") " pod="kube-system/cilium-operator-5d85765b45-2r49j" Sep 3 23:24:11.008068 containerd[1916]: time="2025-09-03T23:24:11.008000027Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-7fghz,Uid:6be4f95b-9b4e-450a-a2c4-090fd855e56f,Namespace:kube-system,Attempt:0,}" Sep 3 23:24:11.047975 containerd[1916]: time="2025-09-03T23:24:11.047818763Z" level=info msg="connecting to shim 287f7947287eb58d549f7e440dd74169939668d63888246726f70031d6753ddf" address="unix:///run/containerd/s/a28f0e53277bf02b79a4ad24b7f143cb8efc83d731a5c00fc272b8dc412d3d07" namespace=k8s.io protocol=ttrpc version=3 Sep 3 23:24:11.052364 containerd[1916]: time="2025-09-03T23:24:11.052313087Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-9tv7r,Uid:7c0ffc4e-96cd-44d5-8f74-18d21628d404,Namespace:kube-system,Attempt:0,}" Sep 3 23:24:11.100197 containerd[1916]: time="2025-09-03T23:24:11.100101443Z" level=info msg="connecting to shim 379b9933e5f1ccc6df9c7efe26adf4baed019a196213153ac9c9fd9dcbefb1bf" address="unix:///run/containerd/s/9db60021323e576622b86440e4f99b2309f151d30f6474912ff72c6352d81559" namespace=k8s.io protocol=ttrpc version=3 Sep 3 23:24:11.107988 systemd[1]: Started cri-containerd-287f7947287eb58d549f7e440dd74169939668d63888246726f70031d6753ddf.scope - libcontainer container 287f7947287eb58d549f7e440dd74169939668d63888246726f70031d6753ddf. Sep 3 23:24:11.175097 systemd[1]: Started cri-containerd-379b9933e5f1ccc6df9c7efe26adf4baed019a196213153ac9c9fd9dcbefb1bf.scope - libcontainer container 379b9933e5f1ccc6df9c7efe26adf4baed019a196213153ac9c9fd9dcbefb1bf. 
Sep 3 23:24:11.204912 containerd[1916]: time="2025-09-03T23:24:11.204860700Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-7fghz,Uid:6be4f95b-9b4e-450a-a2c4-090fd855e56f,Namespace:kube-system,Attempt:0,} returns sandbox id \"287f7947287eb58d549f7e440dd74169939668d63888246726f70031d6753ddf\"" Sep 3 23:24:11.221843 containerd[1916]: time="2025-09-03T23:24:11.221474952Z" level=info msg="CreateContainer within sandbox \"287f7947287eb58d549f7e440dd74169939668d63888246726f70031d6753ddf\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 3 23:24:11.251892 containerd[1916]: time="2025-09-03T23:24:11.251675640Z" level=info msg="Container 5e6d907d8521d340697c43c2d536943d2179378ce2d87c04d28cd27764cfb6cd: CDI devices from CRI Config.CDIDevices: []" Sep 3 23:24:11.258717 containerd[1916]: time="2025-09-03T23:24:11.258643380Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-2r49j,Uid:985e0fb2-dd67-41eb-a86d-46b6ce869cca,Namespace:kube-system,Attempt:0,}" Sep 3 23:24:11.259818 containerd[1916]: time="2025-09-03T23:24:11.259740912Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-9tv7r,Uid:7c0ffc4e-96cd-44d5-8f74-18d21628d404,Namespace:kube-system,Attempt:0,} returns sandbox id \"379b9933e5f1ccc6df9c7efe26adf4baed019a196213153ac9c9fd9dcbefb1bf\"" Sep 3 23:24:11.266428 containerd[1916]: time="2025-09-03T23:24:11.266382972Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Sep 3 23:24:11.279427 containerd[1916]: time="2025-09-03T23:24:11.279303996Z" level=info msg="CreateContainer within sandbox \"287f7947287eb58d549f7e440dd74169939668d63888246726f70031d6753ddf\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"5e6d907d8521d340697c43c2d536943d2179378ce2d87c04d28cd27764cfb6cd\"" Sep 3 23:24:11.283743 containerd[1916]: time="2025-09-03T23:24:11.283026096Z" level=info msg="StartContainer for \"5e6d907d8521d340697c43c2d536943d2179378ce2d87c04d28cd27764cfb6cd\"" Sep 3 23:24:11.287262 containerd[1916]: time="2025-09-03T23:24:11.287213352Z" level=info msg="connecting to shim 5e6d907d8521d340697c43c2d536943d2179378ce2d87c04d28cd27764cfb6cd" address="unix:///run/containerd/s/a28f0e53277bf02b79a4ad24b7f143cb8efc83d731a5c00fc272b8dc412d3d07" protocol=ttrpc version=3 Sep 3 23:24:11.329173 systemd[1]: Started cri-containerd-5e6d907d8521d340697c43c2d536943d2179378ce2d87c04d28cd27764cfb6cd.scope - libcontainer container 5e6d907d8521d340697c43c2d536943d2179378ce2d87c04d28cd27764cfb6cd. Sep 3 23:24:11.329951 containerd[1916]: time="2025-09-03T23:24:11.329853732Z" level=info msg="connecting to shim 4f2f99fe33e8205bc4586970f76c45df7fe94ffb614eb594db575825d574d76e" address="unix:///run/containerd/s/74f8add1f6c0433a65f390eb998a1339c654dff87bd2376c77a0e5d126e0526d" namespace=k8s.io protocol=ttrpc version=3 Sep 3 23:24:11.384966 systemd[1]: Started cri-containerd-4f2f99fe33e8205bc4586970f76c45df7fe94ffb614eb594db575825d574d76e.scope - libcontainer container 4f2f99fe33e8205bc4586970f76c45df7fe94ffb614eb594db575825d574d76e. 
Sep 3 23:24:11.470677 containerd[1916]: time="2025-09-03T23:24:11.470478385Z" level=info msg="StartContainer for \"5e6d907d8521d340697c43c2d536943d2179378ce2d87c04d28cd27764cfb6cd\" returns successfully" Sep 3 23:24:11.514576 containerd[1916]: time="2025-09-03T23:24:11.514342465Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-2r49j,Uid:985e0fb2-dd67-41eb-a86d-46b6ce869cca,Namespace:kube-system,Attempt:0,} returns sandbox id \"4f2f99fe33e8205bc4586970f76c45df7fe94ffb614eb594db575825d574d76e\"" Sep 3 23:24:12.566756 kubelet[3503]: I0903 23:24:12.566339 3503 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-7fghz" podStartSLOduration=2.566285031 podStartE2EDuration="2.566285031s" podCreationTimestamp="2025-09-03 23:24:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-03 23:24:11.880957935 +0000 UTC m=+6.466896273" watchObservedRunningTime="2025-09-03 23:24:12.566285031 +0000 UTC m=+7.152223345" Sep 3 23:24:19.062540 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount449360113.mount: Deactivated successfully. Sep 3 23:24:21.858865 containerd[1916]: time="2025-09-03T23:24:21.858795265Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 3 23:24:21.861292 containerd[1916]: time="2025-09-03T23:24:21.861213325Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710" Sep 3 23:24:21.863445 containerd[1916]: time="2025-09-03T23:24:21.863310217Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 3 23:24:21.867541 containerd[1916]: time="2025-09-03T23:24:21.866605369Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 10.600055117s" Sep 3 23:24:21.867541 containerd[1916]: time="2025-09-03T23:24:21.866729977Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Sep 3 23:24:21.871023 containerd[1916]: time="2025-09-03T23:24:21.870970297Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Sep 3 23:24:21.875492 containerd[1916]: time="2025-09-03T23:24:21.875402713Z" level=info msg="CreateContainer within sandbox \"379b9933e5f1ccc6df9c7efe26adf4baed019a196213153ac9c9fd9dcbefb1bf\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 3 23:24:21.893924 containerd[1916]: time="2025-09-03T23:24:21.893861389Z" level=info msg="Container b4b85bd4ef93d22f254c1757cd7658cdd4d47654fefbccab6abfbcfd3accf2b4: CDI devices from CRI Config.CDIDevices: []" Sep 3 23:24:21.910986 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount3273892655.mount: Deactivated successfully. Sep 3 23:24:21.913353 containerd[1916]: time="2025-09-03T23:24:21.913280065Z" level=info msg="CreateContainer within sandbox \"379b9933e5f1ccc6df9c7efe26adf4baed019a196213153ac9c9fd9dcbefb1bf\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"b4b85bd4ef93d22f254c1757cd7658cdd4d47654fefbccab6abfbcfd3accf2b4\"" Sep 3 23:24:21.915510 containerd[1916]: time="2025-09-03T23:24:21.915202285Z" level=info msg="StartContainer for \"b4b85bd4ef93d22f254c1757cd7658cdd4d47654fefbccab6abfbcfd3accf2b4\"" Sep 3 23:24:21.917626 containerd[1916]: time="2025-09-03T23:24:21.917338429Z" level=info msg="connecting to shim b4b85bd4ef93d22f254c1757cd7658cdd4d47654fefbccab6abfbcfd3accf2b4" address="unix:///run/containerd/s/9db60021323e576622b86440e4f99b2309f151d30f6474912ff72c6352d81559" protocol=ttrpc version=3 Sep 3 23:24:21.960059 systemd[1]: Started cri-containerd-b4b85bd4ef93d22f254c1757cd7658cdd4d47654fefbccab6abfbcfd3accf2b4.scope - libcontainer container b4b85bd4ef93d22f254c1757cd7658cdd4d47654fefbccab6abfbcfd3accf2b4. Sep 3 23:24:22.039559 containerd[1916]: time="2025-09-03T23:24:22.039510622Z" level=info msg="StartContainer for \"b4b85bd4ef93d22f254c1757cd7658cdd4d47654fefbccab6abfbcfd3accf2b4\" returns successfully" Sep 3 23:24:22.072820 systemd[1]: cri-containerd-b4b85bd4ef93d22f254c1757cd7658cdd4d47654fefbccab6abfbcfd3accf2b4.scope: Deactivated successfully. Sep 3 23:24:22.081535 containerd[1916]: time="2025-09-03T23:24:22.081460114Z" level=info msg="received exit event container_id:\"b4b85bd4ef93d22f254c1757cd7658cdd4d47654fefbccab6abfbcfd3accf2b4\" id:\"b4b85bd4ef93d22f254c1757cd7658cdd4d47654fefbccab6abfbcfd3accf2b4\" pid:3921 exited_at:{seconds:1756941862 nanos:80745766}" Sep 3 23:24:22.081877 containerd[1916]: time="2025-09-03T23:24:22.081687610Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b4b85bd4ef93d22f254c1757cd7658cdd4d47654fefbccab6abfbcfd3accf2b4\" id:\"b4b85bd4ef93d22f254c1757cd7658cdd4d47654fefbccab6abfbcfd3accf2b4\" pid:3921 exited_at:{seconds:1756941862 nanos:80745766}" Sep 3 23:24:22.124407 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b4b85bd4ef93d22f254c1757cd7658cdd4d47654fefbccab6abfbcfd3accf2b4-rootfs.mount: Deactivated successfully. 
Sep 3 23:24:23.902783 containerd[1916]: time="2025-09-03T23:24:23.902101959Z" level=info msg="CreateContainer within sandbox \"379b9933e5f1ccc6df9c7efe26adf4baed019a196213153ac9c9fd9dcbefb1bf\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Sep 3 23:24:23.921318 containerd[1916]: time="2025-09-03T23:24:23.921233091Z" level=info msg="Container 3b01ea670ea0052a23674603deaa6bd92cd79a712ba6e95b055bea96f6082a81: CDI devices from CRI Config.CDIDevices: []" Sep 3 23:24:23.938609 containerd[1916]: time="2025-09-03T23:24:23.938492379Z" level=info msg="CreateContainer within sandbox \"379b9933e5f1ccc6df9c7efe26adf4baed019a196213153ac9c9fd9dcbefb1bf\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"3b01ea670ea0052a23674603deaa6bd92cd79a712ba6e95b055bea96f6082a81\"" Sep 3 23:24:23.944314 containerd[1916]: time="2025-09-03T23:24:23.940961775Z" level=info msg="StartContainer for \"3b01ea670ea0052a23674603deaa6bd92cd79a712ba6e95b055bea96f6082a81\"" Sep 3 23:24:23.948070 containerd[1916]: time="2025-09-03T23:24:23.947473071Z" level=info msg="connecting to shim 3b01ea670ea0052a23674603deaa6bd92cd79a712ba6e95b055bea96f6082a81" address="unix:///run/containerd/s/9db60021323e576622b86440e4f99b2309f151d30f6474912ff72c6352d81559" protocol=ttrpc version=3 Sep 3 23:24:23.996018 systemd[1]: Started cri-containerd-3b01ea670ea0052a23674603deaa6bd92cd79a712ba6e95b055bea96f6082a81.scope - libcontainer container 3b01ea670ea0052a23674603deaa6bd92cd79a712ba6e95b055bea96f6082a81. Sep 3 23:24:24.078309 containerd[1916]: time="2025-09-03T23:24:24.078170520Z" level=info msg="StartContainer for \"3b01ea670ea0052a23674603deaa6bd92cd79a712ba6e95b055bea96f6082a81\" returns successfully" Sep 3 23:24:24.104247 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 3 23:24:24.104846 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 3 23:24:24.106221 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Sep 3 23:24:24.112038 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 3 23:24:24.119623 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Sep 3 23:24:24.121531 systemd[1]: cri-containerd-3b01ea670ea0052a23674603deaa6bd92cd79a712ba6e95b055bea96f6082a81.scope: Deactivated successfully. Sep 3 23:24:24.123444 containerd[1916]: time="2025-09-03T23:24:24.123379632Z" level=info msg="received exit event container_id:\"3b01ea670ea0052a23674603deaa6bd92cd79a712ba6e95b055bea96f6082a81\" id:\"3b01ea670ea0052a23674603deaa6bd92cd79a712ba6e95b055bea96f6082a81\" pid:3967 exited_at:{seconds:1756941864 nanos:122960628}" Sep 3 23:24:24.126465 containerd[1916]: time="2025-09-03T23:24:24.126097704Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3b01ea670ea0052a23674603deaa6bd92cd79a712ba6e95b055bea96f6082a81\" id:\"3b01ea670ea0052a23674603deaa6bd92cd79a712ba6e95b055bea96f6082a81\" pid:3967 exited_at:{seconds:1756941864 nanos:122960628}" Sep 3 23:24:24.163788 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Sep 3 23:24:24.910626 containerd[1916]: time="2025-09-03T23:24:24.910556980Z" level=info msg="CreateContainer within sandbox \"379b9933e5f1ccc6df9c7efe26adf4baed019a196213153ac9c9fd9dcbefb1bf\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Sep 3 23:24:24.925300 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3b01ea670ea0052a23674603deaa6bd92cd79a712ba6e95b055bea96f6082a81-rootfs.mount: Deactivated successfully. Sep 3 23:24:24.934023 containerd[1916]: time="2025-09-03T23:24:24.933948832Z" level=info msg="Container 77361b9a6092660770835ef54dbb6d1d470cae46afce5ea630879695b741f1dd: CDI devices from CRI Config.CDIDevices: []" Sep 3 23:24:24.959159 containerd[1916]: time="2025-09-03T23:24:24.958430404Z" level=info msg="CreateContainer within sandbox \"379b9933e5f1ccc6df9c7efe26adf4baed019a196213153ac9c9fd9dcbefb1bf\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"77361b9a6092660770835ef54dbb6d1d470cae46afce5ea630879695b741f1dd\"" Sep 3 23:24:24.960669 containerd[1916]: time="2025-09-03T23:24:24.960601324Z" level=info msg="StartContainer for \"77361b9a6092660770835ef54dbb6d1d470cae46afce5ea630879695b741f1dd\"" Sep 3 23:24:24.965606 containerd[1916]: time="2025-09-03T23:24:24.965469976Z" level=info msg="connecting to shim 77361b9a6092660770835ef54dbb6d1d470cae46afce5ea630879695b741f1dd" address="unix:///run/containerd/s/9db60021323e576622b86440e4f99b2309f151d30f6474912ff72c6352d81559" protocol=ttrpc version=3 Sep 3 23:24:25.014424 systemd[1]: Started cri-containerd-77361b9a6092660770835ef54dbb6d1d470cae46afce5ea630879695b741f1dd.scope - libcontainer container 77361b9a6092660770835ef54dbb6d1d470cae46afce5ea630879695b741f1dd. Sep 3 23:24:25.118088 containerd[1916]: time="2025-09-03T23:24:25.117902581Z" level=info msg="StartContainer for \"77361b9a6092660770835ef54dbb6d1d470cae46afce5ea630879695b741f1dd\" returns successfully" Sep 3 23:24:25.126105 systemd[1]: cri-containerd-77361b9a6092660770835ef54dbb6d1d470cae46afce5ea630879695b741f1dd.scope: Deactivated successfully. Sep 3 23:24:25.131057 containerd[1916]: time="2025-09-03T23:24:25.130017793Z" level=info msg="received exit event container_id:\"77361b9a6092660770835ef54dbb6d1d470cae46afce5ea630879695b741f1dd\" id:\"77361b9a6092660770835ef54dbb6d1d470cae46afce5ea630879695b741f1dd\" pid:4022 exited_at:{seconds:1756941865 nanos:129208237}" Sep 3 23:24:25.131680 containerd[1916]: time="2025-09-03T23:24:25.130665505Z" level=info msg="TaskExit event in podsandbox handler container_id:\"77361b9a6092660770835ef54dbb6d1d470cae46afce5ea630879695b741f1dd\" id:\"77361b9a6092660770835ef54dbb6d1d470cae46afce5ea630879695b741f1dd\" pid:4022 exited_at:{seconds:1756941865 nanos:129208237}" Sep 3 23:24:25.182954 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-77361b9a6092660770835ef54dbb6d1d470cae46afce5ea630879695b741f1dd-rootfs.mount: Deactivated successfully. 
Sep 3 23:24:25.926604 containerd[1916]: time="2025-09-03T23:24:25.926326853Z" level=info msg="CreateContainer within sandbox \"379b9933e5f1ccc6df9c7efe26adf4baed019a196213153ac9c9fd9dcbefb1bf\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Sep 3 23:24:25.968864 containerd[1916]: time="2025-09-03T23:24:25.968653409Z" level=info msg="Container 8449afcf4368041816058143e825ff993f7229c65a5a6fc106bd47c114cba016: CDI devices from CRI Config.CDIDevices: []" Sep 3 23:24:25.989481 containerd[1916]: time="2025-09-03T23:24:25.989351609Z" level=info msg="CreateContainer within sandbox \"379b9933e5f1ccc6df9c7efe26adf4baed019a196213153ac9c9fd9dcbefb1bf\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"8449afcf4368041816058143e825ff993f7229c65a5a6fc106bd47c114cba016\"" Sep 3 23:24:25.991840 containerd[1916]: time="2025-09-03T23:24:25.991667285Z" level=info msg="StartContainer for \"8449afcf4368041816058143e825ff993f7229c65a5a6fc106bd47c114cba016\"" Sep 3 23:24:25.996653 containerd[1916]: time="2025-09-03T23:24:25.994685225Z" level=info msg="connecting to shim 8449afcf4368041816058143e825ff993f7229c65a5a6fc106bd47c114cba016" address="unix:///run/containerd/s/9db60021323e576622b86440e4f99b2309f151d30f6474912ff72c6352d81559" protocol=ttrpc version=3 Sep 3 23:24:26.075291 systemd[1]: Started cri-containerd-8449afcf4368041816058143e825ff993f7229c65a5a6fc106bd47c114cba016.scope - libcontainer container 8449afcf4368041816058143e825ff993f7229c65a5a6fc106bd47c114cba016. Sep 3 23:24:26.209642 systemd[1]: cri-containerd-8449afcf4368041816058143e825ff993f7229c65a5a6fc106bd47c114cba016.scope: Deactivated successfully. Sep 3 23:24:26.219807 containerd[1916]: time="2025-09-03T23:24:26.219032294Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8449afcf4368041816058143e825ff993f7229c65a5a6fc106bd47c114cba016\" id:\"8449afcf4368041816058143e825ff993f7229c65a5a6fc106bd47c114cba016\" pid:4068 exited_at:{seconds:1756941866 nanos:215627870}" Sep 3 23:24:26.220322 containerd[1916]: time="2025-09-03T23:24:26.220261226Z" level=info msg="received exit event container_id:\"8449afcf4368041816058143e825ff993f7229c65a5a6fc106bd47c114cba016\" id:\"8449afcf4368041816058143e825ff993f7229c65a5a6fc106bd47c114cba016\" pid:4068 exited_at:{seconds:1756941866 nanos:215627870}" Sep 3 23:24:26.253801 containerd[1916]: time="2025-09-03T23:24:26.253512651Z" level=info msg="StartContainer for \"8449afcf4368041816058143e825ff993f7229c65a5a6fc106bd47c114cba016\" returns successfully" Sep 3 23:24:26.298798 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8449afcf4368041816058143e825ff993f7229c65a5a6fc106bd47c114cba016-rootfs.mount: Deactivated successfully. 
Sep 3 23:24:26.602772 containerd[1916]: time="2025-09-03T23:24:26.602248684Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 3 23:24:26.605088 containerd[1916]: time="2025-09-03T23:24:26.604963996Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306" Sep 3 23:24:26.607758 containerd[1916]: time="2025-09-03T23:24:26.607557916Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 3 23:24:26.610747 containerd[1916]: time="2025-09-03T23:24:26.610582756Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 4.739348867s" Sep 3 23:24:26.610747 containerd[1916]: time="2025-09-03T23:24:26.610653400Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Sep 3 23:24:26.620351 containerd[1916]: time="2025-09-03T23:24:26.619660144Z" level=info msg="CreateContainer within sandbox \"4f2f99fe33e8205bc4586970f76c45df7fe94ffb614eb594db575825d574d76e\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Sep 3 23:24:26.643968 containerd[1916]: time="2025-09-03T23:24:26.643854713Z" level=info msg="Container 02911eb1f862d84a60cc9154e057d41a04c2940cac92fdacb7e804feb5b700b4: CDI devices from CRI Config.CDIDevices: []" Sep 3 23:24:26.661481 containerd[1916]: time="2025-09-03T23:24:26.660946157Z" level=info msg="CreateContainer within sandbox \"4f2f99fe33e8205bc4586970f76c45df7fe94ffb614eb594db575825d574d76e\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"02911eb1f862d84a60cc9154e057d41a04c2940cac92fdacb7e804feb5b700b4\"" Sep 3 23:24:26.665424 containerd[1916]: time="2025-09-03T23:24:26.665239361Z" level=info msg="StartContainer for \"02911eb1f862d84a60cc9154e057d41a04c2940cac92fdacb7e804feb5b700b4\"" Sep 3 23:24:26.670509 containerd[1916]: time="2025-09-03T23:24:26.670440809Z" level=info msg="connecting to shim 02911eb1f862d84a60cc9154e057d41a04c2940cac92fdacb7e804feb5b700b4" address="unix:///run/containerd/s/74f8add1f6c0433a65f390eb998a1339c654dff87bd2376c77a0e5d126e0526d" protocol=ttrpc version=3 Sep 3 23:24:26.710061 systemd[1]: Started cri-containerd-02911eb1f862d84a60cc9154e057d41a04c2940cac92fdacb7e804feb5b700b4.scope - libcontainer container 02911eb1f862d84a60cc9154e057d41a04c2940cac92fdacb7e804feb5b700b4. 
Sep 3 23:24:26.767132 containerd[1916]: time="2025-09-03T23:24:26.767049005Z" level=info msg="StartContainer for \"02911eb1f862d84a60cc9154e057d41a04c2940cac92fdacb7e804feb5b700b4\" returns successfully" Sep 3 23:24:26.946854 containerd[1916]: time="2025-09-03T23:24:26.946111122Z" level=info msg="CreateContainer within sandbox \"379b9933e5f1ccc6df9c7efe26adf4baed019a196213153ac9c9fd9dcbefb1bf\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Sep 3 23:24:26.975556 kubelet[3503]: I0903 23:24:26.975418 3503 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-2r49j" podStartSLOduration=1.879219375 podStartE2EDuration="16.975391014s" podCreationTimestamp="2025-09-03 23:24:10 +0000 UTC" firstStartedPulling="2025-09-03 23:24:11.517288537 +0000 UTC m=+6.103226851" lastFinishedPulling="2025-09-03 23:24:26.613460176 +0000 UTC m=+21.199398490" observedRunningTime="2025-09-03 23:24:26.97411863 +0000 UTC m=+21.560056932" watchObservedRunningTime="2025-09-03 23:24:26.975391014 +0000 UTC m=+21.561329424" Sep 3 23:24:26.997786 containerd[1916]: time="2025-09-03T23:24:26.996660150Z" level=info msg="Container 76b0cfc81bf4689e4dc616a12a4629011aadd630c15f2c18cfb3b39107705819: CDI devices from CRI Config.CDIDevices: []" Sep 3 23:24:26.999392 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4179135634.mount: Deactivated successfully. Sep 3 23:24:27.010957 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1140527289.mount: Deactivated successfully. Sep 3 23:24:27.042716 containerd[1916]: time="2025-09-03T23:24:27.042579735Z" level=info msg="CreateContainer within sandbox \"379b9933e5f1ccc6df9c7efe26adf4baed019a196213153ac9c9fd9dcbefb1bf\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"76b0cfc81bf4689e4dc616a12a4629011aadd630c15f2c18cfb3b39107705819\"" Sep 3 23:24:27.045408 containerd[1916]: time="2025-09-03T23:24:27.044951475Z" level=info msg="StartContainer for \"76b0cfc81bf4689e4dc616a12a4629011aadd630c15f2c18cfb3b39107705819\"" Sep 3 23:24:27.048230 containerd[1916]: time="2025-09-03T23:24:27.047850447Z" level=info msg="connecting to shim 76b0cfc81bf4689e4dc616a12a4629011aadd630c15f2c18cfb3b39107705819" address="unix:///run/containerd/s/9db60021323e576622b86440e4f99b2309f151d30f6474912ff72c6352d81559" protocol=ttrpc version=3 Sep 3 23:24:27.103527 systemd[1]: Started cri-containerd-76b0cfc81bf4689e4dc616a12a4629011aadd630c15f2c18cfb3b39107705819.scope - libcontainer container 76b0cfc81bf4689e4dc616a12a4629011aadd630c15f2c18cfb3b39107705819. Sep 3 23:24:27.264273 containerd[1916]: time="2025-09-03T23:24:27.263633332Z" level=info msg="StartContainer for \"76b0cfc81bf4689e4dc616a12a4629011aadd630c15f2c18cfb3b39107705819\" returns successfully" Sep 3 23:24:27.553942 containerd[1916]: time="2025-09-03T23:24:27.553028681Z" level=info msg="TaskExit event in podsandbox handler container_id:\"76b0cfc81bf4689e4dc616a12a4629011aadd630c15f2c18cfb3b39107705819\" id:\"368cdc8694a17318ae363ca720608e843c67b5d479b4ba5134074127a40665c9\" pid:4168 exited_at:{seconds:1756941867 nanos:552158573}" Sep 3 23:24:27.564259 kubelet[3503]: I0903 23:24:27.564201 3503 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Sep 3 23:24:27.648178 systemd[1]: Created slice kubepods-burstable-pod3e743216_9015_4148_be9d_0e0331a318cb.slice - libcontainer container kubepods-burstable-pod3e743216_9015_4148_be9d_0e0331a318cb.slice. 
Sep 3 23:24:27.677559 systemd[1]: Created slice kubepods-burstable-pod9a57c3a3_f0e7_4d5a_a8e4_e0d80ec9f016.slice - libcontainer container kubepods-burstable-pod9a57c3a3_f0e7_4d5a_a8e4_e0d80ec9f016.slice. Sep 3 23:24:27.721478 kubelet[3503]: I0903 23:24:27.721392 3503 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9a57c3a3-f0e7-4d5a-a8e4-e0d80ec9f016-config-volume\") pod \"coredns-7c65d6cfc9-wgdfh\" (UID: \"9a57c3a3-f0e7-4d5a-a8e4-e0d80ec9f016\") " pod="kube-system/coredns-7c65d6cfc9-wgdfh" Sep 3 23:24:27.721478 kubelet[3503]: I0903 23:24:27.721479 3503 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cjlc4\" (UniqueName: \"kubernetes.io/projected/9a57c3a3-f0e7-4d5a-a8e4-e0d80ec9f016-kube-api-access-cjlc4\") pod \"coredns-7c65d6cfc9-wgdfh\" (UID: \"9a57c3a3-f0e7-4d5a-a8e4-e0d80ec9f016\") " pod="kube-system/coredns-7c65d6cfc9-wgdfh" Sep 3 23:24:27.721870 kubelet[3503]: I0903 23:24:27.721541 3503 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3e743216-9015-4148-be9d-0e0331a318cb-config-volume\") pod \"coredns-7c65d6cfc9-gz2x9\" (UID: \"3e743216-9015-4148-be9d-0e0331a318cb\") " pod="kube-system/coredns-7c65d6cfc9-gz2x9" Sep 3 23:24:27.722242 kubelet[3503]: I0903 23:24:27.722130 3503 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rhzhl\" (UniqueName: \"kubernetes.io/projected/3e743216-9015-4148-be9d-0e0331a318cb-kube-api-access-rhzhl\") pod \"coredns-7c65d6cfc9-gz2x9\" (UID: \"3e743216-9015-4148-be9d-0e0331a318cb\") " pod="kube-system/coredns-7c65d6cfc9-gz2x9" Sep 3 23:24:27.968719 containerd[1916]: time="2025-09-03T23:24:27.967578403Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-gz2x9,Uid:3e743216-9015-4148-be9d-0e0331a318cb,Namespace:kube-system,Attempt:0,}" Sep 3 23:24:27.997451 containerd[1916]: time="2025-09-03T23:24:27.997098367Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-wgdfh,Uid:9a57c3a3-f0e7-4d5a-a8e4-e0d80ec9f016,Namespace:kube-system,Attempt:0,}" Sep 3 23:24:32.247049 (udev-worker)[4234]: Network interface NamePolicy= disabled on kernel command line. Sep 3 23:24:32.249955 systemd-networkd[1749]: cilium_host: Link UP Sep 3 23:24:32.250401 systemd-networkd[1749]: cilium_net: Link UP Sep 3 23:24:32.251428 (udev-worker)[4232]: Network interface NamePolicy= disabled on kernel command line. 
Sep 3 23:24:32.252672 systemd-networkd[1749]: cilium_net: Gained carrier Sep 3 23:24:32.255304 systemd-networkd[1749]: cilium_host: Gained carrier Sep 3 23:24:32.471228 systemd-networkd[1749]: cilium_vxlan: Link UP Sep 3 23:24:32.471248 systemd-networkd[1749]: cilium_vxlan: Gained carrier Sep 3 23:24:33.121848 kernel: NET: Registered PF_ALG protocol family Sep 3 23:24:33.253452 systemd-networkd[1749]: cilium_net: Gained IPv6LL Sep 3 23:24:33.316059 systemd-networkd[1749]: cilium_host: Gained IPv6LL Sep 3 23:24:34.211999 systemd-networkd[1749]: cilium_vxlan: Gained IPv6LL Sep 3 23:24:34.705969 systemd-networkd[1749]: lxc_health: Link UP Sep 3 23:24:34.721045 systemd-networkd[1749]: lxc_health: Gained carrier Sep 3 23:24:35.106754 kubelet[3503]: I0903 23:24:35.105238 3503 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-9tv7r" podStartSLOduration=14.499368506 podStartE2EDuration="25.105212747s" podCreationTimestamp="2025-09-03 23:24:10 +0000 UTC" firstStartedPulling="2025-09-03 23:24:11.262733952 +0000 UTC m=+5.848672278" lastFinishedPulling="2025-09-03 23:24:21.868578097 +0000 UTC m=+16.454516519" observedRunningTime="2025-09-03 23:24:28.178583584 +0000 UTC m=+22.764521922" watchObservedRunningTime="2025-09-03 23:24:35.105212747 +0000 UTC m=+29.691151061" Sep 3 23:24:35.149100 (udev-worker)[4278]: Network interface NamePolicy= disabled on kernel command line. Sep 3 23:24:35.153460 (udev-worker)[4277]: Network interface NamePolicy= disabled on kernel command line. Sep 3 23:24:35.165905 kernel: eth0: renamed from tmpa5954 Sep 3 23:24:35.167155 systemd-networkd[1749]: lxc01f49ad5baf4: Link UP Sep 3 23:24:35.169446 systemd-networkd[1749]: lxc999518e05a94: Link UP Sep 3 23:24:35.181891 kernel: eth0: renamed from tmpd044a Sep 3 23:24:35.183237 systemd-networkd[1749]: lxc999518e05a94: Gained carrier Sep 3 23:24:35.190966 systemd-networkd[1749]: lxc01f49ad5baf4: Gained carrier Sep 3 23:24:35.876433 systemd-networkd[1749]: lxc_health: Gained IPv6LL Sep 3 23:24:36.388171 systemd-networkd[1749]: lxc999518e05a94: Gained IPv6LL Sep 3 23:24:37.220067 systemd-networkd[1749]: lxc01f49ad5baf4: Gained IPv6LL Sep 3 23:24:40.012770 ntpd[1883]: Listen normally on 8 cilium_host 192.168.0.37:123 Sep 3 23:24:40.014870 ntpd[1883]: 3 Sep 23:24:40 ntpd[1883]: Listen normally on 8 cilium_host 192.168.0.37:123 Sep 3 23:24:40.014870 ntpd[1883]: 3 Sep 23:24:40 ntpd[1883]: Listen normally on 9 cilium_net [fe80::d021:5eff:fe46:ab3c%4]:123 Sep 3 23:24:40.014870 ntpd[1883]: 3 Sep 23:24:40 ntpd[1883]: Listen normally on 10 cilium_host [fe80::8c0f:6aff:fe3b:7fa8%5]:123 Sep 3 23:24:40.014870 ntpd[1883]: 3 Sep 23:24:40 ntpd[1883]: Listen normally on 11 cilium_vxlan [fe80::cb7:23ff:fe97:fd29%6]:123 Sep 3 23:24:40.014870 ntpd[1883]: 3 Sep 23:24:40 ntpd[1883]: Listen normally on 12 lxc_health [fe80::4084:ceff:fe15:35c4%8]:123 Sep 3 23:24:40.014870 ntpd[1883]: 3 Sep 23:24:40 ntpd[1883]: Listen normally on 13 lxc999518e05a94 [fe80::244e:93ff:feb0:6de0%10]:123 Sep 3 23:24:40.014870 ntpd[1883]: 3 Sep 23:24:40 ntpd[1883]: Listen normally on 14 lxc01f49ad5baf4 [fe80::dc6f:21ff:fe2c:b11e%12]:123 Sep 3 23:24:40.012892 ntpd[1883]: Listen normally on 9 cilium_net [fe80::d021:5eff:fe46:ab3c%4]:123 Sep 3 23:24:40.012970 ntpd[1883]: Listen normally on 10 cilium_host [fe80::8c0f:6aff:fe3b:7fa8%5]:123 Sep 3 23:24:40.013035 ntpd[1883]: Listen normally on 11 cilium_vxlan [fe80::cb7:23ff:fe97:fd29%6]:123 Sep 3 23:24:40.013098 ntpd[1883]: Listen normally on 12 lxc_health [fe80::4084:ceff:fe15:35c4%8]:123 Sep 
3 23:24:40.013168 ntpd[1883]: Listen normally on 13 lxc999518e05a94 [fe80::244e:93ff:feb0:6de0%10]:123 Sep 3 23:24:40.013239 ntpd[1883]: Listen normally on 14 lxc01f49ad5baf4 [fe80::dc6f:21ff:fe2c:b11e%12]:123 Sep 3 23:24:44.161284 containerd[1916]: time="2025-09-03T23:24:44.160927616Z" level=info msg="connecting to shim d044a9245c7193cb365225bbe62ccb3ca86167d1140348f7244c791857d15b90" address="unix:///run/containerd/s/dc18a08e447a02b14b34ec45edef5666fb1e8ca909eef43482c63591e7d81731" namespace=k8s.io protocol=ttrpc version=3 Sep 3 23:24:44.167764 containerd[1916]: time="2025-09-03T23:24:44.167663564Z" level=info msg="connecting to shim a59545e1f811265a31f5eed69f44b5436a0a7d483e1f9591959e37df961aeb56" address="unix:///run/containerd/s/b039790d7d90fce2b3a4fde18de2447c666269f6df0f1be9c885aa85707bcbc3" namespace=k8s.io protocol=ttrpc version=3 Sep 3 23:24:44.261323 systemd[1]: Started cri-containerd-a59545e1f811265a31f5eed69f44b5436a0a7d483e1f9591959e37df961aeb56.scope - libcontainer container a59545e1f811265a31f5eed69f44b5436a0a7d483e1f9591959e37df961aeb56. Sep 3 23:24:44.278851 systemd[1]: Started cri-containerd-d044a9245c7193cb365225bbe62ccb3ca86167d1140348f7244c791857d15b90.scope - libcontainer container d044a9245c7193cb365225bbe62ccb3ca86167d1140348f7244c791857d15b90. Sep 3 23:24:44.398927 containerd[1916]: time="2025-09-03T23:24:44.398805105Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-gz2x9,Uid:3e743216-9015-4148-be9d-0e0331a318cb,Namespace:kube-system,Attempt:0,} returns sandbox id \"a59545e1f811265a31f5eed69f44b5436a0a7d483e1f9591959e37df961aeb56\"" Sep 3 23:24:44.409069 containerd[1916]: time="2025-09-03T23:24:44.408882453Z" level=info msg="CreateContainer within sandbox \"a59545e1f811265a31f5eed69f44b5436a0a7d483e1f9591959e37df961aeb56\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 3 23:24:44.418635 containerd[1916]: time="2025-09-03T23:24:44.418477293Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-wgdfh,Uid:9a57c3a3-f0e7-4d5a-a8e4-e0d80ec9f016,Namespace:kube-system,Attempt:0,} returns sandbox id \"d044a9245c7193cb365225bbe62ccb3ca86167d1140348f7244c791857d15b90\"" Sep 3 23:24:44.424917 containerd[1916]: time="2025-09-03T23:24:44.424856949Z" level=info msg="CreateContainer within sandbox \"d044a9245c7193cb365225bbe62ccb3ca86167d1140348f7244c791857d15b90\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 3 23:24:44.443678 containerd[1916]: time="2025-09-03T23:24:44.442964541Z" level=info msg="Container ffe01e376d3a8b6ff6b34548af53edb6f8e7d3db94702031389fac2628887ade: CDI devices from CRI Config.CDIDevices: []" Sep 3 23:24:44.450832 containerd[1916]: time="2025-09-03T23:24:44.450661233Z" level=info msg="Container 9d3216e261fd3fd301293675365e27529f7e93677db2eccea7006326fd4d95a2: CDI devices from CRI Config.CDIDevices: []" Sep 3 23:24:44.458613 containerd[1916]: time="2025-09-03T23:24:44.458556021Z" level=info msg="CreateContainer within sandbox \"a59545e1f811265a31f5eed69f44b5436a0a7d483e1f9591959e37df961aeb56\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"ffe01e376d3a8b6ff6b34548af53edb6f8e7d3db94702031389fac2628887ade\"" Sep 3 23:24:44.461054 containerd[1916]: time="2025-09-03T23:24:44.460195989Z" level=info msg="StartContainer for \"ffe01e376d3a8b6ff6b34548af53edb6f8e7d3db94702031389fac2628887ade\"" Sep 3 23:24:44.464436 containerd[1916]: time="2025-09-03T23:24:44.464356725Z" level=info msg="connecting to shim 
ffe01e376d3a8b6ff6b34548af53edb6f8e7d3db94702031389fac2628887ade" address="unix:///run/containerd/s/b039790d7d90fce2b3a4fde18de2447c666269f6df0f1be9c885aa85707bcbc3" protocol=ttrpc version=3 Sep 3 23:24:44.475560 containerd[1916]: time="2025-09-03T23:24:44.475486245Z" level=info msg="CreateContainer within sandbox \"d044a9245c7193cb365225bbe62ccb3ca86167d1140348f7244c791857d15b90\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"9d3216e261fd3fd301293675365e27529f7e93677db2eccea7006326fd4d95a2\"" Sep 3 23:24:44.477647 containerd[1916]: time="2025-09-03T23:24:44.477125805Z" level=info msg="StartContainer for \"9d3216e261fd3fd301293675365e27529f7e93677db2eccea7006326fd4d95a2\"" Sep 3 23:24:44.479183 containerd[1916]: time="2025-09-03T23:24:44.479115501Z" level=info msg="connecting to shim 9d3216e261fd3fd301293675365e27529f7e93677db2eccea7006326fd4d95a2" address="unix:///run/containerd/s/dc18a08e447a02b14b34ec45edef5666fb1e8ca909eef43482c63591e7d81731" protocol=ttrpc version=3 Sep 3 23:24:44.511091 systemd[1]: Started cri-containerd-ffe01e376d3a8b6ff6b34548af53edb6f8e7d3db94702031389fac2628887ade.scope - libcontainer container ffe01e376d3a8b6ff6b34548af53edb6f8e7d3db94702031389fac2628887ade. Sep 3 23:24:44.552020 systemd[1]: Started cri-containerd-9d3216e261fd3fd301293675365e27529f7e93677db2eccea7006326fd4d95a2.scope - libcontainer container 9d3216e261fd3fd301293675365e27529f7e93677db2eccea7006326fd4d95a2. Sep 3 23:24:44.638126 containerd[1916]: time="2025-09-03T23:24:44.638078722Z" level=info msg="StartContainer for \"ffe01e376d3a8b6ff6b34548af53edb6f8e7d3db94702031389fac2628887ade\" returns successfully" Sep 3 23:24:44.661441 containerd[1916]: time="2025-09-03T23:24:44.661321594Z" level=info msg="StartContainer for \"9d3216e261fd3fd301293675365e27529f7e93677db2eccea7006326fd4d95a2\" returns successfully" Sep 3 23:24:45.055667 kubelet[3503]: I0903 23:24:45.055543 3503 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-gz2x9" podStartSLOduration=35.055522064 podStartE2EDuration="35.055522064s" podCreationTimestamp="2025-09-03 23:24:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-03 23:24:45.054488912 +0000 UTC m=+39.640427238" watchObservedRunningTime="2025-09-03 23:24:45.055522064 +0000 UTC m=+39.641460390" Sep 3 23:24:45.119464 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1511149546.mount: Deactivated successfully. Sep 3 23:24:45.143471 kubelet[3503]: I0903 23:24:45.142626 3503 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-wgdfh" podStartSLOduration=35.142602716 podStartE2EDuration="35.142602716s" podCreationTimestamp="2025-09-03 23:24:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-03 23:24:45.139443104 +0000 UTC m=+39.725381430" watchObservedRunningTime="2025-09-03 23:24:45.142602716 +0000 UTC m=+39.728541018" Sep 3 23:24:57.046151 systemd[1]: Started sshd@9-172.31.18.182:22-139.178.89.65:33602.service - OpenSSH per-connection server daemon (139.178.89.65:33602). 
Sep 3 23:24:57.254745 sshd[4811]: Accepted publickey for core from 139.178.89.65 port 33602 ssh2: RSA SHA256:8eQAyPE99YHHVtDm+V4mP5sHyPbVNBHa6xDGC+ww79Y Sep 3 23:24:57.257645 sshd-session[4811]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 3 23:24:57.265686 systemd-logind[1891]: New session 10 of user core. Sep 3 23:24:57.271948 systemd[1]: Started session-10.scope - Session 10 of User core. Sep 3 23:24:57.560467 sshd[4813]: Connection closed by 139.178.89.65 port 33602 Sep 3 23:24:57.559301 sshd-session[4811]: pam_unix(sshd:session): session closed for user core Sep 3 23:24:57.565070 systemd-logind[1891]: Session 10 logged out. Waiting for processes to exit. Sep 3 23:24:57.566114 systemd[1]: sshd@9-172.31.18.182:22-139.178.89.65:33602.service: Deactivated successfully. Sep 3 23:24:57.569649 systemd[1]: session-10.scope: Deactivated successfully. Sep 3 23:24:57.576214 systemd-logind[1891]: Removed session 10. Sep 3 23:25:02.600188 systemd[1]: Started sshd@10-172.31.18.182:22-139.178.89.65:43860.service - OpenSSH per-connection server daemon (139.178.89.65:43860). Sep 3 23:25:02.808715 sshd[4826]: Accepted publickey for core from 139.178.89.65 port 43860 ssh2: RSA SHA256:8eQAyPE99YHHVtDm+V4mP5sHyPbVNBHa6xDGC+ww79Y Sep 3 23:25:02.811451 sshd-session[4826]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 3 23:25:02.821831 systemd-logind[1891]: New session 11 of user core. Sep 3 23:25:02.835024 systemd[1]: Started session-11.scope - Session 11 of User core. Sep 3 23:25:03.097705 sshd[4828]: Connection closed by 139.178.89.65 port 43860 Sep 3 23:25:03.098601 sshd-session[4826]: pam_unix(sshd:session): session closed for user core Sep 3 23:25:03.107432 systemd-logind[1891]: Session 11 logged out. Waiting for processes to exit. Sep 3 23:25:03.108901 systemd[1]: sshd@10-172.31.18.182:22-139.178.89.65:43860.service: Deactivated successfully. Sep 3 23:25:03.114421 systemd[1]: session-11.scope: Deactivated successfully. Sep 3 23:25:03.118779 systemd-logind[1891]: Removed session 11. Sep 3 23:25:08.137407 systemd[1]: Started sshd@11-172.31.18.182:22-139.178.89.65:43866.service - OpenSSH per-connection server daemon (139.178.89.65:43866). Sep 3 23:25:08.338364 sshd[4843]: Accepted publickey for core from 139.178.89.65 port 43866 ssh2: RSA SHA256:8eQAyPE99YHHVtDm+V4mP5sHyPbVNBHa6xDGC+ww79Y Sep 3 23:25:08.341889 sshd-session[4843]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 3 23:25:08.351289 systemd-logind[1891]: New session 12 of user core. Sep 3 23:25:08.365029 systemd[1]: Started session-12.scope - Session 12 of User core. Sep 3 23:25:08.630194 sshd[4845]: Connection closed by 139.178.89.65 port 43866 Sep 3 23:25:08.631627 sshd-session[4843]: pam_unix(sshd:session): session closed for user core Sep 3 23:25:08.644646 systemd[1]: sshd@11-172.31.18.182:22-139.178.89.65:43866.service: Deactivated successfully. Sep 3 23:25:08.651786 systemd[1]: session-12.scope: Deactivated successfully. Sep 3 23:25:08.658819 systemd-logind[1891]: Session 12 logged out. Waiting for processes to exit. Sep 3 23:25:08.662150 systemd-logind[1891]: Removed session 12. Sep 3 23:25:13.673853 systemd[1]: Started sshd@12-172.31.18.182:22-139.178.89.65:59036.service - OpenSSH per-connection server daemon (139.178.89.65:59036). 
Sep 3 23:25:13.878185 sshd[4860]: Accepted publickey for core from 139.178.89.65 port 59036 ssh2: RSA SHA256:8eQAyPE99YHHVtDm+V4mP5sHyPbVNBHa6xDGC+ww79Y Sep 3 23:25:13.880784 sshd-session[4860]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 3 23:25:13.890889 systemd-logind[1891]: New session 13 of user core. Sep 3 23:25:13.901082 systemd[1]: Started session-13.scope - Session 13 of User core. Sep 3 23:25:14.163081 sshd[4862]: Connection closed by 139.178.89.65 port 59036 Sep 3 23:25:14.164174 sshd-session[4860]: pam_unix(sshd:session): session closed for user core Sep 3 23:25:14.171544 systemd[1]: sshd@12-172.31.18.182:22-139.178.89.65:59036.service: Deactivated successfully. Sep 3 23:25:14.178483 systemd[1]: session-13.scope: Deactivated successfully. Sep 3 23:25:14.181478 systemd-logind[1891]: Session 13 logged out. Waiting for processes to exit. Sep 3 23:25:14.206454 systemd-logind[1891]: Removed session 13. Sep 3 23:25:14.207326 systemd[1]: Started sshd@13-172.31.18.182:22-139.178.89.65:59044.service - OpenSSH per-connection server daemon (139.178.89.65:59044). Sep 3 23:25:14.408972 sshd[4876]: Accepted publickey for core from 139.178.89.65 port 59044 ssh2: RSA SHA256:8eQAyPE99YHHVtDm+V4mP5sHyPbVNBHa6xDGC+ww79Y Sep 3 23:25:14.411624 sshd-session[4876]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 3 23:25:14.421787 systemd-logind[1891]: New session 14 of user core. Sep 3 23:25:14.430040 systemd[1]: Started session-14.scope - Session 14 of User core. Sep 3 23:25:14.772669 sshd[4878]: Connection closed by 139.178.89.65 port 59044 Sep 3 23:25:14.775001 sshd-session[4876]: pam_unix(sshd:session): session closed for user core Sep 3 23:25:14.786544 systemd[1]: sshd@13-172.31.18.182:22-139.178.89.65:59044.service: Deactivated successfully. Sep 3 23:25:14.794288 systemd[1]: session-14.scope: Deactivated successfully. Sep 3 23:25:14.800853 systemd-logind[1891]: Session 14 logged out. Waiting for processes to exit. Sep 3 23:25:14.825266 systemd[1]: Started sshd@14-172.31.18.182:22-139.178.89.65:59056.service - OpenSSH per-connection server daemon (139.178.89.65:59056). Sep 3 23:25:14.830571 systemd-logind[1891]: Removed session 14. Sep 3 23:25:15.044632 sshd[4888]: Accepted publickey for core from 139.178.89.65 port 59056 ssh2: RSA SHA256:8eQAyPE99YHHVtDm+V4mP5sHyPbVNBHa6xDGC+ww79Y Sep 3 23:25:15.046955 sshd-session[4888]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 3 23:25:15.056264 systemd-logind[1891]: New session 15 of user core. Sep 3 23:25:15.067071 systemd[1]: Started session-15.scope - Session 15 of User core. Sep 3 23:25:15.325332 sshd[4890]: Connection closed by 139.178.89.65 port 59056 Sep 3 23:25:15.326502 sshd-session[4888]: pam_unix(sshd:session): session closed for user core Sep 3 23:25:15.336066 systemd[1]: sshd@14-172.31.18.182:22-139.178.89.65:59056.service: Deactivated successfully. Sep 3 23:25:15.341284 systemd[1]: session-15.scope: Deactivated successfully. Sep 3 23:25:15.343452 systemd-logind[1891]: Session 15 logged out. Waiting for processes to exit. Sep 3 23:25:15.348924 systemd-logind[1891]: Removed session 15. Sep 3 23:25:20.376817 systemd[1]: Started sshd@15-172.31.18.182:22-139.178.89.65:34622.service - OpenSSH per-connection server daemon (139.178.89.65:34622). 
Sep 3 23:25:20.581630 sshd[4904]: Accepted publickey for core from 139.178.89.65 port 34622 ssh2: RSA SHA256:8eQAyPE99YHHVtDm+V4mP5sHyPbVNBHa6xDGC+ww79Y Sep 3 23:25:20.584533 sshd-session[4904]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 3 23:25:20.593870 systemd-logind[1891]: New session 16 of user core. Sep 3 23:25:20.609057 systemd[1]: Started session-16.scope - Session 16 of User core. Sep 3 23:25:20.871421 sshd[4906]: Connection closed by 139.178.89.65 port 34622 Sep 3 23:25:20.872320 sshd-session[4904]: pam_unix(sshd:session): session closed for user core Sep 3 23:25:20.880853 systemd[1]: sshd@15-172.31.18.182:22-139.178.89.65:34622.service: Deactivated successfully. Sep 3 23:25:20.885247 systemd[1]: session-16.scope: Deactivated successfully. Sep 3 23:25:20.888211 systemd-logind[1891]: Session 16 logged out. Waiting for processes to exit. Sep 3 23:25:20.892771 systemd-logind[1891]: Removed session 16. Sep 3 23:25:25.910282 systemd[1]: Started sshd@16-172.31.18.182:22-139.178.89.65:34624.service - OpenSSH per-connection server daemon (139.178.89.65:34624). Sep 3 23:25:26.132789 sshd[4919]: Accepted publickey for core from 139.178.89.65 port 34624 ssh2: RSA SHA256:8eQAyPE99YHHVtDm+V4mP5sHyPbVNBHa6xDGC+ww79Y Sep 3 23:25:26.136181 sshd-session[4919]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 3 23:25:26.146814 systemd-logind[1891]: New session 17 of user core. Sep 3 23:25:26.155036 systemd[1]: Started session-17.scope - Session 17 of User core. Sep 3 23:25:26.413919 sshd[4921]: Connection closed by 139.178.89.65 port 34624 Sep 3 23:25:26.413778 sshd-session[4919]: pam_unix(sshd:session): session closed for user core Sep 3 23:25:26.419651 systemd[1]: sshd@16-172.31.18.182:22-139.178.89.65:34624.service: Deactivated successfully. Sep 3 23:25:26.423727 systemd[1]: session-17.scope: Deactivated successfully. Sep 3 23:25:26.430801 systemd-logind[1891]: Session 17 logged out. Waiting for processes to exit. Sep 3 23:25:26.433485 systemd-logind[1891]: Removed session 17. Sep 3 23:25:31.465168 systemd[1]: Started sshd@17-172.31.18.182:22-139.178.89.65:43494.service - OpenSSH per-connection server daemon (139.178.89.65:43494). Sep 3 23:25:31.669007 sshd[4933]: Accepted publickey for core from 139.178.89.65 port 43494 ssh2: RSA SHA256:8eQAyPE99YHHVtDm+V4mP5sHyPbVNBHa6xDGC+ww79Y Sep 3 23:25:31.671581 sshd-session[4933]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 3 23:25:31.680985 systemd-logind[1891]: New session 18 of user core. Sep 3 23:25:31.688076 systemd[1]: Started session-18.scope - Session 18 of User core. Sep 3 23:25:31.928871 sshd[4935]: Connection closed by 139.178.89.65 port 43494 Sep 3 23:25:31.930722 sshd-session[4933]: pam_unix(sshd:session): session closed for user core Sep 3 23:25:31.937042 systemd[1]: sshd@17-172.31.18.182:22-139.178.89.65:43494.service: Deactivated successfully. Sep 3 23:25:31.942946 systemd[1]: session-18.scope: Deactivated successfully. Sep 3 23:25:31.946024 systemd-logind[1891]: Session 18 logged out. Waiting for processes to exit. Sep 3 23:25:31.948837 systemd-logind[1891]: Removed session 18. Sep 3 23:25:31.968285 systemd[1]: Started sshd@18-172.31.18.182:22-139.178.89.65:43510.service - OpenSSH per-connection server daemon (139.178.89.65:43510). 
Sep 3 23:25:32.180177 sshd[4947]: Accepted publickey for core from 139.178.89.65 port 43510 ssh2: RSA SHA256:8eQAyPE99YHHVtDm+V4mP5sHyPbVNBHa6xDGC+ww79Y Sep 3 23:25:32.182915 sshd-session[4947]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 3 23:25:32.191158 systemd-logind[1891]: New session 19 of user core. Sep 3 23:25:32.200974 systemd[1]: Started session-19.scope - Session 19 of User core. Sep 3 23:25:32.530162 sshd[4949]: Connection closed by 139.178.89.65 port 43510 Sep 3 23:25:32.531805 sshd-session[4947]: pam_unix(sshd:session): session closed for user core Sep 3 23:25:32.538139 systemd-logind[1891]: Session 19 logged out. Waiting for processes to exit. Sep 3 23:25:32.540285 systemd[1]: sshd@18-172.31.18.182:22-139.178.89.65:43510.service: Deactivated successfully. Sep 3 23:25:32.546489 systemd[1]: session-19.scope: Deactivated successfully. Sep 3 23:25:32.550988 systemd-logind[1891]: Removed session 19. Sep 3 23:25:32.569236 systemd[1]: Started sshd@19-172.31.18.182:22-139.178.89.65:43518.service - OpenSSH per-connection server daemon (139.178.89.65:43518). Sep 3 23:25:32.759295 sshd[4959]: Accepted publickey for core from 139.178.89.65 port 43518 ssh2: RSA SHA256:8eQAyPE99YHHVtDm+V4mP5sHyPbVNBHa6xDGC+ww79Y Sep 3 23:25:32.761609 sshd-session[4959]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 3 23:25:32.770175 systemd-logind[1891]: New session 20 of user core. Sep 3 23:25:32.777964 systemd[1]: Started session-20.scope - Session 20 of User core. Sep 3 23:25:35.188138 sshd[4961]: Connection closed by 139.178.89.65 port 43518 Sep 3 23:25:35.189486 sshd-session[4959]: pam_unix(sshd:session): session closed for user core Sep 3 23:25:35.202986 systemd[1]: sshd@19-172.31.18.182:22-139.178.89.65:43518.service: Deactivated successfully. Sep 3 23:25:35.210883 systemd[1]: session-20.scope: Deactivated successfully. Sep 3 23:25:35.216592 systemd-logind[1891]: Session 20 logged out. Waiting for processes to exit. Sep 3 23:25:35.242485 systemd[1]: Started sshd@20-172.31.18.182:22-139.178.89.65:43520.service - OpenSSH per-connection server daemon (139.178.89.65:43520). Sep 3 23:25:35.246570 systemd-logind[1891]: Removed session 20. Sep 3 23:25:35.444870 sshd[4978]: Accepted publickey for core from 139.178.89.65 port 43520 ssh2: RSA SHA256:8eQAyPE99YHHVtDm+V4mP5sHyPbVNBHa6xDGC+ww79Y Sep 3 23:25:35.447602 sshd-session[4978]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 3 23:25:35.456567 systemd-logind[1891]: New session 21 of user core. Sep 3 23:25:35.465987 systemd[1]: Started session-21.scope - Session 21 of User core. Sep 3 23:25:36.012428 sshd[4980]: Connection closed by 139.178.89.65 port 43520 Sep 3 23:25:36.011853 sshd-session[4978]: pam_unix(sshd:session): session closed for user core Sep 3 23:25:36.023598 systemd[1]: sshd@20-172.31.18.182:22-139.178.89.65:43520.service: Deactivated successfully. Sep 3 23:25:36.028173 systemd[1]: session-21.scope: Deactivated successfully. Sep 3 23:25:36.031498 systemd-logind[1891]: Session 21 logged out. Waiting for processes to exit. Sep 3 23:25:36.053824 systemd[1]: Started sshd@21-172.31.18.182:22-139.178.89.65:43528.service - OpenSSH per-connection server daemon (139.178.89.65:43528). Sep 3 23:25:36.056879 systemd-logind[1891]: Removed session 21. 
Sep 3 23:25:36.259917 sshd[4990]: Accepted publickey for core from 139.178.89.65 port 43528 ssh2: RSA SHA256:8eQAyPE99YHHVtDm+V4mP5sHyPbVNBHa6xDGC+ww79Y Sep 3 23:25:36.262483 sshd-session[4990]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 3 23:25:36.271376 systemd-logind[1891]: New session 22 of user core. Sep 3 23:25:36.280051 systemd[1]: Started session-22.scope - Session 22 of User core. Sep 3 23:25:36.521817 sshd[4992]: Connection closed by 139.178.89.65 port 43528 Sep 3 23:25:36.522460 sshd-session[4990]: pam_unix(sshd:session): session closed for user core Sep 3 23:25:36.534468 systemd[1]: sshd@21-172.31.18.182:22-139.178.89.65:43528.service: Deactivated successfully. Sep 3 23:25:36.540238 systemd[1]: session-22.scope: Deactivated successfully. Sep 3 23:25:36.546050 systemd-logind[1891]: Session 22 logged out. Waiting for processes to exit. Sep 3 23:25:36.551008 systemd-logind[1891]: Removed session 22. Sep 3 23:25:41.565197 systemd[1]: Started sshd@22-172.31.18.182:22-139.178.89.65:41094.service - OpenSSH per-connection server daemon (139.178.89.65:41094). Sep 3 23:25:41.774165 sshd[5004]: Accepted publickey for core from 139.178.89.65 port 41094 ssh2: RSA SHA256:8eQAyPE99YHHVtDm+V4mP5sHyPbVNBHa6xDGC+ww79Y Sep 3 23:25:41.776953 sshd-session[5004]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 3 23:25:41.789572 systemd-logind[1891]: New session 23 of user core. Sep 3 23:25:41.796086 systemd[1]: Started session-23.scope - Session 23 of User core. Sep 3 23:25:42.052535 sshd[5011]: Connection closed by 139.178.89.65 port 41094 Sep 3 23:25:42.053367 sshd-session[5004]: pam_unix(sshd:session): session closed for user core Sep 3 23:25:42.061625 systemd[1]: sshd@22-172.31.18.182:22-139.178.89.65:41094.service: Deactivated successfully. Sep 3 23:25:42.067318 systemd[1]: session-23.scope: Deactivated successfully. Sep 3 23:25:42.069858 systemd-logind[1891]: Session 23 logged out. Waiting for processes to exit. Sep 3 23:25:42.074138 systemd-logind[1891]: Removed session 23. Sep 3 23:25:47.095427 systemd[1]: Started sshd@23-172.31.18.182:22-139.178.89.65:41102.service - OpenSSH per-connection server daemon (139.178.89.65:41102). Sep 3 23:25:47.296992 sshd[5024]: Accepted publickey for core from 139.178.89.65 port 41102 ssh2: RSA SHA256:8eQAyPE99YHHVtDm+V4mP5sHyPbVNBHa6xDGC+ww79Y Sep 3 23:25:47.299548 sshd-session[5024]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 3 23:25:47.307664 systemd-logind[1891]: New session 24 of user core. Sep 3 23:25:47.319175 systemd[1]: Started session-24.scope - Session 24 of User core. Sep 3 23:25:47.559756 sshd[5026]: Connection closed by 139.178.89.65 port 41102 Sep 3 23:25:47.560568 sshd-session[5024]: pam_unix(sshd:session): session closed for user core Sep 3 23:25:47.567438 systemd[1]: sshd@23-172.31.18.182:22-139.178.89.65:41102.service: Deactivated successfully. Sep 3 23:25:47.574236 systemd[1]: session-24.scope: Deactivated successfully. Sep 3 23:25:47.578340 systemd-logind[1891]: Session 24 logged out. Waiting for processes to exit. Sep 3 23:25:47.581395 systemd-logind[1891]: Removed session 24. Sep 3 23:25:52.603168 systemd[1]: Started sshd@24-172.31.18.182:22-139.178.89.65:33308.service - OpenSSH per-connection server daemon (139.178.89.65:33308). 
Sep 3 23:25:52.800512 sshd[5038]: Accepted publickey for core from 139.178.89.65 port 33308 ssh2: RSA SHA256:8eQAyPE99YHHVtDm+V4mP5sHyPbVNBHa6xDGC+ww79Y Sep 3 23:25:52.803393 sshd-session[5038]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 3 23:25:52.819396 systemd-logind[1891]: New session 25 of user core. Sep 3 23:25:52.825993 systemd[1]: Started session-25.scope - Session 25 of User core. Sep 3 23:25:53.068519 sshd[5040]: Connection closed by 139.178.89.65 port 33308 Sep 3 23:25:53.068242 sshd-session[5038]: pam_unix(sshd:session): session closed for user core Sep 3 23:25:53.076927 systemd[1]: sshd@24-172.31.18.182:22-139.178.89.65:33308.service: Deactivated successfully. Sep 3 23:25:53.081143 systemd[1]: session-25.scope: Deactivated successfully. Sep 3 23:25:53.084911 systemd-logind[1891]: Session 25 logged out. Waiting for processes to exit. Sep 3 23:25:53.087480 systemd-logind[1891]: Removed session 25. Sep 3 23:25:58.113319 systemd[1]: Started sshd@25-172.31.18.182:22-139.178.89.65:33322.service - OpenSSH per-connection server daemon (139.178.89.65:33322). Sep 3 23:25:58.314385 sshd[5052]: Accepted publickey for core from 139.178.89.65 port 33322 ssh2: RSA SHA256:8eQAyPE99YHHVtDm+V4mP5sHyPbVNBHa6xDGC+ww79Y Sep 3 23:25:58.316948 sshd-session[5052]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 3 23:25:58.324894 systemd-logind[1891]: New session 26 of user core. Sep 3 23:25:58.337006 systemd[1]: Started session-26.scope - Session 26 of User core. Sep 3 23:25:58.578441 sshd[5054]: Connection closed by 139.178.89.65 port 33322 Sep 3 23:25:58.579279 sshd-session[5052]: pam_unix(sshd:session): session closed for user core Sep 3 23:25:58.586357 systemd[1]: sshd@25-172.31.18.182:22-139.178.89.65:33322.service: Deactivated successfully. Sep 3 23:25:58.592917 systemd[1]: session-26.scope: Deactivated successfully. Sep 3 23:25:58.596288 systemd-logind[1891]: Session 26 logged out. Waiting for processes to exit. Sep 3 23:25:58.615677 systemd-logind[1891]: Removed session 26. Sep 3 23:25:58.618162 systemd[1]: Started sshd@26-172.31.18.182:22-139.178.89.65:33332.service - OpenSSH per-connection server daemon (139.178.89.65:33332). Sep 3 23:25:58.810297 sshd[5066]: Accepted publickey for core from 139.178.89.65 port 33332 ssh2: RSA SHA256:8eQAyPE99YHHVtDm+V4mP5sHyPbVNBHa6xDGC+ww79Y Sep 3 23:25:58.812990 sshd-session[5066]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 3 23:25:58.821142 systemd-logind[1891]: New session 27 of user core. Sep 3 23:25:58.833016 systemd[1]: Started session-27.scope - Session 27 of User core. 
Sep 3 23:26:01.361332 containerd[1916]: time="2025-09-03T23:26:01.361034963Z" level=info msg="StopContainer for \"02911eb1f862d84a60cc9154e057d41a04c2940cac92fdacb7e804feb5b700b4\" with timeout 30 (s)" Sep 3 23:26:01.365253 containerd[1916]: time="2025-09-03T23:26:01.364984055Z" level=info msg="Stop container \"02911eb1f862d84a60cc9154e057d41a04c2940cac92fdacb7e804feb5b700b4\" with signal terminated" Sep 3 23:26:01.390360 containerd[1916]: time="2025-09-03T23:26:01.389939135Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 3 23:26:01.401459 containerd[1916]: time="2025-09-03T23:26:01.401387279Z" level=info msg="TaskExit event in podsandbox handler container_id:\"76b0cfc81bf4689e4dc616a12a4629011aadd630c15f2c18cfb3b39107705819\" id:\"b4a2c86ba11404943c358f591e1c1c878484ab3b5f3ae26e23fe822795f85917\" pid:5088 exited_at:{seconds:1756941961 nanos:399094235}" Sep 3 23:26:01.406063 containerd[1916]: time="2025-09-03T23:26:01.405977039Z" level=info msg="StopContainer for \"76b0cfc81bf4689e4dc616a12a4629011aadd630c15f2c18cfb3b39107705819\" with timeout 2 (s)" Sep 3 23:26:01.406099 systemd[1]: cri-containerd-02911eb1f862d84a60cc9154e057d41a04c2940cac92fdacb7e804feb5b700b4.scope: Deactivated successfully. Sep 3 23:26:01.410518 containerd[1916]: time="2025-09-03T23:26:01.410394815Z" level=info msg="Stop container \"76b0cfc81bf4689e4dc616a12a4629011aadd630c15f2c18cfb3b39107705819\" with signal terminated" Sep 3 23:26:01.418242 containerd[1916]: time="2025-09-03T23:26:01.418132319Z" level=info msg="TaskExit event in podsandbox handler container_id:\"02911eb1f862d84a60cc9154e057d41a04c2940cac92fdacb7e804feb5b700b4\" id:\"02911eb1f862d84a60cc9154e057d41a04c2940cac92fdacb7e804feb5b700b4\" pid:4108 exited_at:{seconds:1756941961 nanos:417462431}" Sep 3 23:26:01.418415 containerd[1916]: time="2025-09-03T23:26:01.418299071Z" level=info msg="received exit event container_id:\"02911eb1f862d84a60cc9154e057d41a04c2940cac92fdacb7e804feb5b700b4\" id:\"02911eb1f862d84a60cc9154e057d41a04c2940cac92fdacb7e804feb5b700b4\" pid:4108 exited_at:{seconds:1756941961 nanos:417462431}" Sep 3 23:26:01.449338 systemd-networkd[1749]: lxc_health: Link DOWN Sep 3 23:26:01.449364 systemd-networkd[1749]: lxc_health: Lost carrier Sep 3 23:26:01.478552 systemd[1]: cri-containerd-76b0cfc81bf4689e4dc616a12a4629011aadd630c15f2c18cfb3b39107705819.scope: Deactivated successfully. Sep 3 23:26:01.480305 systemd[1]: cri-containerd-76b0cfc81bf4689e4dc616a12a4629011aadd630c15f2c18cfb3b39107705819.scope: Consumed 15.918s CPU time, 126M memory peak, 128K read from disk, 12.9M written to disk. 
Sep 3 23:26:01.484188 containerd[1916]: time="2025-09-03T23:26:01.484127676Z" level=info msg="received exit event container_id:\"76b0cfc81bf4689e4dc616a12a4629011aadd630c15f2c18cfb3b39107705819\" id:\"76b0cfc81bf4689e4dc616a12a4629011aadd630c15f2c18cfb3b39107705819\" pid:4142 exited_at:{seconds:1756941961 nanos:483747048}" Sep 3 23:26:01.484809 containerd[1916]: time="2025-09-03T23:26:01.484759608Z" level=info msg="TaskExit event in podsandbox handler container_id:\"76b0cfc81bf4689e4dc616a12a4629011aadd630c15f2c18cfb3b39107705819\" id:\"76b0cfc81bf4689e4dc616a12a4629011aadd630c15f2c18cfb3b39107705819\" pid:4142 exited_at:{seconds:1756941961 nanos:483747048}" Sep 3 23:26:01.516279 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-02911eb1f862d84a60cc9154e057d41a04c2940cac92fdacb7e804feb5b700b4-rootfs.mount: Deactivated successfully. Sep 3 23:26:01.543366 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-76b0cfc81bf4689e4dc616a12a4629011aadd630c15f2c18cfb3b39107705819-rootfs.mount: Deactivated successfully. Sep 3 23:26:01.549755 containerd[1916]: time="2025-09-03T23:26:01.549618792Z" level=info msg="StopContainer for \"02911eb1f862d84a60cc9154e057d41a04c2940cac92fdacb7e804feb5b700b4\" returns successfully" Sep 3 23:26:01.551204 containerd[1916]: time="2025-09-03T23:26:01.550850796Z" level=info msg="StopPodSandbox for \"4f2f99fe33e8205bc4586970f76c45df7fe94ffb614eb594db575825d574d76e\"" Sep 3 23:26:01.551204 containerd[1916]: time="2025-09-03T23:26:01.550960860Z" level=info msg="Container to stop \"02911eb1f862d84a60cc9154e057d41a04c2940cac92fdacb7e804feb5b700b4\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 3 23:26:01.555200 containerd[1916]: time="2025-09-03T23:26:01.555129264Z" level=info msg="StopContainer for \"76b0cfc81bf4689e4dc616a12a4629011aadd630c15f2c18cfb3b39107705819\" returns successfully" Sep 3 23:26:01.557756 containerd[1916]: time="2025-09-03T23:26:01.557441964Z" level=info msg="StopPodSandbox for \"379b9933e5f1ccc6df9c7efe26adf4baed019a196213153ac9c9fd9dcbefb1bf\"" Sep 3 23:26:01.557756 containerd[1916]: time="2025-09-03T23:26:01.557566152Z" level=info msg="Container to stop \"b4b85bd4ef93d22f254c1757cd7658cdd4d47654fefbccab6abfbcfd3accf2b4\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 3 23:26:01.557756 containerd[1916]: time="2025-09-03T23:26:01.557606076Z" level=info msg="Container to stop \"8449afcf4368041816058143e825ff993f7229c65a5a6fc106bd47c114cba016\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 3 23:26:01.557756 containerd[1916]: time="2025-09-03T23:26:01.557630316Z" level=info msg="Container to stop \"3b01ea670ea0052a23674603deaa6bd92cd79a712ba6e95b055bea96f6082a81\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 3 23:26:01.557756 containerd[1916]: time="2025-09-03T23:26:01.557653536Z" level=info msg="Container to stop \"77361b9a6092660770835ef54dbb6d1d470cae46afce5ea630879695b741f1dd\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 3 23:26:01.557756 containerd[1916]: time="2025-09-03T23:26:01.557675652Z" level=info msg="Container to stop \"76b0cfc81bf4689e4dc616a12a4629011aadd630c15f2c18cfb3b39107705819\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 3 23:26:01.568049 systemd[1]: cri-containerd-4f2f99fe33e8205bc4586970f76c45df7fe94ffb614eb594db575825d574d76e.scope: Deactivated successfully. 
Sep 3 23:26:01.573936 containerd[1916]: time="2025-09-03T23:26:01.573402948Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4f2f99fe33e8205bc4586970f76c45df7fe94ffb614eb594db575825d574d76e\" id:\"4f2f99fe33e8205bc4586970f76c45df7fe94ffb614eb594db575825d574d76e\" pid:3724 exit_status:137 exited_at:{seconds:1756941961 nanos:571856208}" Sep 3 23:26:01.582143 systemd[1]: cri-containerd-379b9933e5f1ccc6df9c7efe26adf4baed019a196213153ac9c9fd9dcbefb1bf.scope: Deactivated successfully. Sep 3 23:26:01.656186 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-379b9933e5f1ccc6df9c7efe26adf4baed019a196213153ac9c9fd9dcbefb1bf-rootfs.mount: Deactivated successfully. Sep 3 23:26:01.666498 containerd[1916]: time="2025-09-03T23:26:01.666423673Z" level=info msg="shim disconnected" id=379b9933e5f1ccc6df9c7efe26adf4baed019a196213153ac9c9fd9dcbefb1bf namespace=k8s.io Sep 3 23:26:01.667857 containerd[1916]: time="2025-09-03T23:26:01.666484957Z" level=warning msg="cleaning up after shim disconnected" id=379b9933e5f1ccc6df9c7efe26adf4baed019a196213153ac9c9fd9dcbefb1bf namespace=k8s.io Sep 3 23:26:01.667857 containerd[1916]: time="2025-09-03T23:26:01.666536329Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 3 23:26:01.693386 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4f2f99fe33e8205bc4586970f76c45df7fe94ffb614eb594db575825d574d76e-rootfs.mount: Deactivated successfully. Sep 3 23:26:01.699299 containerd[1916]: time="2025-09-03T23:26:01.699110113Z" level=info msg="shim disconnected" id=4f2f99fe33e8205bc4586970f76c45df7fe94ffb614eb594db575825d574d76e namespace=k8s.io Sep 3 23:26:01.699299 containerd[1916]: time="2025-09-03T23:26:01.699167197Z" level=warning msg="cleaning up after shim disconnected" id=4f2f99fe33e8205bc4586970f76c45df7fe94ffb614eb594db575825d574d76e namespace=k8s.io Sep 3 23:26:01.699299 containerd[1916]: time="2025-09-03T23:26:01.699220045Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 3 23:26:01.702735 containerd[1916]: time="2025-09-03T23:26:01.702390985Z" level=info msg="received exit event sandbox_id:\"379b9933e5f1ccc6df9c7efe26adf4baed019a196213153ac9c9fd9dcbefb1bf\" exit_status:137 exited_at:{seconds:1756941961 nanos:588886884}" Sep 3 23:26:01.706646 containerd[1916]: time="2025-09-03T23:26:01.706474309Z" level=info msg="TearDown network for sandbox \"379b9933e5f1ccc6df9c7efe26adf4baed019a196213153ac9c9fd9dcbefb1bf\" successfully" Sep 3 23:26:01.706988 containerd[1916]: time="2025-09-03T23:26:01.706947133Z" level=info msg="StopPodSandbox for \"379b9933e5f1ccc6df9c7efe26adf4baed019a196213153ac9c9fd9dcbefb1bf\" returns successfully" Sep 3 23:26:01.708358 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-379b9933e5f1ccc6df9c7efe26adf4baed019a196213153ac9c9fd9dcbefb1bf-shm.mount: Deactivated successfully. 
Sep 3 23:26:01.740998 containerd[1916]: time="2025-09-03T23:26:01.740898865Z" level=info msg="received exit event sandbox_id:\"4f2f99fe33e8205bc4586970f76c45df7fe94ffb614eb594db575825d574d76e\" exit_status:137 exited_at:{seconds:1756941961 nanos:571856208}" Sep 3 23:26:01.742003 containerd[1916]: time="2025-09-03T23:26:01.741847585Z" level=error msg="Failed to handle event container_id:\"4f2f99fe33e8205bc4586970f76c45df7fe94ffb614eb594db575825d574d76e\" id:\"4f2f99fe33e8205bc4586970f76c45df7fe94ffb614eb594db575825d574d76e\" pid:3724 exit_status:137 exited_at:{seconds:1756941961 nanos:571856208} for 4f2f99fe33e8205bc4586970f76c45df7fe94ffb614eb594db575825d574d76e" error="failed to handle container TaskExit event: failed to stop sandbox: failed to delete task: ttrpc: closed" Sep 3 23:26:01.742675 containerd[1916]: time="2025-09-03T23:26:01.742418089Z" level=info msg="TaskExit event in podsandbox handler container_id:\"379b9933e5f1ccc6df9c7efe26adf4baed019a196213153ac9c9fd9dcbefb1bf\" id:\"379b9933e5f1ccc6df9c7efe26adf4baed019a196213153ac9c9fd9dcbefb1bf\" pid:3651 exit_status:137 exited_at:{seconds:1756941961 nanos:588886884}" Sep 3 23:26:01.744344 containerd[1916]: time="2025-09-03T23:26:01.744273181Z" level=info msg="TearDown network for sandbox \"4f2f99fe33e8205bc4586970f76c45df7fe94ffb614eb594db575825d574d76e\" successfully" Sep 3 23:26:01.744344 containerd[1916]: time="2025-09-03T23:26:01.744330217Z" level=info msg="StopPodSandbox for \"4f2f99fe33e8205bc4586970f76c45df7fe94ffb614eb594db575825d574d76e\" returns successfully" Sep 3 23:26:01.761135 kubelet[3503]: I0903 23:26:01.761073 3503 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7c0ffc4e-96cd-44d5-8f74-18d21628d404-etc-cni-netd\") pod \"7c0ffc4e-96cd-44d5-8f74-18d21628d404\" (UID: \"7c0ffc4e-96cd-44d5-8f74-18d21628d404\") " Sep 3 23:26:01.761135 kubelet[3503]: I0903 23:26:01.761139 3503 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/7c0ffc4e-96cd-44d5-8f74-18d21628d404-hostproc\") pod \"7c0ffc4e-96cd-44d5-8f74-18d21628d404\" (UID: \"7c0ffc4e-96cd-44d5-8f74-18d21628d404\") " Sep 3 23:26:01.762935 kubelet[3503]: I0903 23:26:01.761188 3503 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7c0ffc4e-96cd-44d5-8f74-18d21628d404-cilium-config-path\") pod \"7c0ffc4e-96cd-44d5-8f74-18d21628d404\" (UID: \"7c0ffc4e-96cd-44d5-8f74-18d21628d404\") " Sep 3 23:26:01.762935 kubelet[3503]: I0903 23:26:01.761228 3503 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/7c0ffc4e-96cd-44d5-8f74-18d21628d404-host-proc-sys-kernel\") pod \"7c0ffc4e-96cd-44d5-8f74-18d21628d404\" (UID: \"7c0ffc4e-96cd-44d5-8f74-18d21628d404\") " Sep 3 23:26:01.762935 kubelet[3503]: I0903 23:26:01.761266 3503 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/7c0ffc4e-96cd-44d5-8f74-18d21628d404-hubble-tls\") pod \"7c0ffc4e-96cd-44d5-8f74-18d21628d404\" (UID: \"7c0ffc4e-96cd-44d5-8f74-18d21628d404\") " Sep 3 23:26:01.762935 kubelet[3503]: I0903 23:26:01.761305 3503 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: 
\"kubernetes.io/secret/7c0ffc4e-96cd-44d5-8f74-18d21628d404-clustermesh-secrets\") pod \"7c0ffc4e-96cd-44d5-8f74-18d21628d404\" (UID: \"7c0ffc4e-96cd-44d5-8f74-18d21628d404\") " Sep 3 23:26:01.762935 kubelet[3503]: I0903 23:26:01.761338 3503 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/7c0ffc4e-96cd-44d5-8f74-18d21628d404-cilium-cgroup\") pod \"7c0ffc4e-96cd-44d5-8f74-18d21628d404\" (UID: \"7c0ffc4e-96cd-44d5-8f74-18d21628d404\") " Sep 3 23:26:01.762935 kubelet[3503]: I0903 23:26:01.761370 3503 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7c0ffc4e-96cd-44d5-8f74-18d21628d404-xtables-lock\") pod \"7c0ffc4e-96cd-44d5-8f74-18d21628d404\" (UID: \"7c0ffc4e-96cd-44d5-8f74-18d21628d404\") " Sep 3 23:26:01.763271 kubelet[3503]: I0903 23:26:01.761400 3503 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/7c0ffc4e-96cd-44d5-8f74-18d21628d404-cni-path\") pod \"7c0ffc4e-96cd-44d5-8f74-18d21628d404\" (UID: \"7c0ffc4e-96cd-44d5-8f74-18d21628d404\") " Sep 3 23:26:01.763271 kubelet[3503]: I0903 23:26:01.761436 3503 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/7c0ffc4e-96cd-44d5-8f74-18d21628d404-bpf-maps\") pod \"7c0ffc4e-96cd-44d5-8f74-18d21628d404\" (UID: \"7c0ffc4e-96cd-44d5-8f74-18d21628d404\") " Sep 3 23:26:01.763271 kubelet[3503]: I0903 23:26:01.761467 3503 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7c0ffc4e-96cd-44d5-8f74-18d21628d404-lib-modules\") pod \"7c0ffc4e-96cd-44d5-8f74-18d21628d404\" (UID: \"7c0ffc4e-96cd-44d5-8f74-18d21628d404\") " Sep 3 23:26:01.763271 kubelet[3503]: I0903 23:26:01.761502 3503 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/7c0ffc4e-96cd-44d5-8f74-18d21628d404-host-proc-sys-net\") pod \"7c0ffc4e-96cd-44d5-8f74-18d21628d404\" (UID: \"7c0ffc4e-96cd-44d5-8f74-18d21628d404\") " Sep 3 23:26:01.763271 kubelet[3503]: I0903 23:26:01.761540 3503 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/7c0ffc4e-96cd-44d5-8f74-18d21628d404-cilium-run\") pod \"7c0ffc4e-96cd-44d5-8f74-18d21628d404\" (UID: \"7c0ffc4e-96cd-44d5-8f74-18d21628d404\") " Sep 3 23:26:01.763271 kubelet[3503]: I0903 23:26:01.761579 3503 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xtwst\" (UniqueName: \"kubernetes.io/projected/7c0ffc4e-96cd-44d5-8f74-18d21628d404-kube-api-access-xtwst\") pod \"7c0ffc4e-96cd-44d5-8f74-18d21628d404\" (UID: \"7c0ffc4e-96cd-44d5-8f74-18d21628d404\") " Sep 3 23:26:01.763614 kubelet[3503]: I0903 23:26:01.762892 3503 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7c0ffc4e-96cd-44d5-8f74-18d21628d404-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "7c0ffc4e-96cd-44d5-8f74-18d21628d404" (UID: "7c0ffc4e-96cd-44d5-8f74-18d21628d404"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 3 23:26:01.763614 kubelet[3503]: I0903 23:26:01.763457 3503 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7c0ffc4e-96cd-44d5-8f74-18d21628d404-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "7c0ffc4e-96cd-44d5-8f74-18d21628d404" (UID: "7c0ffc4e-96cd-44d5-8f74-18d21628d404"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 3 23:26:01.765535 kubelet[3503]: I0903 23:26:01.763598 3503 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7c0ffc4e-96cd-44d5-8f74-18d21628d404-cni-path" (OuterVolumeSpecName: "cni-path") pod "7c0ffc4e-96cd-44d5-8f74-18d21628d404" (UID: "7c0ffc4e-96cd-44d5-8f74-18d21628d404"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 3 23:26:01.765535 kubelet[3503]: I0903 23:26:01.764438 3503 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7c0ffc4e-96cd-44d5-8f74-18d21628d404-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "7c0ffc4e-96cd-44d5-8f74-18d21628d404" (UID: "7c0ffc4e-96cd-44d5-8f74-18d21628d404"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 3 23:26:01.765535 kubelet[3503]: I0903 23:26:01.764529 3503 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7c0ffc4e-96cd-44d5-8f74-18d21628d404-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "7c0ffc4e-96cd-44d5-8f74-18d21628d404" (UID: "7c0ffc4e-96cd-44d5-8f74-18d21628d404"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 3 23:26:01.765535 kubelet[3503]: I0903 23:26:01.764611 3503 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7c0ffc4e-96cd-44d5-8f74-18d21628d404-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "7c0ffc4e-96cd-44d5-8f74-18d21628d404" (UID: "7c0ffc4e-96cd-44d5-8f74-18d21628d404"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 3 23:26:01.765535 kubelet[3503]: I0903 23:26:01.764652 3503 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7c0ffc4e-96cd-44d5-8f74-18d21628d404-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "7c0ffc4e-96cd-44d5-8f74-18d21628d404" (UID: "7c0ffc4e-96cd-44d5-8f74-18d21628d404"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 3 23:26:01.767057 kubelet[3503]: I0903 23:26:01.764784 3503 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7c0ffc4e-96cd-44d5-8f74-18d21628d404-hostproc" (OuterVolumeSpecName: "hostproc") pod "7c0ffc4e-96cd-44d5-8f74-18d21628d404" (UID: "7c0ffc4e-96cd-44d5-8f74-18d21628d404"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 3 23:26:01.767057 kubelet[3503]: I0903 23:26:01.764876 3503 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7c0ffc4e-96cd-44d5-8f74-18d21628d404-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "7c0ffc4e-96cd-44d5-8f74-18d21628d404" (UID: "7c0ffc4e-96cd-44d5-8f74-18d21628d404"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 3 23:26:01.767057 kubelet[3503]: I0903 23:26:01.765634 3503 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7c0ffc4e-96cd-44d5-8f74-18d21628d404-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "7c0ffc4e-96cd-44d5-8f74-18d21628d404" (UID: "7c0ffc4e-96cd-44d5-8f74-18d21628d404"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 3 23:26:01.780018 kubelet[3503]: I0903 23:26:01.779918 3503 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7c0ffc4e-96cd-44d5-8f74-18d21628d404-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "7c0ffc4e-96cd-44d5-8f74-18d21628d404" (UID: "7c0ffc4e-96cd-44d5-8f74-18d21628d404"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 3 23:26:01.782181 kubelet[3503]: I0903 23:26:01.780167 3503 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7c0ffc4e-96cd-44d5-8f74-18d21628d404-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "7c0ffc4e-96cd-44d5-8f74-18d21628d404" (UID: "7c0ffc4e-96cd-44d5-8f74-18d21628d404"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 3 23:26:01.784406 kubelet[3503]: I0903 23:26:01.784327 3503 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7c0ffc4e-96cd-44d5-8f74-18d21628d404-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "7c0ffc4e-96cd-44d5-8f74-18d21628d404" (UID: "7c0ffc4e-96cd-44d5-8f74-18d21628d404"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 3 23:26:01.785471 kubelet[3503]: I0903 23:26:01.785418 3503 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7c0ffc4e-96cd-44d5-8f74-18d21628d404-kube-api-access-xtwst" (OuterVolumeSpecName: "kube-api-access-xtwst") pod "7c0ffc4e-96cd-44d5-8f74-18d21628d404" (UID: "7c0ffc4e-96cd-44d5-8f74-18d21628d404"). InnerVolumeSpecName "kube-api-access-xtwst". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 3 23:26:01.862180 kubelet[3503]: I0903 23:26:01.862119 3503 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/985e0fb2-dd67-41eb-a86d-46b6ce869cca-cilium-config-path\") pod \"985e0fb2-dd67-41eb-a86d-46b6ce869cca\" (UID: \"985e0fb2-dd67-41eb-a86d-46b6ce869cca\") " Sep 3 23:26:01.862491 kubelet[3503]: I0903 23:26:01.862436 3503 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-khvlf\" (UniqueName: \"kubernetes.io/projected/985e0fb2-dd67-41eb-a86d-46b6ce869cca-kube-api-access-khvlf\") pod \"985e0fb2-dd67-41eb-a86d-46b6ce869cca\" (UID: \"985e0fb2-dd67-41eb-a86d-46b6ce869cca\") " Sep 3 23:26:01.862791 kubelet[3503]: I0903 23:26:01.862618 3503 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7c0ffc4e-96cd-44d5-8f74-18d21628d404-lib-modules\") on node \"ip-172-31-18-182\" DevicePath \"\"" Sep 3 23:26:01.863010 kubelet[3503]: I0903 23:26:01.862875 3503 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/7c0ffc4e-96cd-44d5-8f74-18d21628d404-host-proc-sys-net\") on node \"ip-172-31-18-182\" DevicePath \"\"" Sep 3 23:26:01.863201 kubelet[3503]: I0903 23:26:01.862906 3503 reconciler_common.go:293] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/7c0ffc4e-96cd-44d5-8f74-18d21628d404-cilium-run\") on node \"ip-172-31-18-182\" DevicePath \"\"" Sep 3 23:26:01.863201 kubelet[3503]: I0903 23:26:01.863145 3503 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xtwst\" (UniqueName: \"kubernetes.io/projected/7c0ffc4e-96cd-44d5-8f74-18d21628d404-kube-api-access-xtwst\") on node \"ip-172-31-18-182\" DevicePath \"\"" Sep 3 23:26:01.863420 kubelet[3503]: I0903 23:26:01.863172 3503 reconciler_common.go:293] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7c0ffc4e-96cd-44d5-8f74-18d21628d404-etc-cni-netd\") on node \"ip-172-31-18-182\" DevicePath \"\"" Sep 3 23:26:01.863420 kubelet[3503]: I0903 23:26:01.863370 3503 reconciler_common.go:293] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/7c0ffc4e-96cd-44d5-8f74-18d21628d404-hostproc\") on node \"ip-172-31-18-182\" DevicePath \"\"" Sep 3 23:26:01.863652 kubelet[3503]: I0903 23:26:01.863394 3503 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7c0ffc4e-96cd-44d5-8f74-18d21628d404-cilium-config-path\") on node \"ip-172-31-18-182\" DevicePath \"\"" Sep 3 23:26:01.863652 kubelet[3503]: I0903 23:26:01.863593 3503 reconciler_common.go:293] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7c0ffc4e-96cd-44d5-8f74-18d21628d404-xtables-lock\") on node \"ip-172-31-18-182\" DevicePath \"\"" Sep 3 23:26:01.863652 kubelet[3503]: I0903 23:26:01.863616 3503 reconciler_common.go:293] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/7c0ffc4e-96cd-44d5-8f74-18d21628d404-cni-path\") on node \"ip-172-31-18-182\" DevicePath \"\"" Sep 3 23:26:01.864091 kubelet[3503]: I0903 23:26:01.863935 3503 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/7c0ffc4e-96cd-44d5-8f74-18d21628d404-host-proc-sys-kernel\") on node \"ip-172-31-18-182\" DevicePath \"\"" Sep 3 
23:26:01.864091 kubelet[3503]: I0903 23:26:01.863980 3503 reconciler_common.go:293] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/7c0ffc4e-96cd-44d5-8f74-18d21628d404-hubble-tls\") on node \"ip-172-31-18-182\" DevicePath \"\"" Sep 3 23:26:01.864091 kubelet[3503]: I0903 23:26:01.864031 3503 reconciler_common.go:293] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/7c0ffc4e-96cd-44d5-8f74-18d21628d404-clustermesh-secrets\") on node \"ip-172-31-18-182\" DevicePath \"\"" Sep 3 23:26:01.864091 kubelet[3503]: I0903 23:26:01.864052 3503 reconciler_common.go:293] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/7c0ffc4e-96cd-44d5-8f74-18d21628d404-cilium-cgroup\") on node \"ip-172-31-18-182\" DevicePath \"\"" Sep 3 23:26:01.864523 kubelet[3503]: I0903 23:26:01.864358 3503 reconciler_common.go:293] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/7c0ffc4e-96cd-44d5-8f74-18d21628d404-bpf-maps\") on node \"ip-172-31-18-182\" DevicePath \"\"" Sep 3 23:26:01.867163 kubelet[3503]: I0903 23:26:01.867052 3503 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/985e0fb2-dd67-41eb-a86d-46b6ce869cca-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "985e0fb2-dd67-41eb-a86d-46b6ce869cca" (UID: "985e0fb2-dd67-41eb-a86d-46b6ce869cca"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 3 23:26:01.870029 kubelet[3503]: I0903 23:26:01.869908 3503 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/985e0fb2-dd67-41eb-a86d-46b6ce869cca-kube-api-access-khvlf" (OuterVolumeSpecName: "kube-api-access-khvlf") pod "985e0fb2-dd67-41eb-a86d-46b6ce869cca" (UID: "985e0fb2-dd67-41eb-a86d-46b6ce869cca"). InnerVolumeSpecName "kube-api-access-khvlf". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 3 23:26:01.965944 kubelet[3503]: I0903 23:26:01.965137 3503 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/985e0fb2-dd67-41eb-a86d-46b6ce869cca-cilium-config-path\") on node \"ip-172-31-18-182\" DevicePath \"\"" Sep 3 23:26:01.965944 kubelet[3503]: I0903 23:26:01.965210 3503 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-khvlf\" (UniqueName: \"kubernetes.io/projected/985e0fb2-dd67-41eb-a86d-46b6ce869cca-kube-api-access-khvlf\") on node \"ip-172-31-18-182\" DevicePath \"\"" Sep 3 23:26:02.273853 kubelet[3503]: I0903 23:26:02.272996 3503 scope.go:117] "RemoveContainer" containerID="02911eb1f862d84a60cc9154e057d41a04c2940cac92fdacb7e804feb5b700b4" Sep 3 23:26:02.284078 containerd[1916]: time="2025-09-03T23:26:02.282943212Z" level=info msg="RemoveContainer for \"02911eb1f862d84a60cc9154e057d41a04c2940cac92fdacb7e804feb5b700b4\"" Sep 3 23:26:02.291138 containerd[1916]: time="2025-09-03T23:26:02.290914500Z" level=info msg="RemoveContainer for \"02911eb1f862d84a60cc9154e057d41a04c2940cac92fdacb7e804feb5b700b4\" returns successfully" Sep 3 23:26:02.291524 kubelet[3503]: I0903 23:26:02.291493 3503 scope.go:117] "RemoveContainer" containerID="02911eb1f862d84a60cc9154e057d41a04c2940cac92fdacb7e804feb5b700b4" Sep 3 23:26:02.295390 systemd[1]: Removed slice kubepods-besteffort-pod985e0fb2_dd67_41eb_a86d_46b6ce869cca.slice - libcontainer container kubepods-besteffort-pod985e0fb2_dd67_41eb_a86d_46b6ce869cca.slice. 
Sep 3 23:26:02.297172 containerd[1916]: time="2025-09-03T23:26:02.296648712Z" level=error msg="ContainerStatus for \"02911eb1f862d84a60cc9154e057d41a04c2940cac92fdacb7e804feb5b700b4\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"02911eb1f862d84a60cc9154e057d41a04c2940cac92fdacb7e804feb5b700b4\": not found" Sep 3 23:26:02.305345 kubelet[3503]: E0903 23:26:02.304998 3503 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"02911eb1f862d84a60cc9154e057d41a04c2940cac92fdacb7e804feb5b700b4\": not found" containerID="02911eb1f862d84a60cc9154e057d41a04c2940cac92fdacb7e804feb5b700b4" Sep 3 23:26:02.305345 kubelet[3503]: I0903 23:26:02.305058 3503 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"02911eb1f862d84a60cc9154e057d41a04c2940cac92fdacb7e804feb5b700b4"} err="failed to get container status \"02911eb1f862d84a60cc9154e057d41a04c2940cac92fdacb7e804feb5b700b4\": rpc error: code = NotFound desc = an error occurred when try to find container \"02911eb1f862d84a60cc9154e057d41a04c2940cac92fdacb7e804feb5b700b4\": not found" Sep 3 23:26:02.305345 kubelet[3503]: I0903 23:26:02.305228 3503 scope.go:117] "RemoveContainer" containerID="76b0cfc81bf4689e4dc616a12a4629011aadd630c15f2c18cfb3b39107705819" Sep 3 23:26:02.309404 systemd[1]: Removed slice kubepods-burstable-pod7c0ffc4e_96cd_44d5_8f74_18d21628d404.slice - libcontainer container kubepods-burstable-pod7c0ffc4e_96cd_44d5_8f74_18d21628d404.slice. Sep 3 23:26:02.309674 systemd[1]: kubepods-burstable-pod7c0ffc4e_96cd_44d5_8f74_18d21628d404.slice: Consumed 16.129s CPU time, 126.5M memory peak, 128K read from disk, 12.9M written to disk. 
Sep 3 23:26:02.312606 containerd[1916]: time="2025-09-03T23:26:02.311856072Z" level=info msg="RemoveContainer for \"76b0cfc81bf4689e4dc616a12a4629011aadd630c15f2c18cfb3b39107705819\"" Sep 3 23:26:02.324557 containerd[1916]: time="2025-09-03T23:26:02.324456612Z" level=info msg="RemoveContainer for \"76b0cfc81bf4689e4dc616a12a4629011aadd630c15f2c18cfb3b39107705819\" returns successfully" Sep 3 23:26:02.325002 kubelet[3503]: I0903 23:26:02.324930 3503 scope.go:117] "RemoveContainer" containerID="8449afcf4368041816058143e825ff993f7229c65a5a6fc106bd47c114cba016" Sep 3 23:26:02.330290 containerd[1916]: time="2025-09-03T23:26:02.328160160Z" level=info msg="RemoveContainer for \"8449afcf4368041816058143e825ff993f7229c65a5a6fc106bd47c114cba016\"" Sep 3 23:26:02.341873 containerd[1916]: time="2025-09-03T23:26:02.341820588Z" level=info msg="RemoveContainer for \"8449afcf4368041816058143e825ff993f7229c65a5a6fc106bd47c114cba016\" returns successfully" Sep 3 23:26:02.343148 kubelet[3503]: I0903 23:26:02.343012 3503 scope.go:117] "RemoveContainer" containerID="77361b9a6092660770835ef54dbb6d1d470cae46afce5ea630879695b741f1dd" Sep 3 23:26:02.354677 containerd[1916]: time="2025-09-03T23:26:02.354620064Z" level=info msg="RemoveContainer for \"77361b9a6092660770835ef54dbb6d1d470cae46afce5ea630879695b741f1dd\"" Sep 3 23:26:02.370954 containerd[1916]: time="2025-09-03T23:26:02.370851996Z" level=info msg="RemoveContainer for \"77361b9a6092660770835ef54dbb6d1d470cae46afce5ea630879695b741f1dd\" returns successfully" Sep 3 23:26:02.372092 kubelet[3503]: I0903 23:26:02.372039 3503 scope.go:117] "RemoveContainer" containerID="3b01ea670ea0052a23674603deaa6bd92cd79a712ba6e95b055bea96f6082a81" Sep 3 23:26:02.374937 containerd[1916]: time="2025-09-03T23:26:02.374893608Z" level=info msg="RemoveContainer for \"3b01ea670ea0052a23674603deaa6bd92cd79a712ba6e95b055bea96f6082a81\"" Sep 3 23:26:02.383201 containerd[1916]: time="2025-09-03T23:26:02.383149284Z" level=info msg="RemoveContainer for \"3b01ea670ea0052a23674603deaa6bd92cd79a712ba6e95b055bea96f6082a81\" returns successfully" Sep 3 23:26:02.383818 kubelet[3503]: I0903 23:26:02.383752 3503 scope.go:117] "RemoveContainer" containerID="b4b85bd4ef93d22f254c1757cd7658cdd4d47654fefbccab6abfbcfd3accf2b4" Sep 3 23:26:02.390085 containerd[1916]: time="2025-09-03T23:26:02.389945796Z" level=info msg="RemoveContainer for \"b4b85bd4ef93d22f254c1757cd7658cdd4d47654fefbccab6abfbcfd3accf2b4\"" Sep 3 23:26:02.405278 containerd[1916]: time="2025-09-03T23:26:02.405135288Z" level=info msg="RemoveContainer for \"b4b85bd4ef93d22f254c1757cd7658cdd4d47654fefbccab6abfbcfd3accf2b4\" returns successfully" Sep 3 23:26:02.405752 kubelet[3503]: I0903 23:26:02.405675 3503 scope.go:117] "RemoveContainer" containerID="76b0cfc81bf4689e4dc616a12a4629011aadd630c15f2c18cfb3b39107705819" Sep 3 23:26:02.406138 containerd[1916]: time="2025-09-03T23:26:02.406085124Z" level=error msg="ContainerStatus for \"76b0cfc81bf4689e4dc616a12a4629011aadd630c15f2c18cfb3b39107705819\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"76b0cfc81bf4689e4dc616a12a4629011aadd630c15f2c18cfb3b39107705819\": not found" Sep 3 23:26:02.406512 kubelet[3503]: E0903 23:26:02.406475 3503 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"76b0cfc81bf4689e4dc616a12a4629011aadd630c15f2c18cfb3b39107705819\": not found" containerID="76b0cfc81bf4689e4dc616a12a4629011aadd630c15f2c18cfb3b39107705819" Sep 3 
23:26:02.406737 kubelet[3503]: I0903 23:26:02.406670 3503 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"76b0cfc81bf4689e4dc616a12a4629011aadd630c15f2c18cfb3b39107705819"} err="failed to get container status \"76b0cfc81bf4689e4dc616a12a4629011aadd630c15f2c18cfb3b39107705819\": rpc error: code = NotFound desc = an error occurred when try to find container \"76b0cfc81bf4689e4dc616a12a4629011aadd630c15f2c18cfb3b39107705819\": not found" Sep 3 23:26:02.406859 kubelet[3503]: I0903 23:26:02.406837 3503 scope.go:117] "RemoveContainer" containerID="8449afcf4368041816058143e825ff993f7229c65a5a6fc106bd47c114cba016" Sep 3 23:26:02.407415 containerd[1916]: time="2025-09-03T23:26:02.407335140Z" level=error msg="ContainerStatus for \"8449afcf4368041816058143e825ff993f7229c65a5a6fc106bd47c114cba016\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"8449afcf4368041816058143e825ff993f7229c65a5a6fc106bd47c114cba016\": not found" Sep 3 23:26:02.407591 kubelet[3503]: E0903 23:26:02.407551 3503 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"8449afcf4368041816058143e825ff993f7229c65a5a6fc106bd47c114cba016\": not found" containerID="8449afcf4368041816058143e825ff993f7229c65a5a6fc106bd47c114cba016" Sep 3 23:26:02.407656 kubelet[3503]: I0903 23:26:02.407604 3503 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"8449afcf4368041816058143e825ff993f7229c65a5a6fc106bd47c114cba016"} err="failed to get container status \"8449afcf4368041816058143e825ff993f7229c65a5a6fc106bd47c114cba016\": rpc error: code = NotFound desc = an error occurred when try to find container \"8449afcf4368041816058143e825ff993f7229c65a5a6fc106bd47c114cba016\": not found" Sep 3 23:26:02.407656 kubelet[3503]: I0903 23:26:02.407638 3503 scope.go:117] "RemoveContainer" containerID="77361b9a6092660770835ef54dbb6d1d470cae46afce5ea630879695b741f1dd" Sep 3 23:26:02.409138 containerd[1916]: time="2025-09-03T23:26:02.409084032Z" level=error msg="ContainerStatus for \"77361b9a6092660770835ef54dbb6d1d470cae46afce5ea630879695b741f1dd\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"77361b9a6092660770835ef54dbb6d1d470cae46afce5ea630879695b741f1dd\": not found" Sep 3 23:26:02.409535 kubelet[3503]: E0903 23:26:02.409499 3503 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"77361b9a6092660770835ef54dbb6d1d470cae46afce5ea630879695b741f1dd\": not found" containerID="77361b9a6092660770835ef54dbb6d1d470cae46afce5ea630879695b741f1dd" Sep 3 23:26:02.410038 kubelet[3503]: I0903 23:26:02.409852 3503 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"77361b9a6092660770835ef54dbb6d1d470cae46afce5ea630879695b741f1dd"} err="failed to get container status \"77361b9a6092660770835ef54dbb6d1d470cae46afce5ea630879695b741f1dd\": rpc error: code = NotFound desc = an error occurred when try to find container \"77361b9a6092660770835ef54dbb6d1d470cae46afce5ea630879695b741f1dd\": not found" Sep 3 23:26:02.410038 kubelet[3503]: I0903 23:26:02.409900 3503 scope.go:117] "RemoveContainer" containerID="3b01ea670ea0052a23674603deaa6bd92cd79a712ba6e95b055bea96f6082a81" Sep 3 23:26:02.410584 containerd[1916]: time="2025-09-03T23:26:02.410499144Z" level=error 
msg="ContainerStatus for \"3b01ea670ea0052a23674603deaa6bd92cd79a712ba6e95b055bea96f6082a81\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"3b01ea670ea0052a23674603deaa6bd92cd79a712ba6e95b055bea96f6082a81\": not found" Sep 3 23:26:02.411220 kubelet[3503]: E0903 23:26:02.410912 3503 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"3b01ea670ea0052a23674603deaa6bd92cd79a712ba6e95b055bea96f6082a81\": not found" containerID="3b01ea670ea0052a23674603deaa6bd92cd79a712ba6e95b055bea96f6082a81" Sep 3 23:26:02.411220 kubelet[3503]: I0903 23:26:02.410958 3503 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"3b01ea670ea0052a23674603deaa6bd92cd79a712ba6e95b055bea96f6082a81"} err="failed to get container status \"3b01ea670ea0052a23674603deaa6bd92cd79a712ba6e95b055bea96f6082a81\": rpc error: code = NotFound desc = an error occurred when try to find container \"3b01ea670ea0052a23674603deaa6bd92cd79a712ba6e95b055bea96f6082a81\": not found" Sep 3 23:26:02.411220 kubelet[3503]: I0903 23:26:02.410991 3503 scope.go:117] "RemoveContainer" containerID="b4b85bd4ef93d22f254c1757cd7658cdd4d47654fefbccab6abfbcfd3accf2b4" Sep 3 23:26:02.411781 containerd[1916]: time="2025-09-03T23:26:02.411418560Z" level=error msg="ContainerStatus for \"b4b85bd4ef93d22f254c1757cd7658cdd4d47654fefbccab6abfbcfd3accf2b4\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b4b85bd4ef93d22f254c1757cd7658cdd4d47654fefbccab6abfbcfd3accf2b4\": not found" Sep 3 23:26:02.412147 kubelet[3503]: E0903 23:26:02.412044 3503 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b4b85bd4ef93d22f254c1757cd7658cdd4d47654fefbccab6abfbcfd3accf2b4\": not found" containerID="b4b85bd4ef93d22f254c1757cd7658cdd4d47654fefbccab6abfbcfd3accf2b4" Sep 3 23:26:02.412147 kubelet[3503]: I0903 23:26:02.412092 3503 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b4b85bd4ef93d22f254c1757cd7658cdd4d47654fefbccab6abfbcfd3accf2b4"} err="failed to get container status \"b4b85bd4ef93d22f254c1757cd7658cdd4d47654fefbccab6abfbcfd3accf2b4\": rpc error: code = NotFound desc = an error occurred when try to find container \"b4b85bd4ef93d22f254c1757cd7658cdd4d47654fefbccab6abfbcfd3accf2b4\": not found" Sep 3 23:26:02.514275 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-4f2f99fe33e8205bc4586970f76c45df7fe94ffb614eb594db575825d574d76e-shm.mount: Deactivated successfully. Sep 3 23:26:02.514467 systemd[1]: var-lib-kubelet-pods-985e0fb2\x2ddd67\x2d41eb\x2da86d\x2d46b6ce869cca-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dkhvlf.mount: Deactivated successfully. Sep 3 23:26:02.514597 systemd[1]: var-lib-kubelet-pods-7c0ffc4e\x2d96cd\x2d44d5\x2d8f74\x2d18d21628d404-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dxtwst.mount: Deactivated successfully. Sep 3 23:26:02.514755 systemd[1]: var-lib-kubelet-pods-7c0ffc4e\x2d96cd\x2d44d5\x2d8f74\x2d18d21628d404-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Sep 3 23:26:02.514888 systemd[1]: var-lib-kubelet-pods-7c0ffc4e\x2d96cd\x2d44d5\x2d8f74\x2d18d21628d404-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. 
Sep 3 23:26:03.255741 sshd[5068]: Connection closed by 139.178.89.65 port 33332 Sep 3 23:26:03.256035 sshd-session[5066]: pam_unix(sshd:session): session closed for user core Sep 3 23:26:03.261659 systemd[1]: sshd@26-172.31.18.182:22-139.178.89.65:33332.service: Deactivated successfully. Sep 3 23:26:03.266608 systemd[1]: session-27.scope: Deactivated successfully. Sep 3 23:26:03.267554 systemd[1]: session-27.scope: Consumed 1.731s CPU time, 23.5M memory peak. Sep 3 23:26:03.273561 systemd-logind[1891]: Session 27 logged out. Waiting for processes to exit. Sep 3 23:26:03.290250 systemd-logind[1891]: Removed session 27. Sep 3 23:26:03.294167 systemd[1]: Started sshd@27-172.31.18.182:22-139.178.89.65:49488.service - OpenSSH per-connection server daemon (139.178.89.65:49488). Sep 3 23:26:03.461404 containerd[1916]: time="2025-09-03T23:26:03.461217061Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4f2f99fe33e8205bc4586970f76c45df7fe94ffb614eb594db575825d574d76e\" id:\"4f2f99fe33e8205bc4586970f76c45df7fe94ffb614eb594db575825d574d76e\" pid:3724 exit_status:137 exited_at:{seconds:1756941961 nanos:571856208}" Sep 3 23:26:03.490656 sshd[5222]: Accepted publickey for core from 139.178.89.65 port 49488 ssh2: RSA SHA256:8eQAyPE99YHHVtDm+V4mP5sHyPbVNBHa6xDGC+ww79Y Sep 3 23:26:03.493267 sshd-session[5222]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 3 23:26:03.501096 systemd-logind[1891]: New session 28 of user core. Sep 3 23:26:03.509964 systemd[1]: Started session-28.scope - Session 28 of User core. Sep 3 23:26:03.751619 kubelet[3503]: I0903 23:26:03.751471 3503 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7c0ffc4e-96cd-44d5-8f74-18d21628d404" path="/var/lib/kubelet/pods/7c0ffc4e-96cd-44d5-8f74-18d21628d404/volumes" Sep 3 23:26:03.754652 kubelet[3503]: I0903 23:26:03.754591 3503 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="985e0fb2-dd67-41eb-a86d-46b6ce869cca" path="/var/lib/kubelet/pods/985e0fb2-dd67-41eb-a86d-46b6ce869cca/volumes" Sep 3 23:26:04.013134 ntpd[1883]: Deleting interface #12 lxc_health, fe80::4084:ceff:fe15:35c4%8#123, interface stats: received=0, sent=0, dropped=0, active_time=84 secs Sep 3 23:26:04.013621 ntpd[1883]: 3 Sep 23:26:04 ntpd[1883]: Deleting interface #12 lxc_health, fe80::4084:ceff:fe15:35c4%8#123, interface stats: received=0, sent=0, dropped=0, active_time=84 secs Sep 3 23:26:04.598847 sshd[5224]: Connection closed by 139.178.89.65 port 49488 Sep 3 23:26:04.600855 sshd-session[5222]: pam_unix(sshd:session): session closed for user core Sep 3 23:26:04.608880 systemd[1]: sshd@27-172.31.18.182:22-139.178.89.65:49488.service: Deactivated successfully. Sep 3 23:26:04.614831 systemd[1]: session-28.scope: Deactivated successfully. Sep 3 23:26:04.621209 systemd-logind[1891]: Session 28 logged out. Waiting for processes to exit. 
Sep 3 23:26:04.632288 kubelet[3503]: E0903 23:26:04.632204 3503 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="7c0ffc4e-96cd-44d5-8f74-18d21628d404" containerName="mount-cgroup" Sep 3 23:26:04.632288 kubelet[3503]: E0903 23:26:04.632277 3503 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="7c0ffc4e-96cd-44d5-8f74-18d21628d404" containerName="apply-sysctl-overwrites" Sep 3 23:26:04.632288 kubelet[3503]: E0903 23:26:04.632297 3503 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="7c0ffc4e-96cd-44d5-8f74-18d21628d404" containerName="mount-bpf-fs" Sep 3 23:26:04.632501 kubelet[3503]: E0903 23:26:04.632313 3503 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="7c0ffc4e-96cd-44d5-8f74-18d21628d404" containerName="clean-cilium-state" Sep 3 23:26:04.632501 kubelet[3503]: E0903 23:26:04.632353 3503 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="985e0fb2-dd67-41eb-a86d-46b6ce869cca" containerName="cilium-operator" Sep 3 23:26:04.632501 kubelet[3503]: E0903 23:26:04.632369 3503 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="7c0ffc4e-96cd-44d5-8f74-18d21628d404" containerName="cilium-agent" Sep 3 23:26:04.632501 kubelet[3503]: I0903 23:26:04.632442 3503 memory_manager.go:354] "RemoveStaleState removing state" podUID="985e0fb2-dd67-41eb-a86d-46b6ce869cca" containerName="cilium-operator" Sep 3 23:26:04.632501 kubelet[3503]: I0903 23:26:04.632460 3503 memory_manager.go:354] "RemoveStaleState removing state" podUID="7c0ffc4e-96cd-44d5-8f74-18d21628d404" containerName="cilium-agent" Sep 3 23:26:04.649763 systemd-logind[1891]: Removed session 28. Sep 3 23:26:04.654284 systemd[1]: Started sshd@28-172.31.18.182:22-139.178.89.65:49500.service - OpenSSH per-connection server daemon (139.178.89.65:49500). Sep 3 23:26:04.672346 systemd[1]: Created slice kubepods-burstable-pod3ab6f87b_41e8_4bb4_8d2b_28ff6babd499.slice - libcontainer container kubepods-burstable-pod3ab6f87b_41e8_4bb4_8d2b_28ff6babd499.slice. 
Sep 3 23:26:04.682581 kubelet[3503]: I0903 23:26:04.681045 3503 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3ab6f87b-41e8-4bb4-8d2b-28ff6babd499-etc-cni-netd\") pod \"cilium-9vkdl\" (UID: \"3ab6f87b-41e8-4bb4-8d2b-28ff6babd499\") " pod="kube-system/cilium-9vkdl" Sep 3 23:26:04.682581 kubelet[3503]: I0903 23:26:04.681117 3503 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/3ab6f87b-41e8-4bb4-8d2b-28ff6babd499-host-proc-sys-net\") pod \"cilium-9vkdl\" (UID: \"3ab6f87b-41e8-4bb4-8d2b-28ff6babd499\") " pod="kube-system/cilium-9vkdl" Sep 3 23:26:04.682581 kubelet[3503]: I0903 23:26:04.681154 3503 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/3ab6f87b-41e8-4bb4-8d2b-28ff6babd499-cni-path\") pod \"cilium-9vkdl\" (UID: \"3ab6f87b-41e8-4bb4-8d2b-28ff6babd499\") " pod="kube-system/cilium-9vkdl" Sep 3 23:26:04.682581 kubelet[3503]: I0903 23:26:04.681188 3503 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/3ab6f87b-41e8-4bb4-8d2b-28ff6babd499-host-proc-sys-kernel\") pod \"cilium-9vkdl\" (UID: \"3ab6f87b-41e8-4bb4-8d2b-28ff6babd499\") " pod="kube-system/cilium-9vkdl" Sep 3 23:26:04.682581 kubelet[3503]: I0903 23:26:04.681224 3503 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/3ab6f87b-41e8-4bb4-8d2b-28ff6babd499-hubble-tls\") pod \"cilium-9vkdl\" (UID: \"3ab6f87b-41e8-4bb4-8d2b-28ff6babd499\") " pod="kube-system/cilium-9vkdl" Sep 3 23:26:04.682581 kubelet[3503]: I0903 23:26:04.681259 3503 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3ab6f87b-41e8-4bb4-8d2b-28ff6babd499-xtables-lock\") pod \"cilium-9vkdl\" (UID: \"3ab6f87b-41e8-4bb4-8d2b-28ff6babd499\") " pod="kube-system/cilium-9vkdl" Sep 3 23:26:04.683007 kubelet[3503]: I0903 23:26:04.681295 3503 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hjq5s\" (UniqueName: \"kubernetes.io/projected/3ab6f87b-41e8-4bb4-8d2b-28ff6babd499-kube-api-access-hjq5s\") pod \"cilium-9vkdl\" (UID: \"3ab6f87b-41e8-4bb4-8d2b-28ff6babd499\") " pod="kube-system/cilium-9vkdl" Sep 3 23:26:04.683007 kubelet[3503]: I0903 23:26:04.681334 3503 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/3ab6f87b-41e8-4bb4-8d2b-28ff6babd499-cilium-run\") pod \"cilium-9vkdl\" (UID: \"3ab6f87b-41e8-4bb4-8d2b-28ff6babd499\") " pod="kube-system/cilium-9vkdl" Sep 3 23:26:04.683007 kubelet[3503]: I0903 23:26:04.681372 3503 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/3ab6f87b-41e8-4bb4-8d2b-28ff6babd499-hostproc\") pod \"cilium-9vkdl\" (UID: \"3ab6f87b-41e8-4bb4-8d2b-28ff6babd499\") " pod="kube-system/cilium-9vkdl" Sep 3 23:26:04.683007 kubelet[3503]: I0903 23:26:04.681404 3503 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: 
\"kubernetes.io/secret/3ab6f87b-41e8-4bb4-8d2b-28ff6babd499-cilium-ipsec-secrets\") pod \"cilium-9vkdl\" (UID: \"3ab6f87b-41e8-4bb4-8d2b-28ff6babd499\") " pod="kube-system/cilium-9vkdl" Sep 3 23:26:04.683007 kubelet[3503]: I0903 23:26:04.681441 3503 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/3ab6f87b-41e8-4bb4-8d2b-28ff6babd499-bpf-maps\") pod \"cilium-9vkdl\" (UID: \"3ab6f87b-41e8-4bb4-8d2b-28ff6babd499\") " pod="kube-system/cilium-9vkdl" Sep 3 23:26:04.683007 kubelet[3503]: I0903 23:26:04.681477 3503 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/3ab6f87b-41e8-4bb4-8d2b-28ff6babd499-cilium-cgroup\") pod \"cilium-9vkdl\" (UID: \"3ab6f87b-41e8-4bb4-8d2b-28ff6babd499\") " pod="kube-system/cilium-9vkdl" Sep 3 23:26:04.683315 kubelet[3503]: I0903 23:26:04.681513 3503 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3ab6f87b-41e8-4bb4-8d2b-28ff6babd499-lib-modules\") pod \"cilium-9vkdl\" (UID: \"3ab6f87b-41e8-4bb4-8d2b-28ff6babd499\") " pod="kube-system/cilium-9vkdl" Sep 3 23:26:04.683315 kubelet[3503]: I0903 23:26:04.681551 3503 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/3ab6f87b-41e8-4bb4-8d2b-28ff6babd499-clustermesh-secrets\") pod \"cilium-9vkdl\" (UID: \"3ab6f87b-41e8-4bb4-8d2b-28ff6babd499\") " pod="kube-system/cilium-9vkdl" Sep 3 23:26:04.683315 kubelet[3503]: I0903 23:26:04.681589 3503 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3ab6f87b-41e8-4bb4-8d2b-28ff6babd499-cilium-config-path\") pod \"cilium-9vkdl\" (UID: \"3ab6f87b-41e8-4bb4-8d2b-28ff6babd499\") " pod="kube-system/cilium-9vkdl" Sep 3 23:26:04.944771 sshd[5234]: Accepted publickey for core from 139.178.89.65 port 49500 ssh2: RSA SHA256:8eQAyPE99YHHVtDm+V4mP5sHyPbVNBHa6xDGC+ww79Y Sep 3 23:26:04.947513 sshd-session[5234]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 3 23:26:04.957884 systemd-logind[1891]: New session 29 of user core. Sep 3 23:26:04.966106 systemd[1]: Started session-29.scope - Session 29 of User core. Sep 3 23:26:04.992569 containerd[1916]: time="2025-09-03T23:26:04.992499101Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-9vkdl,Uid:3ab6f87b-41e8-4bb4-8d2b-28ff6babd499,Namespace:kube-system,Attempt:0,}" Sep 3 23:26:05.034025 containerd[1916]: time="2025-09-03T23:26:05.033958921Z" level=info msg="connecting to shim 765dd86f9029507efce1f19f6b7f7612bfcd2b20952708e83141f245f342bec5" address="unix:///run/containerd/s/268cc66ca39773d449457cac53a81499ebfd5941fe89071a6993457059687d5d" namespace=k8s.io protocol=ttrpc version=3 Sep 3 23:26:05.090377 sshd[5240]: Connection closed by 139.178.89.65 port 49500 Sep 3 23:26:05.090440 sshd-session[5234]: pam_unix(sshd:session): session closed for user core Sep 3 23:26:05.092199 systemd[1]: Started cri-containerd-765dd86f9029507efce1f19f6b7f7612bfcd2b20952708e83141f245f342bec5.scope - libcontainer container 765dd86f9029507efce1f19f6b7f7612bfcd2b20952708e83141f245f342bec5. Sep 3 23:26:05.106445 systemd[1]: sshd@28-172.31.18.182:22-139.178.89.65:49500.service: Deactivated successfully. 
Sep 3 23:26:05.118664 systemd[1]: session-29.scope: Deactivated successfully. Sep 3 23:26:05.122206 systemd-logind[1891]: Session 29 logged out. Waiting for processes to exit. Sep 3 23:26:05.150262 systemd[1]: Started sshd@29-172.31.18.182:22-139.178.89.65:49516.service - OpenSSH per-connection server daemon (139.178.89.65:49516). Sep 3 23:26:05.154360 systemd-logind[1891]: Removed session 29. Sep 3 23:26:05.214457 containerd[1916]: time="2025-09-03T23:26:05.213529862Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-9vkdl,Uid:3ab6f87b-41e8-4bb4-8d2b-28ff6babd499,Namespace:kube-system,Attempt:0,} returns sandbox id \"765dd86f9029507efce1f19f6b7f7612bfcd2b20952708e83141f245f342bec5\"" Sep 3 23:26:05.221300 containerd[1916]: time="2025-09-03T23:26:05.221237546Z" level=info msg="CreateContainer within sandbox \"765dd86f9029507efce1f19f6b7f7612bfcd2b20952708e83141f245f342bec5\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 3 23:26:05.235018 containerd[1916]: time="2025-09-03T23:26:05.234914198Z" level=info msg="Container fa17040540405d6964457051dddd425e3cb62eadc38a251bed3aa12424b22df5: CDI devices from CRI Config.CDIDevices: []" Sep 3 23:26:05.249954 containerd[1916]: time="2025-09-03T23:26:05.249870242Z" level=info msg="CreateContainer within sandbox \"765dd86f9029507efce1f19f6b7f7612bfcd2b20952708e83141f245f342bec5\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"fa17040540405d6964457051dddd425e3cb62eadc38a251bed3aa12424b22df5\"" Sep 3 23:26:05.252353 containerd[1916]: time="2025-09-03T23:26:05.251180618Z" level=info msg="StartContainer for \"fa17040540405d6964457051dddd425e3cb62eadc38a251bed3aa12424b22df5\"" Sep 3 23:26:05.255928 containerd[1916]: time="2025-09-03T23:26:05.255464402Z" level=info msg="connecting to shim fa17040540405d6964457051dddd425e3cb62eadc38a251bed3aa12424b22df5" address="unix:///run/containerd/s/268cc66ca39773d449457cac53a81499ebfd5941fe89071a6993457059687d5d" protocol=ttrpc version=3 Sep 3 23:26:05.302054 systemd[1]: Started cri-containerd-fa17040540405d6964457051dddd425e3cb62eadc38a251bed3aa12424b22df5.scope - libcontainer container fa17040540405d6964457051dddd425e3cb62eadc38a251bed3aa12424b22df5. Sep 3 23:26:05.370905 sshd[5287]: Accepted publickey for core from 139.178.89.65 port 49516 ssh2: RSA SHA256:8eQAyPE99YHHVtDm+V4mP5sHyPbVNBHa6xDGC+ww79Y Sep 3 23:26:05.376058 sshd-session[5287]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 3 23:26:05.388174 systemd-logind[1891]: New session 30 of user core. Sep 3 23:26:05.395071 systemd[1]: Started session-30.scope - Session 30 of User core. Sep 3 23:26:05.410053 containerd[1916]: time="2025-09-03T23:26:05.409986927Z" level=info msg="StartContainer for \"fa17040540405d6964457051dddd425e3cb62eadc38a251bed3aa12424b22df5\" returns successfully" Sep 3 23:26:05.428411 systemd[1]: cri-containerd-fa17040540405d6964457051dddd425e3cb62eadc38a251bed3aa12424b22df5.scope: Deactivated successfully. 
Sep 3 23:26:05.435707 containerd[1916]: time="2025-09-03T23:26:05.435615903Z" level=info msg="received exit event container_id:\"fa17040540405d6964457051dddd425e3cb62eadc38a251bed3aa12424b22df5\" id:\"fa17040540405d6964457051dddd425e3cb62eadc38a251bed3aa12424b22df5\" pid:5308 exited_at:{seconds:1756941965 nanos:435162495}" Sep 3 23:26:05.436455 containerd[1916]: time="2025-09-03T23:26:05.436391751Z" level=info msg="TaskExit event in podsandbox handler container_id:\"fa17040540405d6964457051dddd425e3cb62eadc38a251bed3aa12424b22df5\" id:\"fa17040540405d6964457051dddd425e3cb62eadc38a251bed3aa12424b22df5\" pid:5308 exited_at:{seconds:1756941965 nanos:435162495}" Sep 3 23:26:05.718881 containerd[1916]: time="2025-09-03T23:26:05.718811981Z" level=info msg="StopPodSandbox for \"379b9933e5f1ccc6df9c7efe26adf4baed019a196213153ac9c9fd9dcbefb1bf\"" Sep 3 23:26:05.719599 containerd[1916]: time="2025-09-03T23:26:05.719552477Z" level=info msg="TearDown network for sandbox \"379b9933e5f1ccc6df9c7efe26adf4baed019a196213153ac9c9fd9dcbefb1bf\" successfully" Sep 3 23:26:05.719787 containerd[1916]: time="2025-09-03T23:26:05.719596913Z" level=info msg="StopPodSandbox for \"379b9933e5f1ccc6df9c7efe26adf4baed019a196213153ac9c9fd9dcbefb1bf\" returns successfully" Sep 3 23:26:05.720666 containerd[1916]: time="2025-09-03T23:26:05.720609161Z" level=info msg="RemovePodSandbox for \"379b9933e5f1ccc6df9c7efe26adf4baed019a196213153ac9c9fd9dcbefb1bf\"" Sep 3 23:26:05.721120 containerd[1916]: time="2025-09-03T23:26:05.720673445Z" level=info msg="Forcibly stopping sandbox \"379b9933e5f1ccc6df9c7efe26adf4baed019a196213153ac9c9fd9dcbefb1bf\"" Sep 3 23:26:05.721386 containerd[1916]: time="2025-09-03T23:26:05.721239605Z" level=info msg="TearDown network for sandbox \"379b9933e5f1ccc6df9c7efe26adf4baed019a196213153ac9c9fd9dcbefb1bf\" successfully" Sep 3 23:26:05.725305 containerd[1916]: time="2025-09-03T23:26:05.724980353Z" level=info msg="Ensure that sandbox 379b9933e5f1ccc6df9c7efe26adf4baed019a196213153ac9c9fd9dcbefb1bf in task-service has been cleanup successfully" Sep 3 23:26:05.731476 containerd[1916]: time="2025-09-03T23:26:05.731425817Z" level=info msg="RemovePodSandbox \"379b9933e5f1ccc6df9c7efe26adf4baed019a196213153ac9c9fd9dcbefb1bf\" returns successfully" Sep 3 23:26:05.732958 containerd[1916]: time="2025-09-03T23:26:05.732895613Z" level=info msg="StopPodSandbox for \"4f2f99fe33e8205bc4586970f76c45df7fe94ffb614eb594db575825d574d76e\"" Sep 3 23:26:05.733422 containerd[1916]: time="2025-09-03T23:26:05.733244261Z" level=info msg="TearDown network for sandbox \"4f2f99fe33e8205bc4586970f76c45df7fe94ffb614eb594db575825d574d76e\" successfully" Sep 3 23:26:05.733422 containerd[1916]: time="2025-09-03T23:26:05.733273433Z" level=info msg="StopPodSandbox for \"4f2f99fe33e8205bc4586970f76c45df7fe94ffb614eb594db575825d574d76e\" returns successfully" Sep 3 23:26:05.734625 containerd[1916]: time="2025-09-03T23:26:05.734487257Z" level=info msg="RemovePodSandbox for \"4f2f99fe33e8205bc4586970f76c45df7fe94ffb614eb594db575825d574d76e\"" Sep 3 23:26:05.735129 containerd[1916]: time="2025-09-03T23:26:05.734856005Z" level=info msg="Forcibly stopping sandbox \"4f2f99fe33e8205bc4586970f76c45df7fe94ffb614eb594db575825d574d76e\"" Sep 3 23:26:05.735129 containerd[1916]: time="2025-09-03T23:26:05.735008657Z" level=info msg="TearDown network for sandbox \"4f2f99fe33e8205bc4586970f76c45df7fe94ffb614eb594db575825d574d76e\" successfully" Sep 3 23:26:05.737244 containerd[1916]: time="2025-09-03T23:26:05.737198201Z" level=info msg="Ensure that 
sandbox 4f2f99fe33e8205bc4586970f76c45df7fe94ffb614eb594db575825d574d76e in task-service has been cleanup successfully" Sep 3 23:26:05.743909 containerd[1916]: time="2025-09-03T23:26:05.743789549Z" level=info msg="RemovePodSandbox \"4f2f99fe33e8205bc4586970f76c45df7fe94ffb614eb594db575825d574d76e\" returns successfully" Sep 3 23:26:05.993534 kubelet[3503]: E0903 23:26:05.993156 3503 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Sep 3 23:26:06.343604 containerd[1916]: time="2025-09-03T23:26:06.343445356Z" level=info msg="CreateContainer within sandbox \"765dd86f9029507efce1f19f6b7f7612bfcd2b20952708e83141f245f342bec5\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Sep 3 23:26:06.369726 containerd[1916]: time="2025-09-03T23:26:06.367323496Z" level=info msg="Container 07726fdc1f3250cdb911f3f114942f0dd44bc2f2f1cb564663cacf29bf248027: CDI devices from CRI Config.CDIDevices: []" Sep 3 23:26:06.389418 containerd[1916]: time="2025-09-03T23:26:06.389314264Z" level=info msg="CreateContainer within sandbox \"765dd86f9029507efce1f19f6b7f7612bfcd2b20952708e83141f245f342bec5\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"07726fdc1f3250cdb911f3f114942f0dd44bc2f2f1cb564663cacf29bf248027\"" Sep 3 23:26:06.391159 containerd[1916]: time="2025-09-03T23:26:06.391086520Z" level=info msg="StartContainer for \"07726fdc1f3250cdb911f3f114942f0dd44bc2f2f1cb564663cacf29bf248027\"" Sep 3 23:26:06.396849 containerd[1916]: time="2025-09-03T23:26:06.396794392Z" level=info msg="connecting to shim 07726fdc1f3250cdb911f3f114942f0dd44bc2f2f1cb564663cacf29bf248027" address="unix:///run/containerd/s/268cc66ca39773d449457cac53a81499ebfd5941fe89071a6993457059687d5d" protocol=ttrpc version=3 Sep 3 23:26:06.450039 systemd[1]: Started cri-containerd-07726fdc1f3250cdb911f3f114942f0dd44bc2f2f1cb564663cacf29bf248027.scope - libcontainer container 07726fdc1f3250cdb911f3f114942f0dd44bc2f2f1cb564663cacf29bf248027. Sep 3 23:26:06.518919 containerd[1916]: time="2025-09-03T23:26:06.518833973Z" level=info msg="StartContainer for \"07726fdc1f3250cdb911f3f114942f0dd44bc2f2f1cb564663cacf29bf248027\" returns successfully" Sep 3 23:26:06.533138 systemd[1]: cri-containerd-07726fdc1f3250cdb911f3f114942f0dd44bc2f2f1cb564663cacf29bf248027.scope: Deactivated successfully. Sep 3 23:26:06.538527 containerd[1916]: time="2025-09-03T23:26:06.538308353Z" level=info msg="received exit event container_id:\"07726fdc1f3250cdb911f3f114942f0dd44bc2f2f1cb564663cacf29bf248027\" id:\"07726fdc1f3250cdb911f3f114942f0dd44bc2f2f1cb564663cacf29bf248027\" pid:5361 exited_at:{seconds:1756941966 nanos:537350453}" Sep 3 23:26:06.540922 containerd[1916]: time="2025-09-03T23:26:06.540340481Z" level=info msg="TaskExit event in podsandbox handler container_id:\"07726fdc1f3250cdb911f3f114942f0dd44bc2f2f1cb564663cacf29bf248027\" id:\"07726fdc1f3250cdb911f3f114942f0dd44bc2f2f1cb564663cacf29bf248027\" pid:5361 exited_at:{seconds:1756941966 nanos:537350453}" Sep 3 23:26:06.578380 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-07726fdc1f3250cdb911f3f114942f0dd44bc2f2f1cb564663cacf29bf248027-rootfs.mount: Deactivated successfully. 
Sep 3 23:26:07.352155 containerd[1916]: time="2025-09-03T23:26:07.352067249Z" level=info msg="CreateContainer within sandbox \"765dd86f9029507efce1f19f6b7f7612bfcd2b20952708e83141f245f342bec5\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Sep 3 23:26:07.378987 containerd[1916]: time="2025-09-03T23:26:07.378906761Z" level=info msg="Container c43d8b0c888506b900d9c2d9b5a8b6ac2c515de13429a103bedd5a180fe426ae: CDI devices from CRI Config.CDIDevices: []" Sep 3 23:26:07.399736 containerd[1916]: time="2025-09-03T23:26:07.399625325Z" level=info msg="CreateContainer within sandbox \"765dd86f9029507efce1f19f6b7f7612bfcd2b20952708e83141f245f342bec5\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"c43d8b0c888506b900d9c2d9b5a8b6ac2c515de13429a103bedd5a180fe426ae\"" Sep 3 23:26:07.400906 containerd[1916]: time="2025-09-03T23:26:07.400830665Z" level=info msg="StartContainer for \"c43d8b0c888506b900d9c2d9b5a8b6ac2c515de13429a103bedd5a180fe426ae\"" Sep 3 23:26:07.409370 containerd[1916]: time="2025-09-03T23:26:07.409194941Z" level=info msg="connecting to shim c43d8b0c888506b900d9c2d9b5a8b6ac2c515de13429a103bedd5a180fe426ae" address="unix:///run/containerd/s/268cc66ca39773d449457cac53a81499ebfd5941fe89071a6993457059687d5d" protocol=ttrpc version=3 Sep 3 23:26:07.453055 systemd[1]: Started cri-containerd-c43d8b0c888506b900d9c2d9b5a8b6ac2c515de13429a103bedd5a180fe426ae.scope - libcontainer container c43d8b0c888506b900d9c2d9b5a8b6ac2c515de13429a103bedd5a180fe426ae. Sep 3 23:26:07.549075 systemd[1]: cri-containerd-c43d8b0c888506b900d9c2d9b5a8b6ac2c515de13429a103bedd5a180fe426ae.scope: Deactivated successfully. Sep 3 23:26:07.553241 containerd[1916]: time="2025-09-03T23:26:07.553112142Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c43d8b0c888506b900d9c2d9b5a8b6ac2c515de13429a103bedd5a180fe426ae\" id:\"c43d8b0c888506b900d9c2d9b5a8b6ac2c515de13429a103bedd5a180fe426ae\" pid:5409 exited_at:{seconds:1756941967 nanos:551608578}" Sep 3 23:26:07.554013 containerd[1916]: time="2025-09-03T23:26:07.553869354Z" level=info msg="received exit event container_id:\"c43d8b0c888506b900d9c2d9b5a8b6ac2c515de13429a103bedd5a180fe426ae\" id:\"c43d8b0c888506b900d9c2d9b5a8b6ac2c515de13429a103bedd5a180fe426ae\" pid:5409 exited_at:{seconds:1756941967 nanos:551608578}" Sep 3 23:26:07.556428 containerd[1916]: time="2025-09-03T23:26:07.556353834Z" level=info msg="StartContainer for \"c43d8b0c888506b900d9c2d9b5a8b6ac2c515de13429a103bedd5a180fe426ae\" returns successfully" Sep 3 23:26:07.619446 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c43d8b0c888506b900d9c2d9b5a8b6ac2c515de13429a103bedd5a180fe426ae-rootfs.mount: Deactivated successfully. 
Sep 3 23:26:08.162902 kubelet[3503]: I0903 23:26:08.162831 3503 setters.go:600] "Node became not ready" node="ip-172-31-18-182" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-03T23:26:08Z","lastTransitionTime":"2025-09-03T23:26:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Sep 3 23:26:08.360319 containerd[1916]: time="2025-09-03T23:26:08.360222690Z" level=info msg="CreateContainer within sandbox \"765dd86f9029507efce1f19f6b7f7612bfcd2b20952708e83141f245f342bec5\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Sep 3 23:26:08.391733 containerd[1916]: time="2025-09-03T23:26:08.390648354Z" level=info msg="Container 6aebcb255b9cf48d47af9ff3c3ce432a8ea00716fd5bef0c34c2ea3fc09706dc: CDI devices from CRI Config.CDIDevices: []" Sep 3 23:26:08.397797 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2707682837.mount: Deactivated successfully. Sep 3 23:26:08.416839 containerd[1916]: time="2025-09-03T23:26:08.416447214Z" level=info msg="CreateContainer within sandbox \"765dd86f9029507efce1f19f6b7f7612bfcd2b20952708e83141f245f342bec5\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"6aebcb255b9cf48d47af9ff3c3ce432a8ea00716fd5bef0c34c2ea3fc09706dc\"" Sep 3 23:26:08.420164 containerd[1916]: time="2025-09-03T23:26:08.420112830Z" level=info msg="StartContainer for \"6aebcb255b9cf48d47af9ff3c3ce432a8ea00716fd5bef0c34c2ea3fc09706dc\"" Sep 3 23:26:08.422432 containerd[1916]: time="2025-09-03T23:26:08.422260242Z" level=info msg="connecting to shim 6aebcb255b9cf48d47af9ff3c3ce432a8ea00716fd5bef0c34c2ea3fc09706dc" address="unix:///run/containerd/s/268cc66ca39773d449457cac53a81499ebfd5941fe89071a6993457059687d5d" protocol=ttrpc version=3 Sep 3 23:26:08.462003 systemd[1]: Started cri-containerd-6aebcb255b9cf48d47af9ff3c3ce432a8ea00716fd5bef0c34c2ea3fc09706dc.scope - libcontainer container 6aebcb255b9cf48d47af9ff3c3ce432a8ea00716fd5bef0c34c2ea3fc09706dc. Sep 3 23:26:08.512130 systemd[1]: cri-containerd-6aebcb255b9cf48d47af9ff3c3ce432a8ea00716fd5bef0c34c2ea3fc09706dc.scope: Deactivated successfully. Sep 3 23:26:08.516387 containerd[1916]: time="2025-09-03T23:26:08.516300859Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6aebcb255b9cf48d47af9ff3c3ce432a8ea00716fd5bef0c34c2ea3fc09706dc\" id:\"6aebcb255b9cf48d47af9ff3c3ce432a8ea00716fd5bef0c34c2ea3fc09706dc\" pid:5451 exited_at:{seconds:1756941968 nanos:515384851}" Sep 3 23:26:08.522141 containerd[1916]: time="2025-09-03T23:26:08.521958151Z" level=info msg="received exit event container_id:\"6aebcb255b9cf48d47af9ff3c3ce432a8ea00716fd5bef0c34c2ea3fc09706dc\" id:\"6aebcb255b9cf48d47af9ff3c3ce432a8ea00716fd5bef0c34c2ea3fc09706dc\" pid:5451 exited_at:{seconds:1756941968 nanos:515384851}" Sep 3 23:26:08.537459 containerd[1916]: time="2025-09-03T23:26:08.537381463Z" level=info msg="StartContainer for \"6aebcb255b9cf48d47af9ff3c3ce432a8ea00716fd5bef0c34c2ea3fc09706dc\" returns successfully" Sep 3 23:26:08.563997 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6aebcb255b9cf48d47af9ff3c3ce432a8ea00716fd5bef0c34c2ea3fc09706dc-rootfs.mount: Deactivated successfully. 
Sep 3 23:26:09.375226 containerd[1916]: time="2025-09-03T23:26:09.375151495Z" level=info msg="CreateContainer within sandbox \"765dd86f9029507efce1f19f6b7f7612bfcd2b20952708e83141f245f342bec5\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Sep 3 23:26:09.403088 containerd[1916]: time="2025-09-03T23:26:09.403009663Z" level=info msg="Container c7a11b42178de1b6c5d0113f1e1c5559923e57e16b2c231b161678acdd040c51: CDI devices from CRI Config.CDIDevices: []" Sep 3 23:26:09.430116 containerd[1916]: time="2025-09-03T23:26:09.430052059Z" level=info msg="CreateContainer within sandbox \"765dd86f9029507efce1f19f6b7f7612bfcd2b20952708e83141f245f342bec5\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"c7a11b42178de1b6c5d0113f1e1c5559923e57e16b2c231b161678acdd040c51\"" Sep 3 23:26:09.432971 containerd[1916]: time="2025-09-03T23:26:09.432920179Z" level=info msg="StartContainer for \"c7a11b42178de1b6c5d0113f1e1c5559923e57e16b2c231b161678acdd040c51\"" Sep 3 23:26:09.435732 containerd[1916]: time="2025-09-03T23:26:09.435488203Z" level=info msg="connecting to shim c7a11b42178de1b6c5d0113f1e1c5559923e57e16b2c231b161678acdd040c51" address="unix:///run/containerd/s/268cc66ca39773d449457cac53a81499ebfd5941fe89071a6993457059687d5d" protocol=ttrpc version=3 Sep 3 23:26:09.477031 systemd[1]: Started cri-containerd-c7a11b42178de1b6c5d0113f1e1c5559923e57e16b2c231b161678acdd040c51.scope - libcontainer container c7a11b42178de1b6c5d0113f1e1c5559923e57e16b2c231b161678acdd040c51. Sep 3 23:26:09.555052 containerd[1916]: time="2025-09-03T23:26:09.554861696Z" level=info msg="StartContainer for \"c7a11b42178de1b6c5d0113f1e1c5559923e57e16b2c231b161678acdd040c51\" returns successfully" Sep 3 23:26:09.690829 containerd[1916]: time="2025-09-03T23:26:09.690213212Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c7a11b42178de1b6c5d0113f1e1c5559923e57e16b2c231b161678acdd040c51\" id:\"c5cc149d949060412844c5475b169f5dfafdb76ae374ba9616d1fbe1e1acc495\" pid:5519 exited_at:{seconds:1756941969 nanos:688634936}" Sep 3 23:26:10.474768 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce)) Sep 3 23:26:12.117309 containerd[1916]: time="2025-09-03T23:26:12.117242504Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c7a11b42178de1b6c5d0113f1e1c5559923e57e16b2c231b161678acdd040c51\" id:\"95bd5a9badabf47f511fd39b550016da74c7737060d441635c8994c22561d565\" pid:5599 exit_status:1 exited_at:{seconds:1756941972 nanos:113673044}" Sep 3 23:26:14.392375 containerd[1916]: time="2025-09-03T23:26:14.392222676Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c7a11b42178de1b6c5d0113f1e1c5559923e57e16b2c231b161678acdd040c51\" id:\"f1e161f0c478b5aeb0be3dab6af1d8eea6cbebcc280845c931fac4c02ba2eaff\" pid:5900 exit_status:1 exited_at:{seconds:1756941974 nanos:391415112}" Sep 3 23:26:15.077250 (udev-worker)[6027]: Network interface NamePolicy= disabled on kernel command line. 
Sep 3 23:26:15.079934 systemd-networkd[1749]: lxc_health: Link UP Sep 3 23:26:15.171562 systemd-networkd[1749]: lxc_health: Gained carrier Sep 3 23:26:16.803230 containerd[1916]: time="2025-09-03T23:26:16.803120860Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c7a11b42178de1b6c5d0113f1e1c5559923e57e16b2c231b161678acdd040c51\" id:\"5e0c2de04e527d547a6730a58623a2389ecaeca811200297246539d0df9b62f3\" pid:6064 exited_at:{seconds:1756941976 nanos:801089872}" Sep 3 23:26:17.051079 kubelet[3503]: I0903 23:26:17.050964 3503 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-9vkdl" podStartSLOduration=13.050940313 podStartE2EDuration="13.050940313s" podCreationTimestamp="2025-09-03 23:26:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-03 23:26:10.421235036 +0000 UTC m=+125.007173362" watchObservedRunningTime="2025-09-03 23:26:17.050940313 +0000 UTC m=+131.636878615" Sep 3 23:26:17.189766 systemd-networkd[1749]: lxc_health: Gained IPv6LL Sep 3 23:26:19.146345 containerd[1916]: time="2025-09-03T23:26:19.146276715Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c7a11b42178de1b6c5d0113f1e1c5559923e57e16b2c231b161678acdd040c51\" id:\"a535f30ca074f3bfe255392a23dd18624ad284e841d3910598358c973fa92792\" pid:6095 exited_at:{seconds:1756941979 nanos:145535235}" Sep 3 23:26:20.013224 ntpd[1883]: Listen normally on 15 lxc_health [fe80::246a:3fff:feaa:be6f%14]:123 Sep 3 23:26:20.014421 ntpd[1883]: 3 Sep 23:26:20 ntpd[1883]: Listen normally on 15 lxc_health [fe80::246a:3fff:feaa:be6f%14]:123 Sep 3 23:26:21.518595 containerd[1916]: time="2025-09-03T23:26:21.518531851Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c7a11b42178de1b6c5d0113f1e1c5559923e57e16b2c231b161678acdd040c51\" id:\"8327ec3863beaafd6265a34ba8ec454bea59f322069eec5a86d6dfc42a7fa50d\" pid:6116 exited_at:{seconds:1756941981 nanos:517625155}" Sep 3 23:26:21.556739 sshd[5324]: Connection closed by 139.178.89.65 port 49516 Sep 3 23:26:21.555993 sshd-session[5287]: pam_unix(sshd:session): session closed for user core Sep 3 23:26:21.566275 systemd[1]: sshd@29-172.31.18.182:22-139.178.89.65:49516.service: Deactivated successfully. Sep 3 23:26:21.575572 systemd[1]: session-30.scope: Deactivated successfully. Sep 3 23:26:21.578836 systemd-logind[1891]: Session 30 logged out. Waiting for processes to exit. Sep 3 23:26:21.585344 systemd-logind[1891]: Removed session 30. Sep 3 23:26:35.707311 systemd[1]: cri-containerd-f4dede76b5096a8373d1416b8663aadc2c15ef937777e729fc9630e93b80392b.scope: Deactivated successfully. Sep 3 23:26:35.708739 systemd[1]: cri-containerd-f4dede76b5096a8373d1416b8663aadc2c15ef937777e729fc9630e93b80392b.scope: Consumed 3.683s CPU time, 53.8M memory peak. 
Sep 3 23:26:35.714387 containerd[1916]: time="2025-09-03T23:26:35.714286090Z" level=info msg="received exit event container_id:\"f4dede76b5096a8373d1416b8663aadc2c15ef937777e729fc9630e93b80392b\" id:\"f4dede76b5096a8373d1416b8663aadc2c15ef937777e729fc9630e93b80392b\" pid:3055 exit_status:1 exited_at:{seconds:1756941995 nanos:713831110}" Sep 3 23:26:35.715503 containerd[1916]: time="2025-09-03T23:26:35.715355734Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f4dede76b5096a8373d1416b8663aadc2c15ef937777e729fc9630e93b80392b\" id:\"f4dede76b5096a8373d1416b8663aadc2c15ef937777e729fc9630e93b80392b\" pid:3055 exit_status:1 exited_at:{seconds:1756941995 nanos:713831110}" Sep 3 23:26:35.761990 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f4dede76b5096a8373d1416b8663aadc2c15ef937777e729fc9630e93b80392b-rootfs.mount: Deactivated successfully. Sep 3 23:26:36.485364 kubelet[3503]: I0903 23:26:36.485282 3503 scope.go:117] "RemoveContainer" containerID="f4dede76b5096a8373d1416b8663aadc2c15ef937777e729fc9630e93b80392b" Sep 3 23:26:36.488897 containerd[1916]: time="2025-09-03T23:26:36.488756913Z" level=info msg="CreateContainer within sandbox \"b8a1539191c4cc58e6ab5bf7844ef9f469ff6457088cd161a08ee0e24de8f39d\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Sep 3 23:26:36.505237 containerd[1916]: time="2025-09-03T23:26:36.505164382Z" level=info msg="Container 5cb9700962f0b5393c532651880b846e5f2223bfff9044f4c7eaffa2e3317c0a: CDI devices from CRI Config.CDIDevices: []" Sep 3 23:26:36.523063 containerd[1916]: time="2025-09-03T23:26:36.522915850Z" level=info msg="CreateContainer within sandbox \"b8a1539191c4cc58e6ab5bf7844ef9f469ff6457088cd161a08ee0e24de8f39d\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"5cb9700962f0b5393c532651880b846e5f2223bfff9044f4c7eaffa2e3317c0a\"" Sep 3 23:26:36.524327 containerd[1916]: time="2025-09-03T23:26:36.524160130Z" level=info msg="StartContainer for \"5cb9700962f0b5393c532651880b846e5f2223bfff9044f4c7eaffa2e3317c0a\"" Sep 3 23:26:36.526321 containerd[1916]: time="2025-09-03T23:26:36.526250146Z" level=info msg="connecting to shim 5cb9700962f0b5393c532651880b846e5f2223bfff9044f4c7eaffa2e3317c0a" address="unix:///run/containerd/s/d5cba1dc243dcdf69af0d63c63908288390cca07bc11b489fb6e4f7e2cddff71" protocol=ttrpc version=3 Sep 3 23:26:36.565977 systemd[1]: Started cri-containerd-5cb9700962f0b5393c532651880b846e5f2223bfff9044f4c7eaffa2e3317c0a.scope - libcontainer container 5cb9700962f0b5393c532651880b846e5f2223bfff9044f4c7eaffa2e3317c0a. Sep 3 23:26:36.652068 containerd[1916]: time="2025-09-03T23:26:36.651917614Z" level=info msg="StartContainer for \"5cb9700962f0b5393c532651880b846e5f2223bfff9044f4c7eaffa2e3317c0a\" returns successfully" Sep 3 23:26:39.274080 kubelet[3503]: E0903 23:26:39.272951 3503 controller.go:195] "Failed to update lease" err="Put \"https://172.31.18.182:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-18-182?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Sep 3 23:26:41.959030 systemd[1]: cri-containerd-cbe3573b6da1e02d648a60852292dd689c87f3e18f0261f454bc86e85fb5ae66.scope: Deactivated successfully. Sep 3 23:26:41.959725 systemd[1]: cri-containerd-cbe3573b6da1e02d648a60852292dd689c87f3e18f0261f454bc86e85fb5ae66.scope: Consumed 2.986s CPU time, 20.6M memory peak. 
Sep 3 23:26:41.967214 containerd[1916]: time="2025-09-03T23:26:41.967094549Z" level=info msg="received exit event container_id:\"cbe3573b6da1e02d648a60852292dd689c87f3e18f0261f454bc86e85fb5ae66\" id:\"cbe3573b6da1e02d648a60852292dd689c87f3e18f0261f454bc86e85fb5ae66\" pid:3083 exit_status:1 exited_at:{seconds:1756942001 nanos:966444641}" Sep 3 23:26:41.968139 containerd[1916]: time="2025-09-03T23:26:41.967914953Z" level=info msg="TaskExit event in podsandbox handler container_id:\"cbe3573b6da1e02d648a60852292dd689c87f3e18f0261f454bc86e85fb5ae66\" id:\"cbe3573b6da1e02d648a60852292dd689c87f3e18f0261f454bc86e85fb5ae66\" pid:3083 exit_status:1 exited_at:{seconds:1756942001 nanos:966444641}" Sep 3 23:26:42.006810 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cbe3573b6da1e02d648a60852292dd689c87f3e18f0261f454bc86e85fb5ae66-rootfs.mount: Deactivated successfully. Sep 3 23:26:42.514203 kubelet[3503]: I0903 23:26:42.513428 3503 scope.go:117] "RemoveContainer" containerID="cbe3573b6da1e02d648a60852292dd689c87f3e18f0261f454bc86e85fb5ae66" Sep 3 23:26:42.516750 containerd[1916]: time="2025-09-03T23:26:42.516260043Z" level=info msg="CreateContainer within sandbox \"66902b48468792864fef6f0cd8583f0acb192cc3b7fb2c5ab14fcaca2f3cd0b2\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}" Sep 3 23:26:42.540837 containerd[1916]: time="2025-09-03T23:26:42.539030068Z" level=info msg="Container 13419a1f1386c41e758dc1b8ff4124df2b4dfbb31c9bd661ede476fe8208c5ce: CDI devices from CRI Config.CDIDevices: []" Sep 3 23:26:42.554532 containerd[1916]: time="2025-09-03T23:26:42.554477644Z" level=info msg="CreateContainer within sandbox \"66902b48468792864fef6f0cd8583f0acb192cc3b7fb2c5ab14fcaca2f3cd0b2\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"13419a1f1386c41e758dc1b8ff4124df2b4dfbb31c9bd661ede476fe8208c5ce\"" Sep 3 23:26:42.555836 containerd[1916]: time="2025-09-03T23:26:42.555797836Z" level=info msg="StartContainer for \"13419a1f1386c41e758dc1b8ff4124df2b4dfbb31c9bd661ede476fe8208c5ce\"" Sep 3 23:26:42.558254 containerd[1916]: time="2025-09-03T23:26:42.558203392Z" level=info msg="connecting to shim 13419a1f1386c41e758dc1b8ff4124df2b4dfbb31c9bd661ede476fe8208c5ce" address="unix:///run/containerd/s/bf4ac10640d90be42ea6155531503b187fe47f2d7221ac1e3216fa07cacd5e7e" protocol=ttrpc version=3 Sep 3 23:26:42.603017 systemd[1]: Started cri-containerd-13419a1f1386c41e758dc1b8ff4124df2b4dfbb31c9bd661ede476fe8208c5ce.scope - libcontainer container 13419a1f1386c41e758dc1b8ff4124df2b4dfbb31c9bd661ede476fe8208c5ce. Sep 3 23:26:42.678421 containerd[1916]: time="2025-09-03T23:26:42.678368296Z" level=info msg="StartContainer for \"13419a1f1386c41e758dc1b8ff4124df2b4dfbb31c9bd661ede476fe8208c5ce\" returns successfully" Sep 3 23:26:49.273653 kubelet[3503]: E0903 23:26:49.273583 3503 controller.go:195] "Failed to update lease" err="Put \"https://172.31.18.182:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-18-182?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"