Sep 3 23:22:59.145462 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083]
Sep 3 23:22:59.145508 kernel: Linux version 6.12.44-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT Wed Sep 3 22:04:24 -00 2025
Sep 3 23:22:59.147921 kernel: KASLR disabled due to lack of seed
Sep 3 23:22:59.147945 kernel: efi: EFI v2.7 by EDK II
Sep 3 23:22:59.148010 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7a731a98 MEMRESERVE=0x78557598
Sep 3 23:22:59.148029 kernel: secureboot: Secure boot disabled
Sep 3 23:22:59.148072 kernel: ACPI: Early table checksum verification disabled
Sep 3 23:22:59.148090 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON)
Sep 3 23:22:59.148107 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001 01000013)
Sep 3 23:22:59.148123 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001)
Sep 3 23:22:59.148140 kernel: ACPI: DSDT 0x0000000078640000 00159D (v02 AMAZON AMZNDSDT 00000001 INTL 20160527)
Sep 3 23:22:59.148162 kernel: ACPI: FACS 0x0000000078630000 000040
Sep 3 23:22:59.148178 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Sep 3 23:22:59.148194 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001)
Sep 3 23:22:59.148213 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001)
Sep 3 23:22:59.148229 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001)
Sep 3 23:22:59.148251 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Sep 3 23:22:59.148268 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001)
Sep 3 23:22:59.148284 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001)
Sep 3 23:22:59.148300 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200
Sep 3 23:22:59.148317 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200')
Sep 3 23:22:59.148333 kernel: printk: legacy bootconsole [uart0] enabled
Sep 3 23:22:59.148349 kernel: ACPI: Use ACPI SPCR as default console: No
Sep 3 23:22:59.148366 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff]
Sep 3 23:22:59.148382 kernel: NODE_DATA(0) allocated [mem 0x4b584ca00-0x4b5853fff]
Sep 3 23:22:59.148399 kernel: Zone ranges:
Sep 3 23:22:59.148416 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff]
Sep 3 23:22:59.148436 kernel: DMA32 empty
Sep 3 23:22:59.148452 kernel: Normal [mem 0x0000000100000000-0x00000004b5ffffff]
Sep 3 23:22:59.148469 kernel: Device empty
Sep 3 23:22:59.148485 kernel: Movable zone start for each node
Sep 3 23:22:59.148501 kernel: Early memory node ranges
Sep 3 23:22:59.148517 kernel: node 0: [mem 0x0000000040000000-0x000000007862ffff]
Sep 3 23:22:59.148597 kernel: node 0: [mem 0x0000000078630000-0x000000007863ffff]
Sep 3 23:22:59.148615 kernel: node 0: [mem 0x0000000078640000-0x00000000786effff]
Sep 3 23:22:59.148632 kernel: node 0: [mem 0x00000000786f0000-0x000000007872ffff]
Sep 3 23:22:59.148648 kernel: node 0: [mem 0x0000000078730000-0x000000007bbfffff]
Sep 3 23:22:59.148665 kernel: node 0: [mem 0x000000007bc00000-0x000000007bfdffff]
Sep 3 23:22:59.148681 kernel: node 0: [mem 0x000000007bfe0000-0x000000007fffffff]
Sep 3 23:22:59.148704 kernel: node 0: [mem 0x0000000400000000-0x00000004b5ffffff]
Sep 3 23:22:59.148727 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000004b5ffffff]
Sep 3 23:22:59.148745 kernel: On node 0, zone Normal: 8192 pages in unavailable ranges
Sep 3 23:22:59.148763 kernel: cma: Reserved 16 MiB at 0x000000007f000000 on node -1
Sep 3 23:22:59.148780 kernel: psci: probing for conduit method from ACPI.
Sep 3 23:22:59.148801 kernel: psci: PSCIv1.0 detected in firmware.
Sep 3 23:22:59.148819 kernel: psci: Using standard PSCI v0.2 function IDs
Sep 3 23:22:59.148836 kernel: psci: Trusted OS migration not required
Sep 3 23:22:59.148853 kernel: psci: SMC Calling Convention v1.1
Sep 3 23:22:59.148870 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000001)
Sep 3 23:22:59.148888 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168
Sep 3 23:22:59.148905 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096
Sep 3 23:22:59.148923 kernel: pcpu-alloc: [0] 0 [0] 1
Sep 3 23:22:59.148940 kernel: Detected PIPT I-cache on CPU0
Sep 3 23:22:59.148958 kernel: CPU features: detected: GIC system register CPU interface
Sep 3 23:22:59.148975 kernel: CPU features: detected: Spectre-v2
Sep 3 23:22:59.148995 kernel: CPU features: detected: Spectre-v3a
Sep 3 23:22:59.149013 kernel: CPU features: detected: Spectre-BHB
Sep 3 23:22:59.149030 kernel: CPU features: detected: ARM erratum 1742098
Sep 3 23:22:59.149047 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923
Sep 3 23:22:59.149064 kernel: alternatives: applying boot alternatives
Sep 3 23:22:59.149084 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=cb633bb0c889435b58a5c40c9c9bc9d5899ece5018569c9fa08f911265d3f18e
Sep 3 23:22:59.149103 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 3 23:22:59.149120 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Sep 3 23:22:59.149138 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Sep 3 23:22:59.149155 kernel: Fallback order for Node 0: 0
Sep 3 23:22:59.149176 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1007616
Sep 3 23:22:59.149193 kernel: Policy zone: Normal
Sep 3 23:22:59.149210 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 3 23:22:59.149228 kernel: software IO TLB: area num 2.
Sep 3 23:22:59.149245 kernel: software IO TLB: mapped [mem 0x000000006c5f0000-0x00000000705f0000] (64MB)
Sep 3 23:22:59.149262 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Sep 3 23:22:59.149279 kernel: rcu: Preemptible hierarchical RCU implementation.
Sep 3 23:22:59.149298 kernel: rcu: RCU event tracing is enabled.
Sep 3 23:22:59.149315 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Sep 3 23:22:59.149333 kernel: Trampoline variant of Tasks RCU enabled.
Sep 3 23:22:59.149351 kernel: Tracing variant of Tasks RCU enabled.
Sep 3 23:22:59.149368 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 3 23:22:59.149389 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Sep 3 23:22:59.149407 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Sep 3 23:22:59.149424 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Sep 3 23:22:59.149442 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Sep 3 23:22:59.149459 kernel: GICv3: 96 SPIs implemented
Sep 3 23:22:59.149476 kernel: GICv3: 0 Extended SPIs implemented
Sep 3 23:22:59.149493 kernel: Root IRQ handler: gic_handle_irq
Sep 3 23:22:59.149510 kernel: GICv3: GICv3 features: 16 PPIs
Sep 3 23:22:59.150795 kernel: GICv3: GICD_CTRL.DS=1, SCR_EL3.FIQ=0
Sep 3 23:22:59.150821 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000
Sep 3 23:22:59.150839 kernel: ITS [mem 0x10080000-0x1009ffff]
Sep 3 23:22:59.150857 kernel: ITS@0x0000000010080000: allocated 8192 Devices @4000f0000 (indirect, esz 8, psz 64K, shr 1)
Sep 3 23:22:59.150886 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @400100000 (flat, esz 8, psz 64K, shr 1)
Sep 3 23:22:59.150904 kernel: GICv3: using LPI property table @0x0000000400110000
Sep 3 23:22:59.150921 kernel: ITS: Using hypervisor restricted LPI range [128]
Sep 3 23:22:59.150938 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000400120000
Sep 3 23:22:59.150956 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Sep 3 23:22:59.150974 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt).
Sep 3 23:22:59.150991 kernel: clocksource: arch_sys_counter: mask: 0x1ffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns
Sep 3 23:22:59.151009 kernel: sched_clock: 57 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns
Sep 3 23:22:59.151027 kernel: Console: colour dummy device 80x25
Sep 3 23:22:59.151045 kernel: printk: legacy console [tty1] enabled
Sep 3 23:22:59.151063 kernel: ACPI: Core revision 20240827
Sep 3 23:22:59.151086 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 166.66 BogoMIPS (lpj=83333)
Sep 3 23:22:59.151104 kernel: pid_max: default: 32768 minimum: 301
Sep 3 23:22:59.151122 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Sep 3 23:22:59.151140 kernel: landlock: Up and running.
Sep 3 23:22:59.151157 kernel: SELinux: Initializing.
Sep 3 23:22:59.151175 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 3 23:22:59.151193 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 3 23:22:59.151211 kernel: rcu: Hierarchical SRCU implementation.
Sep 3 23:22:59.151229 kernel: rcu: Max phase no-delay instances is 400.
Sep 3 23:22:59.151251 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Sep 3 23:22:59.151269 kernel: Remapping and enabling EFI services.
Sep 3 23:22:59.151287 kernel: smp: Bringing up secondary CPUs ...
Sep 3 23:22:59.151305 kernel: Detected PIPT I-cache on CPU1
Sep 3 23:22:59.151322 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000
Sep 3 23:22:59.151340 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000400130000
Sep 3 23:22:59.151358 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083]
Sep 3 23:22:59.151375 kernel: smp: Brought up 1 node, 2 CPUs
Sep 3 23:22:59.151393 kernel: SMP: Total of 2 processors activated.
Sep 3 23:22:59.151424 kernel: CPU: All CPU(s) started at EL1
Sep 3 23:22:59.151443 kernel: CPU features: detected: 32-bit EL0 Support
Sep 3 23:22:59.151465 kernel: CPU features: detected: 32-bit EL1 Support
Sep 3 23:22:59.151483 kernel: CPU features: detected: CRC32 instructions
Sep 3 23:22:59.151502 kernel: alternatives: applying system-wide alternatives
Sep 3 23:22:59.151521 kernel: Memory: 3797032K/4030464K available (11136K kernel code, 2436K rwdata, 9076K rodata, 38976K init, 1038K bss, 212088K reserved, 16384K cma-reserved)
Sep 3 23:22:59.152371 kernel: devtmpfs: initialized
Sep 3 23:22:59.152411 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 3 23:22:59.152431 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Sep 3 23:22:59.152450 kernel: 17040 pages in range for non-PLT usage
Sep 3 23:22:59.152468 kernel: 508560 pages in range for PLT usage
Sep 3 23:22:59.152486 kernel: pinctrl core: initialized pinctrl subsystem
Sep 3 23:22:59.152505 kernel: SMBIOS 3.0.0 present.
Sep 3 23:22:59.152548 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018
Sep 3 23:22:59.152572 kernel: DMI: Memory slots populated: 0/0
Sep 3 23:22:59.152590 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 3 23:22:59.152614 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Sep 3 23:22:59.152634 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Sep 3 23:22:59.152653 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Sep 3 23:22:59.152672 kernel: audit: initializing netlink subsys (disabled)
Sep 3 23:22:59.152691 kernel: audit: type=2000 audit(0.226:1): state=initialized audit_enabled=0 res=1
Sep 3 23:22:59.152711 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 3 23:22:59.152730 kernel: cpuidle: using governor menu
Sep 3 23:22:59.152749 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Sep 3 23:22:59.152767 kernel: ASID allocator initialised with 65536 entries
Sep 3 23:22:59.152790 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 3 23:22:59.152809 kernel: Serial: AMBA PL011 UART driver
Sep 3 23:22:59.152828 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Sep 3 23:22:59.152846 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Sep 3 23:22:59.152865 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Sep 3 23:22:59.152883 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Sep 3 23:22:59.152902 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Sep 3 23:22:59.152921 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Sep 3 23:22:59.152940 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Sep 3 23:22:59.152963 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Sep 3 23:22:59.152982 kernel: ACPI: Added _OSI(Module Device)
Sep 3 23:22:59.153000 kernel: ACPI: Added _OSI(Processor Device)
Sep 3 23:22:59.153019 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 3 23:22:59.153037 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Sep 3 23:22:59.153055 kernel: ACPI: Interpreter enabled
Sep 3 23:22:59.153074 kernel: ACPI: Using GIC for interrupt routing
Sep 3 23:22:59.153093 kernel: ACPI: MCFG table detected, 1 entries
Sep 3 23:22:59.153111 kernel: ACPI: CPU0 has been hot-added
Sep 3 23:22:59.153134 kernel: ACPI: CPU1 has been hot-added
Sep 3 23:22:59.153153 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-0f])
Sep 3 23:22:59.153456 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Sep 3 23:22:59.153778 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Sep 3 23:22:59.153978 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Sep 3 23:22:59.154171 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x20ffffff] reserved by PNP0C02:00
Sep 3 23:22:59.154360 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x20ffffff] for [bus 00-0f]
Sep 3 23:22:59.154394 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io 0x0000-0xffff window]
Sep 3 23:22:59.154414 kernel: acpiphp: Slot [1] registered
Sep 3 23:22:59.154432 kernel: acpiphp: Slot [2] registered
Sep 3 23:22:59.154450 kernel: acpiphp: Slot [3] registered
Sep 3 23:22:59.154469 kernel: acpiphp: Slot [4] registered
Sep 3 23:22:59.154487 kernel: acpiphp: Slot [5] registered
Sep 3 23:22:59.154505 kernel: acpiphp: Slot [6] registered
Sep 3 23:22:59.154542 kernel: acpiphp: Slot [7] registered
Sep 3 23:22:59.154565 kernel: acpiphp: Slot [8] registered
Sep 3 23:22:59.154584 kernel: acpiphp: Slot [9] registered
Sep 3 23:22:59.154608 kernel: acpiphp: Slot [10] registered
Sep 3 23:22:59.154626 kernel: acpiphp: Slot [11] registered
Sep 3 23:22:59.154644 kernel: acpiphp: Slot [12] registered
Sep 3 23:22:59.154663 kernel: acpiphp: Slot [13] registered
Sep 3 23:22:59.154681 kernel: acpiphp: Slot [14] registered
Sep 3 23:22:59.154699 kernel: acpiphp: Slot [15] registered
Sep 3 23:22:59.154717 kernel: acpiphp: Slot [16] registered
Sep 3 23:22:59.154735 kernel: acpiphp: Slot [17] registered
Sep 3 23:22:59.154753 kernel: acpiphp: Slot [18] registered
Sep 3 23:22:59.154776 kernel: acpiphp: Slot [19] registered
Sep 3 23:22:59.154794 kernel: acpiphp: Slot [20] registered
Sep 3 23:22:59.154813 kernel: acpiphp: Slot [21] registered
Sep 3 23:22:59.154831 kernel: acpiphp: Slot [22] registered
Sep 3 23:22:59.154849 kernel: acpiphp: Slot [23] registered
Sep 3 23:22:59.154867 kernel: acpiphp: Slot [24] registered
Sep 3 23:22:59.154886 kernel: acpiphp: Slot [25] registered
Sep 3 23:22:59.154904 kernel: acpiphp: Slot [26] registered
Sep 3 23:22:59.154922 kernel: acpiphp: Slot [27] registered
Sep 3 23:22:59.154940 kernel: acpiphp: Slot [28] registered
Sep 3 23:22:59.154962 kernel: acpiphp: Slot [29] registered
Sep 3 23:22:59.154980 kernel: acpiphp: Slot [30] registered
Sep 3 23:22:59.154998 kernel: acpiphp: Slot [31] registered
Sep 3 23:22:59.155017 kernel: PCI host bridge to bus 0000:00
Sep 3 23:22:59.155213 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window]
Sep 3 23:22:59.155390 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Sep 3 23:22:59.155632 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window]
Sep 3 23:22:59.155820 kernel: pci_bus 0000:00: root bus resource [bus 00-0f]
Sep 3 23:22:59.156067 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000 conventional PCI endpoint
Sep 3 23:22:59.156288 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003 conventional PCI endpoint
Sep 3 23:22:59.156490 kernel: pci 0000:00:01.0: BAR 0 [mem 0x80118000-0x80118fff]
Sep 3 23:22:59.156795 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802 PCIe Root Complex Integrated Endpoint
Sep 3 23:22:59.156997 kernel: pci 0000:00:04.0: BAR 0 [mem 0x80114000-0x80117fff]
Sep 3 23:22:59.157197 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold
Sep 3 23:22:59.157416 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000 PCIe Root Complex Integrated Endpoint
Sep 3 23:22:59.157721 kernel: pci 0000:00:05.0: BAR 0 [mem 0x80110000-0x80113fff]
Sep 3 23:22:59.158422 kernel: pci 0000:00:05.0: BAR 2 [mem 0x80000000-0x800fffff pref]
Sep 3 23:22:59.158663 kernel: pci 0000:00:05.0: BAR 4 [mem 0x80100000-0x8010ffff]
Sep 3 23:22:59.158861 kernel: pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold
Sep 3 23:22:59.159057 kernel: pci 0000:00:05.0: BAR 2 [mem 0x80000000-0x800fffff pref]: assigned
Sep 3 23:22:59.159253 kernel: pci 0000:00:05.0: BAR 4 [mem 0x80100000-0x8010ffff]: assigned
Sep 3 23:22:59.159460 kernel: pci 0000:00:04.0: BAR 0 [mem 0x80110000-0x80113fff]: assigned
Sep 3 23:22:59.159684 kernel: pci 0000:00:05.0: BAR 0 [mem 0x80114000-0x80117fff]: assigned
Sep 3 23:22:59.159886 kernel: pci 0000:00:01.0: BAR 0 [mem 0x80118000-0x80118fff]: assigned
Sep 3 23:22:59.160066 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window]
Sep 3 23:22:59.160240 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Sep 3 23:22:59.160414 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window]
Sep 3 23:22:59.160444 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Sep 3 23:22:59.160464 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Sep 3 23:22:59.160483 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Sep 3 23:22:59.160502 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Sep 3 23:22:59.160520 kernel: iommu: Default domain type: Translated
Sep 3 23:22:59.160560 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Sep 3 23:22:59.160579 kernel: efivars: Registered efivars operations
Sep 3 23:22:59.160598 kernel: vgaarb: loaded
Sep 3 23:22:59.160616 kernel: clocksource: Switched to clocksource arch_sys_counter
Sep 3 23:22:59.160635 kernel: VFS: Disk quotas dquot_6.6.0
Sep 3 23:22:59.160659 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 3 23:22:59.160677 kernel: pnp: PnP ACPI init
Sep 3 23:22:59.160892 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved
Sep 3 23:22:59.160919 kernel: pnp: PnP ACPI: found 1 devices
Sep 3 23:22:59.160938 kernel: NET: Registered PF_INET protocol family
Sep 3 23:22:59.160957 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Sep 3 23:22:59.160975 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Sep 3 23:22:59.160994 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 3 23:22:59.161017 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Sep 3 23:22:59.161035 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Sep 3 23:22:59.161054 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Sep 3 23:22:59.161072 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 3 23:22:59.161090 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 3 23:22:59.161108 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 3 23:22:59.161127 kernel: PCI: CLS 0 bytes, default 64
Sep 3 23:22:59.161145 kernel: kvm [1]: HYP mode not available
Sep 3 23:22:59.161163 kernel: Initialise system trusted keyrings
Sep 3 23:22:59.161187 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Sep 3 23:22:59.161205 kernel: Key type asymmetric registered
Sep 3 23:22:59.161223 kernel: Asymmetric key parser 'x509' registered
Sep 3 23:22:59.161241 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Sep 3 23:22:59.161259 kernel: io scheduler mq-deadline registered
Sep 3 23:22:59.161278 kernel: io scheduler kyber registered
Sep 3 23:22:59.161296 kernel: io scheduler bfq registered
Sep 3 23:22:59.161495 kernel: pl061_gpio ARMH0061:00: PL061 GPIO chip registered
Sep 3 23:22:59.161582 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Sep 3 23:22:59.161606 kernel: ACPI: button: Power Button [PWRB]
Sep 3 23:22:59.161625 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input1
Sep 3 23:22:59.161643 kernel: ACPI: button: Sleep Button [SLPB]
Sep 3 23:22:59.161662 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 3 23:22:59.161681 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37
Sep 3 23:22:59.161889 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012)
Sep 3 23:22:59.161915 kernel: printk: legacy console [ttyS0] disabled
Sep 3 23:22:59.161934 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A
Sep 3 23:22:59.161960 kernel: printk: legacy console [ttyS0] enabled
Sep 3 23:22:59.161978 kernel: printk: legacy bootconsole [uart0] disabled
Sep 3 23:22:59.161996 kernel: thunder_xcv, ver 1.0
Sep 3 23:22:59.162015 kernel: thunder_bgx, ver 1.0
Sep 3 23:22:59.162033 kernel: nicpf, ver 1.0
Sep 3 23:22:59.162051 kernel: nicvf, ver 1.0
Sep 3 23:22:59.162264 kernel: rtc-efi rtc-efi.0: registered as rtc0
Sep 3 23:22:59.162452 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-09-03T23:22:58 UTC (1756941778)
Sep 3 23:22:59.162483 kernel: hid: raw HID events driver (C) Jiri Kosina
Sep 3 23:22:59.162502 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 (0,80000003) counters available
Sep 3 23:22:59.162521 kernel: NET: Registered PF_INET6 protocol family
Sep 3 23:22:59.162583 kernel: watchdog: NMI not fully supported
Sep 3 23:22:59.162602 kernel: watchdog: Hard watchdog permanently disabled
Sep 3 23:22:59.162621 kernel: Segment Routing with IPv6
Sep 3 23:22:59.162639 kernel: In-situ OAM (IOAM) with IPv6
Sep 3 23:22:59.162658 kernel: NET: Registered PF_PACKET protocol family
Sep 3 23:22:59.162676 kernel: Key type dns_resolver registered
Sep 3 23:22:59.162701 kernel: registered taskstats version 1
Sep 3 23:22:59.162720 kernel: Loading compiled-in X.509 certificates
Sep 3 23:22:59.162739 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.44-flatcar: 08fc774dab168e64ce30c382a4517d40e72c4744'
Sep 3 23:22:59.162757 kernel: Demotion targets for Node 0: null
Sep 3 23:22:59.162775 kernel: Key type .fscrypt registered
Sep 3 23:22:59.162793 kernel: Key type fscrypt-provisioning registered
Sep 3 23:22:59.162812 kernel: ima: No TPM chip found, activating TPM-bypass!
Sep 3 23:22:59.162830 kernel: ima: Allocated hash algorithm: sha1
Sep 3 23:22:59.162849 kernel: ima: No architecture policies found
Sep 3 23:22:59.162871 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Sep 3 23:22:59.162889 kernel: clk: Disabling unused clocks
Sep 3 23:22:59.162908 kernel: PM: genpd: Disabling unused power domains
Sep 3 23:22:59.162926 kernel: Warning: unable to open an initial console.
Sep 3 23:22:59.162945 kernel: Freeing unused kernel memory: 38976K
Sep 3 23:22:59.162963 kernel: Run /init as init process
Sep 3 23:22:59.162981 kernel: with arguments:
Sep 3 23:22:59.162999 kernel: /init
Sep 3 23:22:59.163017 kernel: with environment:
Sep 3 23:22:59.163035 kernel: HOME=/
Sep 3 23:22:59.163057 kernel: TERM=linux
Sep 3 23:22:59.163075 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Sep 3 23:22:59.163095 systemd[1]: Successfully made /usr/ read-only.
Sep 3 23:22:59.163121 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Sep 3 23:22:59.163142 systemd[1]: Detected virtualization amazon.
Sep 3 23:22:59.163162 systemd[1]: Detected architecture arm64.
Sep 3 23:22:59.163181 systemd[1]: Running in initrd.
Sep 3 23:22:59.163205 systemd[1]: No hostname configured, using default hostname.
Sep 3 23:22:59.163226 systemd[1]: Hostname set to .
Sep 3 23:22:59.163246 systemd[1]: Initializing machine ID from VM UUID.
Sep 3 23:22:59.163266 systemd[1]: Queued start job for default target initrd.target.
Sep 3 23:22:59.163286 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 3 23:22:59.163306 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 3 23:22:59.163327 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Sep 3 23:22:59.163348 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 3 23:22:59.163373 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Sep 3 23:22:59.163395 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Sep 3 23:22:59.163417 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Sep 3 23:22:59.163438 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Sep 3 23:22:59.163458 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 3 23:22:59.163478 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 3 23:22:59.163499 systemd[1]: Reached target paths.target - Path Units.
Sep 3 23:22:59.163541 systemd[1]: Reached target slices.target - Slice Units.
Sep 3 23:22:59.163594 systemd[1]: Reached target swap.target - Swaps.
Sep 3 23:22:59.163615 systemd[1]: Reached target timers.target - Timer Units.
Sep 3 23:22:59.163635 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Sep 3 23:22:59.163656 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 3 23:22:59.163677 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Sep 3 23:22:59.163697 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Sep 3 23:22:59.163718 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 3 23:22:59.163744 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 3 23:22:59.163765 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 3 23:22:59.163786 systemd[1]: Reached target sockets.target - Socket Units.
Sep 3 23:22:59.163807 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Sep 3 23:22:59.163828 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 3 23:22:59.163848 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Sep 3 23:22:59.163869 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Sep 3 23:22:59.163889 systemd[1]: Starting systemd-fsck-usr.service...
Sep 3 23:22:59.163909 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 3 23:22:59.163934 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 3 23:22:59.163954 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 3 23:22:59.163975 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Sep 3 23:22:59.163996 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 3 23:22:59.164022 systemd[1]: Finished systemd-fsck-usr.service.
Sep 3 23:22:59.164043 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 3 23:22:59.164064 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 3 23:22:59.164085 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 3 23:22:59.164106 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 3 23:22:59.164194 systemd-journald[255]: Collecting audit messages is disabled.
Sep 3 23:22:59.164247 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Sep 3 23:22:59.164269 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 3 23:22:59.164290 kernel: Bridge firewalling registered
Sep 3 23:22:59.164311 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 3 23:22:59.165262 systemd-journald[255]: Journal started
Sep 3 23:22:59.165304 systemd-journald[255]: Runtime Journal (/run/log/journal/ec298ad2cbc415f3208e5897acda6643) is 8M, max 75.3M, 67.3M free.
Sep 3 23:22:59.095577 systemd-modules-load[258]: Inserted module 'overlay'
Sep 3 23:22:59.151609 systemd-modules-load[258]: Inserted module 'br_netfilter'
Sep 3 23:22:59.178617 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 3 23:22:59.182547 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 3 23:22:59.199336 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 3 23:22:59.206777 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 3 23:22:59.218935 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 3 23:22:59.224651 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 3 23:22:59.238179 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Sep 3 23:22:59.253429 systemd-tmpfiles[291]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Sep 3 23:22:59.266594 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 3 23:22:59.275252 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 3 23:22:59.290249 dracut-cmdline[299]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=cb633bb0c889435b58a5c40c9c9bc9d5899ece5018569c9fa08f911265d3f18e
Sep 3 23:22:59.377038 systemd-resolved[306]: Positive Trust Anchors:
Sep 3 23:22:59.381789 systemd-resolved[306]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 3 23:22:59.384941 systemd-resolved[306]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 3 23:22:59.442562 kernel: SCSI subsystem initialized
Sep 3 23:22:59.449571 kernel: Loading iSCSI transport class v2.0-870.
Sep 3 23:22:59.462604 kernel: iscsi: registered transport (tcp)
Sep 3 23:22:59.483943 kernel: iscsi: registered transport (qla4xxx)
Sep 3 23:22:59.484016 kernel: QLogic iSCSI HBA Driver
Sep 3 23:22:59.516646 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Sep 3 23:22:59.548315 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Sep 3 23:22:59.558987 systemd[1]: Reached target network-pre.target - Preparation for Network.
Sep 3 23:22:59.631561 kernel: random: crng init done
Sep 3 23:22:59.632290 systemd-resolved[306]: Defaulting to hostname 'linux'.
Sep 3 23:22:59.639418 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 3 23:22:59.642093 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 3 23:22:59.662243 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Sep 3 23:22:59.669757 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Sep 3 23:22:59.767584 kernel: raid6: neonx8 gen() 6503 MB/s
Sep 3 23:22:59.784559 kernel: raid6: neonx4 gen() 6471 MB/s
Sep 3 23:22:59.802559 kernel: raid6: neonx2 gen() 5374 MB/s
Sep 3 23:22:59.819558 kernel: raid6: neonx1 gen() 3934 MB/s
Sep 3 23:22:59.836557 kernel: raid6: int64x8 gen() 3640 MB/s
Sep 3 23:22:59.853568 kernel: raid6: int64x4 gen() 3682 MB/s
Sep 3 23:22:59.870557 kernel: raid6: int64x2 gen() 3566 MB/s
Sep 3 23:22:59.888534 kernel: raid6: int64x1 gen() 2772 MB/s
Sep 3 23:22:59.888565 kernel: raid6: using algorithm neonx8 gen() 6503 MB/s
Sep 3 23:22:59.907564 kernel: raid6: .... xor() 4727 MB/s, rmw enabled
Sep 3 23:22:59.907598 kernel: raid6: using neon recovery algorithm
Sep 3 23:22:59.916129 kernel: xor: measuring software checksum speed
Sep 3 23:22:59.916180 kernel: 8regs : 12950 MB/sec
Sep 3 23:22:59.917325 kernel: 32regs : 13041 MB/sec
Sep 3 23:22:59.919592 kernel: arm64_neon : 8339 MB/sec
Sep 3 23:22:59.919625 kernel: xor: using function: 32regs (13041 MB/sec)
Sep 3 23:23:00.010570 kernel: Btrfs loaded, zoned=no, fsverity=no
Sep 3 23:23:00.023586 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Sep 3 23:23:00.034691 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 3 23:23:00.098906 systemd-udevd[509]: Using default interface naming scheme 'v255'.
Sep 3 23:23:00.110880 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 3 23:23:00.116548 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Sep 3 23:23:00.156050 dracut-pre-trigger[514]: rd.md=0: removing MD RAID activation
Sep 3 23:23:00.197867 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 3 23:23:00.204757 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 3 23:23:00.337857 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 3 23:23:00.345061 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Sep 3 23:23:00.488188 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35
Sep 3 23:23:00.488251 kernel: nvme nvme0: pci function 0000:00:04.0
Sep 3 23:23:00.499067 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Sep 3 23:23:00.499129 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012)
Sep 3 23:23:00.516347 kernel: ena 0000:00:05.0: ENA device version: 0.10
Sep 3 23:23:00.517509 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
Sep 3 23:23:00.527433 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Sep 3 23:23:00.533164 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Sep 3 23:23:00.533238 kernel: GPT:9289727 != 16777215
Sep 3 23:23:00.533273 kernel: GPT:Alternate GPT header not at the end of the disk.
Sep 3 23:23:00.535287 kernel: GPT:9289727 != 16777215
Sep 3 23:23:00.536691 kernel: GPT: Use GNU Parted to correct GPT errors.
Sep 3 23:23:00.539554 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80114000, mac addr 06:d5:16:b8:92:bd
Sep 3 23:23:00.539907 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Sep 3 23:23:00.542795 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 3 23:23:00.547262 (udev-worker)[581]: Network interface NamePolicy= disabled on kernel command line.
Sep 3 23:23:00.556169 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 3 23:23:00.561469 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Sep 3 23:23:00.571024 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 3 23:23:00.578216 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Sep 3 23:23:00.610572 kernel: nvme nvme0: using unchecked data buffer
Sep 3 23:23:00.629257 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 3 23:23:00.771360 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM.
Sep 3 23:23:00.806429 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Sep 3 23:23:00.813578 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Sep 3 23:23:00.836140 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A.
Sep 3 23:23:00.841981 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A.
Sep 3 23:23:00.882342 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT.
Sep 3 23:23:00.888372 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 3 23:23:00.891150 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 3 23:23:00.893865 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 3 23:23:00.902863 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Sep 3 23:23:00.912780 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Sep 3 23:23:00.943562 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Sep 3 23:23:00.944461 disk-uuid[688]: Primary Header is updated.
Sep 3 23:23:00.944461 disk-uuid[688]: Secondary Entries is updated.
Sep 3 23:23:00.944461 disk-uuid[688]: Secondary Header is updated.
Sep 3 23:23:00.954799 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Sep 3 23:23:01.980600 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Sep 3 23:23:01.984596 disk-uuid[694]: The operation has completed successfully.
Sep 3 23:23:02.177544 systemd[1]: disk-uuid.service: Deactivated successfully.
Sep 3 23:23:02.179697 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Sep 3 23:23:02.239425 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Sep 3 23:23:02.262775 sh[956]: Success
Sep 3 23:23:02.290286 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Sep 3 23:23:02.290364 kernel: device-mapper: uevent: version 1.0.3
Sep 3 23:23:02.292612 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Sep 3 23:23:02.307560 kernel: device-mapper: verity: sha256 using shash "sha256-ce"
Sep 3 23:23:02.411686 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Sep 3 23:23:02.419906 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Sep 3 23:23:02.431491 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Sep 3 23:23:02.465598 kernel: BTRFS: device fsid e8b97e78-d30f-4a41-b431-d82f3afef949 devid 1 transid 39 /dev/mapper/usr (254:0) scanned by mount (979)
Sep 3 23:23:02.470062 kernel: BTRFS info (device dm-0): first mount of filesystem e8b97e78-d30f-4a41-b431-d82f3afef949
Sep 3 23:23:02.470263 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Sep 3 23:23:02.589538 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Sep 3 23:23:02.589600 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Sep 3 23:23:02.589627 kernel: BTRFS info (device dm-0): enabling free space tree
Sep 3 23:23:02.613168 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Sep 3 23:23:02.614143 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Sep 3 23:23:02.620516 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Sep 3 23:23:02.622719 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Sep 3 23:23:02.637926 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Sep 3 23:23:02.688575 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (1010)
Sep 3 23:23:02.694051 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem f1885725-917a-44ef-9d71-3c4c588cc4f4
Sep 3 23:23:02.694136 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Sep 3 23:23:02.711117 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Sep 3 23:23:02.711190 kernel: BTRFS info (device nvme0n1p6): enabling free space tree
Sep 3 23:23:02.720645 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem f1885725-917a-44ef-9d71-3c4c588cc4f4
Sep 3 23:23:02.722608 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Sep 3 23:23:02.729426 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Sep 3 23:23:02.812989 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 3 23:23:02.821470 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 3 23:23:02.891590 systemd-networkd[1148]: lo: Link UP
Sep 3 23:23:02.891603 systemd-networkd[1148]: lo: Gained carrier
Sep 3 23:23:02.895175 systemd-networkd[1148]: Enumeration completed
Sep 3 23:23:02.896467 systemd-networkd[1148]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 3 23:23:02.896475 systemd-networkd[1148]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 3 23:23:02.905318 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 3 23:23:02.910560 systemd[1]: Reached target network.target - Network.
Sep 3 23:23:02.916112 systemd-networkd[1148]: eth0: Link UP
Sep 3 23:23:02.916124 systemd-networkd[1148]: eth0: Gained carrier
Sep 3 23:23:02.916146 systemd-networkd[1148]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 3 23:23:02.945607 systemd-networkd[1148]: eth0: DHCPv4 address 172.31.24.220/20, gateway 172.31.16.1 acquired from 172.31.16.1
Sep 3 23:23:03.237349 ignition[1078]: Ignition 2.21.0
Sep 3 23:23:03.237903 ignition[1078]: Stage: fetch-offline
Sep 3 23:23:03.239243 ignition[1078]: no configs at "/usr/lib/ignition/base.d"
Sep 3 23:23:03.239268 ignition[1078]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Sep 3 23:23:03.239846 ignition[1078]: Ignition finished successfully
Sep 3 23:23:03.248309 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 3 23:23:03.254740 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Sep 3 23:23:03.305601 ignition[1160]: Ignition 2.21.0
Sep 3 23:23:03.306087 ignition[1160]: Stage: fetch
Sep 3 23:23:03.306647 ignition[1160]: no configs at "/usr/lib/ignition/base.d"
Sep 3 23:23:03.306671 ignition[1160]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Sep 3 23:23:03.307426 ignition[1160]: PUT http://169.254.169.254/latest/api/token: attempt #1
Sep 3 23:23:03.327373 ignition[1160]: PUT result: OK
Sep 3 23:23:03.331390 ignition[1160]: parsed url from cmdline: ""
Sep 3 23:23:03.331407 ignition[1160]: no config URL provided
Sep 3 23:23:03.331422 ignition[1160]: reading system config file "/usr/lib/ignition/user.ign"
Sep 3 23:23:03.331446 ignition[1160]: no config at "/usr/lib/ignition/user.ign"
Sep 3 23:23:03.331478 ignition[1160]: PUT http://169.254.169.254/latest/api/token: attempt #1
Sep 3 23:23:03.334170 ignition[1160]: PUT result: OK
Sep 3 23:23:03.334254 ignition[1160]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1
Sep 3 23:23:03.341284 ignition[1160]: GET result: OK
Sep 3 23:23:03.341465 ignition[1160]: parsing config with SHA512: 11ceeaf46d36d27f3523aaec16bceb6e4d730fc66d7c5e36f7256d5835d398bec7b93680c49bbbeb35c0b3b519e7641bd4e797d6016dfc4bf7d8936790c31dd4
Sep 3 23:23:03.352237 unknown[1160]: fetched base config from "system"
Sep 3 23:23:03.352958 unknown[1160]: fetched base config from "system"
Sep 3 23:23:03.353603 ignition[1160]: fetch: fetch complete
Sep 3 23:23:03.352971 unknown[1160]: fetched user config from "aws"
Sep 3 23:23:03.353615 ignition[1160]: fetch: fetch passed
Sep 3 23:23:03.353704 ignition[1160]: Ignition finished successfully
Sep 3 23:23:03.366847 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Sep 3 23:23:03.373777 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Sep 3 23:23:03.433056 ignition[1167]: Ignition 2.21.0
Sep 3 23:23:03.433088 ignition[1167]: Stage: kargs
Sep 3 23:23:03.433729 ignition[1167]: no configs at "/usr/lib/ignition/base.d"
Sep 3 23:23:03.434099 ignition[1167]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Sep 3 23:23:03.434905 ignition[1167]: PUT http://169.254.169.254/latest/api/token: attempt #1
Sep 3 23:23:03.442982 ignition[1167]: PUT result: OK
Sep 3 23:23:03.446951 ignition[1167]: kargs: kargs passed
Sep 3 23:23:03.447230 ignition[1167]: Ignition finished successfully
Sep 3 23:23:03.456186 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Sep 3 23:23:03.461129 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Sep 3 23:23:03.502916 ignition[1173]: Ignition 2.21.0
Sep 3 23:23:03.502941 ignition[1173]: Stage: disks
Sep 3 23:23:03.503420 ignition[1173]: no configs at "/usr/lib/ignition/base.d"
Sep 3 23:23:03.503443 ignition[1173]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Sep 3 23:23:03.504121 ignition[1173]: PUT http://169.254.169.254/latest/api/token: attempt #1
Sep 3 23:23:03.509051 ignition[1173]: PUT result: OK
Sep 3 23:23:03.517776 ignition[1173]: disks: disks passed
Sep 3 23:23:03.518034 ignition[1173]: Ignition finished successfully
Sep 3 23:23:03.524614 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Sep 3 23:23:03.530078 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Sep 3 23:23:03.530183 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Sep 3 23:23:03.530510 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 3 23:23:03.531200 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 3 23:23:03.531896 systemd[1]: Reached target basic.target - Basic System.
Sep 3 23:23:03.546690 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Sep 3 23:23:03.615784 systemd-fsck[1181]: ROOT: clean, 15/553520 files, 52789/553472 blocks
Sep 3 23:23:03.620175 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Sep 3 23:23:03.628270 systemd[1]: Mounting sysroot.mount - /sysroot...
Sep 3 23:23:03.759552 kernel: EXT4-fs (nvme0n1p9): mounted filesystem d953e3b7-a0cb-45f7-b3a7-216a9a578dda r/w with ordered data mode. Quota mode: none.
Sep 3 23:23:03.760997 systemd[1]: Mounted sysroot.mount - /sysroot.
Sep 3 23:23:03.764888 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Sep 3 23:23:03.773814 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 3 23:23:03.781231 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Sep 3 23:23:03.790149 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Sep 3 23:23:03.790241 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Sep 3 23:23:03.790296 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 3 23:23:03.809402 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Sep 3 23:23:03.815230 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Sep 3 23:23:03.830960 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (1200)
Sep 3 23:23:03.834721 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem f1885725-917a-44ef-9d71-3c4c588cc4f4
Sep 3 23:23:03.834756 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Sep 3 23:23:03.843806 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Sep 3 23:23:03.843877 kernel: BTRFS info (device nvme0n1p6): enabling free space tree
Sep 3 23:23:03.847349 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 3 23:23:04.309581 initrd-setup-root[1224]: cut: /sysroot/etc/passwd: No such file or directory
Sep 3 23:23:04.320248 initrd-setup-root[1231]: cut: /sysroot/etc/group: No such file or directory
Sep 3 23:23:04.328343 initrd-setup-root[1238]: cut: /sysroot/etc/shadow: No such file or directory
Sep 3 23:23:04.336638 initrd-setup-root[1245]: cut: /sysroot/etc/gshadow: No such file or directory
Sep 3 23:23:04.662755 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Sep 3 23:23:04.669418 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Sep 3 23:23:04.674986 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Sep 3 23:23:04.704176 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Sep 3 23:23:04.706576 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem f1885725-917a-44ef-9d71-3c4c588cc4f4
Sep 3 23:23:04.742257 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Sep 3 23:23:04.756381 ignition[1313]: INFO : Ignition 2.21.0
Sep 3 23:23:04.758387 ignition[1313]: INFO : Stage: mount
Sep 3 23:23:04.758387 ignition[1313]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 3 23:23:04.758387 ignition[1313]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Sep 3 23:23:04.766581 ignition[1313]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Sep 3 23:23:04.769133 ignition[1313]: INFO : PUT result: OK
Sep 3 23:23:04.777632 ignition[1313]: INFO : mount: mount passed
Sep 3 23:23:04.780213 ignition[1313]: INFO : Ignition finished successfully
Sep 3 23:23:04.784570 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Sep 3 23:23:04.789412 systemd[1]: Starting ignition-files.service - Ignition (files)...
Sep 3 23:23:04.797227 systemd-networkd[1148]: eth0: Gained IPv6LL
Sep 3 23:23:04.824408 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 3 23:23:04.863568 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (1325)
Sep 3 23:23:04.867793 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem f1885725-917a-44ef-9d71-3c4c588cc4f4
Sep 3 23:23:04.867836 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Sep 3 23:23:04.875232 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Sep 3 23:23:04.875284 kernel: BTRFS info (device nvme0n1p6): enabling free space tree
Sep 3 23:23:04.879324 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 3 23:23:04.939218 ignition[1342]: INFO : Ignition 2.21.0
Sep 3 23:23:04.941221 ignition[1342]: INFO : Stage: files
Sep 3 23:23:04.941221 ignition[1342]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 3 23:23:04.941221 ignition[1342]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Sep 3 23:23:04.941221 ignition[1342]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Sep 3 23:23:04.950653 ignition[1342]: INFO : PUT result: OK
Sep 3 23:23:04.958018 ignition[1342]: DEBUG : files: compiled without relabeling support, skipping
Sep 3 23:23:04.972337 ignition[1342]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Sep 3 23:23:04.975426 ignition[1342]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Sep 3 23:23:04.992731 ignition[1342]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Sep 3 23:23:04.997735 ignition[1342]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Sep 3 23:23:05.001109 unknown[1342]: wrote ssh authorized keys file for user: core
Sep 3 23:23:05.003624 ignition[1342]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Sep 3 23:23:05.007459 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
Sep 3 23:23:05.007459 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz: attempt #1
Sep 3 23:23:05.144657 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Sep 3 23:23:05.687772 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
Sep 3 23:23:05.692375 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Sep 3 23:23:05.692375 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1
Sep 3 23:23:05.939980 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Sep 3 23:23:06.162020 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Sep 3 23:23:06.166275 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Sep 3 23:23:06.166275 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Sep 3 23:23:06.166275 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Sep 3 23:23:06.166275 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Sep 3 23:23:06.166275 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 3 23:23:06.166275 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 3 23:23:06.166275 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 3 23:23:06.166275 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 3 23:23:06.197398 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Sep 3 23:23:06.201394 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Sep 3 23:23:06.205374 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Sep 3 23:23:06.213106 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Sep 3 23:23:06.213106 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Sep 3 23:23:06.223364 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-arm64.raw: attempt #1
Sep 3 23:23:06.612909 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Sep 3 23:23:06.984954 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Sep 3 23:23:06.984954 ignition[1342]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Sep 3 23:23:06.994118 ignition[1342]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 3 23:23:07.002720 ignition[1342]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 3 23:23:07.002720 ignition[1342]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Sep 3 23:23:07.002720 ignition[1342]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service"
Sep 3 23:23:07.013300 ignition[1342]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service"
Sep 3 23:23:07.013300 ignition[1342]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json"
Sep 3 23:23:07.013300 ignition[1342]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json"
Sep 3 23:23:07.013300 ignition[1342]: INFO : files: files passed
Sep 3 23:23:07.013300 ignition[1342]: INFO : Ignition finished successfully
Sep 3 23:23:07.017617 systemd[1]: Finished ignition-files.service - Ignition (files).
Sep 3 23:23:07.037749 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Sep 3 23:23:07.046773 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Sep 3 23:23:07.073379 systemd[1]: ignition-quench.service: Deactivated successfully.
Sep 3 23:23:07.075873 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Sep 3 23:23:07.087403 initrd-setup-root-after-ignition[1372]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 3 23:23:07.087403 initrd-setup-root-after-ignition[1372]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Sep 3 23:23:07.097172 initrd-setup-root-after-ignition[1376]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 3 23:23:07.101420 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 3 23:23:07.107789 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Sep 3 23:23:07.114783 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Sep 3 23:23:07.182497 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Sep 3 23:23:07.182932 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Sep 3 23:23:07.193061 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Sep 3 23:23:07.196005 systemd[1]: Reached target initrd.target - Initrd Default Target.
Sep 3 23:23:07.200123 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Sep 3 23:23:07.204357 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Sep 3 23:23:07.251808 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 3 23:23:07.258411 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Sep 3 23:23:07.311486 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Sep 3 23:23:07.317451 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 3 23:23:07.320346 systemd[1]: Stopped target timers.target - Timer Units.
Sep 3 23:23:07.327101 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Sep 3 23:23:07.327518 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 3 23:23:07.335413 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Sep 3 23:23:07.337922 systemd[1]: Stopped target basic.target - Basic System.
Sep 3 23:23:07.344361 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Sep 3 23:23:07.347289 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 3 23:23:07.352137 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Sep 3 23:23:07.359396 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Sep 3 23:23:07.364280 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Sep 3 23:23:07.367436 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 3 23:23:07.372199 systemd[1]: Stopped target sysinit.target - System Initialization.
Sep 3 23:23:07.377288 systemd[1]: Stopped target local-fs.target - Local File Systems.
Sep 3 23:23:07.381444 systemd[1]: Stopped target swap.target - Swaps.
Sep 3 23:23:07.386962 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Sep 3 23:23:07.387366 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Sep 3 23:23:07.394672 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Sep 3 23:23:07.397577 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 3 23:23:07.401719 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Sep 3 23:23:07.406398 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 3 23:23:07.409653 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Sep 3 23:23:07.409882 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Sep 3 23:23:07.419412 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Sep 3 23:23:07.419856 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 3 23:23:07.427844 systemd[1]: ignition-files.service: Deactivated successfully.
Sep 3 23:23:07.428241 systemd[1]: Stopped ignition-files.service - Ignition (files).
Sep 3 23:23:07.436036 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Sep 3 23:23:07.441690 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Sep 3 23:23:07.446271 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 3 23:23:07.463863 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Sep 3 23:23:07.476838 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Sep 3 23:23:07.481040 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 3 23:23:07.484754 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Sep 3 23:23:07.485048 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 3 23:23:07.508414 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Sep 3 23:23:07.512565 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Sep 3 23:23:07.521317 ignition[1396]: INFO : Ignition 2.21.0
Sep 3 23:23:07.521317 ignition[1396]: INFO : Stage: umount
Sep 3 23:23:07.527576 ignition[1396]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 3 23:23:07.527576 ignition[1396]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Sep 3 23:23:07.527576 ignition[1396]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Sep 3 23:23:07.537091 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Sep 3 23:23:07.541510 ignition[1396]: INFO : PUT result: OK
Sep 3 23:23:07.548151 ignition[1396]: INFO : umount: umount passed
Sep 3 23:23:07.550140 ignition[1396]: INFO : Ignition finished successfully
Sep 3 23:23:07.554768 systemd[1]: ignition-mount.service: Deactivated successfully.
Sep 3 23:23:07.557092 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Sep 3 23:23:07.561567 systemd[1]: ignition-disks.service: Deactivated successfully.
Sep 3 23:23:07.561804 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Sep 3 23:23:07.568069 systemd[1]: ignition-kargs.service: Deactivated successfully.
Sep 3 23:23:07.568638 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Sep 3 23:23:07.574494 systemd[1]: ignition-fetch.service: Deactivated successfully.
Sep 3 23:23:07.574605 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Sep 3 23:23:07.577183 systemd[1]: Stopped target network.target - Network.
Sep 3 23:23:07.583085 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Sep 3 23:23:07.583184 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 3 23:23:07.585997 systemd[1]: Stopped target paths.target - Path Units.
Sep 3 23:23:07.589357 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Sep 3 23:23:07.599788 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 3 23:23:07.603434 systemd[1]: Stopped target slices.target - Slice Units.
Sep 3 23:23:07.607178 systemd[1]: Stopped target sockets.target - Socket Units.
Sep 3 23:23:07.614861 systemd[1]: iscsid.socket: Deactivated successfully.
Sep 3 23:23:07.615363 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Sep 3 23:23:07.618954 systemd[1]: iscsiuio.socket: Deactivated successfully.
Sep 3 23:23:07.619272 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 3 23:23:07.622875 systemd[1]: ignition-setup.service: Deactivated successfully.
Sep 3 23:23:07.623174 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Sep 3 23:23:07.627206 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Sep 3 23:23:07.627624 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Sep 3 23:23:07.629147 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Sep 3 23:23:07.636040 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Sep 3 23:23:07.658102 systemd[1]: systemd-resolved.service: Deactivated successfully.
Sep 3 23:23:07.658504 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Sep 3 23:23:07.678444 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Sep 3 23:23:07.679208 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Sep 3 23:23:07.679292 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 3 23:23:07.707881 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Sep 3 23:23:07.708657 systemd[1]: sysroot-boot.service: Deactivated successfully.
Sep 3 23:23:07.709121 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Sep 3 23:23:07.726700 systemd[1]: systemd-networkd.service: Deactivated successfully.
Sep 3 23:23:07.726888 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Sep 3 23:23:07.735537 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Sep 3 23:23:07.736627 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Sep 3 23:23:07.758874 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Sep 3 23:23:07.758956 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Sep 3 23:23:07.765638 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Sep 3 23:23:07.765736 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Sep 3 23:23:07.777880 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Sep 3 23:23:07.805835 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Sep 3 23:23:07.805949 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 3 23:23:07.811823 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Sep 3 23:23:07.811912 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Sep 3 23:23:07.832084 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Sep 3 23:23:07.832199 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Sep 3 23:23:07.834782 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 3 23:23:07.839237 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Sep 3 23:23:07.865915 systemd[1]: systemd-udevd.service: Deactivated successfully.
Sep 3 23:23:07.866304 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 3 23:23:07.876163 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Sep 3 23:23:07.877128 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Sep 3 23:23:07.879236 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Sep 3 23:23:07.879864 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 3 23:23:07.888129 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Sep 3 23:23:07.888225 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Sep 3 23:23:07.897686 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Sep 3 23:23:07.898215 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Sep 3 23:23:07.902823 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 3 23:23:07.902929 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 3 23:23:07.915345 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Sep 3 23:23:07.921668 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Sep 3 23:23:07.921806 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Sep 3 23:23:07.928148 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Sep 3 23:23:07.928247 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 3 23:23:07.938984 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 3 23:23:07.939094 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 3 23:23:07.952139 systemd[1]: network-cleanup.service: Deactivated successfully.
Sep 3 23:23:07.952320 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Sep 3 23:23:07.960176 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Sep 3 23:23:07.961584 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Sep 3 23:23:07.963608 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Sep 3 23:23:07.978729 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Sep 3 23:23:08.015421 systemd[1]: Switching root.
Sep 3 23:23:08.062236 systemd-journald[255]: Journal stopped
Sep 3 23:23:10.588498 systemd-journald[255]: Received SIGTERM from PID 1 (systemd).
Sep 3 23:23:10.595705 kernel: SELinux: policy capability network_peer_controls=1
Sep 3 23:23:10.595766 kernel: SELinux: policy capability open_perms=1
Sep 3 23:23:10.595799 kernel: SELinux: policy capability extended_socket_class=1
Sep 3 23:23:10.595833 kernel: SELinux: policy capability always_check_network=0
Sep 3 23:23:10.595866 kernel: SELinux: policy capability cgroup_seclabel=1
Sep 3 23:23:10.595898 kernel: SELinux: policy capability nnp_nosuid_transition=1
Sep 3 23:23:10.595931 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Sep 3 23:23:10.595959 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Sep 3 23:23:10.595988 kernel: SELinux: policy capability userspace_initial_context=0
Sep 3 23:23:10.596020 kernel: audit: type=1403 audit(1756941788.545:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Sep 3 23:23:10.596061 systemd[1]: Successfully loaded SELinux policy in 71.626ms.
Sep 3 23:23:10.596110 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 23.718ms.
Sep 3 23:23:10.596144 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Sep 3 23:23:10.596176 systemd[1]: Detected virtualization amazon.
Sep 3 23:23:10.596204 systemd[1]: Detected architecture arm64.
Sep 3 23:23:10.596235 systemd[1]: Detected first boot.
Sep 3 23:23:10.596264 systemd[1]: Initializing machine ID from VM UUID.
Sep 3 23:23:10.596296 zram_generator::config[1439]: No configuration found.
Sep 3 23:23:10.596329 kernel: NET: Registered PF_VSOCK protocol family
Sep 3 23:23:10.596364 systemd[1]: Populated /etc with preset unit settings.
Sep 3 23:23:10.596397 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Sep 3 23:23:10.596428 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Sep 3 23:23:10.596459 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Sep 3 23:23:10.596490 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Sep 3 23:23:10.596519 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Sep 3 23:23:10.601635 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Sep 3 23:23:10.601680 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Sep 3 23:23:10.601714 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Sep 3 23:23:10.601746 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Sep 3 23:23:10.601777 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Sep 3 23:23:10.601806 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Sep 3 23:23:10.601836 systemd[1]: Created slice user.slice - User and Session Slice.
Sep 3 23:23:10.601864 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 3 23:23:10.601895 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 3 23:23:10.601924 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Sep 3 23:23:10.601969 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Sep 3 23:23:10.602000 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Sep 3 23:23:10.602030 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 3 23:23:10.602060 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Sep 3 23:23:10.602091 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 3 23:23:10.602127 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 3 23:23:10.602155 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Sep 3 23:23:10.602187 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Sep 3 23:23:10.602217 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Sep 3 23:23:10.602247 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Sep 3 23:23:10.602278 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 3 23:23:10.602309 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 3 23:23:10.602337 systemd[1]: Reached target slices.target - Slice Units.
Sep 3 23:23:10.602365 systemd[1]: Reached target swap.target - Swaps.
Sep 3 23:23:10.602395 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Sep 3 23:23:10.602423 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Sep 3 23:23:10.602454 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Sep 3 23:23:10.602484 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 3 23:23:10.602512 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 3 23:23:10.602568 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 3 23:23:10.602599 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Sep 3 23:23:10.602630 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Sep 3 23:23:10.602659 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Sep 3 23:23:10.602687 systemd[1]: Mounting media.mount - External Media Directory...
Sep 3 23:23:10.602715 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Sep 3 23:23:10.602748 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Sep 3 23:23:10.602779 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Sep 3 23:23:10.602808 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Sep 3 23:23:10.602839 systemd[1]: Reached target machines.target - Containers.
Sep 3 23:23:10.602869 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Sep 3 23:23:10.602897 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 3 23:23:10.602926 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 3 23:23:10.602954 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Sep 3 23:23:10.602981 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 3 23:23:10.603014 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 3 23:23:10.603043 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 3 23:23:10.603071 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Sep 3 23:23:10.603101 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 3 23:23:10.603129 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Sep 3 23:23:10.603160 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Sep 3 23:23:10.603188 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Sep 3 23:23:10.603216 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Sep 3 23:23:10.603248 systemd[1]: Stopped systemd-fsck-usr.service.
Sep 3 23:23:10.603280 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 3 23:23:10.603320 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 3 23:23:10.603351 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 3 23:23:10.603381 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Sep 3 23:23:10.603418 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Sep 3 23:23:10.603447 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Sep 3 23:23:10.603477 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 3 23:23:10.603508 systemd[1]: verity-setup.service: Deactivated successfully.
Sep 3 23:23:10.633810 systemd[1]: Stopped verity-setup.service.
Sep 3 23:23:10.634190 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Sep 3 23:23:10.634224 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Sep 3 23:23:10.634257 systemd[1]: Mounted media.mount - External Media Directory.
Sep 3 23:23:10.634286 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Sep 3 23:23:10.634325 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Sep 3 23:23:10.634358 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Sep 3 23:23:10.634387 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 3 23:23:10.634417 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Sep 3 23:23:10.634449 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Sep 3 23:23:10.634482 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 3 23:23:10.634512 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 3 23:23:10.638880 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 3 23:23:10.638927 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 3 23:23:10.638961 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 3 23:23:10.638993 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Sep 3 23:23:10.639021 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Sep 3 23:23:10.639050 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Sep 3 23:23:10.639081 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 3 23:23:10.639118 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Sep 3 23:23:10.639148 kernel: fuse: init (API version 7.41)
Sep 3 23:23:10.639175 kernel: loop: module loaded
Sep 3 23:23:10.639204 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Sep 3 23:23:10.639235 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 3 23:23:10.639267 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Sep 3 23:23:10.639298 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 3 23:23:10.639327 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Sep 3 23:23:10.639361 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 3 23:23:10.639440 systemd-journald[1518]: Collecting audit messages is disabled.
Sep 3 23:23:10.639505 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Sep 3 23:23:10.639561 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Sep 3 23:23:10.639599 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Sep 3 23:23:10.639633 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 3 23:23:10.639661 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 3 23:23:10.639689 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Sep 3 23:23:10.639717 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Sep 3 23:23:10.639749 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Sep 3 23:23:10.639777 systemd[1]: Reached target network-pre.target - Preparation for Network.
Sep 3 23:23:10.639808 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Sep 3 23:23:10.639838 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 3 23:23:10.639872 systemd-journald[1518]: Journal started
Sep 3 23:23:10.639917 systemd-journald[1518]: Runtime Journal (/run/log/journal/ec298ad2cbc415f3208e5897acda6643) is 8M, max 75.3M, 67.3M free.
Sep 3 23:23:10.652605 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 3 23:23:09.882959 systemd[1]: Queued start job for default target multi-user.target.
Sep 3 23:23:09.906113 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6.
Sep 3 23:23:09.906962 systemd[1]: systemd-journald.service: Deactivated successfully.
Sep 3 23:23:10.681069 kernel: ACPI: bus type drm_connector registered
Sep 3 23:23:10.667924 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Sep 3 23:23:10.674440 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Sep 3 23:23:10.678496 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 3 23:23:10.679644 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 3 23:23:10.707485 kernel: loop0: detected capacity change from 0 to 138376
Sep 3 23:23:10.717612 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Sep 3 23:23:10.741781 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Sep 3 23:23:10.760944 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Sep 3 23:23:10.775378 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 3 23:23:10.805723 systemd-journald[1518]: Time spent on flushing to /var/log/journal/ec298ad2cbc415f3208e5897acda6643 is 119.671ms for 935 entries.
Sep 3 23:23:10.805723 systemd-journald[1518]: System Journal (/var/log/journal/ec298ad2cbc415f3208e5897acda6643) is 8M, max 195.6M, 187.6M free.
Sep 3 23:23:10.935582 systemd-journald[1518]: Received client request to flush runtime journal.
Sep 3 23:23:10.935797 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Sep 3 23:23:10.829647 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Sep 3 23:23:10.841061 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Sep 3 23:23:10.941131 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Sep 3 23:23:10.961888 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Sep 3 23:23:10.964236 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Sep 3 23:23:10.974866 kernel: loop1: detected capacity change from 0 to 107312
Sep 3 23:23:10.990144 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 3 23:23:11.007628 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Sep 3 23:23:11.014746 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 3 23:23:11.075135 systemd-tmpfiles[1592]: ACLs are not supported, ignoring.
Sep 3 23:23:11.076053 systemd-tmpfiles[1592]: ACLs are not supported, ignoring.
Sep 3 23:23:11.089607 kernel: loop2: detected capacity change from 0 to 207008
Sep 3 23:23:11.106638 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 3 23:23:11.326564 kernel: loop3: detected capacity change from 0 to 61240
Sep 3 23:23:11.442579 kernel: loop4: detected capacity change from 0 to 138376
Sep 3 23:23:11.472564 kernel: loop5: detected capacity change from 0 to 107312
Sep 3 23:23:11.494575 kernel: loop6: detected capacity change from 0 to 207008
Sep 3 23:23:11.536620 kernel: loop7: detected capacity change from 0 to 61240
Sep 3 23:23:11.548437 (sd-merge)[1598]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'.
Sep 3 23:23:11.550929 (sd-merge)[1598]: Merged extensions into '/usr'.
Sep 3 23:23:11.561866 systemd[1]: Reload requested from client PID 1546 ('systemd-sysext') (unit systemd-sysext.service)...
Sep 3 23:23:11.562058 systemd[1]: Reloading...
Sep 3 23:23:11.784609 zram_generator::config[1628]: No configuration found.
Sep 3 23:23:12.019383 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 3 23:23:12.204116 systemd[1]: Reloading finished in 641 ms.
Sep 3 23:23:12.231000 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Sep 3 23:23:12.234715 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Sep 3 23:23:12.252759 systemd[1]: Starting ensure-sysext.service...
Sep 3 23:23:12.259792 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 3 23:23:12.273790 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 3 23:23:12.306866 systemd[1]: Reload requested from client PID 1677 ('systemctl') (unit ensure-sysext.service)...
Sep 3 23:23:12.306897 systemd[1]: Reloading...
Sep 3 23:23:12.355678 systemd-tmpfiles[1678]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Sep 3 23:23:12.355757 systemd-tmpfiles[1678]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Sep 3 23:23:12.356344 systemd-tmpfiles[1678]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Sep 3 23:23:12.361052 ldconfig[1539]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Sep 3 23:23:12.361999 systemd-tmpfiles[1678]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Sep 3 23:23:12.367052 systemd-tmpfiles[1678]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Sep 3 23:23:12.369796 systemd-tmpfiles[1678]: ACLs are not supported, ignoring.
Sep 3 23:23:12.369949 systemd-tmpfiles[1678]: ACLs are not supported, ignoring.
Sep 3 23:23:12.380819 systemd-tmpfiles[1678]: Detected autofs mount point /boot during canonicalization of boot.
Sep 3 23:23:12.380847 systemd-tmpfiles[1678]: Skipping /boot
Sep 3 23:23:12.389576 systemd-udevd[1679]: Using default interface naming scheme 'v255'.
Sep 3 23:23:12.431730 systemd-tmpfiles[1678]: Detected autofs mount point /boot during canonicalization of boot.
Sep 3 23:23:12.431757 systemd-tmpfiles[1678]: Skipping /boot
Sep 3 23:23:12.506572 zram_generator::config[1710]: No configuration found.
Sep 3 23:23:12.826850 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 3 23:23:12.866624 (udev-worker)[1730]: Network interface NamePolicy= disabled on kernel command line.
Sep 3 23:23:13.058642 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Sep 3 23:23:13.059750 systemd[1]: Reloading finished in 752 ms.
Sep 3 23:23:13.074624 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 3 23:23:13.082851 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Sep 3 23:23:13.113641 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 3 23:23:13.155630 systemd[1]: Finished ensure-sysext.service.
Sep 3 23:23:13.162478 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Sep 3 23:23:13.169016 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Sep 3 23:23:13.173946 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 3 23:23:13.176956 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 3 23:23:13.182974 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 3 23:23:13.189910 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 3 23:23:13.197754 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 3 23:23:13.200315 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 3 23:23:13.200409 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 3 23:23:13.204238 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Sep 3 23:23:13.212803 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 3 23:23:13.223121 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 3 23:23:13.225684 systemd[1]: Reached target time-set.target - System Time Set.
Sep 3 23:23:13.231067 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Sep 3 23:23:13.278392 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Sep 3 23:23:13.301598 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Sep 3 23:23:13.387342 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 3 23:23:13.395188 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 3 23:23:13.402794 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Sep 3 23:23:13.415145 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Sep 3 23:23:13.432017 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 3 23:23:13.433706 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 3 23:23:13.442518 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 3 23:23:13.443149 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 3 23:23:13.447303 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 3 23:23:13.447828 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 3 23:23:13.453136 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 3 23:23:13.453294 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 3 23:23:13.491250 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Sep 3 23:23:13.494416 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Sep 3 23:23:13.508332 augenrules[1853]: No rules
Sep 3 23:23:13.510995 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 3 23:23:13.519039 systemd[1]: audit-rules.service: Deactivated successfully.
Sep 3 23:23:13.519493 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Sep 3 23:23:13.523779 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Sep 3 23:23:13.725280 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 3 23:23:13.808610 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Sep 3 23:23:13.915441 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Sep 3 23:23:13.926817 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Sep 3 23:23:13.990111 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Sep 3 23:23:14.022377 systemd-networkd[1821]: lo: Link UP
Sep 3 23:23:14.022398 systemd-networkd[1821]: lo: Gained carrier
Sep 3 23:23:14.025292 systemd-networkd[1821]: Enumeration completed
Sep 3 23:23:14.025542 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 3 23:23:14.030163 systemd-networkd[1821]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 3 23:23:14.030187 systemd-networkd[1821]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 3 23:23:14.031659 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Sep 3 23:23:14.037930 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Sep 3 23:23:14.045748 systemd-networkd[1821]: eth0: Link UP
Sep 3 23:23:14.045959 systemd-resolved[1822]: Positive Trust Anchors:
Sep 3 23:23:14.045981 systemd-resolved[1822]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 3 23:23:14.046024 systemd-networkd[1821]: eth0: Gained carrier
Sep 3 23:23:14.046044 systemd-resolved[1822]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 3 23:23:14.046061 systemd-networkd[1821]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 3 23:23:14.056643 systemd-networkd[1821]: eth0: DHCPv4 address 172.31.24.220/20, gateway 172.31.16.1 acquired from 172.31.16.1
Sep 3 23:23:14.072848 systemd-resolved[1822]: Defaulting to hostname 'linux'.
Sep 3 23:23:14.076290 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 3 23:23:14.079212 systemd[1]: Reached target network.target - Network.
Sep 3 23:23:14.081226 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 3 23:23:14.083813 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 3 23:23:14.086404 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Sep 3 23:23:14.089116 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Sep 3 23:23:14.092325 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Sep 3 23:23:14.094878 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Sep 3 23:23:14.097641 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Sep 3 23:23:14.100339 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Sep 3 23:23:14.100391 systemd[1]: Reached target paths.target - Path Units.
Sep 3 23:23:14.102428 systemd[1]: Reached target timers.target - Timer Units.
Sep 3 23:23:14.108678 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Sep 3 23:23:14.115498 systemd[1]: Starting docker.socket - Docker Socket for the API...
Sep 3 23:23:14.122410 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Sep 3 23:23:14.125820 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Sep 3 23:23:14.128406 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Sep 3 23:23:14.138848 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Sep 3 23:23:14.142083 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Sep 3 23:23:14.146335 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Sep 3 23:23:14.149326 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Sep 3 23:23:14.154705 systemd[1]: Reached target sockets.target - Socket Units.
Sep 3 23:23:14.157006 systemd[1]: Reached target basic.target - Basic System.
Sep 3 23:23:14.159738 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Sep 3 23:23:14.159945 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Sep 3 23:23:14.162065 systemd[1]: Starting containerd.service - containerd container runtime...
Sep 3 23:23:14.167848 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Sep 3 23:23:14.174955 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Sep 3 23:23:14.185783 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Sep 3 23:23:14.194581 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Sep 3 23:23:14.199263 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Sep 3 23:23:14.202819 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Sep 3 23:23:14.212920 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Sep 3 23:23:14.221940 systemd[1]: Started ntpd.service - Network Time Service.
Sep 3 23:23:14.228835 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Sep 3 23:23:14.237914 systemd[1]: Starting setup-oem.service - Setup OEM...
Sep 3 23:23:14.249779 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Sep 3 23:23:14.261036 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Sep 3 23:23:14.275639 systemd[1]: Starting systemd-logind.service - User Login Management...
Sep 3 23:23:14.277927 jq[1966]: false
Sep 3 23:23:14.280026 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Sep 3 23:23:14.282310 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Sep 3 23:23:14.285640 systemd[1]: Starting update-engine.service - Update Engine...
Sep 3 23:23:14.292914 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Sep 3 23:23:14.313558 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Sep 3 23:23:14.317240 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Sep 3 23:23:14.318256 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Sep 3 23:23:14.342248 extend-filesystems[1967]: Found /dev/nvme0n1p6
Sep 3 23:23:14.364214 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Sep 3 23:23:14.369618 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Sep 3 23:23:14.375891 jq[1978]: true
Sep 3 23:23:14.391979 extend-filesystems[1967]: Found /dev/nvme0n1p9
Sep 3 23:23:14.399740 extend-filesystems[1967]: Checking size of /dev/nvme0n1p9
Sep 3 23:23:14.429552 tar[1983]: linux-arm64/LICENSE
Sep 3 23:23:14.429552 tar[1983]: linux-arm64/helm
Sep 3 23:23:14.438095 jq[1997]: true
Sep 3 23:23:14.485887 update_engine[1977]: I20250903 23:23:14.485509 1977 main.cc:92] Flatcar Update Engine starting
Sep 3 23:23:14.496180 systemd[1]: motdgen.service: Deactivated successfully.
Sep 3 23:23:14.501637 extend-filesystems[1967]: Resized partition /dev/nvme0n1p9
Sep 3 23:23:14.498604 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Sep 3 23:23:14.515268 extend-filesystems[2016]: resize2fs 1.47.2 (1-Jan-2025)
Sep 3 23:23:14.526156 (ntainerd)[2010]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Sep 3 23:23:14.546862 dbus-daemon[1964]: [system] SELinux support is enabled
Sep 3 23:23:14.547170 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Sep 3 23:23:14.563564 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks
Sep 3 23:23:14.555959 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Sep 3 23:23:14.556014 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Sep 3 23:23:14.559044 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Sep 3 23:23:14.559080 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Sep 3 23:23:14.583134 dbus-daemon[1964]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1821 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0")
Sep 3 23:23:14.595886 dbus-daemon[1964]: [system] Successfully activated service 'org.freedesktop.systemd1'
Sep 3 23:23:14.658640 ntpd[1969]: 3 Sep 23:23:14 ntpd[1969]: ntpd 4.2.8p17@1.4004-o Wed Sep 3 21:32:01 UTC 2025 (1): Starting
Sep 3 23:23:14.658640 ntpd[1969]: 3 Sep 23:23:14 ntpd[1969]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Sep 3 23:23:14.658640 ntpd[1969]: 3 Sep 23:23:14 ntpd[1969]: ----------------------------------------------------
Sep 3 23:23:14.658640 ntpd[1969]: 3 Sep 23:23:14 ntpd[1969]: ntp-4 is maintained by Network Time Foundation,
Sep 3 23:23:14.658640 ntpd[1969]: 3 Sep 23:23:14 ntpd[1969]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Sep 3 23:23:14.658640 ntpd[1969]: 3 Sep 23:23:14 ntpd[1969]: corporation. Support and training for ntp-4 are
Sep 3 23:23:14.658640 ntpd[1969]: 3 Sep 23:23:14 ntpd[1969]: available at https://www.nwtime.org/support
Sep 3 23:23:14.658640 ntpd[1969]: 3 Sep 23:23:14 ntpd[1969]: ----------------------------------------------------
Sep 3 23:23:14.658640 ntpd[1969]: 3 Sep 23:23:14 ntpd[1969]: proto: precision = 0.096 usec (-23)
Sep 3 23:23:14.658640 ntpd[1969]: 3 Sep 23:23:14 ntpd[1969]: basedate set to 2025-08-22
Sep 3 23:23:14.658640 ntpd[1969]: 3 Sep 23:23:14 ntpd[1969]: gps base set to 2025-08-24 (week 2381)
Sep 3 23:23:14.672465 update_engine[1977]: I20250903 23:23:14.609833 1977 update_check_scheduler.cc:74] Next update check in 7m16s
Sep 3 23:23:14.643955 ntpd[1969]: ntpd 4.2.8p17@1.4004-o Wed Sep 3 21:32:01 UTC 2025 (1): Starting
Sep 3 23:23:14.672885 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915
Sep 3 23:23:14.659400 systemd[1]: Starting systemd-hostnamed.service - Hostname Service...
Sep 3 23:23:14.644000 ntpd[1969]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Sep 3 23:23:14.661855 systemd[1]: Started update-engine.service - Update Engine.
Sep 3 23:23:14.644018 ntpd[1969]: ----------------------------------------------------
Sep 3 23:23:14.644035 ntpd[1969]: ntp-4 is maintained by Network Time Foundation,
Sep 3 23:23:14.644051 ntpd[1969]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Sep 3 23:23:14.644067 ntpd[1969]: corporation. Support and training for ntp-4 are
Sep 3 23:23:14.644082 ntpd[1969]: available at https://www.nwtime.org/support
Sep 3 23:23:14.708863 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Sep 3 23:23:14.719734 ntpd[1969]: 3 Sep 23:23:14 ntpd[1969]: Listen and drop on 0 v6wildcard [::]:123
Sep 3 23:23:14.719734 ntpd[1969]: 3 Sep 23:23:14 ntpd[1969]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Sep 3 23:23:14.719734 ntpd[1969]: 3 Sep 23:23:14 ntpd[1969]: Listen normally on 2 lo 127.0.0.1:123
Sep 3 23:23:14.719734 ntpd[1969]: 3 Sep 23:23:14 ntpd[1969]: Listen normally on 3 eth0 172.31.24.220:123
Sep 3 23:23:14.719734 ntpd[1969]: 3 Sep 23:23:14 ntpd[1969]: Listen normally on 4 lo [::1]:123
Sep 3 23:23:14.719734 ntpd[1969]: 3 Sep 23:23:14 ntpd[1969]: bind(21) AF_INET6 fe80::4d5:16ff:feb8:92bd%2#123 flags 0x11 failed: Cannot assign requested address
Sep 3 23:23:14.719734 ntpd[1969]: 3 Sep 23:23:14 ntpd[1969]: unable to create socket on eth0 (5) for fe80::4d5:16ff:feb8:92bd%2#123
Sep 3 23:23:14.719734 ntpd[1969]: 3 Sep 23:23:14 ntpd[1969]: failed to init interface for address fe80::4d5:16ff:feb8:92bd%2
Sep 3 23:23:14.719734 ntpd[1969]: 3 Sep 23:23:14 ntpd[1969]: Listening on routing socket on fd #21 for interface updates
Sep 3 23:23:14.644099 ntpd[1969]: ----------------------------------------------------
Sep 3 23:23:14.712467 systemd[1]: Finished setup-oem.service - Setup OEM.
Sep 3 23:23:14.720230 extend-filesystems[2016]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required
Sep 3 23:23:14.720230 extend-filesystems[2016]: old_desc_blocks = 1, new_desc_blocks = 1
Sep 3 23:23:14.720230 extend-filesystems[2016]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long.
Sep 3 23:23:14.652700 ntpd[1969]: proto: precision = 0.096 usec (-23)
Sep 3 23:23:14.730839 extend-filesystems[1967]: Resized filesystem in /dev/nvme0n1p9
Sep 3 23:23:14.653086 ntpd[1969]: basedate set to 2025-08-22
Sep 3 23:23:14.653106 ntpd[1969]: gps base set to 2025-08-24 (week 2381)
Sep 3 23:23:14.678383 ntpd[1969]: Listen and drop on 0 v6wildcard [::]:123
Sep 3 23:23:14.678459 ntpd[1969]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Sep 3 23:23:14.682648 ntpd[1969]: Listen normally on 2 lo 127.0.0.1:123
Sep 3 23:23:14.682742 ntpd[1969]: Listen normally on 3 eth0 172.31.24.220:123
Sep 3 23:23:14.682824 ntpd[1969]: Listen normally on 4 lo [::1]:123
Sep 3 23:23:14.684260 ntpd[1969]: bind(21) AF_INET6 fe80::4d5:16ff:feb8:92bd%2#123 flags 0x11 failed: Cannot assign requested address
Sep 3 23:23:14.684306 ntpd[1969]: unable to create socket on eth0 (5) for fe80::4d5:16ff:feb8:92bd%2#123
Sep 3 23:23:14.684331 ntpd[1969]: failed to init interface for address fe80::4d5:16ff:feb8:92bd%2
Sep 3 23:23:14.684387 ntpd[1969]: Listening on routing socket on fd #21 for interface updates
Sep 3 23:23:14.739117 systemd[1]: extend-filesystems.service: Deactivated successfully.
Sep 3 23:23:14.740085 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Sep 3 23:23:14.747170 bash[2031]: Updated "/home/core/.ssh/authorized_keys"
Sep 3 23:23:14.750597 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Sep 3 23:23:14.761721 ntpd[1969]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Sep 3 23:23:14.761796 ntpd[1969]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Sep 3 23:23:14.761944 ntpd[1969]: 3 Sep 23:23:14 ntpd[1969]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Sep 3 23:23:14.761944 ntpd[1969]: 3 Sep 23:23:14 ntpd[1969]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Sep 3 23:23:14.764011 systemd[1]: Starting sshkeys.service...
Sep 3 23:23:14.866340 coreos-metadata[1963]: Sep 03 23:23:14.866 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1
Sep 3 23:23:14.877760 coreos-metadata[1963]: Sep 03 23:23:14.875 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1
Sep 3 23:23:14.877475 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Sep 3 23:23:14.883417 coreos-metadata[1963]: Sep 03 23:23:14.883 INFO Fetch successful
Sep 3 23:23:14.883602 coreos-metadata[1963]: Sep 03 23:23:14.883 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1
Sep 3 23:23:14.884627 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Sep 3 23:23:14.889856 coreos-metadata[1963]: Sep 03 23:23:14.889 INFO Fetch successful
Sep 3 23:23:14.895770 coreos-metadata[1963]: Sep 03 23:23:14.895 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1
Sep 3 23:23:14.897728 coreos-metadata[1963]: Sep 03 23:23:14.897 INFO Fetch successful
Sep 3 23:23:14.898545 coreos-metadata[1963]: Sep 03 23:23:14.897 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1
Sep 3 23:23:14.900282 coreos-metadata[1963]: Sep 03 23:23:14.900 INFO Fetch successful
Sep 3 23:23:14.902515 coreos-metadata[1963]: Sep 03 23:23:14.902 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1
Sep 3 23:23:14.907546 coreos-metadata[1963]: Sep 03 23:23:14.905 INFO Fetch failed with 404: resource not found
Sep 3 23:23:14.907546 coreos-metadata[1963]: Sep 03 23:23:14.906 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1
Sep 3 23:23:14.909396 coreos-metadata[1963]: Sep 03 23:23:14.909 INFO Fetch successful
Sep 3 23:23:14.909396 coreos-metadata[1963]: Sep 03 23:23:14.909 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1
Sep 3 23:23:14.913217 coreos-metadata[1963]: Sep 03 23:23:14.913 INFO Fetch successful
Sep 3 23:23:14.913323 coreos-metadata[1963]: Sep 03 23:23:14.913 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1
Sep 3 23:23:14.917547 coreos-metadata[1963]: Sep 03 23:23:14.917 INFO Fetch successful
Sep 3 23:23:14.917547 coreos-metadata[1963]: Sep 03 23:23:14.917 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1
Sep 3 23:23:14.925545 coreos-metadata[1963]: Sep 03 23:23:14.924 INFO Fetch successful
Sep 3 23:23:14.925545 coreos-metadata[1963]: Sep 03 23:23:14.924 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1
Sep 3 23:23:14.926200 coreos-metadata[1963]: Sep 03 23:23:14.926 INFO Fetch successful
Sep 3 23:23:14.999570 systemd-logind[1975]: Watching system buttons on /dev/input/event0 (Power Button)
Sep 3 23:23:14.999680 systemd-logind[1975]: Watching system buttons on /dev/input/event1 (Sleep Button)
Sep 3 23:23:15.002764 systemd-logind[1975]: New seat seat0.
Sep 3 23:23:15.026966 systemd[1]: Started systemd-logind.service - User Login Management.
Sep 3 23:23:15.111465 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Sep 3 23:23:15.184347 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Sep 3 23:23:15.187706 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Sep 3 23:23:15.371735 containerd[2010]: time="2025-09-03T23:23:15Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Sep 3 23:23:15.380394 containerd[2010]: time="2025-09-03T23:23:15.380299115Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4
Sep 3 23:23:15.398216 systemd[1]: Started systemd-hostnamed.service - Hostname Service.
Sep 3 23:23:15.406284 dbus-daemon[1964]: [system] Successfully activated service 'org.freedesktop.hostname1'
Sep 3 23:23:15.411721 dbus-daemon[1964]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.5' (uid=0 pid=2042 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0")
Sep 3 23:23:15.421566 coreos-metadata[2059]: Sep 03 23:23:15.421 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1
Sep 3 23:23:15.424518 systemd[1]: Starting polkit.service - Authorization Manager...
Sep 3 23:23:15.428940 coreos-metadata[2059]: Sep 03 23:23:15.427 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1
Sep 3 23:23:15.431273 coreos-metadata[2059]: Sep 03 23:23:15.431 INFO Fetch successful
Sep 3 23:23:15.431405 coreos-metadata[2059]: Sep 03 23:23:15.431 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1
Sep 3 23:23:15.432217 coreos-metadata[2059]: Sep 03 23:23:15.432 INFO Fetch successful
Sep 3 23:23:15.439659 unknown[2059]: wrote ssh authorized keys file for user: core
Sep 3 23:23:15.497309 containerd[2010]: time="2025-09-03T23:23:15.496473540Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="16.032µs"
Sep 3 23:23:15.502798 containerd[2010]: time="2025-09-03T23:23:15.502715640Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Sep 3 23:23:15.502908 containerd[2010]: time="2025-09-03T23:23:15.502802856Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Sep 3 23:23:15.503568 containerd[2010]: time="2025-09-03T23:23:15.503103816Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Sep 3 23:23:15.503568 containerd[2010]: time="2025-09-03T23:23:15.503156160Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Sep 3 23:23:15.503568 containerd[2010]: time="2025-09-03T23:23:15.503212020Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Sep 3 23:23:15.503568 containerd[2010]: time="2025-09-03T23:23:15.503330076Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Sep 3 23:23:15.503568 containerd[2010]: time="2025-09-03T23:23:15.503356392Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Sep 3 23:23:15.509562 containerd[2010]: time="2025-09-03T23:23:15.507847092Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Sep 3 23:23:15.509562 containerd[2010]: time="2025-09-03T23:23:15.507913380Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Sep 3 23:23:15.509562 containerd[2010]: time="2025-09-03T23:23:15.507948180Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Sep 3 23:23:15.509562 containerd[2010]: time="2025-09-03T23:23:15.507973104Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Sep 3 23:23:15.514547 containerd[2010]: time="2025-09-03T23:23:15.513192096Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Sep 3 23:23:15.517941 containerd[2010]: time="2025-09-03T23:23:15.517873548Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Sep 3 23:23:15.518052 containerd[2010]: time="2025-09-03T23:23:15.517973880Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Sep 3 23:23:15.518052 containerd[2010]: time="2025-09-03T23:23:15.518003184Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Sep 3 23:23:15.518156 containerd[2010]: time="2025-09-03T23:23:15.518075376Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Sep 3 23:23:15.519544 containerd[2010]: time="2025-09-03T23:23:15.518694540Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Sep 3 23:23:15.528219 containerd[2010]: time="2025-09-03T23:23:15.525736308Z" level=info msg="metadata content store policy set" policy=shared
Sep 3 23:23:15.531835 update-ssh-keys[2137]: Updated "/home/core/.ssh/authorized_keys"
Sep 3 23:23:15.534741 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Sep 3 23:23:15.548947 systemd[1]: Finished sshkeys.service.
Sep 3 23:23:15.555636 containerd[2010]: time="2025-09-03T23:23:15.552477360Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Sep 3 23:23:15.555636 containerd[2010]: time="2025-09-03T23:23:15.552599040Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Sep 3 23:23:15.555636 containerd[2010]: time="2025-09-03T23:23:15.552637152Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Sep 3 23:23:15.555636 containerd[2010]: time="2025-09-03T23:23:15.552666840Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Sep 3 23:23:15.555636 containerd[2010]: time="2025-09-03T23:23:15.552697344Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Sep 3 23:23:15.555636 containerd[2010]: time="2025-09-03T23:23:15.552724740Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Sep 3 23:23:15.555636 containerd[2010]: time="2025-09-03T23:23:15.552754188Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Sep 3 23:23:15.555636 containerd[2010]: time="2025-09-03T23:23:15.552783000Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Sep 3 23:23:15.555636 containerd[2010]: time="2025-09-03T23:23:15.552814500Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Sep 3 23:23:15.555636 containerd[2010]: time="2025-09-03T23:23:15.552841596Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Sep 3 23:23:15.555636 containerd[2010]: time="2025-09-03T23:23:15.552867768Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Sep 3 23:23:15.555636 containerd[2010]: time="2025-09-03T23:23:15.553022112Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Sep 3 23:23:15.555636 containerd[2010]: time="2025-09-03T23:23:15.553282680Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Sep 3 23:23:15.555636 containerd[2010]: time="2025-09-03T23:23:15.553323168Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Sep 3 23:23:15.556213 containerd[2010]: time="2025-09-03T23:23:15.553357584Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Sep 3 23:23:15.556213 containerd[2010]: time="2025-09-03T23:23:15.553385208Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Sep 3 23:23:15.556213 containerd[2010]: time="2025-09-03T23:23:15.553418640Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Sep 3 23:23:15.556213 containerd[2010]: time="2025-09-03T23:23:15.553469820Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Sep 3 23:23:15.556213 containerd[2010]: time="2025-09-03T23:23:15.553499856Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Sep 3 23:23:15.556213 containerd[2010]: time="2025-09-03T23:23:15.553546104Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Sep 3 23:23:15.556213 containerd[2010]: time="2025-09-03T23:23:15.553578444Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
Sep 3 23:23:15.556213 containerd[2010]: time="2025-09-03T23:23:15.553605984Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
Sep 3 23:23:15.556213 containerd[2010]: time="2025-09-03T23:23:15.553633164Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
Sep 3 23:23:15.556213 containerd[2010]: time="2025-09-03T23:23:15.554014872Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
Sep 3 23:23:15.556213 containerd[2010]: time="2025-09-03T23:23:15.554047848Z" level=info msg="Start snapshots syncer"
Sep 3 23:23:15.556213 containerd[2010]: time="2025-09-03T23:23:15.554096580Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
Sep 3 23:23:15.556730 containerd[2010]: time="2025-09-03T23:23:15.554456592Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
Sep 3 23:23:15.563906 containerd[2010]: time="2025-09-03T23:23:15.560992500Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
Sep 3 23:23:15.563906 containerd[2010]: time="2025-09-03T23:23:15.561280680Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
Sep 3 23:23:15.563906 containerd[2010]: time="2025-09-03T23:23:15.561643356Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
Sep 3 23:23:15.563906 containerd[2010]: time="2025-09-03T23:23:15.561717000Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
Sep 3 23:23:15.563906 containerd[2010]: time="2025-09-03T23:23:15.561748008Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
Sep 3 23:23:15.563906 containerd[2010]: time="2025-09-03T23:23:15.563581992Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
Sep 3 23:23:15.563906 containerd[2010]: time="2025-09-03T23:23:15.563647320Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
Sep 3 23:23:15.563906 containerd[2010]: time="2025-09-03T23:23:15.563684256Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
Sep 3 23:23:15.563906 containerd[2010]: time="2025-09-03T23:23:15.563737908Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
Sep 3 23:23:15.563906 containerd[2010]: time="2025-09-03T23:23:15.563835540Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
Sep 3 23:23:15.564482 containerd[2010]: time="2025-09-03T23:23:15.563867856Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
Sep 3 23:23:15.564482 containerd[2010]: time="2025-09-03T23:23:15.564435192Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
Sep 3 23:23:15.564904 containerd[2010]: time="2025-09-03T23:23:15.564672672Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Sep 3 23:23:15.565420 containerd[2010]: time="2025-09-03T23:23:15.565028580Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Sep 3 23:23:15.570723 containerd[2010]: time="2025-09-03T23:23:15.569785896Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Sep 3 23:23:15.570723 containerd[2010]: time="2025-09-03T23:23:15.569882172Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Sep 3 23:23:15.570723 containerd[2010]: time="2025-09-03T23:23:15.569907192Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
Sep 3 23:23:15.570723 containerd[2010]: time="2025-09-03T23:23:15.569962704Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
Sep 3 23:23:15.570723 containerd[2010]: time="2025-09-03T23:23:15.570033792Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
Sep 3 23:23:15.570723 containerd[2010]: time="2025-09-03T23:23:15.570231324Z" level=info msg="runtime interface created"
Sep 3 23:23:15.570723 containerd[2010]: time="2025-09-03T23:23:15.570249012Z" level=info msg="created NRI interface"
Sep 3 23:23:15.570723 containerd[2010]: time="2025-09-03T23:23:15.570277332Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
Sep 3 23:23:15.570723 containerd[2010]: time="2025-09-03T23:23:15.570332124Z" level=info msg="Connect containerd service"
Sep 3 23:23:15.570723 containerd[2010]: time="2025-09-03T23:23:15.570431796Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Sep 3 23:23:15.572635 containerd[2010]: time="2025-09-03T23:23:15.572562024Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Sep 3 23:23:15.649422 ntpd[1969]: bind(24) AF_INET6 fe80::4d5:16ff:feb8:92bd%2#123 flags 0x11 failed: Cannot assign requested address
Sep 3 23:23:15.650144 ntpd[1969]: 3 Sep 23:23:15 ntpd[1969]: bind(24) AF_INET6 fe80::4d5:16ff:feb8:92bd%2#123 flags 0x11 failed: Cannot assign requested address
Sep 3 23:23:15.650144 ntpd[1969]: 3 Sep 23:23:15 ntpd[1969]: unable to create socket on eth0 (6) for fe80::4d5:16ff:feb8:92bd%2#123
Sep 3 23:23:15.650144 ntpd[1969]: 3 Sep 23:23:15 ntpd[1969]: failed to init interface for address fe80::4d5:16ff:feb8:92bd%2
Sep 3 23:23:15.649515 ntpd[1969]: unable to create socket on eth0 (6) for fe80::4d5:16ff:feb8:92bd%2#123
Sep 3 23:23:15.649567 ntpd[1969]: failed to init interface for address fe80::4d5:16ff:feb8:92bd%2
Sep 3 23:23:15.672779 systemd-networkd[1821]: eth0: Gained IPv6LL
Sep 3 23:23:15.685623 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Sep 3 23:23:15.689299 systemd[1]: Reached target network-online.target - Network is Online.
Sep 3 23:23:15.696219 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent.
Sep 3 23:23:15.708276 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 3 23:23:15.715299 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Sep 3 23:23:15.847258 sshd_keygen[2006]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 3 23:23:15.957949 containerd[2010]: time="2025-09-03T23:23:15.957809642Z" level=info msg="Start subscribing containerd event" Sep 3 23:23:15.957949 containerd[2010]: time="2025-09-03T23:23:15.957905726Z" level=info msg="Start recovering state" Sep 3 23:23:15.958116 containerd[2010]: time="2025-09-03T23:23:15.958034066Z" level=info msg="Start event monitor" Sep 3 23:23:15.958116 containerd[2010]: time="2025-09-03T23:23:15.958064078Z" level=info msg="Start cni network conf syncer for default" Sep 3 23:23:15.958116 containerd[2010]: time="2025-09-03T23:23:15.958082474Z" level=info msg="Start streaming server" Sep 3 23:23:15.958116 containerd[2010]: time="2025-09-03T23:23:15.958100990Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Sep 3 23:23:15.958286 containerd[2010]: time="2025-09-03T23:23:15.958117646Z" level=info msg="runtime interface starting up..." Sep 3 23:23:15.958286 containerd[2010]: time="2025-09-03T23:23:15.958136294Z" level=info msg="starting plugins..." Sep 3 23:23:15.958286 containerd[2010]: time="2025-09-03T23:23:15.958164866Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Sep 3 23:23:15.962331 containerd[2010]: time="2025-09-03T23:23:15.959099522Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 3 23:23:15.962331 containerd[2010]: time="2025-09-03T23:23:15.959219906Z" level=info msg=serving... address=/run/containerd/containerd.sock Sep 3 23:23:15.962331 containerd[2010]: time="2025-09-03T23:23:15.959327978Z" level=info msg="containerd successfully booted in 0.588302s" Sep 3 23:23:15.959442 systemd[1]: Started containerd.service - containerd container runtime. Sep 3 23:23:16.001598 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Sep 3 23:23:16.004861 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. 
Sep 3 23:23:16.020961 systemd[1]: Starting issuegen.service - Generate /run/issue... Sep 3 23:23:16.024758 locksmithd[2043]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 3 23:23:16.026316 systemd[1]: Started sshd@0-172.31.24.220:22-139.178.89.65:57480.service - OpenSSH per-connection server daemon (139.178.89.65:57480). Sep 3 23:23:16.072120 systemd[1]: issuegen.service: Deactivated successfully. Sep 3 23:23:16.074022 systemd[1]: Finished issuegen.service - Generate /run/issue. Sep 3 23:23:16.090941 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Sep 3 23:23:16.149561 amazon-ssm-agent[2158]: Initializing new seelog logger Sep 3 23:23:16.149561 amazon-ssm-agent[2158]: New Seelog Logger Creation Complete Sep 3 23:23:16.149561 amazon-ssm-agent[2158]: 2025/09/03 23:23:16 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Sep 3 23:23:16.149561 amazon-ssm-agent[2158]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Sep 3 23:23:16.150915 amazon-ssm-agent[2158]: 2025/09/03 23:23:16 processing appconfig overrides Sep 3 23:23:16.151764 amazon-ssm-agent[2158]: 2025/09/03 23:23:16 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Sep 3 23:23:16.152633 amazon-ssm-agent[2158]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Sep 3 23:23:16.152633 amazon-ssm-agent[2158]: 2025/09/03 23:23:16 processing appconfig overrides Sep 3 23:23:16.152633 amazon-ssm-agent[2158]: 2025/09/03 23:23:16 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Sep 3 23:23:16.152633 amazon-ssm-agent[2158]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. 
Sep 3 23:23:16.154340 amazon-ssm-agent[2158]: 2025/09/03 23:23:16 processing appconfig overrides Sep 3 23:23:16.155376 amazon-ssm-agent[2158]: 2025-09-03 23:23:16.1516 INFO Proxy environment variables: Sep 3 23:23:16.158285 amazon-ssm-agent[2158]: 2025/09/03 23:23:16 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Sep 3 23:23:16.158412 amazon-ssm-agent[2158]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Sep 3 23:23:16.158648 amazon-ssm-agent[2158]: 2025/09/03 23:23:16 processing appconfig overrides Sep 3 23:23:16.213643 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Sep 3 23:23:16.226003 systemd[1]: Started getty@tty1.service - Getty on tty1. Sep 3 23:23:16.233162 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Sep 3 23:23:16.239288 systemd[1]: Reached target getty.target - Login Prompts. Sep 3 23:23:16.268561 amazon-ssm-agent[2158]: 2025-09-03 23:23:16.1517 INFO https_proxy: Sep 3 23:23:16.337543 polkitd[2134]: Started polkitd version 126 Sep 3 23:23:16.353458 polkitd[2134]: Loading rules from directory /etc/polkit-1/rules.d Sep 3 23:23:16.355411 polkitd[2134]: Loading rules from directory /run/polkit-1/rules.d Sep 3 23:23:16.355499 polkitd[2134]: Error opening rules directory: Error opening directory “/run/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Sep 3 23:23:16.356151 polkitd[2134]: Loading rules from directory /usr/local/share/polkit-1/rules.d Sep 3 23:23:16.356202 polkitd[2134]: Error opening rules directory: Error opening directory “/usr/local/share/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Sep 3 23:23:16.356279 polkitd[2134]: Loading rules from directory /usr/share/polkit-1/rules.d Sep 3 23:23:16.359100 polkitd[2134]: Finished loading, compiling and executing 2 rules Sep 3 23:23:16.359650 systemd[1]: Started polkit.service - Authorization Manager. 
Sep 3 23:23:16.365989 dbus-daemon[1964]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Sep 3 23:23:16.367769 amazon-ssm-agent[2158]: 2025-09-03 23:23:16.1517 INFO http_proxy: Sep 3 23:23:16.369839 polkitd[2134]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Sep 3 23:23:16.411455 systemd-hostnamed[2042]: Hostname set to (transient) Sep 3 23:23:16.411661 systemd-resolved[1822]: System hostname changed to 'ip-172-31-24-220'. Sep 3 23:23:16.435016 sshd[2202]: Accepted publickey for core from 139.178.89.65 port 57480 ssh2: RSA SHA256:8eQAyPE99YHHVtDm+V4mP5sHyPbVNBHa6xDGC+ww79Y Sep 3 23:23:16.437418 sshd-session[2202]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 3 23:23:16.456978 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Sep 3 23:23:16.462264 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Sep 3 23:23:16.468092 amazon-ssm-agent[2158]: 2025-09-03 23:23:16.1517 INFO no_proxy: Sep 3 23:23:16.492603 systemd-logind[1975]: New session 1 of user core. Sep 3 23:23:16.523111 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Sep 3 23:23:16.537013 systemd[1]: Starting user@500.service - User Manager for UID 500... Sep 3 23:23:16.560690 (systemd)[2225]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 3 23:23:16.567797 amazon-ssm-agent[2158]: 2025-09-03 23:23:16.1520 INFO Checking if agent identity type OnPrem can be assumed Sep 3 23:23:16.570378 systemd-logind[1975]: New session c1 of user core. 
Sep 3 23:23:16.669122 amazon-ssm-agent[2158]: 2025-09-03 23:23:16.1520 INFO Checking if agent identity type EC2 can be assumed Sep 3 23:23:16.764678 amazon-ssm-agent[2158]: 2025-09-03 23:23:16.3045 INFO Agent will take identity from EC2 Sep 3 23:23:16.863460 amazon-ssm-agent[2158]: 2025-09-03 23:23:16.3092 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.3.0.0 Sep 3 23:23:16.912939 systemd[2225]: Queued start job for default target default.target. Sep 3 23:23:16.919907 systemd[2225]: Created slice app.slice - User Application Slice. Sep 3 23:23:16.919976 systemd[2225]: Reached target paths.target - Paths. Sep 3 23:23:16.920351 systemd[2225]: Reached target timers.target - Timers. Sep 3 23:23:16.925728 systemd[2225]: Starting dbus.socket - D-Bus User Message Bus Socket... Sep 3 23:23:16.964621 amazon-ssm-agent[2158]: 2025-09-03 23:23:16.3092 INFO [amazon-ssm-agent] OS: linux, Arch: arm64 Sep 3 23:23:16.972824 systemd[2225]: Listening on dbus.socket - D-Bus User Message Bus Socket. Sep 3 23:23:16.973070 systemd[2225]: Reached target sockets.target - Sockets. Sep 3 23:23:16.973352 systemd[2225]: Reached target basic.target - Basic System. Sep 3 23:23:16.973494 systemd[1]: Started user@500.service - User Manager for UID 500. Sep 3 23:23:16.973762 systemd[2225]: Reached target default.target - Main User Target. Sep 3 23:23:16.973824 systemd[2225]: Startup finished in 382ms. Sep 3 23:23:16.986857 systemd[1]: Started session-1.scope - Session 1 of User core. Sep 3 23:23:17.009967 tar[1983]: linux-arm64/README.md Sep 3 23:23:17.032470 amazon-ssm-agent[2158]: 2025/09/03 23:23:17 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Sep 3 23:23:17.032470 amazon-ssm-agent[2158]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Sep 3 23:23:17.032470 amazon-ssm-agent[2158]: 2025/09/03 23:23:17 processing appconfig overrides Sep 3 23:23:17.035892 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. 
Sep 3 23:23:17.064395 amazon-ssm-agent[2158]: 2025-09-03 23:23:16.3092 INFO [amazon-ssm-agent] Starting Core Agent Sep 3 23:23:17.070942 amazon-ssm-agent[2158]: 2025-09-03 23:23:16.3092 INFO [amazon-ssm-agent] Registrar detected. Attempting registration Sep 3 23:23:17.071055 amazon-ssm-agent[2158]: 2025-09-03 23:23:16.3092 INFO [Registrar] Starting registrar module Sep 3 23:23:17.071055 amazon-ssm-agent[2158]: 2025-09-03 23:23:16.3199 INFO [EC2Identity] Checking disk for registration info Sep 3 23:23:17.071055 amazon-ssm-agent[2158]: 2025-09-03 23:23:16.3200 INFO [EC2Identity] No registration info found for ec2 instance, attempting registration Sep 3 23:23:17.071055 amazon-ssm-agent[2158]: 2025-09-03 23:23:16.3200 INFO [EC2Identity] Generating registration keypair Sep 3 23:23:17.071055 amazon-ssm-agent[2158]: 2025-09-03 23:23:16.9620 INFO [EC2Identity] Checking write access before registering Sep 3 23:23:17.071055 amazon-ssm-agent[2158]: 2025-09-03 23:23:16.9649 INFO [EC2Identity] Registering EC2 instance with Systems Manager Sep 3 23:23:17.071055 amazon-ssm-agent[2158]: 2025-09-03 23:23:17.0317 INFO [EC2Identity] EC2 registration was successful. Sep 3 23:23:17.071055 amazon-ssm-agent[2158]: 2025-09-03 23:23:17.0317 INFO [amazon-ssm-agent] Registration attempted. Resuming core agent startup. 
Sep 3 23:23:17.071496 amazon-ssm-agent[2158]: 2025-09-03 23:23:17.0318 INFO [CredentialRefresher] credentialRefresher has started Sep 3 23:23:17.071496 amazon-ssm-agent[2158]: 2025-09-03 23:23:17.0318 INFO [CredentialRefresher] Starting credentials refresher loop Sep 3 23:23:17.071496 amazon-ssm-agent[2158]: 2025-09-03 23:23:17.0705 INFO EC2RoleProvider Successfully connected with instance profile role credentials Sep 3 23:23:17.071496 amazon-ssm-agent[2158]: 2025-09-03 23:23:17.0708 INFO [CredentialRefresher] Credentials ready Sep 3 23:23:17.149047 systemd[1]: Started sshd@1-172.31.24.220:22-139.178.89.65:57494.service - OpenSSH per-connection server daemon (139.178.89.65:57494). Sep 3 23:23:17.162924 amazon-ssm-agent[2158]: 2025-09-03 23:23:17.0711 INFO [CredentialRefresher] Next credential rotation will be in 29.9999903801 minutes Sep 3 23:23:17.349727 sshd[2239]: Accepted publickey for core from 139.178.89.65 port 57494 ssh2: RSA SHA256:8eQAyPE99YHHVtDm+V4mP5sHyPbVNBHa6xDGC+ww79Y Sep 3 23:23:17.352085 sshd-session[2239]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 3 23:23:17.361609 systemd-logind[1975]: New session 2 of user core. Sep 3 23:23:17.370772 systemd[1]: Started session-2.scope - Session 2 of User core. Sep 3 23:23:17.502293 sshd[2241]: Connection closed by 139.178.89.65 port 57494 Sep 3 23:23:17.503140 sshd-session[2239]: pam_unix(sshd:session): session closed for user core Sep 3 23:23:17.508878 systemd-logind[1975]: Session 2 logged out. Waiting for processes to exit. Sep 3 23:23:17.510019 systemd[1]: sshd@1-172.31.24.220:22-139.178.89.65:57494.service: Deactivated successfully. Sep 3 23:23:17.514322 systemd[1]: session-2.scope: Deactivated successfully. Sep 3 23:23:17.519997 systemd-logind[1975]: Removed session 2. Sep 3 23:23:17.538358 systemd[1]: Started sshd@2-172.31.24.220:22-139.178.89.65:57510.service - OpenSSH per-connection server daemon (139.178.89.65:57510). 
Sep 3 23:23:17.731556 sshd[2247]: Accepted publickey for core from 139.178.89.65 port 57510 ssh2: RSA SHA256:8eQAyPE99YHHVtDm+V4mP5sHyPbVNBHa6xDGC+ww79Y Sep 3 23:23:17.734436 sshd-session[2247]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 3 23:23:17.741730 systemd-logind[1975]: New session 3 of user core. Sep 3 23:23:17.750762 systemd[1]: Started session-3.scope - Session 3 of User core. Sep 3 23:23:17.880764 sshd[2249]: Connection closed by 139.178.89.65 port 57510 Sep 3 23:23:17.881614 sshd-session[2247]: pam_unix(sshd:session): session closed for user core Sep 3 23:23:17.886881 systemd[1]: sshd@2-172.31.24.220:22-139.178.89.65:57510.service: Deactivated successfully. Sep 3 23:23:17.891442 systemd[1]: session-3.scope: Deactivated successfully. Sep 3 23:23:17.893146 systemd-logind[1975]: Session 3 logged out. Waiting for processes to exit. Sep 3 23:23:17.898364 systemd-logind[1975]: Removed session 3. Sep 3 23:23:18.100402 amazon-ssm-agent[2158]: 2025-09-03 23:23:18.1001 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Sep 3 23:23:18.201735 amazon-ssm-agent[2158]: 2025-09-03 23:23:18.1041 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2256) started Sep 3 23:23:18.302613 amazon-ssm-agent[2158]: 2025-09-03 23:23:18.1042 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Sep 3 23:23:18.649397 ntpd[1969]: Listen normally on 7 eth0 [fe80::4d5:16ff:feb8:92bd%2]:123 Sep 3 23:23:18.650006 ntpd[1969]: 3 Sep 23:23:18 ntpd[1969]: Listen normally on 7 eth0 [fe80::4d5:16ff:feb8:92bd%2]:123 Sep 3 23:23:19.346151 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 3 23:23:19.349402 systemd[1]: Reached target multi-user.target - Multi-User System. 
Sep 3 23:23:19.354268 systemd[1]: Startup finished in 3.718s (kernel) + 9.852s (initrd) + 10.878s (userspace) = 24.449s. Sep 3 23:23:19.371106 (kubelet)[2273]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 3 23:23:20.720546 kubelet[2273]: E0903 23:23:20.720431 2273 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 3 23:23:20.724847 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 3 23:23:20.725167 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 3 23:23:20.726686 systemd[1]: kubelet.service: Consumed 1.423s CPU time, 257.5M memory peak. Sep 3 23:23:21.246012 systemd-resolved[1822]: Clock change detected. Flushing caches. Sep 3 23:23:27.514606 systemd[1]: Started sshd@3-172.31.24.220:22-139.178.89.65:44958.service - OpenSSH per-connection server daemon (139.178.89.65:44958). Sep 3 23:23:27.709145 sshd[2285]: Accepted publickey for core from 139.178.89.65 port 44958 ssh2: RSA SHA256:8eQAyPE99YHHVtDm+V4mP5sHyPbVNBHa6xDGC+ww79Y Sep 3 23:23:27.711614 sshd-session[2285]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 3 23:23:27.719398 systemd-logind[1975]: New session 4 of user core. Sep 3 23:23:27.731142 systemd[1]: Started session-4.scope - Session 4 of User core. Sep 3 23:23:27.856015 sshd[2287]: Connection closed by 139.178.89.65 port 44958 Sep 3 23:23:27.855502 sshd-session[2285]: pam_unix(sshd:session): session closed for user core Sep 3 23:23:27.861233 systemd[1]: sshd@3-172.31.24.220:22-139.178.89.65:44958.service: Deactivated successfully. Sep 3 23:23:27.864612 systemd[1]: session-4.scope: Deactivated successfully. 
Sep 3 23:23:27.870483 systemd-logind[1975]: Session 4 logged out. Waiting for processes to exit. Sep 3 23:23:27.872568 systemd-logind[1975]: Removed session 4. Sep 3 23:23:27.890702 systemd[1]: Started sshd@4-172.31.24.220:22-139.178.89.65:44968.service - OpenSSH per-connection server daemon (139.178.89.65:44968). Sep 3 23:23:28.093583 sshd[2293]: Accepted publickey for core from 139.178.89.65 port 44968 ssh2: RSA SHA256:8eQAyPE99YHHVtDm+V4mP5sHyPbVNBHa6xDGC+ww79Y Sep 3 23:23:28.096045 sshd-session[2293]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 3 23:23:28.103763 systemd-logind[1975]: New session 5 of user core. Sep 3 23:23:28.121165 systemd[1]: Started session-5.scope - Session 5 of User core. Sep 3 23:23:28.236288 sshd[2295]: Connection closed by 139.178.89.65 port 44968 Sep 3 23:23:28.237075 sshd-session[2293]: pam_unix(sshd:session): session closed for user core Sep 3 23:23:28.244293 systemd[1]: sshd@4-172.31.24.220:22-139.178.89.65:44968.service: Deactivated successfully. Sep 3 23:23:28.247851 systemd[1]: session-5.scope: Deactivated successfully. Sep 3 23:23:28.249757 systemd-logind[1975]: Session 5 logged out. Waiting for processes to exit. Sep 3 23:23:28.252834 systemd-logind[1975]: Removed session 5. Sep 3 23:23:28.272658 systemd[1]: Started sshd@5-172.31.24.220:22-139.178.89.65:44972.service - OpenSSH per-connection server daemon (139.178.89.65:44972). Sep 3 23:23:28.465465 sshd[2301]: Accepted publickey for core from 139.178.89.65 port 44972 ssh2: RSA SHA256:8eQAyPE99YHHVtDm+V4mP5sHyPbVNBHa6xDGC+ww79Y Sep 3 23:23:28.467670 sshd-session[2301]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 3 23:23:28.476985 systemd-logind[1975]: New session 6 of user core. Sep 3 23:23:28.486161 systemd[1]: Started session-6.scope - Session 6 of User core. 
Sep 3 23:23:28.608956 sshd[2303]: Connection closed by 139.178.89.65 port 44972 Sep 3 23:23:28.609727 sshd-session[2301]: pam_unix(sshd:session): session closed for user core Sep 3 23:23:28.616914 systemd[1]: sshd@5-172.31.24.220:22-139.178.89.65:44972.service: Deactivated successfully. Sep 3 23:23:28.619790 systemd[1]: session-6.scope: Deactivated successfully. Sep 3 23:23:28.621453 systemd-logind[1975]: Session 6 logged out. Waiting for processes to exit. Sep 3 23:23:28.624541 systemd-logind[1975]: Removed session 6. Sep 3 23:23:28.646077 systemd[1]: Started sshd@6-172.31.24.220:22-139.178.89.65:44988.service - OpenSSH per-connection server daemon (139.178.89.65:44988). Sep 3 23:23:28.838277 sshd[2309]: Accepted publickey for core from 139.178.89.65 port 44988 ssh2: RSA SHA256:8eQAyPE99YHHVtDm+V4mP5sHyPbVNBHa6xDGC+ww79Y Sep 3 23:23:28.840303 sshd-session[2309]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 3 23:23:28.848236 systemd-logind[1975]: New session 7 of user core. Sep 3 23:23:28.869157 systemd[1]: Started session-7.scope - Session 7 of User core. Sep 3 23:23:29.015329 sudo[2312]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Sep 3 23:23:29.016421 sudo[2312]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 3 23:23:29.033377 sudo[2312]: pam_unix(sudo:session): session closed for user root Sep 3 23:23:29.056927 sshd[2311]: Connection closed by 139.178.89.65 port 44988 Sep 3 23:23:29.057017 sshd-session[2309]: pam_unix(sshd:session): session closed for user core Sep 3 23:23:29.064832 systemd[1]: sshd@6-172.31.24.220:22-139.178.89.65:44988.service: Deactivated successfully. Sep 3 23:23:29.067871 systemd[1]: session-7.scope: Deactivated successfully. Sep 3 23:23:29.071111 systemd-logind[1975]: Session 7 logged out. Waiting for processes to exit. Sep 3 23:23:29.074495 systemd-logind[1975]: Removed session 7. 
Sep 3 23:23:29.095356 systemd[1]: Started sshd@7-172.31.24.220:22-139.178.89.65:45000.service - OpenSSH per-connection server daemon (139.178.89.65:45000). Sep 3 23:23:29.301736 sshd[2318]: Accepted publickey for core from 139.178.89.65 port 45000 ssh2: RSA SHA256:8eQAyPE99YHHVtDm+V4mP5sHyPbVNBHa6xDGC+ww79Y Sep 3 23:23:29.304238 sshd-session[2318]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 3 23:23:29.312297 systemd-logind[1975]: New session 8 of user core. Sep 3 23:23:29.320143 systemd[1]: Started session-8.scope - Session 8 of User core. Sep 3 23:23:29.424880 sudo[2322]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Sep 3 23:23:29.426059 sudo[2322]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 3 23:23:29.434646 sudo[2322]: pam_unix(sudo:session): session closed for user root Sep 3 23:23:29.444391 sudo[2321]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Sep 3 23:23:29.445158 sudo[2321]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 3 23:23:29.460752 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 3 23:23:29.522448 augenrules[2344]: No rules Sep 3 23:23:29.525034 systemd[1]: audit-rules.service: Deactivated successfully. Sep 3 23:23:29.525473 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 3 23:23:29.527877 sudo[2321]: pam_unix(sudo:session): session closed for user root Sep 3 23:23:29.551639 sshd[2320]: Connection closed by 139.178.89.65 port 45000 Sep 3 23:23:29.552413 sshd-session[2318]: pam_unix(sshd:session): session closed for user core Sep 3 23:23:29.559921 systemd[1]: sshd@7-172.31.24.220:22-139.178.89.65:45000.service: Deactivated successfully. Sep 3 23:23:29.564540 systemd[1]: session-8.scope: Deactivated successfully. Sep 3 23:23:29.566278 systemd-logind[1975]: Session 8 logged out. 
Waiting for processes to exit. Sep 3 23:23:29.569487 systemd-logind[1975]: Removed session 8. Sep 3 23:23:29.589624 systemd[1]: Started sshd@8-172.31.24.220:22-139.178.89.65:45006.service - OpenSSH per-connection server daemon (139.178.89.65:45006). Sep 3 23:23:29.792506 sshd[2353]: Accepted publickey for core from 139.178.89.65 port 45006 ssh2: RSA SHA256:8eQAyPE99YHHVtDm+V4mP5sHyPbVNBHa6xDGC+ww79Y Sep 3 23:23:29.795033 sshd-session[2353]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 3 23:23:29.802994 systemd-logind[1975]: New session 9 of user core. Sep 3 23:23:29.813117 systemd[1]: Started session-9.scope - Session 9 of User core. Sep 3 23:23:29.917647 sudo[2356]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 3 23:23:29.918353 sudo[2356]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 3 23:23:30.370923 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Sep 3 23:23:30.376088 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 3 23:23:30.643040 systemd[1]: Starting docker.service - Docker Application Container Engine... Sep 3 23:23:30.655522 (dockerd)[2376]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Sep 3 23:23:31.198153 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Sep 3 23:23:31.210654 (kubelet)[2386]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 3 23:23:31.297730 kubelet[2386]: E0903 23:23:31.297536 2386 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 3 23:23:31.306380 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 3 23:23:31.306686 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 3 23:23:31.308196 systemd[1]: kubelet.service: Consumed 318ms CPU time, 106.4M memory peak. Sep 3 23:23:31.341750 dockerd[2376]: time="2025-09-03T23:23:31.341643174Z" level=info msg="Starting up" Sep 3 23:23:31.344558 dockerd[2376]: time="2025-09-03T23:23:31.344474850Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Sep 3 23:23:31.399193 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport723792046-merged.mount: Deactivated successfully. Sep 3 23:23:31.422108 dockerd[2376]: time="2025-09-03T23:23:31.422035554Z" level=info msg="Loading containers: start." Sep 3 23:23:31.437929 kernel: Initializing XFRM netlink socket Sep 3 23:23:31.793670 (udev-worker)[2410]: Network interface NamePolicy= disabled on kernel command line. Sep 3 23:23:31.865239 systemd-networkd[1821]: docker0: Link UP Sep 3 23:23:31.870112 dockerd[2376]: time="2025-09-03T23:23:31.870034376Z" level=info msg="Loading containers: done." 
Sep 3 23:23:31.895453 dockerd[2376]: time="2025-09-03T23:23:31.895375964Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Sep 3 23:23:31.895649 dockerd[2376]: time="2025-09-03T23:23:31.895505696Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1 Sep 3 23:23:31.895720 dockerd[2376]: time="2025-09-03T23:23:31.895687508Z" level=info msg="Initializing buildkit" Sep 3 23:23:31.896417 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3935284032-merged.mount: Deactivated successfully. Sep 3 23:23:31.935473 dockerd[2376]: time="2025-09-03T23:23:31.935396025Z" level=info msg="Completed buildkit initialization" Sep 3 23:23:31.952319 dockerd[2376]: time="2025-09-03T23:23:31.952221681Z" level=info msg="Daemon has completed initialization" Sep 3 23:23:31.953117 dockerd[2376]: time="2025-09-03T23:23:31.952506213Z" level=info msg="API listen on /run/docker.sock" Sep 3 23:23:31.953248 systemd[1]: Started docker.service - Docker Application Container Engine. Sep 3 23:23:33.370839 containerd[2010]: time="2025-09-03T23:23:33.370226312Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.8\"" Sep 3 23:23:33.913478 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3692149231.mount: Deactivated successfully. 
Sep 3 23:23:35.397780 containerd[2010]: time="2025-09-03T23:23:35.397652326Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 3 23:23:35.402963 containerd[2010]: time="2025-09-03T23:23:35.400983058Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.8: active requests=0, bytes read=26328357"
Sep 3 23:23:35.403135 containerd[2010]: time="2025-09-03T23:23:35.401312290Z" level=info msg="ImageCreate event name:\"sha256:61d628eec7e2101b908b4476f1e8e620490a9e8754184860c8eed25183acaa8a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 3 23:23:35.407369 containerd[2010]: time="2025-09-03T23:23:35.407324662Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:6e1a2f9b24f69ee77d0c0edaf32b31fdbb5e1a613f4476272197e6e1e239050b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 3 23:23:35.411936 containerd[2010]: time="2025-09-03T23:23:35.410342638Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.8\" with image id \"sha256:61d628eec7e2101b908b4476f1e8e620490a9e8754184860c8eed25183acaa8a\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.8\", repo digest \"registry.k8s.io/kube-apiserver@sha256:6e1a2f9b24f69ee77d0c0edaf32b31fdbb5e1a613f4476272197e6e1e239050b\", size \"26325157\" in 2.040058414s"
Sep 3 23:23:35.411936 containerd[2010]: time="2025-09-03T23:23:35.410852830Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.8\" returns image reference \"sha256:61d628eec7e2101b908b4476f1e8e620490a9e8754184860c8eed25183acaa8a\""
Sep 3 23:23:35.414628 containerd[2010]: time="2025-09-03T23:23:35.414576202Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.8\""
Sep 3 23:23:36.886907 containerd[2010]: time="2025-09-03T23:23:36.886824589Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 3 23:23:36.888516 containerd[2010]: time="2025-09-03T23:23:36.888462745Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.8: active requests=0, bytes read=22528552"
Sep 3 23:23:36.890304 containerd[2010]: time="2025-09-03T23:23:36.889462405Z" level=info msg="ImageCreate event name:\"sha256:f17de36e40fc7cc372be0021b2c58ad61f05d3ebe4d430551bc5e4cd9ed2a061\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 3 23:23:36.895164 containerd[2010]: time="2025-09-03T23:23:36.895100941Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:8788ccd28ceed9e2e5f8fc31375ef5771df8ea6e518b362c9a06f3cc709cd6c7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 3 23:23:36.897110 containerd[2010]: time="2025-09-03T23:23:36.897065341Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.8\" with image id \"sha256:f17de36e40fc7cc372be0021b2c58ad61f05d3ebe4d430551bc5e4cd9ed2a061\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.8\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:8788ccd28ceed9e2e5f8fc31375ef5771df8ea6e518b362c9a06f3cc709cd6c7\", size \"24065666\" in 1.482246619s"
Sep 3 23:23:36.897261 containerd[2010]: time="2025-09-03T23:23:36.897234481Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.8\" returns image reference \"sha256:f17de36e40fc7cc372be0021b2c58ad61f05d3ebe4d430551bc5e4cd9ed2a061\""
Sep 3 23:23:36.898054 containerd[2010]: time="2025-09-03T23:23:36.898006549Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.8\""
Sep 3 23:23:38.068068 containerd[2010]: time="2025-09-03T23:23:38.067982867Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 3 23:23:38.069933 containerd[2010]: time="2025-09-03T23:23:38.069689927Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.8: active requests=0, bytes read=17483527"
Sep 3 23:23:38.071047 containerd[2010]: time="2025-09-03T23:23:38.070977899Z" level=info msg="ImageCreate event name:\"sha256:fe86d26bce3ccd5f0c4057c205b63fde1c8c752778025aea4605ffc3b0f80211\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 3 23:23:38.075487 containerd[2010]: time="2025-09-03T23:23:38.075442643Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:43c58bcbd1c7812dd19f8bfa5ae11093ebefd28699453ce86fc710869e155cd4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 3 23:23:38.077714 containerd[2010]: time="2025-09-03T23:23:38.077376551Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.8\" with image id \"sha256:fe86d26bce3ccd5f0c4057c205b63fde1c8c752778025aea4605ffc3b0f80211\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.8\", repo digest \"registry.k8s.io/kube-scheduler@sha256:43c58bcbd1c7812dd19f8bfa5ae11093ebefd28699453ce86fc710869e155cd4\", size \"19020659\" in 1.179313926s"
Sep 3 23:23:38.077714 containerd[2010]: time="2025-09-03T23:23:38.077431511Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.8\" returns image reference \"sha256:fe86d26bce3ccd5f0c4057c205b63fde1c8c752778025aea4605ffc3b0f80211\""
Sep 3 23:23:38.078235 containerd[2010]: time="2025-09-03T23:23:38.078201443Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.8\""
Sep 3 23:23:39.325949 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1862592699.mount: Deactivated successfully.
Sep 3 23:23:39.928693 containerd[2010]: time="2025-09-03T23:23:39.928614628Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 3 23:23:39.930697 containerd[2010]: time="2025-09-03T23:23:39.930635008Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.8: active requests=0, bytes read=27376724"
Sep 3 23:23:39.933313 containerd[2010]: time="2025-09-03T23:23:39.933268816Z" level=info msg="ImageCreate event name:\"sha256:2cf30e39f99f8f4ee1a736a4f3175cc2d8d3f58936d8fa83ec5523658fdc7b8b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 3 23:23:39.939076 containerd[2010]: time="2025-09-03T23:23:39.939012640Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:adc1335b480ddd833aac3b0bd20f68ff0f3c3cf7a0bd337933b006d9f5cec40a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 3 23:23:39.940563 containerd[2010]: time="2025-09-03T23:23:39.940401376Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.8\" with image id \"sha256:2cf30e39f99f8f4ee1a736a4f3175cc2d8d3f58936d8fa83ec5523658fdc7b8b\", repo tag \"registry.k8s.io/kube-proxy:v1.32.8\", repo digest \"registry.k8s.io/kube-proxy@sha256:adc1335b480ddd833aac3b0bd20f68ff0f3c3cf7a0bd337933b006d9f5cec40a\", size \"27375743\" in 1.861420389s"
Sep 3 23:23:39.940563 containerd[2010]: time="2025-09-03T23:23:39.940452712Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.8\" returns image reference \"sha256:2cf30e39f99f8f4ee1a736a4f3175cc2d8d3f58936d8fa83ec5523658fdc7b8b\""
Sep 3 23:23:39.941792 containerd[2010]: time="2025-09-03T23:23:39.941749972Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\""
Sep 3 23:23:40.488520 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4187205769.mount: Deactivated successfully.
Sep 3 23:23:41.369780 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Sep 3 23:23:41.374929 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 3 23:23:41.748196 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 3 23:23:41.760522 (kubelet)[2720]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 3 23:23:41.866225 kubelet[2720]: E0903 23:23:41.866168    2720 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 3 23:23:41.873328 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 3 23:23:41.873652 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 3 23:23:41.875431 systemd[1]: kubelet.service: Consumed 325ms CPU time, 107M memory peak.
Sep 3 23:23:41.947840 containerd[2010]: time="2025-09-03T23:23:41.947760210Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 3 23:23:41.950696 containerd[2010]: time="2025-09-03T23:23:41.950624526Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951622"
Sep 3 23:23:41.953557 containerd[2010]: time="2025-09-03T23:23:41.953478858Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 3 23:23:41.959274 containerd[2010]: time="2025-09-03T23:23:41.959196486Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 3 23:23:41.961368 containerd[2010]: time="2025-09-03T23:23:41.961176270Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 2.019205366s"
Sep 3 23:23:41.961368 containerd[2010]: time="2025-09-03T23:23:41.961225314Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\""
Sep 3 23:23:41.962147 containerd[2010]: time="2025-09-03T23:23:41.962099730Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Sep 3 23:23:42.443582 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount75642376.mount: Deactivated successfully.
Sep 3 23:23:42.458375 containerd[2010]: time="2025-09-03T23:23:42.457134641Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 3 23:23:42.460969 containerd[2010]: time="2025-09-03T23:23:42.460931189Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268703"
Sep 3 23:23:42.462988 containerd[2010]: time="2025-09-03T23:23:42.462952517Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 3 23:23:42.467965 containerd[2010]: time="2025-09-03T23:23:42.467917049Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 3 23:23:42.469303 containerd[2010]: time="2025-09-03T23:23:42.469246445Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 507.089067ms"
Sep 3 23:23:42.469426 containerd[2010]: time="2025-09-03T23:23:42.469301033Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\""
Sep 3 23:23:42.470096 containerd[2010]: time="2025-09-03T23:23:42.470041973Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\""
Sep 3 23:23:43.021249 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3548770757.mount: Deactivated successfully.
Sep 3 23:23:45.212939 containerd[2010]: time="2025-09-03T23:23:45.212849527Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 3 23:23:45.215676 containerd[2010]: time="2025-09-03T23:23:45.215628619Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=67943165"
Sep 3 23:23:45.219111 containerd[2010]: time="2025-09-03T23:23:45.219036883Z" level=info msg="ImageCreate event name:\"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 3 23:23:45.232408 containerd[2010]: time="2025-09-03T23:23:45.232335091Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 3 23:23:45.234673 containerd[2010]: time="2025-09-03T23:23:45.234614383Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"67941650\" in 2.764476422s"
Sep 3 23:23:45.234855 containerd[2010]: time="2025-09-03T23:23:45.234825859Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\""
Sep 3 23:23:46.020621 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Sep 3 23:23:50.568573 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 3 23:23:50.568967 systemd[1]: kubelet.service: Consumed 325ms CPU time, 107M memory peak.
Sep 3 23:23:50.572771 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 3 23:23:50.625873 systemd[1]: Reload requested from client PID 2817 ('systemctl') (unit session-9.scope)...
Sep 3 23:23:50.626109 systemd[1]: Reloading...
Sep 3 23:23:50.891960 zram_generator::config[2864]: No configuration found.
Sep 3 23:23:51.077337 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 3 23:23:51.335956 systemd[1]: Reloading finished in 709 ms.
Sep 3 23:23:51.426853 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 3 23:23:51.433372 systemd[1]: kubelet.service: Deactivated successfully.
Sep 3 23:23:51.433864 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 3 23:23:51.433997 systemd[1]: kubelet.service: Consumed 235ms CPU time, 95M memory peak.
Sep 3 23:23:51.437410 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 3 23:23:51.775441 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 3 23:23:51.791699 (kubelet)[2926]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Sep 3 23:23:51.864365 kubelet[2926]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 3 23:23:51.865923 kubelet[2926]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Sep 3 23:23:51.865923 kubelet[2926]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 3 23:23:51.865923 kubelet[2926]: I0903 23:23:51.865010    2926 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Sep 3 23:23:52.805690 kubelet[2926]: I0903 23:23:52.805642    2926 server.go:520] "Kubelet version" kubeletVersion="v1.32.4"
Sep 3 23:23:52.807928 kubelet[2926]: I0903 23:23:52.805975    2926 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Sep 3 23:23:52.807928 kubelet[2926]: I0903 23:23:52.806445    2926 server.go:954] "Client rotation is on, will bootstrap in background"
Sep 3 23:23:52.857461 kubelet[2926]: E0903 23:23:52.857396    2926 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.31.24.220:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.24.220:6443: connect: connection refused" logger="UnhandledError"
Sep 3 23:23:52.867882 kubelet[2926]: I0903 23:23:52.867819    2926 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Sep 3 23:23:52.885234 kubelet[2926]: I0903 23:23:52.885200    2926 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Sep 3 23:23:52.891375 kubelet[2926]: I0903 23:23:52.891320    2926 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Sep 3 23:23:52.893316 kubelet[2926]: I0903 23:23:52.893239    2926 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Sep 3 23:23:52.893622 kubelet[2926]: I0903 23:23:52.893304    2926 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-24-220","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Sep 3 23:23:52.893791 kubelet[2926]: I0903 23:23:52.893763    2926 topology_manager.go:138] "Creating topology manager with none policy"
Sep 3 23:23:52.893791 kubelet[2926]: I0903 23:23:52.893785    2926 container_manager_linux.go:304] "Creating device plugin manager"
Sep 3 23:23:52.894197 kubelet[2926]: I0903 23:23:52.894156    2926 state_mem.go:36] "Initialized new in-memory state store"
Sep 3 23:23:52.901815 kubelet[2926]: I0903 23:23:52.901622    2926 kubelet.go:446] "Attempting to sync node with API server"
Sep 3 23:23:52.901815 kubelet[2926]: I0903 23:23:52.901669    2926 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
Sep 3 23:23:52.901815 kubelet[2926]: I0903 23:23:52.901712    2926 kubelet.go:352] "Adding apiserver pod source"
Sep 3 23:23:52.901815 kubelet[2926]: I0903 23:23:52.901732    2926 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Sep 3 23:23:52.905917 kubelet[2926]: W0903 23:23:52.904798    2926 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.24.220:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-24-220&limit=500&resourceVersion=0": dial tcp 172.31.24.220:6443: connect: connection refused
Sep 3 23:23:52.905917 kubelet[2926]: E0903 23:23:52.904945    2926 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.24.220:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-24-220&limit=500&resourceVersion=0\": dial tcp 172.31.24.220:6443: connect: connection refused" logger="UnhandledError"
Sep 3 23:23:52.906879 kubelet[2926]: W0903 23:23:52.906819    2926 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.24.220:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.24.220:6443: connect: connection refused
Sep 3 23:23:52.907296 kubelet[2926]: E0903 23:23:52.907240    2926 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.24.220:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.24.220:6443: connect: connection refused" logger="UnhandledError"
Sep 3 23:23:52.907464 kubelet[2926]: I0903 23:23:52.907426    2926 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1"
Sep 3 23:23:52.908517 kubelet[2926]: I0903 23:23:52.908473    2926 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Sep 3 23:23:52.908736 kubelet[2926]: W0903 23:23:52.908704    2926 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Sep 3 23:23:52.911114 kubelet[2926]: I0903 23:23:52.910987    2926 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Sep 3 23:23:52.911114 kubelet[2926]: I0903 23:23:52.911053    2926 server.go:1287] "Started kubelet"
Sep 3 23:23:52.917477 kubelet[2926]: I0903 23:23:52.917421    2926 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Sep 3 23:23:52.927078 kubelet[2926]: I0903 23:23:52.927045    2926 volume_manager.go:297] "Starting Kubelet Volume Manager"
Sep 3 23:23:52.933180 kubelet[2926]: I0903 23:23:52.927407    2926 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Sep 3 23:23:52.933180 kubelet[2926]: E0903 23:23:52.928185    2926 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-24-220\" not found"
Sep 3 23:23:52.933180 kubelet[2926]: I0903 23:23:52.928187    2926 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
Sep 3 23:23:52.934929 kubelet[2926]: I0903 23:23:52.934734    2926 server.go:479] "Adding debug handlers to kubelet server"
Sep 3 23:23:52.936320 kubelet[2926]: I0903 23:23:52.936279    2926 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Sep 3 23:23:52.938246 kubelet[2926]: I0903 23:23:52.936356    2926 reconciler.go:26] "Reconciler: start to sync state"
Sep 3 23:23:52.938246 kubelet[2926]: I0903 23:23:52.928260    2926 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Sep 3 23:23:52.938246 kubelet[2926]: I0903 23:23:52.937376    2926 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Sep 3 23:23:52.938246 kubelet[2926]: E0903 23:23:52.937271    2926 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.24.220:6443/api/v1/namespaces/default/events\": dial tcp 172.31.24.220:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-24-220.1861e949d1af6551 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-24-220,UID:ip-172-31-24-220,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-24-220,},FirstTimestamp:2025-09-03 23:23:52.911021393 +0000 UTC m=+1.113165559,LastTimestamp:2025-09-03 23:23:52.911021393 +0000 UTC m=+1.113165559,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-24-220,}"
Sep 3 23:23:52.938246 kubelet[2926]: W0903 23:23:52.937869    2926 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.24.220:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.24.220:6443: connect: connection refused
Sep 3 23:23:52.938246 kubelet[2926]: E0903 23:23:52.937982    2926 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.24.220:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.24.220:6443: connect: connection refused" logger="UnhandledError"
Sep 3 23:23:52.938810 kubelet[2926]: I0903 23:23:52.938334    2926 factory.go:221] Registration of the systemd container factory successfully
Sep 3 23:23:52.938810 kubelet[2926]: I0903 23:23:52.938465    2926 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Sep 3 23:23:52.941138 kubelet[2926]: I0903 23:23:52.941084    2926 factory.go:221] Registration of the containerd container factory successfully
Sep 3 23:23:52.944333 kubelet[2926]: E0903 23:23:52.944273    2926 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.24.220:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-24-220?timeout=10s\": dial tcp 172.31.24.220:6443: connect: connection refused" interval="200ms"
Sep 3 23:23:52.966385 kubelet[2926]: I0903 23:23:52.966328    2926 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Sep 3 23:23:52.978262 kubelet[2926]: I0903 23:23:52.978219    2926 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Sep 3 23:23:52.978952 kubelet[2926]: I0903 23:23:52.978434    2926 status_manager.go:227] "Starting to sync pod status with apiserver"
Sep 3 23:23:52.978952 kubelet[2926]: I0903 23:23:52.978471    2926 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Sep 3 23:23:52.978952 kubelet[2926]: I0903 23:23:52.978488    2926 kubelet.go:2382] "Starting kubelet main sync loop"
Sep 3 23:23:52.978952 kubelet[2926]: E0903 23:23:52.978552    2926 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Sep 3 23:23:52.986342 kubelet[2926]: W0903 23:23:52.986278    2926 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.24.220:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.24.220:6443: connect: connection refused
Sep 3 23:23:52.986548 kubelet[2926]: E0903 23:23:52.986515    2926 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.24.220:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.24.220:6443: connect: connection refused" logger="UnhandledError"
Sep 3 23:23:52.988776 kubelet[2926]: I0903 23:23:52.988699    2926 cpu_manager.go:221] "Starting CPU manager" policy="none"
Sep 3 23:23:52.988776 kubelet[2926]: I0903 23:23:52.988767    2926 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Sep 3 23:23:52.989059 kubelet[2926]: I0903 23:23:52.988801    2926 state_mem.go:36] "Initialized new in-memory state store"
Sep 3 23:23:52.994641 kubelet[2926]: I0903 23:23:52.994590    2926 policy_none.go:49] "None policy: Start"
Sep 3 23:23:52.994641 kubelet[2926]: I0903 23:23:52.994632    2926 memory_manager.go:186] "Starting memorymanager" policy="None"
Sep 3 23:23:52.994802 kubelet[2926]: I0903 23:23:52.994658    2926 state_mem.go:35] "Initializing new in-memory state store"
Sep 3 23:23:53.007419 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Sep 3 23:23:53.033945 kubelet[2926]: E0903 23:23:53.033807    2926 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-24-220\" not found"
Sep 3 23:23:53.037664 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Sep 3 23:23:53.045174 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Sep 3 23:23:53.060404 kubelet[2926]: I0903 23:23:53.059500    2926 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Sep 3 23:23:53.060404 kubelet[2926]: I0903 23:23:53.059804    2926 eviction_manager.go:189] "Eviction manager: starting control loop"
Sep 3 23:23:53.060404 kubelet[2926]: I0903 23:23:53.059823    2926 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Sep 3 23:23:53.061348 kubelet[2926]: I0903 23:23:53.060929    2926 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Sep 3 23:23:53.066082 kubelet[2926]: E0903 23:23:53.065994    2926 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Sep 3 23:23:53.066082 kubelet[2926]: E0903 23:23:53.066066    2926 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-24-220\" not found"
Sep 3 23:23:53.100756 systemd[1]: Created slice kubepods-burstable-pod97d952a4068d07f3cc98bd382ef456da.slice - libcontainer container kubepods-burstable-pod97d952a4068d07f3cc98bd382ef456da.slice.
Sep 3 23:23:53.120588 systemd[1]: Created slice kubepods-burstable-pod431e4632123a6057cea740a6a63119be.slice - libcontainer container kubepods-burstable-pod431e4632123a6057cea740a6a63119be.slice.
Sep 3 23:23:53.121640 kubelet[2926]: E0903 23:23:53.121515    2926 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-24-220\" not found" node="ip-172-31-24-220"
Sep 3 23:23:53.133852 kubelet[2926]: E0903 23:23:53.133796    2926 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-24-220\" not found" node="ip-172-31-24-220"
Sep 3 23:23:53.138960 kubelet[2926]: I0903 23:23:53.138871    2926 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/97d952a4068d07f3cc98bd382ef456da-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-24-220\" (UID: \"97d952a4068d07f3cc98bd382ef456da\") " pod="kube-system/kube-apiserver-ip-172-31-24-220"
Sep 3 23:23:53.139096 kubelet[2926]: I0903 23:23:53.138997    2926 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/431e4632123a6057cea740a6a63119be-k8s-certs\") pod \"kube-controller-manager-ip-172-31-24-220\" (UID: \"431e4632123a6057cea740a6a63119be\") " pod="kube-system/kube-controller-manager-ip-172-31-24-220"
Sep 3 23:23:53.139096 kubelet[2926]: I0903 23:23:53.139057    2926 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/431e4632123a6057cea740a6a63119be-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-24-220\" (UID: \"431e4632123a6057cea740a6a63119be\") " pod="kube-system/kube-controller-manager-ip-172-31-24-220"
Sep 3 23:23:53.139220 kubelet[2926]: I0903 23:23:53.139127    2926 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9708c912ed77f8997407b7c0ffb80019-kubeconfig\") pod \"kube-scheduler-ip-172-31-24-220\" (UID: \"9708c912ed77f8997407b7c0ffb80019\") " pod="kube-system/kube-scheduler-ip-172-31-24-220"
Sep 3 23:23:53.139220 kubelet[2926]: I0903 23:23:53.139190    2926 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/97d952a4068d07f3cc98bd382ef456da-ca-certs\") pod \"kube-apiserver-ip-172-31-24-220\" (UID: \"97d952a4068d07f3cc98bd382ef456da\") " pod="kube-system/kube-apiserver-ip-172-31-24-220"
Sep 3 23:23:53.139311 kubelet[2926]: I0903 23:23:53.139227    2926 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/97d952a4068d07f3cc98bd382ef456da-k8s-certs\") pod \"kube-apiserver-ip-172-31-24-220\" (UID: \"97d952a4068d07f3cc98bd382ef456da\") " pod="kube-system/kube-apiserver-ip-172-31-24-220"
Sep 3 23:23:53.139311 kubelet[2926]: I0903 23:23:53.139292    2926 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/431e4632123a6057cea740a6a63119be-ca-certs\") pod \"kube-controller-manager-ip-172-31-24-220\" (UID: \"431e4632123a6057cea740a6a63119be\") " pod="kube-system/kube-controller-manager-ip-172-31-24-220"
Sep 3 23:23:53.139421 kubelet[2926]: I0903 23:23:53.139350    2926 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/431e4632123a6057cea740a6a63119be-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-24-220\" (UID: \"431e4632123a6057cea740a6a63119be\") " pod="kube-system/kube-controller-manager-ip-172-31-24-220"
Sep 3 23:23:53.139421 kubelet[2926]: I0903 23:23:53.139389    2926 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/431e4632123a6057cea740a6a63119be-kubeconfig\") pod \"kube-controller-manager-ip-172-31-24-220\" (UID: \"431e4632123a6057cea740a6a63119be\") " pod="kube-system/kube-controller-manager-ip-172-31-24-220"
Sep 3 23:23:53.140260 systemd[1]: Created slice kubepods-burstable-pod9708c912ed77f8997407b7c0ffb80019.slice - libcontainer container kubepods-burstable-pod9708c912ed77f8997407b7c0ffb80019.slice.
Sep 3 23:23:53.145026 kubelet[2926]: E0903 23:23:53.144868    2926 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-24-220\" not found" node="ip-172-31-24-220"
Sep 3 23:23:53.145169 kubelet[2926]: E0903 23:23:53.144924    2926 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.24.220:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-24-220?timeout=10s\": dial tcp 172.31.24.220:6443: connect: connection refused" interval="400ms"
Sep 3 23:23:53.163472 kubelet[2926]: I0903 23:23:53.163416    2926 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-24-220"
Sep 3 23:23:53.164313 kubelet[2926]: E0903 23:23:53.164266    2926 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.24.220:6443/api/v1/nodes\": dial tcp 172.31.24.220:6443: connect: connection refused" node="ip-172-31-24-220"
Sep 3 23:23:53.367290 kubelet[2926]: I0903 23:23:53.366923    2926 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-24-220"
Sep 3 23:23:53.367695 kubelet[2926]: E0903 23:23:53.367591    2926 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.24.220:6443/api/v1/nodes\": dial tcp 172.31.24.220:6443: connect: connection refused" node="ip-172-31-24-220"
Sep 3 23:23:53.424712 containerd[2010]: time="2025-09-03T23:23:53.424576839Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-24-220,Uid:97d952a4068d07f3cc98bd382ef456da,Namespace:kube-system,Attempt:0,}"
Sep 3 23:23:53.435825 containerd[2010]: time="2025-09-03T23:23:53.435564399Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-24-220,Uid:431e4632123a6057cea740a6a63119be,Namespace:kube-system,Attempt:0,}"
Sep 3 23:23:53.447432 containerd[2010]: time="2025-09-03T23:23:53.447381819Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-24-220,Uid:9708c912ed77f8997407b7c0ffb80019,Namespace:kube-system,Attempt:0,}"
Sep 3 23:23:53.501687 containerd[2010]: time="2025-09-03T23:23:53.501530476Z" level=info msg="connecting to shim 1440fd5e6def932045b1c9d4f14f08668c6b5df96f5ada02c18dc94506145d1f" address="unix:///run/containerd/s/b5599fa0cdb102abcb77a38400eb2cc7dfa82bf241023084a8eec183d35a3704" namespace=k8s.io protocol=ttrpc version=3
Sep 3 23:23:53.518110 containerd[2010]: time="2025-09-03T23:23:53.518044588Z" level=info msg="connecting to shim faa16eb544f596b69cd9bd01dae9cced4dccbee0feedda8852f5cc11e5e33f0d" address="unix:///run/containerd/s/ac7b12b8f40b7b4e4306c3531d5b5667c0d9aa00abca50d0e6228682e67d0ecf" namespace=k8s.io protocol=ttrpc version=3
Sep 3 23:23:53.546510 kubelet[2926]: E0903 23:23:53.546430    2926 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.24.220:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-24-220?timeout=10s\": dial tcp 172.31.24.220:6443: connect: connection refused" interval="800ms"
Sep 3 23:23:53.578475 containerd[2010]: time="2025-09-03T23:23:53.578417032Z" level=info msg="connecting to shim 182c765267c1268b0106aef8a380e87dcdb20128861e50a66ce783d8f913cf18" address="unix:///run/containerd/s/0b05caec46934bf2989b85a05025a842bbe2e7b82d22eeea32d4bc5951c8dc16" namespace=k8s.io protocol=ttrpc version=3
Sep 3 23:23:53.589306 systemd[1]: Started
cri-containerd-1440fd5e6def932045b1c9d4f14f08668c6b5df96f5ada02c18dc94506145d1f.scope - libcontainer container 1440fd5e6def932045b1c9d4f14f08668c6b5df96f5ada02c18dc94506145d1f. Sep 3 23:23:53.627248 systemd[1]: Started cri-containerd-faa16eb544f596b69cd9bd01dae9cced4dccbee0feedda8852f5cc11e5e33f0d.scope - libcontainer container faa16eb544f596b69cd9bd01dae9cced4dccbee0feedda8852f5cc11e5e33f0d. Sep 3 23:23:53.657207 systemd[1]: Started cri-containerd-182c765267c1268b0106aef8a380e87dcdb20128861e50a66ce783d8f913cf18.scope - libcontainer container 182c765267c1268b0106aef8a380e87dcdb20128861e50a66ce783d8f913cf18. Sep 3 23:23:53.748327 containerd[2010]: time="2025-09-03T23:23:53.747457661Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-24-220,Uid:97d952a4068d07f3cc98bd382ef456da,Namespace:kube-system,Attempt:0,} returns sandbox id \"1440fd5e6def932045b1c9d4f14f08668c6b5df96f5ada02c18dc94506145d1f\"" Sep 3 23:23:53.760926 containerd[2010]: time="2025-09-03T23:23:53.759459053Z" level=info msg="CreateContainer within sandbox \"1440fd5e6def932045b1c9d4f14f08668c6b5df96f5ada02c18dc94506145d1f\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 3 23:23:53.773938 kubelet[2926]: I0903 23:23:53.773657 2926 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-24-220" Sep 3 23:23:53.775539 kubelet[2926]: E0903 23:23:53.775477 2926 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.24.220:6443/api/v1/nodes\": dial tcp 172.31.24.220:6443: connect: connection refused" node="ip-172-31-24-220" Sep 3 23:23:53.783146 containerd[2010]: time="2025-09-03T23:23:53.783052577Z" level=info msg="Container d552d0e6658e17d45ca1e91777422d8880a5984a3015f43fc9e77de7198cba03: CDI devices from CRI Config.CDIDevices: []" Sep 3 23:23:53.803826 containerd[2010]: time="2025-09-03T23:23:53.803614253Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-24-220,Uid:431e4632123a6057cea740a6a63119be,Namespace:kube-system,Attempt:0,} returns sandbox id \"faa16eb544f596b69cd9bd01dae9cced4dccbee0feedda8852f5cc11e5e33f0d\"" Sep 3 23:23:53.808225 containerd[2010]: time="2025-09-03T23:23:53.808160465Z" level=info msg="CreateContainer within sandbox \"faa16eb544f596b69cd9bd01dae9cced4dccbee0feedda8852f5cc11e5e33f0d\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 3 23:23:53.819345 containerd[2010]: time="2025-09-03T23:23:53.819277313Z" level=info msg="CreateContainer within sandbox \"1440fd5e6def932045b1c9d4f14f08668c6b5df96f5ada02c18dc94506145d1f\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"d552d0e6658e17d45ca1e91777422d8880a5984a3015f43fc9e77de7198cba03\"" Sep 3 23:23:53.820746 containerd[2010]: time="2025-09-03T23:23:53.820683797Z" level=info msg="StartContainer for \"d552d0e6658e17d45ca1e91777422d8880a5984a3015f43fc9e77de7198cba03\"" Sep 3 23:23:53.821159 containerd[2010]: time="2025-09-03T23:23:53.821100857Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-24-220,Uid:9708c912ed77f8997407b7c0ffb80019,Namespace:kube-system,Attempt:0,} returns sandbox id \"182c765267c1268b0106aef8a380e87dcdb20128861e50a66ce783d8f913cf18\"" Sep 3 23:23:53.823367 containerd[2010]: time="2025-09-03T23:23:53.823315973Z" level=info msg="connecting to shim d552d0e6658e17d45ca1e91777422d8880a5984a3015f43fc9e77de7198cba03" address="unix:///run/containerd/s/b5599fa0cdb102abcb77a38400eb2cc7dfa82bf241023084a8eec183d35a3704" protocol=ttrpc version=3 Sep 3 23:23:53.828952 containerd[2010]: time="2025-09-03T23:23:53.828340433Z" level=info msg="CreateContainer within sandbox \"182c765267c1268b0106aef8a380e87dcdb20128861e50a66ce783d8f913cf18\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 3 23:23:53.838851 containerd[2010]: time="2025-09-03T23:23:53.838737365Z" level=info 
msg="Container 3a7b96bc7c02fd1535a7f8cc866788c905c2eccbce9ed6779189997a22e843ff: CDI devices from CRI Config.CDIDevices: []" Sep 3 23:23:53.856319 containerd[2010]: time="2025-09-03T23:23:53.856267409Z" level=info msg="Container aaf93d613375e01a2833143fb31d4721bc43f89cf7f4530f4030a2601781814b: CDI devices from CRI Config.CDIDevices: []" Sep 3 23:23:53.862626 systemd[1]: Started cri-containerd-d552d0e6658e17d45ca1e91777422d8880a5984a3015f43fc9e77de7198cba03.scope - libcontainer container d552d0e6658e17d45ca1e91777422d8880a5984a3015f43fc9e77de7198cba03. Sep 3 23:23:53.880427 containerd[2010]: time="2025-09-03T23:23:53.880038126Z" level=info msg="CreateContainer within sandbox \"faa16eb544f596b69cd9bd01dae9cced4dccbee0feedda8852f5cc11e5e33f0d\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"3a7b96bc7c02fd1535a7f8cc866788c905c2eccbce9ed6779189997a22e843ff\"" Sep 3 23:23:53.887664 containerd[2010]: time="2025-09-03T23:23:53.887579466Z" level=info msg="StartContainer for \"3a7b96bc7c02fd1535a7f8cc866788c905c2eccbce9ed6779189997a22e843ff\"" Sep 3 23:23:53.894528 containerd[2010]: time="2025-09-03T23:23:53.894451902Z" level=info msg="connecting to shim 3a7b96bc7c02fd1535a7f8cc866788c905c2eccbce9ed6779189997a22e843ff" address="unix:///run/containerd/s/ac7b12b8f40b7b4e4306c3531d5b5667c0d9aa00abca50d0e6228682e67d0ecf" protocol=ttrpc version=3 Sep 3 23:23:53.900451 containerd[2010]: time="2025-09-03T23:23:53.900241086Z" level=info msg="CreateContainer within sandbox \"182c765267c1268b0106aef8a380e87dcdb20128861e50a66ce783d8f913cf18\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"aaf93d613375e01a2833143fb31d4721bc43f89cf7f4530f4030a2601781814b\"" Sep 3 23:23:53.901434 containerd[2010]: time="2025-09-03T23:23:53.901341966Z" level=info msg="StartContainer for \"aaf93d613375e01a2833143fb31d4721bc43f89cf7f4530f4030a2601781814b\"" Sep 3 23:23:53.907921 containerd[2010]: time="2025-09-03T23:23:53.907202286Z" 
level=info msg="connecting to shim aaf93d613375e01a2833143fb31d4721bc43f89cf7f4530f4030a2601781814b" address="unix:///run/containerd/s/0b05caec46934bf2989b85a05025a842bbe2e7b82d22eeea32d4bc5951c8dc16" protocol=ttrpc version=3 Sep 3 23:23:53.958435 systemd[1]: Started cri-containerd-aaf93d613375e01a2833143fb31d4721bc43f89cf7f4530f4030a2601781814b.scope - libcontainer container aaf93d613375e01a2833143fb31d4721bc43f89cf7f4530f4030a2601781814b. Sep 3 23:23:53.972481 systemd[1]: Started cri-containerd-3a7b96bc7c02fd1535a7f8cc866788c905c2eccbce9ed6779189997a22e843ff.scope - libcontainer container 3a7b96bc7c02fd1535a7f8cc866788c905c2eccbce9ed6779189997a22e843ff. Sep 3 23:23:54.035933 kubelet[2926]: W0903 23:23:54.035775 2926 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.24.220:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.24.220:6443: connect: connection refused Sep 3 23:23:54.039154 kubelet[2926]: E0903 23:23:54.038926 2926 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.24.220:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.24.220:6443: connect: connection refused" logger="UnhandledError" Sep 3 23:23:54.058549 containerd[2010]: time="2025-09-03T23:23:54.058457690Z" level=info msg="StartContainer for \"d552d0e6658e17d45ca1e91777422d8880a5984a3015f43fc9e77de7198cba03\" returns successfully" Sep 3 23:23:54.144812 containerd[2010]: time="2025-09-03T23:23:54.144139491Z" level=info msg="StartContainer for \"3a7b96bc7c02fd1535a7f8cc866788c905c2eccbce9ed6779189997a22e843ff\" returns successfully" Sep 3 23:23:54.182053 containerd[2010]: time="2025-09-03T23:23:54.181974135Z" level=info msg="StartContainer for \"aaf93d613375e01a2833143fb31d4721bc43f89cf7f4530f4030a2601781814b\" returns successfully" Sep 3 
23:23:54.206174 kubelet[2926]: E0903 23:23:54.206007 2926 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.24.220:6443/api/v1/namespaces/default/events\": dial tcp 172.31.24.220:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-24-220.1861e949d1af6551 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-24-220,UID:ip-172-31-24-220,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-24-220,},FirstTimestamp:2025-09-03 23:23:52.911021393 +0000 UTC m=+1.113165559,LastTimestamp:2025-09-03 23:23:52.911021393 +0000 UTC m=+1.113165559,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-24-220,}" Sep 3 23:23:54.238412 kubelet[2926]: W0903 23:23:54.238298 2926 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.24.220:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.24.220:6443: connect: connection refused Sep 3 23:23:54.238543 kubelet[2926]: E0903 23:23:54.238411 2926 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.24.220:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.24.220:6443: connect: connection refused" logger="UnhandledError" Sep 3 23:23:54.578731 kubelet[2926]: I0903 23:23:54.578592 2926 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-24-220" Sep 3 23:23:55.068191 kubelet[2926]: E0903 23:23:55.068140 2926 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-24-220\" not found" node="ip-172-31-24-220" Sep 3 
23:23:55.078495 kubelet[2926]: E0903 23:23:55.078447 2926 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-24-220\" not found" node="ip-172-31-24-220" Sep 3 23:23:55.083385 kubelet[2926]: E0903 23:23:55.083328 2926 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-24-220\" not found" node="ip-172-31-24-220" Sep 3 23:23:56.086471 kubelet[2926]: E0903 23:23:56.086417 2926 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-24-220\" not found" node="ip-172-31-24-220" Sep 3 23:23:56.089970 kubelet[2926]: E0903 23:23:56.089883 2926 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-24-220\" not found" node="ip-172-31-24-220" Sep 3 23:23:56.090759 kubelet[2926]: E0903 23:23:56.090709 2926 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-24-220\" not found" node="ip-172-31-24-220" Sep 3 23:23:57.091304 kubelet[2926]: E0903 23:23:57.091243 2926 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-24-220\" not found" node="ip-172-31-24-220" Sep 3 23:23:57.092485 kubelet[2926]: E0903 23:23:57.092424 2926 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-24-220\" not found" node="ip-172-31-24-220" Sep 3 23:23:57.907657 kubelet[2926]: E0903 23:23:57.907594 2926 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-24-220\" not found" node="ip-172-31-24-220" Sep 3 23:23:58.094809 kubelet[2926]: E0903 23:23:58.094742 2926 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node 
\"ip-172-31-24-220\" not found" node="ip-172-31-24-220" Sep 3 23:23:59.124523 kubelet[2926]: E0903 23:23:59.124453 2926 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-24-220\" not found" node="ip-172-31-24-220" Sep 3 23:23:59.135759 kubelet[2926]: I0903 23:23:59.135705 2926 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-24-220" Sep 3 23:23:59.135759 kubelet[2926]: E0903 23:23:59.135759 2926 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ip-172-31-24-220\": node \"ip-172-31-24-220\" not found" Sep 3 23:23:59.229349 kubelet[2926]: I0903 23:23:59.229290 2926 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-24-220" Sep 3 23:23:59.243969 kubelet[2926]: E0903 23:23:59.243102 2926 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-24-220\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ip-172-31-24-220" Sep 3 23:23:59.244662 kubelet[2926]: I0903 23:23:59.244615 2926 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-24-220" Sep 3 23:23:59.247817 kubelet[2926]: E0903 23:23:59.247761 2926 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ip-172-31-24-220\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ip-172-31-24-220" Sep 3 23:23:59.247817 kubelet[2926]: I0903 23:23:59.247810 2926 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-24-220" Sep 3 23:23:59.250789 kubelet[2926]: E0903 23:23:59.250720 2926 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-24-220\" is forbidden: no PriorityClass with name system-node-critical was found" 
pod="kube-system/kube-scheduler-ip-172-31-24-220" Sep 3 23:23:59.506415 update_engine[1977]: I20250903 23:23:59.505863 1977 update_attempter.cc:509] Updating boot flags... Sep 3 23:23:59.922159 kubelet[2926]: I0903 23:23:59.921712 2926 apiserver.go:52] "Watching apiserver" Sep 3 23:23:59.934114 kubelet[2926]: I0903 23:23:59.934076 2926 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 3 23:24:01.213828 kubelet[2926]: I0903 23:24:01.213480 2926 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-24-220" Sep 3 23:24:01.777423 systemd[1]: Reload requested from client PID 3468 ('systemctl') (unit session-9.scope)... Sep 3 23:24:01.777446 systemd[1]: Reloading... Sep 3 23:24:01.982065 zram_generator::config[3515]: No configuration found. Sep 3 23:24:02.202123 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 3 23:24:02.530109 systemd[1]: Reloading finished in 751 ms. Sep 3 23:24:02.569442 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 3 23:24:02.590572 systemd[1]: kubelet.service: Deactivated successfully. Sep 3 23:24:02.591092 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 3 23:24:02.591176 systemd[1]: kubelet.service: Consumed 1.928s CPU time, 129M memory peak. Sep 3 23:24:02.596245 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 3 23:24:02.969816 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Sep 3 23:24:02.986584 (kubelet)[3572]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 3 23:24:03.099066 kubelet[3572]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 3 23:24:03.099066 kubelet[3572]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 3 23:24:03.099066 kubelet[3572]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 3 23:24:03.099672 kubelet[3572]: I0903 23:24:03.099178 3572 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 3 23:24:03.113593 kubelet[3572]: I0903 23:24:03.113529 3572 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Sep 3 23:24:03.113593 kubelet[3572]: I0903 23:24:03.113579 3572 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 3 23:24:03.116735 kubelet[3572]: I0903 23:24:03.114077 3572 server.go:954] "Client rotation is on, will bootstrap in background" Sep 3 23:24:03.116735 kubelet[3572]: I0903 23:24:03.116414 3572 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Sep 3 23:24:03.129611 kubelet[3572]: I0903 23:24:03.129410 3572 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 3 23:24:03.144528 kubelet[3572]: I0903 23:24:03.144494 3572 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Sep 3 23:24:03.147589 sudo[3587]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Sep 3 23:24:03.148281 sudo[3587]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Sep 3 23:24:03.155311 kubelet[3572]: I0903 23:24:03.155261 3572 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Sep 3 23:24:03.155843 kubelet[3572]: I0903 23:24:03.155804 3572 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 3 23:24:03.156493 kubelet[3572]: I0903 23:24:03.155963 3572 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"ip-172-31-24-220","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 3 23:24:03.156752 kubelet[3572]: I0903 23:24:03.156729 3572 topology_manager.go:138] "Creating topology manager with none policy" Sep 3 23:24:03.156952 kubelet[3572]: I0903 23:24:03.156842 3572 container_manager_linux.go:304] "Creating device plugin manager" Sep 3 23:24:03.157534 kubelet[3572]: I0903 23:24:03.157112 3572 state_mem.go:36] "Initialized new in-memory state store" Sep 3 23:24:03.158525 kubelet[3572]: I0903 23:24:03.158495 3572 kubelet.go:446] 
"Attempting to sync node with API server" Sep 3 23:24:03.159165 kubelet[3572]: I0903 23:24:03.158977 3572 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 3 23:24:03.159165 kubelet[3572]: I0903 23:24:03.159046 3572 kubelet.go:352] "Adding apiserver pod source" Sep 3 23:24:03.159165 kubelet[3572]: I0903 23:24:03.159069 3572 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 3 23:24:03.162350 kubelet[3572]: I0903 23:24:03.162185 3572 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Sep 3 23:24:03.164341 kubelet[3572]: I0903 23:24:03.163247 3572 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 3 23:24:03.164341 kubelet[3572]: I0903 23:24:03.164113 3572 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 3 23:24:03.164341 kubelet[3572]: I0903 23:24:03.164164 3572 server.go:1287] "Started kubelet" Sep 3 23:24:03.169921 kubelet[3572]: I0903 23:24:03.168752 3572 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 3 23:24:03.188923 kubelet[3572]: I0903 23:24:03.186997 3572 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Sep 3 23:24:03.192386 kubelet[3572]: I0903 23:24:03.192284 3572 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 3 23:24:03.192776 kubelet[3572]: I0903 23:24:03.192730 3572 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 3 23:24:03.194614 kubelet[3572]: I0903 23:24:03.193975 3572 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 3 23:24:03.197139 kubelet[3572]: I0903 23:24:03.196980 3572 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 3 23:24:03.197383 kubelet[3572]: E0903 23:24:03.197329 3572 
kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-24-220\" not found" Sep 3 23:24:03.202932 kubelet[3572]: I0903 23:24:03.201555 3572 server.go:479] "Adding debug handlers to kubelet server" Sep 3 23:24:03.202932 kubelet[3572]: I0903 23:24:03.202324 3572 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 3 23:24:03.202932 kubelet[3572]: I0903 23:24:03.202538 3572 reconciler.go:26] "Reconciler: start to sync state" Sep 3 23:24:03.216738 kubelet[3572]: I0903 23:24:03.215525 3572 factory.go:221] Registration of the systemd container factory successfully Sep 3 23:24:03.218965 kubelet[3572]: I0903 23:24:03.218918 3572 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 3 23:24:03.256169 kubelet[3572]: I0903 23:24:03.255544 3572 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 3 23:24:03.297875 kubelet[3572]: E0903 23:24:03.297640 3572 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-24-220\" not found" Sep 3 23:24:03.310175 kubelet[3572]: I0903 23:24:03.309459 3572 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Sep 3 23:24:03.310175 kubelet[3572]: I0903 23:24:03.309521 3572 status_manager.go:227] "Starting to sync pod status with apiserver" Sep 3 23:24:03.310175 kubelet[3572]: I0903 23:24:03.309553 3572 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Sep 3 23:24:03.310175 kubelet[3572]: I0903 23:24:03.309568 3572 kubelet.go:2382] "Starting kubelet main sync loop" Sep 3 23:24:03.310175 kubelet[3572]: E0903 23:24:03.309643 3572 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 3 23:24:03.326448 kubelet[3572]: I0903 23:24:03.326394 3572 factory.go:221] Registration of the containerd container factory successfully Sep 3 23:24:03.331870 kubelet[3572]: E0903 23:24:03.331744 3572 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 3 23:24:03.409792 kubelet[3572]: E0903 23:24:03.409758 3572 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 3 23:24:03.497753 kubelet[3572]: I0903 23:24:03.497697 3572 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 3 23:24:03.498955 kubelet[3572]: I0903 23:24:03.497924 3572 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 3 23:24:03.498955 kubelet[3572]: I0903 23:24:03.497965 3572 state_mem.go:36] "Initialized new in-memory state store" Sep 3 23:24:03.498955 kubelet[3572]: I0903 23:24:03.498249 3572 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 3 23:24:03.498955 kubelet[3572]: I0903 23:24:03.498271 3572 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 3 23:24:03.498955 kubelet[3572]: I0903 23:24:03.498303 3572 policy_none.go:49] "None policy: Start" Sep 3 23:24:03.498955 kubelet[3572]: I0903 23:24:03.498321 3572 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 3 23:24:03.498955 kubelet[3572]: I0903 23:24:03.498341 3572 state_mem.go:35] "Initializing new in-memory state store" Sep 3 23:24:03.498955 kubelet[3572]: I0903 23:24:03.498520 3572 state_mem.go:75] "Updated machine memory state" Sep 3 23:24:03.516120 kubelet[3572]: I0903 
23:24:03.515296 3572 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Sep 3 23:24:03.519982 kubelet[3572]: I0903 23:24:03.519356 3572 eviction_manager.go:189] "Eviction manager: starting control loop"
Sep 3 23:24:03.522475 kubelet[3572]: I0903 23:24:03.520189 3572 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Sep 3 23:24:03.523325 kubelet[3572]: I0903 23:24:03.522741 3572 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Sep 3 23:24:03.529276 kubelet[3572]: E0903 23:24:03.529119 3572 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Sep 3 23:24:03.619078 kubelet[3572]: I0903 23:24:03.618094 3572 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-24-220"
Sep 3 23:24:03.623848 kubelet[3572]: I0903 23:24:03.619732 3572 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-24-220"
Sep 3 23:24:03.624571 kubelet[3572]: I0903 23:24:03.619788 3572 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-24-220"
Sep 3 23:24:03.649901 kubelet[3572]: E0903 23:24:03.649825 3572 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-24-220\" already exists" pod="kube-system/kube-apiserver-ip-172-31-24-220"
Sep 3 23:24:03.695600 kubelet[3572]: I0903 23:24:03.694942 3572 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-24-220"
Sep 3 23:24:03.709787 kubelet[3572]: I0903 23:24:03.709587 3572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/97d952a4068d07f3cc98bd382ef456da-ca-certs\") pod \"kube-apiserver-ip-172-31-24-220\" (UID: \"97d952a4068d07f3cc98bd382ef456da\") " pod="kube-system/kube-apiserver-ip-172-31-24-220"
Sep 3 23:24:03.712274 kubelet[3572]: I0903 23:24:03.711388 3572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/431e4632123a6057cea740a6a63119be-ca-certs\") pod \"kube-controller-manager-ip-172-31-24-220\" (UID: \"431e4632123a6057cea740a6a63119be\") " pod="kube-system/kube-controller-manager-ip-172-31-24-220"
Sep 3 23:24:03.712274 kubelet[3572]: I0903 23:24:03.711466 3572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/431e4632123a6057cea740a6a63119be-k8s-certs\") pod \"kube-controller-manager-ip-172-31-24-220\" (UID: \"431e4632123a6057cea740a6a63119be\") " pod="kube-system/kube-controller-manager-ip-172-31-24-220"
Sep 3 23:24:03.712274 kubelet[3572]: I0903 23:24:03.711507 3572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9708c912ed77f8997407b7c0ffb80019-kubeconfig\") pod \"kube-scheduler-ip-172-31-24-220\" (UID: \"9708c912ed77f8997407b7c0ffb80019\") " pod="kube-system/kube-scheduler-ip-172-31-24-220"
Sep 3 23:24:03.712274 kubelet[3572]: I0903 23:24:03.711544 3572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/97d952a4068d07f3cc98bd382ef456da-k8s-certs\") pod \"kube-apiserver-ip-172-31-24-220\" (UID: \"97d952a4068d07f3cc98bd382ef456da\") " pod="kube-system/kube-apiserver-ip-172-31-24-220"
Sep 3 23:24:03.712274 kubelet[3572]: I0903 23:24:03.711581 3572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/97d952a4068d07f3cc98bd382ef456da-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-24-220\" (UID: \"97d952a4068d07f3cc98bd382ef456da\") " pod="kube-system/kube-apiserver-ip-172-31-24-220"
Sep 3 23:24:03.712663 kubelet[3572]: I0903 23:24:03.711624 3572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/431e4632123a6057cea740a6a63119be-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-24-220\" (UID: \"431e4632123a6057cea740a6a63119be\") " pod="kube-system/kube-controller-manager-ip-172-31-24-220"
Sep 3 23:24:03.712663 kubelet[3572]: I0903 23:24:03.711659 3572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/431e4632123a6057cea740a6a63119be-kubeconfig\") pod \"kube-controller-manager-ip-172-31-24-220\" (UID: \"431e4632123a6057cea740a6a63119be\") " pod="kube-system/kube-controller-manager-ip-172-31-24-220"
Sep 3 23:24:03.712663 kubelet[3572]: I0903 23:24:03.711699 3572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/431e4632123a6057cea740a6a63119be-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-24-220\" (UID: \"431e4632123a6057cea740a6a63119be\") " pod="kube-system/kube-controller-manager-ip-172-31-24-220"
Sep 3 23:24:03.717651 kubelet[3572]: I0903 23:24:03.717017 3572 kubelet_node_status.go:124] "Node was previously registered" node="ip-172-31-24-220"
Sep 3 23:24:03.717651 kubelet[3572]: I0903 23:24:03.717131 3572 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-24-220"
Sep 3 23:24:04.151673 sudo[3587]: pam_unix(sudo:session): session closed for user root
Sep 3 23:24:04.163919 kubelet[3572]: I0903 23:24:04.162068 3572 apiserver.go:52] "Watching apiserver"
Sep 3 23:24:04.203111 kubelet[3572]: I0903 23:24:04.203028 3572 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Sep 3 23:24:04.415135 kubelet[3572]: I0903 23:24:04.414556 3572 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-24-220"
Sep 3 23:24:04.429102 kubelet[3572]: E0903 23:24:04.429033 3572 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-24-220\" already exists" pod="kube-system/kube-apiserver-ip-172-31-24-220"
Sep 3 23:24:04.469451 kubelet[3572]: I0903 23:24:04.469119 3572 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-24-220" podStartSLOduration=1.4690983260000001 podStartE2EDuration="1.469098326s" podCreationTimestamp="2025-09-03 23:24:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-03 23:24:04.468326138 +0000 UTC m=+1.470909020" watchObservedRunningTime="2025-09-03 23:24:04.469098326 +0000 UTC m=+1.471681172"
Sep 3 23:24:04.506283 kubelet[3572]: I0903 23:24:04.505459 3572 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-24-220" podStartSLOduration=3.505435238 podStartE2EDuration="3.505435238s" podCreationTimestamp="2025-09-03 23:24:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-03 23:24:04.48580415 +0000 UTC m=+1.488387032" watchObservedRunningTime="2025-09-03 23:24:04.505435238 +0000 UTC m=+1.508018108"
Sep 3 23:24:04.507215 kubelet[3572]: I0903 23:24:04.507017 3572 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-24-220" podStartSLOduration=1.506997098 podStartE2EDuration="1.506997098s" podCreationTimestamp="2025-09-03 23:24:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-03 23:24:04.50687273 +0000 UTC m=+1.509455648" watchObservedRunningTime="2025-09-03 23:24:04.506997098 +0000 UTC m=+1.509579968"
Sep 3 23:24:06.811828 kubelet[3572]: I0903 23:24:06.811645 3572 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Sep 3 23:24:06.813500 kubelet[3572]: I0903 23:24:06.812629 3572 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Sep 3 23:24:06.813570 containerd[2010]: time="2025-09-03T23:24:06.812339574Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Sep 3 23:24:06.963768 sudo[2356]: pam_unix(sudo:session): session closed for user root
Sep 3 23:24:06.987928 sshd[2355]: Connection closed by 139.178.89.65 port 45006
Sep 3 23:24:06.988170 sshd-session[2353]: pam_unix(sshd:session): session closed for user core
Sep 3 23:24:06.995591 systemd-logind[1975]: Session 9 logged out. Waiting for processes to exit.
Sep 3 23:24:06.998810 systemd[1]: sshd@8-172.31.24.220:22-139.178.89.65:45006.service: Deactivated successfully.
Sep 3 23:24:07.006776 systemd[1]: session-9.scope: Deactivated successfully.
Sep 3 23:24:07.007568 systemd[1]: session-9.scope: Consumed 9.149s CPU time, 272.6M memory peak.
Sep 3 23:24:07.012564 systemd-logind[1975]: Removed session 9.
Sep 3 23:24:07.635877 kubelet[3572]: I0903 23:24:07.635819 3572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/abdcc04b-261b-4838-aeee-65a21251ec60-kube-proxy\") pod \"kube-proxy-t44wk\" (UID: \"abdcc04b-261b-4838-aeee-65a21251ec60\") " pod="kube-system/kube-proxy-t44wk"
Sep 3 23:24:07.640178 kubelet[3572]: I0903 23:24:07.638000 3572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/abdcc04b-261b-4838-aeee-65a21251ec60-xtables-lock\") pod \"kube-proxy-t44wk\" (UID: \"abdcc04b-261b-4838-aeee-65a21251ec60\") " pod="kube-system/kube-proxy-t44wk"
Sep 3 23:24:07.640178 kubelet[3572]: I0903 23:24:07.638094 3572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/abdcc04b-261b-4838-aeee-65a21251ec60-lib-modules\") pod \"kube-proxy-t44wk\" (UID: \"abdcc04b-261b-4838-aeee-65a21251ec60\") " pod="kube-system/kube-proxy-t44wk"
Sep 3 23:24:07.640178 kubelet[3572]: I0903 23:24:07.638146 3572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bjqbr\" (UniqueName: \"kubernetes.io/projected/abdcc04b-261b-4838-aeee-65a21251ec60-kube-api-access-bjqbr\") pod \"kube-proxy-t44wk\" (UID: \"abdcc04b-261b-4838-aeee-65a21251ec60\") " pod="kube-system/kube-proxy-t44wk"
Sep 3 23:24:07.656798 systemd[1]: Created slice kubepods-besteffort-podabdcc04b_261b_4838_aeee_65a21251ec60.slice - libcontainer container kubepods-besteffort-podabdcc04b_261b_4838_aeee_65a21251ec60.slice.
Sep 3 23:24:07.688271 systemd[1]: Created slice kubepods-burstable-podef0f3dd1_581a_45d0_9060_b33d0e52f0d1.slice - libcontainer container kubepods-burstable-podef0f3dd1_581a_45d0_9060_b33d0e52f0d1.slice.
Sep 3 23:24:07.740215 kubelet[3572]: I0903 23:24:07.740104 3572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gxln4\" (UniqueName: \"kubernetes.io/projected/ef0f3dd1-581a-45d0-9060-b33d0e52f0d1-kube-api-access-gxln4\") pod \"cilium-4z7zx\" (UID: \"ef0f3dd1-581a-45d0-9060-b33d0e52f0d1\") " pod="kube-system/cilium-4z7zx"
Sep 3 23:24:07.740916 kubelet[3572]: I0903 23:24:07.740448 3572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/ef0f3dd1-581a-45d0-9060-b33d0e52f0d1-cilium-cgroup\") pod \"cilium-4z7zx\" (UID: \"ef0f3dd1-581a-45d0-9060-b33d0e52f0d1\") " pod="kube-system/cilium-4z7zx"
Sep 3 23:24:07.740916 kubelet[3572]: I0903 23:24:07.740547 3572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ef0f3dd1-581a-45d0-9060-b33d0e52f0d1-lib-modules\") pod \"cilium-4z7zx\" (UID: \"ef0f3dd1-581a-45d0-9060-b33d0e52f0d1\") " pod="kube-system/cilium-4z7zx"
Sep 3 23:24:07.740916 kubelet[3572]: I0903 23:24:07.740614 3572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/ef0f3dd1-581a-45d0-9060-b33d0e52f0d1-bpf-maps\") pod \"cilium-4z7zx\" (UID: \"ef0f3dd1-581a-45d0-9060-b33d0e52f0d1\") " pod="kube-system/cilium-4z7zx"
Sep 3 23:24:07.740916 kubelet[3572]: I0903 23:24:07.740653 3572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/ef0f3dd1-581a-45d0-9060-b33d0e52f0d1-cni-path\") pod \"cilium-4z7zx\" (UID: \"ef0f3dd1-581a-45d0-9060-b33d0e52f0d1\") " pod="kube-system/cilium-4z7zx"
Sep 3 23:24:07.740916 kubelet[3572]: I0903 23:24:07.740688 3572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ef0f3dd1-581a-45d0-9060-b33d0e52f0d1-etc-cni-netd\") pod \"cilium-4z7zx\" (UID: \"ef0f3dd1-581a-45d0-9060-b33d0e52f0d1\") " pod="kube-system/cilium-4z7zx"
Sep 3 23:24:07.740916 kubelet[3572]: I0903 23:24:07.740723 3572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/ef0f3dd1-581a-45d0-9060-b33d0e52f0d1-clustermesh-secrets\") pod \"cilium-4z7zx\" (UID: \"ef0f3dd1-581a-45d0-9060-b33d0e52f0d1\") " pod="kube-system/cilium-4z7zx"
Sep 3 23:24:07.741384 kubelet[3572]: I0903 23:24:07.740759 3572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/ef0f3dd1-581a-45d0-9060-b33d0e52f0d1-host-proc-sys-kernel\") pod \"cilium-4z7zx\" (UID: \"ef0f3dd1-581a-45d0-9060-b33d0e52f0d1\") " pod="kube-system/cilium-4z7zx"
Sep 3 23:24:07.741384 kubelet[3572]: I0903 23:24:07.740799 3572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/ef0f3dd1-581a-45d0-9060-b33d0e52f0d1-hubble-tls\") pod \"cilium-4z7zx\" (UID: \"ef0f3dd1-581a-45d0-9060-b33d0e52f0d1\") " pod="kube-system/cilium-4z7zx"
Sep 3 23:24:07.744942 kubelet[3572]: I0903 23:24:07.740884 3572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ef0f3dd1-581a-45d0-9060-b33d0e52f0d1-xtables-lock\") pod \"cilium-4z7zx\" (UID: \"ef0f3dd1-581a-45d0-9060-b33d0e52f0d1\") " pod="kube-system/cilium-4z7zx"
Sep 3 23:24:07.744942 kubelet[3572]: I0903 23:24:07.743376 3572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ef0f3dd1-581a-45d0-9060-b33d0e52f0d1-cilium-config-path\") pod \"cilium-4z7zx\" (UID: \"ef0f3dd1-581a-45d0-9060-b33d0e52f0d1\") " pod="kube-system/cilium-4z7zx"
Sep 3 23:24:07.744942 kubelet[3572]: I0903 23:24:07.743418 3572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/ef0f3dd1-581a-45d0-9060-b33d0e52f0d1-host-proc-sys-net\") pod \"cilium-4z7zx\" (UID: \"ef0f3dd1-581a-45d0-9060-b33d0e52f0d1\") " pod="kube-system/cilium-4z7zx"
Sep 3 23:24:07.744942 kubelet[3572]: I0903 23:24:07.743461 3572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/ef0f3dd1-581a-45d0-9060-b33d0e52f0d1-cilium-run\") pod \"cilium-4z7zx\" (UID: \"ef0f3dd1-581a-45d0-9060-b33d0e52f0d1\") " pod="kube-system/cilium-4z7zx"
Sep 3 23:24:07.744942 kubelet[3572]: I0903 23:24:07.743496 3572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/ef0f3dd1-581a-45d0-9060-b33d0e52f0d1-hostproc\") pod \"cilium-4z7zx\" (UID: \"ef0f3dd1-581a-45d0-9060-b33d0e52f0d1\") " pod="kube-system/cilium-4z7zx"
Sep 3 23:24:07.978392 containerd[2010]: time="2025-09-03T23:24:07.978197072Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-t44wk,Uid:abdcc04b-261b-4838-aeee-65a21251ec60,Namespace:kube-system,Attempt:0,}"
Sep 3 23:24:08.007931 containerd[2010]: time="2025-09-03T23:24:08.005117956Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-4z7zx,Uid:ef0f3dd1-581a-45d0-9060-b33d0e52f0d1,Namespace:kube-system,Attempt:0,}"
Sep 3 23:24:08.029119 containerd[2010]: time="2025-09-03T23:24:08.029042644Z" level=info msg="connecting to shim eee2bd151aa8b7f3381c074b3796ae554c5b1923e59e275b4a2727576cf114c4" address="unix:///run/containerd/s/6a647cf7be8b3cbf4572b8c651f76734f0f13bd06730c5abf615d8646f2fff34" namespace=k8s.io protocol=ttrpc version=3
Sep 3 23:24:08.069631 containerd[2010]: time="2025-09-03T23:24:08.069530128Z" level=info msg="connecting to shim 52d011b4cad7ed654456da596f6707863a0ec712687d3e1eb6569eaebd270b5e" address="unix:///run/containerd/s/38c60e19104ef6c7c9be249fb6f1cb6a8c6918d8bd72a65266d65ee795adf045" namespace=k8s.io protocol=ttrpc version=3
Sep 3 23:24:08.102673 systemd[1]: Created slice kubepods-besteffort-podcc57e784_a000_4411_8358_c633deb8fbb7.slice - libcontainer container kubepods-besteffort-podcc57e784_a000_4411_8358_c633deb8fbb7.slice.
Sep 3 23:24:08.123807 systemd[1]: Started cri-containerd-eee2bd151aa8b7f3381c074b3796ae554c5b1923e59e275b4a2727576cf114c4.scope - libcontainer container eee2bd151aa8b7f3381c074b3796ae554c5b1923e59e275b4a2727576cf114c4.
Sep 3 23:24:08.148831 kubelet[3572]: I0903 23:24:08.148602 3572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dtc7g\" (UniqueName: \"kubernetes.io/projected/cc57e784-a000-4411-8358-c633deb8fbb7-kube-api-access-dtc7g\") pod \"cilium-operator-6c4d7847fc-2gs8d\" (UID: \"cc57e784-a000-4411-8358-c633deb8fbb7\") " pod="kube-system/cilium-operator-6c4d7847fc-2gs8d"
Sep 3 23:24:08.148831 kubelet[3572]: I0903 23:24:08.148715 3572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/cc57e784-a000-4411-8358-c633deb8fbb7-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-2gs8d\" (UID: \"cc57e784-a000-4411-8358-c633deb8fbb7\") " pod="kube-system/cilium-operator-6c4d7847fc-2gs8d"
Sep 3 23:24:08.176200 systemd[1]: Started cri-containerd-52d011b4cad7ed654456da596f6707863a0ec712687d3e1eb6569eaebd270b5e.scope - libcontainer container 52d011b4cad7ed654456da596f6707863a0ec712687d3e1eb6569eaebd270b5e.
Sep 3 23:24:08.237321 containerd[2010]: time="2025-09-03T23:24:08.236716745Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-t44wk,Uid:abdcc04b-261b-4838-aeee-65a21251ec60,Namespace:kube-system,Attempt:0,} returns sandbox id \"eee2bd151aa8b7f3381c074b3796ae554c5b1923e59e275b4a2727576cf114c4\""
Sep 3 23:24:08.247700 containerd[2010]: time="2025-09-03T23:24:08.247639157Z" level=info msg="CreateContainer within sandbox \"eee2bd151aa8b7f3381c074b3796ae554c5b1923e59e275b4a2727576cf114c4\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Sep 3 23:24:08.250016 containerd[2010]: time="2025-09-03T23:24:08.249776237Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-4z7zx,Uid:ef0f3dd1-581a-45d0-9060-b33d0e52f0d1,Namespace:kube-system,Attempt:0,} returns sandbox id \"52d011b4cad7ed654456da596f6707863a0ec712687d3e1eb6569eaebd270b5e\""
Sep 3 23:24:08.262724 containerd[2010]: time="2025-09-03T23:24:08.262654613Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Sep 3 23:24:08.288538 containerd[2010]: time="2025-09-03T23:24:08.288479165Z" level=info msg="Container 981f7838e65bd99d146145df292565323dc7d83c394a0c81e58e34055fba08ec: CDI devices from CRI Config.CDIDevices: []"
Sep 3 23:24:08.299121 containerd[2010]: time="2025-09-03T23:24:08.299070005Z" level=info msg="CreateContainer within sandbox \"eee2bd151aa8b7f3381c074b3796ae554c5b1923e59e275b4a2727576cf114c4\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"981f7838e65bd99d146145df292565323dc7d83c394a0c81e58e34055fba08ec\""
Sep 3 23:24:08.300941 containerd[2010]: time="2025-09-03T23:24:08.300485537Z" level=info msg="StartContainer for \"981f7838e65bd99d146145df292565323dc7d83c394a0c81e58e34055fba08ec\""
Sep 3 23:24:08.305259 containerd[2010]: time="2025-09-03T23:24:08.305194493Z" level=info msg="connecting to shim 981f7838e65bd99d146145df292565323dc7d83c394a0c81e58e34055fba08ec" address="unix:///run/containerd/s/6a647cf7be8b3cbf4572b8c651f76734f0f13bd06730c5abf615d8646f2fff34" protocol=ttrpc version=3
Sep 3 23:24:08.340230 systemd[1]: Started cri-containerd-981f7838e65bd99d146145df292565323dc7d83c394a0c81e58e34055fba08ec.scope - libcontainer container 981f7838e65bd99d146145df292565323dc7d83c394a0c81e58e34055fba08ec.
Sep 3 23:24:08.420917 containerd[2010]: time="2025-09-03T23:24:08.420776214Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-2gs8d,Uid:cc57e784-a000-4411-8358-c633deb8fbb7,Namespace:kube-system,Attempt:0,}"
Sep 3 23:24:08.428195 containerd[2010]: time="2025-09-03T23:24:08.428135838Z" level=info msg="StartContainer for \"981f7838e65bd99d146145df292565323dc7d83c394a0c81e58e34055fba08ec\" returns successfully"
Sep 3 23:24:08.488001 containerd[2010]: time="2025-09-03T23:24:08.487517826Z" level=info msg="connecting to shim 2737af686a6c917791469a5541ed5c37c04a0e8e6badb17f13072fcd004d7560" address="unix:///run/containerd/s/38dbcdbb7920d6ba473c0193d79aa94dfdfc319dee1a4f3c327a5ad6c862f563" namespace=k8s.io protocol=ttrpc version=3
Sep 3 23:24:08.561430 systemd[1]: Started cri-containerd-2737af686a6c917791469a5541ed5c37c04a0e8e6badb17f13072fcd004d7560.scope - libcontainer container 2737af686a6c917791469a5541ed5c37c04a0e8e6badb17f13072fcd004d7560.
Sep 3 23:24:08.686760 containerd[2010]: time="2025-09-03T23:24:08.686688595Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-2gs8d,Uid:cc57e784-a000-4411-8358-c633deb8fbb7,Namespace:kube-system,Attempt:0,} returns sandbox id \"2737af686a6c917791469a5541ed5c37c04a0e8e6badb17f13072fcd004d7560\""
Sep 3 23:24:09.472729 kubelet[3572]: I0903 23:24:09.470845 3572 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-t44wk" podStartSLOduration=2.470823427 podStartE2EDuration="2.470823427s" podCreationTimestamp="2025-09-03 23:24:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-03 23:24:09.470368099 +0000 UTC m=+6.472950981" watchObservedRunningTime="2025-09-03 23:24:09.470823427 +0000 UTC m=+6.473406285"
Sep 3 23:24:18.906719 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2078259706.mount: Deactivated successfully.
Sep 3 23:24:21.477607 containerd[2010]: time="2025-09-03T23:24:21.476729863Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 3 23:24:21.478942 containerd[2010]: time="2025-09-03T23:24:21.478556935Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710"
Sep 3 23:24:21.481966 containerd[2010]: time="2025-09-03T23:24:21.481432411Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 3 23:24:21.484415 containerd[2010]: time="2025-09-03T23:24:21.484241083Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 13.221515778s"
Sep 3 23:24:21.484415 containerd[2010]: time="2025-09-03T23:24:21.484296283Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\""
Sep 3 23:24:21.489245 containerd[2010]: time="2025-09-03T23:24:21.488440183Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Sep 3 23:24:21.489722 containerd[2010]: time="2025-09-03T23:24:21.489662467Z" level=info msg="CreateContainer within sandbox \"52d011b4cad7ed654456da596f6707863a0ec712687d3e1eb6569eaebd270b5e\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Sep 3 23:24:21.508923 containerd[2010]: time="2025-09-03T23:24:21.508413907Z" level=info msg="Container 2b691e1cdd131479420e7f384fdeacd93d8aa6188cd640067b481e4e67d75d60: CDI devices from CRI Config.CDIDevices: []"
Sep 3 23:24:21.519747 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4185193362.mount: Deactivated successfully.
Sep 3 23:24:21.531088 containerd[2010]: time="2025-09-03T23:24:21.530998327Z" level=info msg="CreateContainer within sandbox \"52d011b4cad7ed654456da596f6707863a0ec712687d3e1eb6569eaebd270b5e\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"2b691e1cdd131479420e7f384fdeacd93d8aa6188cd640067b481e4e67d75d60\""
Sep 3 23:24:21.532803 containerd[2010]: time="2025-09-03T23:24:21.532732987Z" level=info msg="StartContainer for \"2b691e1cdd131479420e7f384fdeacd93d8aa6188cd640067b481e4e67d75d60\""
Sep 3 23:24:21.536393 containerd[2010]: time="2025-09-03T23:24:21.536328031Z" level=info msg="connecting to shim 2b691e1cdd131479420e7f384fdeacd93d8aa6188cd640067b481e4e67d75d60" address="unix:///run/containerd/s/38c60e19104ef6c7c9be249fb6f1cb6a8c6918d8bd72a65266d65ee795adf045" protocol=ttrpc version=3
Sep 3 23:24:21.579190 systemd[1]: Started cri-containerd-2b691e1cdd131479420e7f384fdeacd93d8aa6188cd640067b481e4e67d75d60.scope - libcontainer container 2b691e1cdd131479420e7f384fdeacd93d8aa6188cd640067b481e4e67d75d60.
Sep 3 23:24:21.654177 containerd[2010]: time="2025-09-03T23:24:21.653567228Z" level=info msg="StartContainer for \"2b691e1cdd131479420e7f384fdeacd93d8aa6188cd640067b481e4e67d75d60\" returns successfully"
Sep 3 23:24:21.677952 containerd[2010]: time="2025-09-03T23:24:21.677862824Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2b691e1cdd131479420e7f384fdeacd93d8aa6188cd640067b481e4e67d75d60\" id:\"2b691e1cdd131479420e7f384fdeacd93d8aa6188cd640067b481e4e67d75d60\" pid:3980 exited_at:{seconds:1756941861 nanos:677159012}"
Sep 3 23:24:21.678968 containerd[2010]: time="2025-09-03T23:24:21.678360776Z" level=info msg="received exit event container_id:\"2b691e1cdd131479420e7f384fdeacd93d8aa6188cd640067b481e4e67d75d60\" id:\"2b691e1cdd131479420e7f384fdeacd93d8aa6188cd640067b481e4e67d75d60\" pid:3980 exited_at:{seconds:1756941861 nanos:677159012}"
Sep 3 23:24:21.680702 systemd[1]: cri-containerd-2b691e1cdd131479420e7f384fdeacd93d8aa6188cd640067b481e4e67d75d60.scope: Deactivated successfully.
Sep 3 23:24:21.727926 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2b691e1cdd131479420e7f384fdeacd93d8aa6188cd640067b481e4e67d75d60-rootfs.mount: Deactivated successfully.
Sep 3 23:24:23.499929 containerd[2010]: time="2025-09-03T23:24:23.499397649Z" level=info msg="CreateContainer within sandbox \"52d011b4cad7ed654456da596f6707863a0ec712687d3e1eb6569eaebd270b5e\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Sep 3 23:24:23.529628 containerd[2010]: time="2025-09-03T23:24:23.529558317Z" level=info msg="Container b9e82594dbb67c85b341bf7412d59d7ef2cead984e804c435d1589ec1683b633: CDI devices from CRI Config.CDIDevices: []"
Sep 3 23:24:23.551414 containerd[2010]: time="2025-09-03T23:24:23.550078941Z" level=info msg="CreateContainer within sandbox \"52d011b4cad7ed654456da596f6707863a0ec712687d3e1eb6569eaebd270b5e\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"b9e82594dbb67c85b341bf7412d59d7ef2cead984e804c435d1589ec1683b633\""
Sep 3 23:24:23.555019 containerd[2010]: time="2025-09-03T23:24:23.554601081Z" level=info msg="StartContainer for \"b9e82594dbb67c85b341bf7412d59d7ef2cead984e804c435d1589ec1683b633\""
Sep 3 23:24:23.562528 containerd[2010]: time="2025-09-03T23:24:23.560957817Z" level=info msg="connecting to shim b9e82594dbb67c85b341bf7412d59d7ef2cead984e804c435d1589ec1683b633" address="unix:///run/containerd/s/38c60e19104ef6c7c9be249fb6f1cb6a8c6918d8bd72a65266d65ee795adf045" protocol=ttrpc version=3
Sep 3 23:24:23.621225 systemd[1]: Started cri-containerd-b9e82594dbb67c85b341bf7412d59d7ef2cead984e804c435d1589ec1683b633.scope - libcontainer container b9e82594dbb67c85b341bf7412d59d7ef2cead984e804c435d1589ec1683b633.
Sep 3 23:24:23.684365 containerd[2010]: time="2025-09-03T23:24:23.684300838Z" level=info msg="StartContainer for \"b9e82594dbb67c85b341bf7412d59d7ef2cead984e804c435d1589ec1683b633\" returns successfully"
Sep 3 23:24:23.710392 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Sep 3 23:24:23.712798 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Sep 3 23:24:23.716147 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables...
Sep 3 23:24:23.721318 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 3 23:24:23.730033 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Sep 3 23:24:23.742147 systemd[1]: cri-containerd-b9e82594dbb67c85b341bf7412d59d7ef2cead984e804c435d1589ec1683b633.scope: Deactivated successfully.
Sep 3 23:24:23.747112 containerd[2010]: time="2025-09-03T23:24:23.747045826Z" level=info msg="received exit event container_id:\"b9e82594dbb67c85b341bf7412d59d7ef2cead984e804c435d1589ec1683b633\" id:\"b9e82594dbb67c85b341bf7412d59d7ef2cead984e804c435d1589ec1683b633\" pid:4027 exited_at:{seconds:1756941863 nanos:745559050}"
Sep 3 23:24:23.747358 containerd[2010]: time="2025-09-03T23:24:23.747266506Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b9e82594dbb67c85b341bf7412d59d7ef2cead984e804c435d1589ec1683b633\" id:\"b9e82594dbb67c85b341bf7412d59d7ef2cead984e804c435d1589ec1683b633\" pid:4027 exited_at:{seconds:1756941863 nanos:745559050}"
Sep 3 23:24:23.786709 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 3 23:24:24.507458 containerd[2010]: time="2025-09-03T23:24:24.507172546Z" level=info msg="CreateContainer within sandbox \"52d011b4cad7ed654456da596f6707863a0ec712687d3e1eb6569eaebd270b5e\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Sep 3 23:24:24.533527 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b9e82594dbb67c85b341bf7412d59d7ef2cead984e804c435d1589ec1683b633-rootfs.mount: Deactivated successfully.
Sep 3 23:24:24.552812 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount75112429.mount: Deactivated successfully.
Sep 3 23:24:24.556925 containerd[2010]: time="2025-09-03T23:24:24.556007074Z" level=info msg="Container 6aa0d8d2a8aefdaaa28de44862c0a0f4bbda6c7689d8db2e8d9d0fb01533def1: CDI devices from CRI Config.CDIDevices: []"
Sep 3 23:24:24.575492 containerd[2010]: time="2025-09-03T23:24:24.575305666Z" level=info msg="CreateContainer within sandbox \"52d011b4cad7ed654456da596f6707863a0ec712687d3e1eb6569eaebd270b5e\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"6aa0d8d2a8aefdaaa28de44862c0a0f4bbda6c7689d8db2e8d9d0fb01533def1\""
Sep 3 23:24:24.577440 containerd[2010]: time="2025-09-03T23:24:24.576557062Z" level=info msg="StartContainer for \"6aa0d8d2a8aefdaaa28de44862c0a0f4bbda6c7689d8db2e8d9d0fb01533def1\""
Sep 3 23:24:24.584603 containerd[2010]: time="2025-09-03T23:24:24.584484574Z" level=info msg="connecting to shim 6aa0d8d2a8aefdaaa28de44862c0a0f4bbda6c7689d8db2e8d9d0fb01533def1" address="unix:///run/containerd/s/38c60e19104ef6c7c9be249fb6f1cb6a8c6918d8bd72a65266d65ee795adf045" protocol=ttrpc version=3
Sep 3 23:24:24.663204 systemd[1]: Started cri-containerd-6aa0d8d2a8aefdaaa28de44862c0a0f4bbda6c7689d8db2e8d9d0fb01533def1.scope - libcontainer container 6aa0d8d2a8aefdaaa28de44862c0a0f4bbda6c7689d8db2e8d9d0fb01533def1.
Sep 3 23:24:24.758151 containerd[2010]: time="2025-09-03T23:24:24.757663643Z" level=info msg="StartContainer for \"6aa0d8d2a8aefdaaa28de44862c0a0f4bbda6c7689d8db2e8d9d0fb01533def1\" returns successfully"
Sep 3 23:24:24.763778 systemd[1]: cri-containerd-6aa0d8d2a8aefdaaa28de44862c0a0f4bbda6c7689d8db2e8d9d0fb01533def1.scope: Deactivated successfully.
Sep 3 23:24:24.771701 containerd[2010]: time="2025-09-03T23:24:24.771497039Z" level=info msg="received exit event container_id:\"6aa0d8d2a8aefdaaa28de44862c0a0f4bbda6c7689d8db2e8d9d0fb01533def1\" id:\"6aa0d8d2a8aefdaaa28de44862c0a0f4bbda6c7689d8db2e8d9d0fb01533def1\" pid:4077 exited_at:{seconds:1756941864 nanos:770204459}"
Sep 3 23:24:24.772207 containerd[2010]: time="2025-09-03T23:24:24.772142363Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6aa0d8d2a8aefdaaa28de44862c0a0f4bbda6c7689d8db2e8d9d0fb01533def1\" id:\"6aa0d8d2a8aefdaaa28de44862c0a0f4bbda6c7689d8db2e8d9d0fb01533def1\" pid:4077 exited_at:{seconds:1756941864 nanos:770204459}"
Sep 3 23:24:25.512952 containerd[2010]: time="2025-09-03T23:24:25.512359499Z" level=info msg="CreateContainer within sandbox \"52d011b4cad7ed654456da596f6707863a0ec712687d3e1eb6569eaebd270b5e\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Sep 3 23:24:25.533087 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3756966716.mount: Deactivated successfully.
Sep 3 23:24:25.533279 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6aa0d8d2a8aefdaaa28de44862c0a0f4bbda6c7689d8db2e8d9d0fb01533def1-rootfs.mount: Deactivated successfully.
Sep 3 23:24:25.536942 containerd[2010]: time="2025-09-03T23:24:25.534392159Z" level=info msg="Container e12f01f018708c97d09170e22d42ae0c09031869702130fed6e2438007f66144: CDI devices from CRI Config.CDIDevices: []"
Sep 3 23:24:25.550963 containerd[2010]: time="2025-09-03T23:24:25.550905995Z" level=info msg="CreateContainer within sandbox \"52d011b4cad7ed654456da596f6707863a0ec712687d3e1eb6569eaebd270b5e\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"e12f01f018708c97d09170e22d42ae0c09031869702130fed6e2438007f66144\""
Sep 3 23:24:25.556287 containerd[2010]: time="2025-09-03T23:24:25.555855707Z" level=info msg="StartContainer for \"e12f01f018708c97d09170e22d42ae0c09031869702130fed6e2438007f66144\""
Sep 3 23:24:25.562906 containerd[2010]: time="2025-09-03T23:24:25.562776875Z" level=info msg="connecting to shim e12f01f018708c97d09170e22d42ae0c09031869702130fed6e2438007f66144" address="unix:///run/containerd/s/38c60e19104ef6c7c9be249fb6f1cb6a8c6918d8bd72a65266d65ee795adf045" protocol=ttrpc version=3
Sep 3 23:24:25.611504 systemd[1]: Started cri-containerd-e12f01f018708c97d09170e22d42ae0c09031869702130fed6e2438007f66144.scope - libcontainer container e12f01f018708c97d09170e22d42ae0c09031869702130fed6e2438007f66144.
Sep 3 23:24:25.671571 systemd[1]: cri-containerd-e12f01f018708c97d09170e22d42ae0c09031869702130fed6e2438007f66144.scope: Deactivated successfully.
Sep 3 23:24:25.675382 containerd[2010]: time="2025-09-03T23:24:25.674740860Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef0f3dd1_581a_45d0_9060_b33d0e52f0d1.slice/cri-containerd-e12f01f018708c97d09170e22d42ae0c09031869702130fed6e2438007f66144.scope/memory.events\": no such file or directory"
Sep 3 23:24:25.677194 containerd[2010]: time="2025-09-03T23:24:25.676864668Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e12f01f018708c97d09170e22d42ae0c09031869702130fed6e2438007f66144\" id:\"e12f01f018708c97d09170e22d42ae0c09031869702130fed6e2438007f66144\" pid:4125 exited_at:{seconds:1756941865 nanos:674605680}"
Sep 3 23:24:25.681934 containerd[2010]: time="2025-09-03T23:24:25.681682920Z" level=info msg="received exit event container_id:\"e12f01f018708c97d09170e22d42ae0c09031869702130fed6e2438007f66144\" id:\"e12f01f018708c97d09170e22d42ae0c09031869702130fed6e2438007f66144\" pid:4125 exited_at:{seconds:1756941865 nanos:674605680}"
Sep 3 23:24:25.707194 containerd[2010]: time="2025-09-03T23:24:25.707135556Z" level=info msg="StartContainer for \"e12f01f018708c97d09170e22d42ae0c09031869702130fed6e2438007f66144\" returns successfully"
Sep 3 23:24:25.747201 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e12f01f018708c97d09170e22d42ae0c09031869702130fed6e2438007f66144-rootfs.mount: Deactivated successfully.
Sep 3 23:24:26.298995 containerd[2010]: time="2025-09-03T23:24:26.298509155Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 3 23:24:26.300463 containerd[2010]: time="2025-09-03T23:24:26.300393251Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306"
Sep 3 23:24:26.304919 containerd[2010]: time="2025-09-03T23:24:26.303156683Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 3 23:24:26.311970 containerd[2010]: time="2025-09-03T23:24:26.311832635Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 4.823287056s"
Sep 3 23:24:26.312146 containerd[2010]: time="2025-09-03T23:24:26.311965727Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\""
Sep 3 23:24:26.317072 containerd[2010]: time="2025-09-03T23:24:26.316050023Z" level=info msg="CreateContainer within sandbox \"2737af686a6c917791469a5541ed5c37c04a0e8e6badb17f13072fcd004d7560\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Sep 3 23:24:26.332126 containerd[2010]: time="2025-09-03T23:24:26.332074571Z" level=info msg="Container 42d1fcf090df7a472fabea086a01eefdb88cb7d535c7cbae5090a7c931e3ced4: CDI devices from CRI Config.CDIDevices: []"
Sep 3 23:24:26.351093 containerd[2010]: time="2025-09-03T23:24:26.351041327Z" level=info msg="CreateContainer within sandbox \"2737af686a6c917791469a5541ed5c37c04a0e8e6badb17f13072fcd004d7560\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"42d1fcf090df7a472fabea086a01eefdb88cb7d535c7cbae5090a7c931e3ced4\""
Sep 3 23:24:26.352658 containerd[2010]: time="2025-09-03T23:24:26.352608851Z" level=info msg="StartContainer for \"42d1fcf090df7a472fabea086a01eefdb88cb7d535c7cbae5090a7c931e3ced4\""
Sep 3 23:24:26.354997 containerd[2010]: time="2025-09-03T23:24:26.354849311Z" level=info msg="connecting to shim 42d1fcf090df7a472fabea086a01eefdb88cb7d535c7cbae5090a7c931e3ced4" address="unix:///run/containerd/s/38dbcdbb7920d6ba473c0193d79aa94dfdfc319dee1a4f3c327a5ad6c862f563" protocol=ttrpc version=3
Sep 3 23:24:26.388200 systemd[1]: Started cri-containerd-42d1fcf090df7a472fabea086a01eefdb88cb7d535c7cbae5090a7c931e3ced4.scope - libcontainer container 42d1fcf090df7a472fabea086a01eefdb88cb7d535c7cbae5090a7c931e3ced4.
Sep 3 23:24:26.449144 containerd[2010]: time="2025-09-03T23:24:26.449099111Z" level=info msg="StartContainer for \"42d1fcf090df7a472fabea086a01eefdb88cb7d535c7cbae5090a7c931e3ced4\" returns successfully"
Sep 3 23:24:26.542127 containerd[2010]: time="2025-09-03T23:24:26.541583292Z" level=info msg="CreateContainer within sandbox \"52d011b4cad7ed654456da596f6707863a0ec712687d3e1eb6569eaebd270b5e\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Sep 3 23:24:26.602635 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4007323150.mount: Deactivated successfully.
Sep 3 23:24:26.624285 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount896097991.mount: Deactivated successfully.
Sep 3 23:24:26.625154 containerd[2010]: time="2025-09-03T23:24:26.625079232Z" level=info msg="Container 49be781f2d25dbea5f039bf704bfa2ede7539ed1b66a7a45b3943b93fe8f4481: CDI devices from CRI Config.CDIDevices: []"
Sep 3 23:24:26.651370 containerd[2010]: time="2025-09-03T23:24:26.650768460Z" level=info msg="CreateContainer within sandbox \"52d011b4cad7ed654456da596f6707863a0ec712687d3e1eb6569eaebd270b5e\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"49be781f2d25dbea5f039bf704bfa2ede7539ed1b66a7a45b3943b93fe8f4481\""
Sep 3 23:24:26.657012 containerd[2010]: time="2025-09-03T23:24:26.655079004Z" level=info msg="StartContainer for \"49be781f2d25dbea5f039bf704bfa2ede7539ed1b66a7a45b3943b93fe8f4481\""
Sep 3 23:24:26.657434 containerd[2010]: time="2025-09-03T23:24:26.657386148Z" level=info msg="connecting to shim 49be781f2d25dbea5f039bf704bfa2ede7539ed1b66a7a45b3943b93fe8f4481" address="unix:///run/containerd/s/38c60e19104ef6c7c9be249fb6f1cb6a8c6918d8bd72a65266d65ee795adf045" protocol=ttrpc version=3
Sep 3 23:24:26.724543 systemd[1]: Started cri-containerd-49be781f2d25dbea5f039bf704bfa2ede7539ed1b66a7a45b3943b93fe8f4481.scope - libcontainer container 49be781f2d25dbea5f039bf704bfa2ede7539ed1b66a7a45b3943b93fe8f4481.
Sep 3 23:24:26.826007 containerd[2010]: time="2025-09-03T23:24:26.825946645Z" level=info msg="StartContainer for \"49be781f2d25dbea5f039bf704bfa2ede7539ed1b66a7a45b3943b93fe8f4481\" returns successfully"
Sep 3 23:24:27.019735 containerd[2010]: time="2025-09-03T23:24:27.019467142Z" level=info msg="TaskExit event in podsandbox handler container_id:\"49be781f2d25dbea5f039bf704bfa2ede7539ed1b66a7a45b3943b93fe8f4481\" id:\"25845d572ae45023de09d188ce98fb302515148d1ab68e6499536b4e215d8bf5\" pid:4232 exited_at:{seconds:1756941867 nanos:16867006}"
Sep 3 23:24:27.117937 kubelet[3572]: I0903 23:24:27.114967 3572 kubelet_node_status.go:501] "Fast updating node status as it just became ready"
Sep 3 23:24:27.184923 kubelet[3572]: I0903 23:24:27.184734 3572 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-2gs8d" podStartSLOduration=2.560126695 podStartE2EDuration="20.184713635s" podCreationTimestamp="2025-09-03 23:24:07 +0000 UTC" firstStartedPulling="2025-09-03 23:24:08.688629931 +0000 UTC m=+5.691212789" lastFinishedPulling="2025-09-03 23:24:26.313216871 +0000 UTC m=+23.315799729" observedRunningTime="2025-09-03 23:24:26.672790632 +0000 UTC m=+23.675373502" watchObservedRunningTime="2025-09-03 23:24:27.184713635 +0000 UTC m=+24.187296493"
Sep 3 23:24:27.212038 kubelet[3572]: I0903 23:24:27.211813 3572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cxmxt\" (UniqueName: \"kubernetes.io/projected/a4fdc53a-6fc8-465f-80a6-61d6a381069d-kube-api-access-cxmxt\") pod \"coredns-668d6bf9bc-dpk45\" (UID: \"a4fdc53a-6fc8-465f-80a6-61d6a381069d\") " pod="kube-system/coredns-668d6bf9bc-dpk45"
Sep 3 23:24:27.212038 kubelet[3572]: I0903 23:24:27.211917 3572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a4fdc53a-6fc8-465f-80a6-61d6a381069d-config-volume\") pod \"coredns-668d6bf9bc-dpk45\" (UID: \"a4fdc53a-6fc8-465f-80a6-61d6a381069d\") " pod="kube-system/coredns-668d6bf9bc-dpk45"
Sep 3 23:24:27.212583 systemd[1]: Created slice kubepods-burstable-poda4fdc53a_6fc8_465f_80a6_61d6a381069d.slice - libcontainer container kubepods-burstable-poda4fdc53a_6fc8_465f_80a6_61d6a381069d.slice.
Sep 3 23:24:27.227831 systemd[1]: Created slice kubepods-burstable-pod6f45342e_cd53_4239_a489_c0938303fe54.slice - libcontainer container kubepods-burstable-pod6f45342e_cd53_4239_a489_c0938303fe54.slice.
Sep 3 23:24:27.312777 kubelet[3572]: I0903 23:24:27.312166 3572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6f45342e-cd53-4239-a489-c0938303fe54-config-volume\") pod \"coredns-668d6bf9bc-d4srv\" (UID: \"6f45342e-cd53-4239-a489-c0938303fe54\") " pod="kube-system/coredns-668d6bf9bc-d4srv"
Sep 3 23:24:27.314067 kubelet[3572]: I0903 23:24:27.314026 3572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hknfd\" (UniqueName: \"kubernetes.io/projected/6f45342e-cd53-4239-a489-c0938303fe54-kube-api-access-hknfd\") pod \"coredns-668d6bf9bc-d4srv\" (UID: \"6f45342e-cd53-4239-a489-c0938303fe54\") " pod="kube-system/coredns-668d6bf9bc-d4srv"
Sep 3 23:24:27.525212 containerd[2010]: time="2025-09-03T23:24:27.525137629Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-dpk45,Uid:a4fdc53a-6fc8-465f-80a6-61d6a381069d,Namespace:kube-system,Attempt:0,}"
Sep 3 23:24:27.545156 containerd[2010]: time="2025-09-03T23:24:27.545084113Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-d4srv,Uid:6f45342e-cd53-4239-a489-c0938303fe54,Namespace:kube-system,Attempt:0,}"
Sep 3 23:24:30.983791 (udev-worker)[4295]: Network interface NamePolicy= disabled on kernel command line.
Sep 3 23:24:30.987957 systemd-networkd[1821]: cilium_host: Link UP
Sep 3 23:24:30.991358 systemd-networkd[1821]: cilium_net: Link UP
Sep 3 23:24:30.993532 (udev-worker)[4330]: Network interface NamePolicy= disabled on kernel command line.
Sep 3 23:24:30.994370 systemd-networkd[1821]: cilium_net: Gained carrier
Sep 3 23:24:30.997107 systemd-networkd[1821]: cilium_host: Gained carrier
Sep 3 23:24:31.173677 systemd-networkd[1821]: cilium_vxlan: Link UP
Sep 3 23:24:31.174000 systemd-networkd[1821]: cilium_vxlan: Gained carrier
Sep 3 23:24:31.390369 systemd-networkd[1821]: cilium_net: Gained IPv6LL
Sep 3 23:24:31.622042 systemd-networkd[1821]: cilium_host: Gained IPv6LL
Sep 3 23:24:31.737931 kernel: NET: Registered PF_ALG protocol family
Sep 3 23:24:32.389491 systemd-networkd[1821]: cilium_vxlan: Gained IPv6LL
Sep 3 23:24:33.049035 systemd-networkd[1821]: lxc_health: Link UP
Sep 3 23:24:33.049688 systemd-networkd[1821]: lxc_health: Gained carrier
Sep 3 23:24:33.686953 kernel: eth0: renamed from tmp355a2
Sep 3 23:24:33.687871 systemd-networkd[1821]: lxc655ff766af3d: Link UP
Sep 3 23:24:33.691039 systemd-networkd[1821]: lxc655ff766af3d: Gained carrier
Sep 3 23:24:33.705046 systemd-networkd[1821]: lxcf2a96c1e70ad: Link UP
Sep 3 23:24:33.711523 (udev-worker)[4334]: Network interface NamePolicy= disabled on kernel command line.
Sep 3 23:24:33.716304 kernel: eth0: renamed from tmp41689
Sep 3 23:24:33.718722 systemd-networkd[1821]: lxcf2a96c1e70ad: Gained carrier
Sep 3 23:24:34.047827 kubelet[3572]: I0903 23:24:34.047267 3572 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-4z7zx" podStartSLOduration=13.820910131 podStartE2EDuration="27.047244173s" podCreationTimestamp="2025-09-03 23:24:07 +0000 UTC" firstStartedPulling="2025-09-03 23:24:08.259951829 +0000 UTC m=+5.262534687" lastFinishedPulling="2025-09-03 23:24:21.486285859 +0000 UTC m=+18.488868729" observedRunningTime="2025-09-03 23:24:27.763471922 +0000 UTC m=+24.766054876" watchObservedRunningTime="2025-09-03 23:24:34.047244173 +0000 UTC m=+31.049827019"
Sep 3 23:24:34.885251 systemd-networkd[1821]: lxc_health: Gained IPv6LL
Sep 3 23:24:35.461694 systemd-networkd[1821]: lxc655ff766af3d: Gained IPv6LL
Sep 3 23:24:35.525290 systemd-networkd[1821]: lxcf2a96c1e70ad: Gained IPv6LL
Sep 3 23:24:38.245822 ntpd[1969]: Listen normally on 8 cilium_host 192.168.0.179:123
Sep 3 23:24:38.245992 ntpd[1969]: Listen normally on 9 cilium_net [fe80::14d7:17ff:fe71:d897%4]:123
Sep 3 23:24:38.246071 ntpd[1969]: Listen normally on 10 cilium_host [fe80::b89f:deff:fe2c:5493%5]:123
Sep 3 23:24:38.246136 ntpd[1969]: Listen normally on 11 cilium_vxlan [fe80::dcbd:6ff:fe0c:6bd1%6]:123
Sep 3 23:24:38.246211 ntpd[1969]: Listen normally on 12 lxc_health [fe80::207a:4eff:feb0:8fa6%8]:123
Sep 3 23:24:38.246282 ntpd[1969]: Listen normally on 13 lxc655ff766af3d [fe80::641b:d4ff:fe5e:70cb%10]:123
Sep 3 23:24:38.246976 ntpd[1969]: Listen normally on 14 lxcf2a96c1e70ad [fe80::30d9:71ff:fe7d:10c7%12]:123
Sep 3 23:24:42.039717 containerd[2010]: time="2025-09-03T23:24:42.039172201Z" level=info msg="connecting to shim 4168973ef40a89aa5713d40377b42af1fb6c2d5eaad8aa7b6819b1db996305e9" address="unix:///run/containerd/s/5d77321c4199817ccc64cda86cfb6de900d30082be0a7817b6187e823c8cb84b" namespace=k8s.io protocol=ttrpc version=3
Sep 3 23:24:42.117670 systemd[1]: Started cri-containerd-4168973ef40a89aa5713d40377b42af1fb6c2d5eaad8aa7b6819b1db996305e9.scope - libcontainer container 4168973ef40a89aa5713d40377b42af1fb6c2d5eaad8aa7b6819b1db996305e9.
Sep 3 23:24:42.135928 containerd[2010]: time="2025-09-03T23:24:42.135133405Z" level=info msg="connecting to shim 355a21561265c1606f771bbc038daa38d37d75053a3ed0846361b0b1b86faf10" address="unix:///run/containerd/s/59374f631ed2d3961012846895a9f4867bf667a9eef5f869a2978089c43f0523" namespace=k8s.io protocol=ttrpc version=3
Sep 3 23:24:42.221342 systemd[1]: Started cri-containerd-355a21561265c1606f771bbc038daa38d37d75053a3ed0846361b0b1b86faf10.scope - libcontainer container 355a21561265c1606f771bbc038daa38d37d75053a3ed0846361b0b1b86faf10.
Sep 3 23:24:42.328187 containerd[2010]: time="2025-09-03T23:24:42.327160610Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-dpk45,Uid:a4fdc53a-6fc8-465f-80a6-61d6a381069d,Namespace:kube-system,Attempt:0,} returns sandbox id \"4168973ef40a89aa5713d40377b42af1fb6c2d5eaad8aa7b6819b1db996305e9\""
Sep 3 23:24:42.336704 containerd[2010]: time="2025-09-03T23:24:42.336647534Z" level=info msg="CreateContainer within sandbox \"4168973ef40a89aa5713d40377b42af1fb6c2d5eaad8aa7b6819b1db996305e9\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Sep 3 23:24:42.364050 containerd[2010]: time="2025-09-03T23:24:42.363988670Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-d4srv,Uid:6f45342e-cd53-4239-a489-c0938303fe54,Namespace:kube-system,Attempt:0,} returns sandbox id \"355a21561265c1606f771bbc038daa38d37d75053a3ed0846361b0b1b86faf10\""
Sep 3 23:24:42.364200 containerd[2010]: time="2025-09-03T23:24:42.364005242Z" level=info msg="Container 47966bcdf05d32a7dd0097331c923453b75ed115ad8e146373a54d6a1aadb6d4: CDI devices from CRI Config.CDIDevices: []"
Sep 3 23:24:42.372541 containerd[2010]: time="2025-09-03T23:24:42.372473930Z" level=info msg="CreateContainer within sandbox \"355a21561265c1606f771bbc038daa38d37d75053a3ed0846361b0b1b86faf10\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Sep 3 23:24:42.381454 containerd[2010]: time="2025-09-03T23:24:42.381299799Z" level=info msg="CreateContainer within sandbox \"4168973ef40a89aa5713d40377b42af1fb6c2d5eaad8aa7b6819b1db996305e9\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"47966bcdf05d32a7dd0097331c923453b75ed115ad8e146373a54d6a1aadb6d4\""
Sep 3 23:24:42.383274 containerd[2010]: time="2025-09-03T23:24:42.382167123Z" level=info msg="StartContainer for \"47966bcdf05d32a7dd0097331c923453b75ed115ad8e146373a54d6a1aadb6d4\""
Sep 3 23:24:42.385269 containerd[2010]: time="2025-09-03T23:24:42.385141059Z" level=info msg="connecting to shim 47966bcdf05d32a7dd0097331c923453b75ed115ad8e146373a54d6a1aadb6d4" address="unix:///run/containerd/s/5d77321c4199817ccc64cda86cfb6de900d30082be0a7817b6187e823c8cb84b" protocol=ttrpc version=3
Sep 3 23:24:42.395023 containerd[2010]: time="2025-09-03T23:24:42.394226115Z" level=info msg="Container 1551d01c0337bd00d47f70a07db305c53d151b326b856189f4f96faa3e6ae041: CDI devices from CRI Config.CDIDevices: []"
Sep 3 23:24:42.410071 containerd[2010]: time="2025-09-03T23:24:42.409997091Z" level=info msg="CreateContainer within sandbox \"355a21561265c1606f771bbc038daa38d37d75053a3ed0846361b0b1b86faf10\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"1551d01c0337bd00d47f70a07db305c53d151b326b856189f4f96faa3e6ae041\""
Sep 3 23:24:42.412957 containerd[2010]: time="2025-09-03T23:24:42.411203667Z" level=info msg="StartContainer for \"1551d01c0337bd00d47f70a07db305c53d151b326b856189f4f96faa3e6ae041\""
Sep 3 23:24:42.416694 containerd[2010]: time="2025-09-03T23:24:42.416616759Z" level=info msg="connecting to shim 1551d01c0337bd00d47f70a07db305c53d151b326b856189f4f96faa3e6ae041" address="unix:///run/containerd/s/59374f631ed2d3961012846895a9f4867bf667a9eef5f869a2978089c43f0523" protocol=ttrpc version=3
Sep 3 23:24:42.433199 systemd[1]: Started cri-containerd-47966bcdf05d32a7dd0097331c923453b75ed115ad8e146373a54d6a1aadb6d4.scope - libcontainer container 47966bcdf05d32a7dd0097331c923453b75ed115ad8e146373a54d6a1aadb6d4.
Sep 3 23:24:42.468595 systemd[1]: Started cri-containerd-1551d01c0337bd00d47f70a07db305c53d151b326b856189f4f96faa3e6ae041.scope - libcontainer container 1551d01c0337bd00d47f70a07db305c53d151b326b856189f4f96faa3e6ae041.
Sep 3 23:24:42.536920 containerd[2010]: time="2025-09-03T23:24:42.536843607Z" level=info msg="StartContainer for \"47966bcdf05d32a7dd0097331c923453b75ed115ad8e146373a54d6a1aadb6d4\" returns successfully"
Sep 3 23:24:42.563496 containerd[2010]: time="2025-09-03T23:24:42.563431671Z" level=info msg="StartContainer for \"1551d01c0337bd00d47f70a07db305c53d151b326b856189f4f96faa3e6ae041\" returns successfully"
Sep 3 23:24:42.749334 kubelet[3572]: I0903 23:24:42.749120 3572 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-d4srv" podStartSLOduration=35.749071792 podStartE2EDuration="35.749071792s" podCreationTimestamp="2025-09-03 23:24:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-03 23:24:42.718092364 +0000 UTC m=+39.720675222" watchObservedRunningTime="2025-09-03 23:24:42.749071792 +0000 UTC m=+39.751654746"
Sep 3 23:24:43.694809 kubelet[3572]: I0903 23:24:43.693748 3572 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-dpk45" podStartSLOduration=36.693722009 podStartE2EDuration="36.693722009s" podCreationTimestamp="2025-09-03 23:24:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-03 23:24:42.748965556 +0000 UTC m=+39.751548438" watchObservedRunningTime="2025-09-03 23:24:43.693722009 +0000 UTC m=+40.696304903"
Sep 3 23:24:54.643616 systemd[1]: Started sshd@9-172.31.24.220:22-139.178.89.65:38914.service - OpenSSH per-connection server daemon (139.178.89.65:38914).
Sep 3 23:24:54.855560 sshd[4871]: Accepted publickey for core from 139.178.89.65 port 38914 ssh2: RSA SHA256:8eQAyPE99YHHVtDm+V4mP5sHyPbVNBHa6xDGC+ww79Y
Sep 3 23:24:54.858230 sshd-session[4871]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 3 23:24:54.867857 systemd-logind[1975]: New session 10 of user core.
Sep 3 23:24:54.874155 systemd[1]: Started session-10.scope - Session 10 of User core.
Sep 3 23:24:55.188047 sshd[4873]: Connection closed by 139.178.89.65 port 38914
Sep 3 23:24:55.189266 sshd-session[4871]: pam_unix(sshd:session): session closed for user core
Sep 3 23:24:55.196165 systemd[1]: sshd@9-172.31.24.220:22-139.178.89.65:38914.service: Deactivated successfully.
Sep 3 23:24:55.200375 systemd[1]: session-10.scope: Deactivated successfully.
Sep 3 23:24:55.202196 systemd-logind[1975]: Session 10 logged out. Waiting for processes to exit.
Sep 3 23:24:55.205794 systemd-logind[1975]: Removed session 10.
Sep 3 23:25:00.224152 systemd[1]: Started sshd@10-172.31.24.220:22-139.178.89.65:46968.service - OpenSSH per-connection server daemon (139.178.89.65:46968).
Sep 3 23:25:00.415308 sshd[4886]: Accepted publickey for core from 139.178.89.65 port 46968 ssh2: RSA SHA256:8eQAyPE99YHHVtDm+V4mP5sHyPbVNBHa6xDGC+ww79Y
Sep 3 23:25:00.417946 sshd-session[4886]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 3 23:25:00.426450 systemd-logind[1975]: New session 11 of user core.
Sep 3 23:25:00.435129 systemd[1]: Started session-11.scope - Session 11 of User core.
Sep 3 23:25:00.686008 sshd[4888]: Connection closed by 139.178.89.65 port 46968
Sep 3 23:25:00.686857 sshd-session[4886]: pam_unix(sshd:session): session closed for user core
Sep 3 23:25:00.694281 systemd-logind[1975]: Session 11 logged out. Waiting for processes to exit.
Sep 3 23:25:00.695331 systemd[1]: sshd@10-172.31.24.220:22-139.178.89.65:46968.service: Deactivated successfully.
Sep 3 23:25:00.701474 systemd[1]: session-11.scope: Deactivated successfully.
Sep 3 23:25:00.708739 systemd-logind[1975]: Removed session 11.
Sep 3 23:25:05.725747 systemd[1]: Started sshd@11-172.31.24.220:22-139.178.89.65:46976.service - OpenSSH per-connection server daemon (139.178.89.65:46976).
Sep 3 23:25:05.923209 sshd[4903]: Accepted publickey for core from 139.178.89.65 port 46976 ssh2: RSA SHA256:8eQAyPE99YHHVtDm+V4mP5sHyPbVNBHa6xDGC+ww79Y
Sep 3 23:25:05.924827 sshd-session[4903]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 3 23:25:05.935005 systemd-logind[1975]: New session 12 of user core.
Sep 3 23:25:05.940279 systemd[1]: Started session-12.scope - Session 12 of User core.
Sep 3 23:25:06.189264 sshd[4905]: Connection closed by 139.178.89.65 port 46976
Sep 3 23:25:06.190145 sshd-session[4903]: pam_unix(sshd:session): session closed for user core
Sep 3 23:25:06.197576 systemd[1]: sshd@11-172.31.24.220:22-139.178.89.65:46976.service: Deactivated successfully.
Sep 3 23:25:06.203852 systemd[1]: session-12.scope: Deactivated successfully.
Sep 3 23:25:06.206736 systemd-logind[1975]: Session 12 logged out. Waiting for processes to exit.
Sep 3 23:25:06.211458 systemd-logind[1975]: Removed session 12.
Sep 3 23:25:11.228849 systemd[1]: Started sshd@12-172.31.24.220:22-139.178.89.65:60850.service - OpenSSH per-connection server daemon (139.178.89.65:60850).
Sep 3 23:25:11.426575 sshd[4920]: Accepted publickey for core from 139.178.89.65 port 60850 ssh2: RSA SHA256:8eQAyPE99YHHVtDm+V4mP5sHyPbVNBHa6xDGC+ww79Y
Sep 3 23:25:11.429295 sshd-session[4920]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 3 23:25:11.437744 systemd-logind[1975]: New session 13 of user core.
Sep 3 23:25:11.448232 systemd[1]: Started session-13.scope - Session 13 of User core.
Sep 3 23:25:11.690912 sshd[4922]: Connection closed by 139.178.89.65 port 60850
Sep 3 23:25:11.691735 sshd-session[4920]: pam_unix(sshd:session): session closed for user core
Sep 3 23:25:11.705797 systemd[1]: sshd@12-172.31.24.220:22-139.178.89.65:60850.service: Deactivated successfully.
Sep 3 23:25:11.712299 systemd[1]: session-13.scope: Deactivated successfully.
Sep 3 23:25:11.717872 systemd-logind[1975]: Session 13 logged out. Waiting for processes to exit.
Sep 3 23:25:11.740716 systemd[1]: Started sshd@13-172.31.24.220:22-139.178.89.65:60862.service - OpenSSH per-connection server daemon (139.178.89.65:60862).
Sep 3 23:25:11.744118 systemd-logind[1975]: Removed session 13.
Sep 3 23:25:11.940182 sshd[4935]: Accepted publickey for core from 139.178.89.65 port 60862 ssh2: RSA SHA256:8eQAyPE99YHHVtDm+V4mP5sHyPbVNBHa6xDGC+ww79Y
Sep 3 23:25:11.942603 sshd-session[4935]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 3 23:25:11.954705 systemd-logind[1975]: New session 14 of user core.
Sep 3 23:25:11.963138 systemd[1]: Started session-14.scope - Session 14 of User core.
Sep 3 23:25:12.285737 sshd[4937]: Connection closed by 139.178.89.65 port 60862
Sep 3 23:25:12.286467 sshd-session[4935]: pam_unix(sshd:session): session closed for user core
Sep 3 23:25:12.298960 systemd-logind[1975]: Session 14 logged out. Waiting for processes to exit.
Sep 3 23:25:12.299247 systemd[1]: sshd@13-172.31.24.220:22-139.178.89.65:60862.service: Deactivated successfully.
Sep 3 23:25:12.305194 systemd[1]: session-14.scope: Deactivated successfully.
Sep 3 23:25:12.330289 systemd-logind[1975]: Removed session 14.
Sep 3 23:25:12.332571 systemd[1]: Started sshd@14-172.31.24.220:22-139.178.89.65:60866.service - OpenSSH per-connection server daemon (139.178.89.65:60866).
Sep 3 23:25:12.541950 sshd[4947]: Accepted publickey for core from 139.178.89.65 port 60866 ssh2: RSA SHA256:8eQAyPE99YHHVtDm+V4mP5sHyPbVNBHa6xDGC+ww79Y
Sep 3 23:25:12.544009 sshd-session[4947]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 3 23:25:12.553115 systemd-logind[1975]: New session 15 of user core.
Sep 3 23:25:12.557188 systemd[1]: Started session-15.scope - Session 15 of User core.
Sep 3 23:25:12.812265 sshd[4949]: Connection closed by 139.178.89.65 port 60866
Sep 3 23:25:12.813301 sshd-session[4947]: pam_unix(sshd:session): session closed for user core
Sep 3 23:25:12.821027 systemd[1]: sshd@14-172.31.24.220:22-139.178.89.65:60866.service: Deactivated successfully.
Sep 3 23:25:12.824711 systemd[1]: session-15.scope: Deactivated successfully.
Sep 3 23:25:12.827252 systemd-logind[1975]: Session 15 logged out. Waiting for processes to exit.
Sep 3 23:25:12.831273 systemd-logind[1975]: Removed session 15.
Sep 3 23:25:17.851202 systemd[1]: Started sshd@15-172.31.24.220:22-139.178.89.65:60868.service - OpenSSH per-connection server daemon (139.178.89.65:60868).
Sep 3 23:25:18.055045 sshd[4962]: Accepted publickey for core from 139.178.89.65 port 60868 ssh2: RSA SHA256:8eQAyPE99YHHVtDm+V4mP5sHyPbVNBHa6xDGC+ww79Y
Sep 3 23:25:18.057644 sshd-session[4962]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 3 23:25:18.065521 systemd-logind[1975]: New session 16 of user core.
Sep 3 23:25:18.080163 systemd[1]: Started session-16.scope - Session 16 of User core.
Sep 3 23:25:18.322176 sshd[4964]: Connection closed by 139.178.89.65 port 60868
Sep 3 23:25:18.323018 sshd-session[4962]: pam_unix(sshd:session): session closed for user core
Sep 3 23:25:18.330773 systemd[1]: sshd@15-172.31.24.220:22-139.178.89.65:60868.service: Deactivated successfully.
Sep 3 23:25:18.335953 systemd[1]: session-16.scope: Deactivated successfully.
Sep 3 23:25:18.339205 systemd-logind[1975]: Session 16 logged out. Waiting for processes to exit.
Sep 3 23:25:18.342515 systemd-logind[1975]: Removed session 16.
Sep 3 23:25:23.360001 systemd[1]: Started sshd@16-172.31.24.220:22-139.178.89.65:38328.service - OpenSSH per-connection server daemon (139.178.89.65:38328).
Sep 3 23:25:23.569165 sshd[4977]: Accepted publickey for core from 139.178.89.65 port 38328 ssh2: RSA SHA256:8eQAyPE99YHHVtDm+V4mP5sHyPbVNBHa6xDGC+ww79Y
Sep 3 23:25:23.571329 sshd-session[4977]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 3 23:25:23.579314 systemd-logind[1975]: New session 17 of user core.
Sep 3 23:25:23.588159 systemd[1]: Started session-17.scope - Session 17 of User core.
Sep 3 23:25:23.853049 sshd[4979]: Connection closed by 139.178.89.65 port 38328
Sep 3 23:25:23.853850 sshd-session[4977]: pam_unix(sshd:session): session closed for user core
Sep 3 23:25:23.861474 systemd-logind[1975]: Session 17 logged out. Waiting for processes to exit.
Sep 3 23:25:23.861979 systemd[1]: sshd@16-172.31.24.220:22-139.178.89.65:38328.service: Deactivated successfully.
Sep 3 23:25:23.867481 systemd[1]: session-17.scope: Deactivated successfully.
Sep 3 23:25:23.872052 systemd-logind[1975]: Removed session 17.
Sep 3 23:25:28.893090 systemd[1]: Started sshd@17-172.31.24.220:22-139.178.89.65:38340.service - OpenSSH per-connection server daemon (139.178.89.65:38340).
Sep 3 23:25:29.101495 sshd[4991]: Accepted publickey for core from 139.178.89.65 port 38340 ssh2: RSA SHA256:8eQAyPE99YHHVtDm+V4mP5sHyPbVNBHa6xDGC+ww79Y
Sep 3 23:25:29.104858 sshd-session[4991]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 3 23:25:29.113992 systemd-logind[1975]: New session 18 of user core.
Sep 3 23:25:29.122217 systemd[1]: Started session-18.scope - Session 18 of User core.
Sep 3 23:25:29.377525 sshd[4993]: Connection closed by 139.178.89.65 port 38340
Sep 3 23:25:29.378424 sshd-session[4991]: pam_unix(sshd:session): session closed for user core
Sep 3 23:25:29.386216 systemd[1]: sshd@17-172.31.24.220:22-139.178.89.65:38340.service: Deactivated successfully.
Sep 3 23:25:29.391533 systemd[1]: session-18.scope: Deactivated successfully.
Sep 3 23:25:29.394991 systemd-logind[1975]: Session 18 logged out. Waiting for processes to exit.
Sep 3 23:25:29.398145 systemd-logind[1975]: Removed session 18.
Sep 3 23:25:34.423267 systemd[1]: Started sshd@18-172.31.24.220:22-139.178.89.65:40318.service - OpenSSH per-connection server daemon (139.178.89.65:40318).
Sep 3 23:25:34.628882 sshd[5004]: Accepted publickey for core from 139.178.89.65 port 40318 ssh2: RSA SHA256:8eQAyPE99YHHVtDm+V4mP5sHyPbVNBHa6xDGC+ww79Y
Sep 3 23:25:34.631947 sshd-session[5004]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 3 23:25:34.640990 systemd-logind[1975]: New session 19 of user core.
Sep 3 23:25:34.646289 systemd[1]: Started session-19.scope - Session 19 of User core.
Sep 3 23:25:34.900843 sshd[5006]: Connection closed by 139.178.89.65 port 40318
Sep 3 23:25:34.900715 sshd-session[5004]: pam_unix(sshd:session): session closed for user core
Sep 3 23:25:34.907509 systemd[1]: sshd@18-172.31.24.220:22-139.178.89.65:40318.service: Deactivated successfully.
Sep 3 23:25:34.912355 systemd[1]: session-19.scope: Deactivated successfully.
Sep 3 23:25:34.914456 systemd-logind[1975]: Session 19 logged out. Waiting for processes to exit.
Sep 3 23:25:34.918123 systemd-logind[1975]: Removed session 19.
Sep 3 23:25:34.937120 systemd[1]: Started sshd@19-172.31.24.220:22-139.178.89.65:40328.service - OpenSSH per-connection server daemon (139.178.89.65:40328).
Sep 3 23:25:35.133076 sshd[5018]: Accepted publickey for core from 139.178.89.65 port 40328 ssh2: RSA SHA256:8eQAyPE99YHHVtDm+V4mP5sHyPbVNBHa6xDGC+ww79Y
Sep 3 23:25:35.135548 sshd-session[5018]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 3 23:25:35.143728 systemd-logind[1975]: New session 20 of user core.
Sep 3 23:25:35.161210 systemd[1]: Started session-20.scope - Session 20 of User core.
Sep 3 23:25:35.473009 sshd[5020]: Connection closed by 139.178.89.65 port 40328
Sep 3 23:25:35.473796 sshd-session[5018]: pam_unix(sshd:session): session closed for user core
Sep 3 23:25:35.479169 systemd[1]: sshd@19-172.31.24.220:22-139.178.89.65:40328.service: Deactivated successfully.
Sep 3 23:25:35.483850 systemd[1]: session-20.scope: Deactivated successfully.
Sep 3 23:25:35.489354 systemd-logind[1975]: Session 20 logged out. Waiting for processes to exit.
Sep 3 23:25:35.491689 systemd-logind[1975]: Removed session 20.
Sep 3 23:25:35.508783 systemd[1]: Started sshd@20-172.31.24.220:22-139.178.89.65:40330.service - OpenSSH per-connection server daemon (139.178.89.65:40330).
Sep 3 23:25:35.712730 sshd[5029]: Accepted publickey for core from 139.178.89.65 port 40330 ssh2: RSA SHA256:8eQAyPE99YHHVtDm+V4mP5sHyPbVNBHa6xDGC+ww79Y
Sep 3 23:25:35.715695 sshd-session[5029]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 3 23:25:35.723870 systemd-logind[1975]: New session 21 of user core.
Sep 3 23:25:35.733174 systemd[1]: Started session-21.scope - Session 21 of User core.
Sep 3 23:25:36.778930 sshd[5031]: Connection closed by 139.178.89.65 port 40330
Sep 3 23:25:36.779703 sshd-session[5029]: pam_unix(sshd:session): session closed for user core
Sep 3 23:25:36.793412 systemd[1]: sshd@20-172.31.24.220:22-139.178.89.65:40330.service: Deactivated successfully.
Sep 3 23:25:36.801215 systemd[1]: session-21.scope: Deactivated successfully.
Sep 3 23:25:36.806517 systemd-logind[1975]: Session 21 logged out. Waiting for processes to exit.
Sep 3 23:25:36.835351 systemd[1]: Started sshd@21-172.31.24.220:22-139.178.89.65:40340.service - OpenSSH per-connection server daemon (139.178.89.65:40340).
Sep 3 23:25:36.838255 systemd-logind[1975]: Removed session 21.
Sep 3 23:25:37.030511 sshd[5048]: Accepted publickey for core from 139.178.89.65 port 40340 ssh2: RSA SHA256:8eQAyPE99YHHVtDm+V4mP5sHyPbVNBHa6xDGC+ww79Y
Sep 3 23:25:37.033483 sshd-session[5048]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 3 23:25:37.041988 systemd-logind[1975]: New session 22 of user core.
Sep 3 23:25:37.051124 systemd[1]: Started session-22.scope - Session 22 of User core.
Sep 3 23:25:37.543947 sshd[5050]: Connection closed by 139.178.89.65 port 40340
Sep 3 23:25:37.543552 sshd-session[5048]: pam_unix(sshd:session): session closed for user core
Sep 3 23:25:37.550667 systemd[1]: sshd@21-172.31.24.220:22-139.178.89.65:40340.service: Deactivated successfully.
Sep 3 23:25:37.555851 systemd[1]: session-22.scope: Deactivated successfully.
Sep 3 23:25:37.562767 systemd-logind[1975]: Session 22 logged out. Waiting for processes to exit.
Sep 3 23:25:37.579030 systemd[1]: Started sshd@22-172.31.24.220:22-139.178.89.65:40356.service - OpenSSH per-connection server daemon (139.178.89.65:40356).
Sep 3 23:25:37.581860 systemd-logind[1975]: Removed session 22.
Sep 3 23:25:37.774669 sshd[5059]: Accepted publickey for core from 139.178.89.65 port 40356 ssh2: RSA SHA256:8eQAyPE99YHHVtDm+V4mP5sHyPbVNBHa6xDGC+ww79Y
Sep 3 23:25:37.777194 sshd-session[5059]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 3 23:25:37.785739 systemd-logind[1975]: New session 23 of user core.
Sep 3 23:25:37.794154 systemd[1]: Started session-23.scope - Session 23 of User core.
Sep 3 23:25:38.045524 sshd[5061]: Connection closed by 139.178.89.65 port 40356
Sep 3 23:25:38.046339 sshd-session[5059]: pam_unix(sshd:session): session closed for user core
Sep 3 23:25:38.056979 systemd[1]: sshd@22-172.31.24.220:22-139.178.89.65:40356.service: Deactivated successfully.
Sep 3 23:25:38.061532 systemd[1]: session-23.scope: Deactivated successfully.
Sep 3 23:25:38.064205 systemd-logind[1975]: Session 23 logged out. Waiting for processes to exit.
Sep 3 23:25:38.067405 systemd-logind[1975]: Removed session 23.
Sep 3 23:25:43.087465 systemd[1]: Started sshd@23-172.31.24.220:22-139.178.89.65:42840.service - OpenSSH per-connection server daemon (139.178.89.65:42840).
Sep 3 23:25:43.281642 sshd[5077]: Accepted publickey for core from 139.178.89.65 port 42840 ssh2: RSA SHA256:8eQAyPE99YHHVtDm+V4mP5sHyPbVNBHa6xDGC+ww79Y
Sep 3 23:25:43.284156 sshd-session[5077]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 3 23:25:43.292132 systemd-logind[1975]: New session 24 of user core.
Sep 3 23:25:43.300135 systemd[1]: Started session-24.scope - Session 24 of User core.
Sep 3 23:25:43.535675 sshd[5079]: Connection closed by 139.178.89.65 port 42840
Sep 3 23:25:43.534734 sshd-session[5077]: pam_unix(sshd:session): session closed for user core
Sep 3 23:25:43.540295 systemd[1]: sshd@23-172.31.24.220:22-139.178.89.65:42840.service: Deactivated successfully.
Sep 3 23:25:43.543461 systemd[1]: session-24.scope: Deactivated successfully.
Sep 3 23:25:43.550775 systemd-logind[1975]: Session 24 logged out. Waiting for processes to exit.
Sep 3 23:25:43.553797 systemd-logind[1975]: Removed session 24.
Sep 3 23:25:48.576348 systemd[1]: Started sshd@24-172.31.24.220:22-139.178.89.65:42844.service - OpenSSH per-connection server daemon (139.178.89.65:42844).
Sep 3 23:25:48.767863 sshd[5092]: Accepted publickey for core from 139.178.89.65 port 42844 ssh2: RSA SHA256:8eQAyPE99YHHVtDm+V4mP5sHyPbVNBHa6xDGC+ww79Y
Sep 3 23:25:48.772186 sshd-session[5092]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 3 23:25:48.784063 systemd-logind[1975]: New session 25 of user core.
Sep 3 23:25:48.796205 systemd[1]: Started session-25.scope - Session 25 of User core.
Sep 3 23:25:49.067265 sshd[5094]: Connection closed by 139.178.89.65 port 42844
Sep 3 23:25:49.068152 sshd-session[5092]: pam_unix(sshd:session): session closed for user core
Sep 3 23:25:49.077184 systemd[1]: sshd@24-172.31.24.220:22-139.178.89.65:42844.service: Deactivated successfully.
Sep 3 23:25:49.084497 systemd[1]: session-25.scope: Deactivated successfully.
Sep 3 23:25:49.089345 systemd-logind[1975]: Session 25 logged out. Waiting for processes to exit.
Sep 3 23:25:49.092359 systemd-logind[1975]: Removed session 25.
Sep 3 23:25:54.110352 systemd[1]: Started sshd@25-172.31.24.220:22-139.178.89.65:35056.service - OpenSSH per-connection server daemon (139.178.89.65:35056).
Sep 3 23:25:54.322030 sshd[5106]: Accepted publickey for core from 139.178.89.65 port 35056 ssh2: RSA SHA256:8eQAyPE99YHHVtDm+V4mP5sHyPbVNBHa6xDGC+ww79Y
Sep 3 23:25:54.324640 sshd-session[5106]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 3 23:25:54.332597 systemd-logind[1975]: New session 26 of user core.
Sep 3 23:25:54.341218 systemd[1]: Started session-26.scope - Session 26 of User core.
Sep 3 23:25:54.586116 sshd[5108]: Connection closed by 139.178.89.65 port 35056
Sep 3 23:25:54.585992 sshd-session[5106]: pam_unix(sshd:session): session closed for user core
Sep 3 23:25:54.591992 systemd-logind[1975]: Session 26 logged out. Waiting for processes to exit.
Sep 3 23:25:54.592748 systemd[1]: sshd@25-172.31.24.220:22-139.178.89.65:35056.service: Deactivated successfully.
Sep 3 23:25:54.600291 systemd[1]: session-26.scope: Deactivated successfully.
Sep 3 23:25:54.607855 systemd-logind[1975]: Removed session 26.
Sep 3 23:25:59.632094 systemd[1]: Started sshd@26-172.31.24.220:22-139.178.89.65:35064.service - OpenSSH per-connection server daemon (139.178.89.65:35064).
Sep 3 23:25:59.835059 sshd[5120]: Accepted publickey for core from 139.178.89.65 port 35064 ssh2: RSA SHA256:8eQAyPE99YHHVtDm+V4mP5sHyPbVNBHa6xDGC+ww79Y
Sep 3 23:25:59.837635 sshd-session[5120]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 3 23:25:59.849111 systemd-logind[1975]: New session 27 of user core.
Sep 3 23:25:59.853230 systemd[1]: Started session-27.scope - Session 27 of User core.
Sep 3 23:26:00.105699 sshd[5122]: Connection closed by 139.178.89.65 port 35064
Sep 3 23:26:00.106534 sshd-session[5120]: pam_unix(sshd:session): session closed for user core
Sep 3 23:26:00.114723 systemd-logind[1975]: Session 27 logged out. Waiting for processes to exit.
Sep 3 23:26:00.115759 systemd[1]: sshd@26-172.31.24.220:22-139.178.89.65:35064.service: Deactivated successfully.
Sep 3 23:26:00.119830 systemd[1]: session-27.scope: Deactivated successfully.
Sep 3 23:26:00.125729 systemd-logind[1975]: Removed session 27.
Sep 3 23:26:00.143743 systemd[1]: Started sshd@27-172.31.24.220:22-139.178.89.65:38766.service - OpenSSH per-connection server daemon (139.178.89.65:38766).
Sep 3 23:26:00.344117 sshd[5134]: Accepted publickey for core from 139.178.89.65 port 38766 ssh2: RSA SHA256:8eQAyPE99YHHVtDm+V4mP5sHyPbVNBHa6xDGC+ww79Y
Sep 3 23:26:00.346744 sshd-session[5134]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 3 23:26:00.355248 systemd-logind[1975]: New session 28 of user core.
Sep 3 23:26:00.362170 systemd[1]: Started session-28.scope - Session 28 of User core.
Sep 3 23:26:03.054520 containerd[2010]: time="2025-09-03T23:26:03.054239095Z" level=info msg="StopContainer for \"42d1fcf090df7a472fabea086a01eefdb88cb7d535c7cbae5090a7c931e3ced4\" with timeout 30 (s)"
Sep 3 23:26:03.058201 containerd[2010]: time="2025-09-03T23:26:03.058076551Z" level=info msg="Stop container \"42d1fcf090df7a472fabea086a01eefdb88cb7d535c7cbae5090a7c931e3ced4\" with signal terminated"
Sep 3 23:26:03.086262 containerd[2010]: time="2025-09-03T23:26:03.086165251Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Sep 3 23:26:03.090723 systemd[1]: cri-containerd-42d1fcf090df7a472fabea086a01eefdb88cb7d535c7cbae5090a7c931e3ced4.scope: Deactivated successfully.
Sep 3 23:26:03.099469 containerd[2010]: time="2025-09-03T23:26:03.099272335Z" level=info msg="received exit event container_id:\"42d1fcf090df7a472fabea086a01eefdb88cb7d535c7cbae5090a7c931e3ced4\" id:\"42d1fcf090df7a472fabea086a01eefdb88cb7d535c7cbae5090a7c931e3ced4\" pid:4169 exited_at:{seconds:1756941963 nanos:98429539}"
Sep 3 23:26:03.100059 containerd[2010]: time="2025-09-03T23:26:03.099856123Z" level=info msg="TaskExit event in podsandbox handler container_id:\"42d1fcf090df7a472fabea086a01eefdb88cb7d535c7cbae5090a7c931e3ced4\" id:\"42d1fcf090df7a472fabea086a01eefdb88cb7d535c7cbae5090a7c931e3ced4\" pid:4169 exited_at:{seconds:1756941963 nanos:98429539}"
Sep 3 23:26:03.103262 containerd[2010]: time="2025-09-03T23:26:03.103016383Z" level=info msg="TaskExit event in podsandbox handler container_id:\"49be781f2d25dbea5f039bf704bfa2ede7539ed1b66a7a45b3943b93fe8f4481\" id:\"88ea09a75aaa6590b7884947f61c5cbe5d6d1ab9876550733da0d61df88f530a\" pid:5154 exited_at:{seconds:1756941963 nanos:102082315}"
Sep 3 23:26:03.111775 containerd[2010]: time="2025-09-03T23:26:03.111515095Z" level=info msg="StopContainer for \"49be781f2d25dbea5f039bf704bfa2ede7539ed1b66a7a45b3943b93fe8f4481\" with timeout 2 (s)"
Sep 3 23:26:03.113110 containerd[2010]: time="2025-09-03T23:26:03.113062508Z" level=info msg="Stop container \"49be781f2d25dbea5f039bf704bfa2ede7539ed1b66a7a45b3943b93fe8f4481\" with signal terminated"
Sep 3 23:26:03.135604 systemd-networkd[1821]: lxc_health: Link DOWN
Sep 3 23:26:03.135623 systemd-networkd[1821]: lxc_health: Lost carrier
Sep 3 23:26:03.169788 systemd[1]: cri-containerd-49be781f2d25dbea5f039bf704bfa2ede7539ed1b66a7a45b3943b93fe8f4481.scope: Deactivated successfully.
Sep 3 23:26:03.170432 systemd[1]: cri-containerd-49be781f2d25dbea5f039bf704bfa2ede7539ed1b66a7a45b3943b93fe8f4481.scope: Consumed 14.173s CPU time, 124.9M memory peak, 128K read from disk, 12.9M written to disk.
Sep 3 23:26:03.177327 containerd[2010]: time="2025-09-03T23:26:03.176995352Z" level=info msg="received exit event container_id:\"49be781f2d25dbea5f039bf704bfa2ede7539ed1b66a7a45b3943b93fe8f4481\" id:\"49be781f2d25dbea5f039bf704bfa2ede7539ed1b66a7a45b3943b93fe8f4481\" pid:4202 exited_at:{seconds:1756941963 nanos:175364132}"
Sep 3 23:26:03.179980 containerd[2010]: time="2025-09-03T23:26:03.179818844Z" level=info msg="TaskExit event in podsandbox handler container_id:\"49be781f2d25dbea5f039bf704bfa2ede7539ed1b66a7a45b3943b93fe8f4481\" id:\"49be781f2d25dbea5f039bf704bfa2ede7539ed1b66a7a45b3943b93fe8f4481\" pid:4202 exited_at:{seconds:1756941963 nanos:175364132}"
Sep 3 23:26:03.191740 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-42d1fcf090df7a472fabea086a01eefdb88cb7d535c7cbae5090a7c931e3ced4-rootfs.mount: Deactivated successfully.
Sep 3 23:26:03.218126 containerd[2010]: time="2025-09-03T23:26:03.218077928Z" level=info msg="StopContainer for \"42d1fcf090df7a472fabea086a01eefdb88cb7d535c7cbae5090a7c931e3ced4\" returns successfully"
Sep 3 23:26:03.221589 containerd[2010]: time="2025-09-03T23:26:03.221268788Z" level=info msg="StopPodSandbox for \"2737af686a6c917791469a5541ed5c37c04a0e8e6badb17f13072fcd004d7560\""
Sep 3 23:26:03.221589 containerd[2010]: time="2025-09-03T23:26:03.221385656Z" level=info msg="Container to stop \"42d1fcf090df7a472fabea086a01eefdb88cb7d535c7cbae5090a7c931e3ced4\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 3 23:26:03.234652 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-49be781f2d25dbea5f039bf704bfa2ede7539ed1b66a7a45b3943b93fe8f4481-rootfs.mount: Deactivated successfully.
Sep 3 23:26:03.245941 systemd[1]: cri-containerd-2737af686a6c917791469a5541ed5c37c04a0e8e6badb17f13072fcd004d7560.scope: Deactivated successfully.
Sep 3 23:26:03.250574 containerd[2010]: time="2025-09-03T23:26:03.250400528Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2737af686a6c917791469a5541ed5c37c04a0e8e6badb17f13072fcd004d7560\" id:\"2737af686a6c917791469a5541ed5c37c04a0e8e6badb17f13072fcd004d7560\" pid:3796 exit_status:137 exited_at:{seconds:1756941963 nanos:249552956}"
Sep 3 23:26:03.262356 containerd[2010]: time="2025-09-03T23:26:03.261867068Z" level=info msg="StopContainer for \"49be781f2d25dbea5f039bf704bfa2ede7539ed1b66a7a45b3943b93fe8f4481\" returns successfully"
Sep 3 23:26:03.262918 containerd[2010]: time="2025-09-03T23:26:03.262848524Z" level=info msg="StopPodSandbox for \"52d011b4cad7ed654456da596f6707863a0ec712687d3e1eb6569eaebd270b5e\""
Sep 3 23:26:03.263157 containerd[2010]: time="2025-09-03T23:26:03.263121512Z" level=info msg="Container to stop \"2b691e1cdd131479420e7f384fdeacd93d8aa6188cd640067b481e4e67d75d60\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 3 23:26:03.263300 containerd[2010]: time="2025-09-03T23:26:03.263267948Z" level=info msg="Container to stop \"e12f01f018708c97d09170e22d42ae0c09031869702130fed6e2438007f66144\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 3 23:26:03.263408 containerd[2010]: time="2025-09-03T23:26:03.263381900Z" level=info msg="Container to stop \"49be781f2d25dbea5f039bf704bfa2ede7539ed1b66a7a45b3943b93fe8f4481\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 3 23:26:03.263516 containerd[2010]: time="2025-09-03T23:26:03.263488232Z" level=info msg="Container to stop \"b9e82594dbb67c85b341bf7412d59d7ef2cead984e804c435d1589ec1683b633\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 3 23:26:03.263624 containerd[2010]: time="2025-09-03T23:26:03.263597276Z" level=info msg="Container to stop \"6aa0d8d2a8aefdaaa28de44862c0a0f4bbda6c7689d8db2e8d9d0fb01533def1\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 3 23:26:03.282325 systemd[1]: cri-containerd-52d011b4cad7ed654456da596f6707863a0ec712687d3e1eb6569eaebd270b5e.scope: Deactivated successfully.
Sep 3 23:26:03.351513 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-52d011b4cad7ed654456da596f6707863a0ec712687d3e1eb6569eaebd270b5e-rootfs.mount: Deactivated successfully.
Sep 3 23:26:03.359735 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2737af686a6c917791469a5541ed5c37c04a0e8e6badb17f13072fcd004d7560-rootfs.mount: Deactivated successfully.
Sep 3 23:26:03.362237 containerd[2010]: time="2025-09-03T23:26:03.362160129Z" level=info msg="shim disconnected" id=52d011b4cad7ed654456da596f6707863a0ec712687d3e1eb6569eaebd270b5e namespace=k8s.io
Sep 3 23:26:03.362402 containerd[2010]: time="2025-09-03T23:26:03.362226789Z" level=warning msg="cleaning up after shim disconnected" id=52d011b4cad7ed654456da596f6707863a0ec712687d3e1eb6569eaebd270b5e namespace=k8s.io
Sep 3 23:26:03.362402 containerd[2010]: time="2025-09-03T23:26:03.362278389Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 3 23:26:03.365835 containerd[2010]: time="2025-09-03T23:26:03.365769873Z" level=info msg="shim disconnected" id=2737af686a6c917791469a5541ed5c37c04a0e8e6badb17f13072fcd004d7560 namespace=k8s.io
Sep 3 23:26:03.366651 containerd[2010]: time="2025-09-03T23:26:03.365828121Z" level=warning msg="cleaning up after shim disconnected" id=2737af686a6c917791469a5541ed5c37c04a0e8e6badb17f13072fcd004d7560 namespace=k8s.io
Sep 3 23:26:03.366651 containerd[2010]: time="2025-09-03T23:26:03.365879565Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 3 23:26:03.400177 containerd[2010]: time="2025-09-03T23:26:03.400104957Z" level=info msg="TaskExit event in podsandbox handler container_id:\"52d011b4cad7ed654456da596f6707863a0ec712687d3e1eb6569eaebd270b5e\" id:\"52d011b4cad7ed654456da596f6707863a0ec712687d3e1eb6569eaebd270b5e\" pid:3715 exit_status:137 exited_at:{seconds:1756941963 nanos:287766212}"
Sep 3 23:26:03.400402 containerd[2010]: time="2025-09-03T23:26:03.400359225Z" level=info msg="received exit event sandbox_id:\"52d011b4cad7ed654456da596f6707863a0ec712687d3e1eb6569eaebd270b5e\" exit_status:137 exited_at:{seconds:1756941963 nanos:287766212}"
Sep 3 23:26:03.403479 containerd[2010]: time="2025-09-03T23:26:03.401363709Z" level=info msg="TearDown network for sandbox \"2737af686a6c917791469a5541ed5c37c04a0e8e6badb17f13072fcd004d7560\" successfully"
Sep 3 23:26:03.403479 containerd[2010]: time="2025-09-03T23:26:03.401416245Z" level=info msg="StopPodSandbox for \"2737af686a6c917791469a5541ed5c37c04a0e8e6badb17f13072fcd004d7560\" returns successfully"
Sep 3 23:26:03.403479 containerd[2010]: time="2025-09-03T23:26:03.402382785Z" level=info msg="received exit event sandbox_id:\"2737af686a6c917791469a5541ed5c37c04a0e8e6badb17f13072fcd004d7560\" exit_status:137 exited_at:{seconds:1756941963 nanos:249552956}"
Sep 3 23:26:03.405949 containerd[2010]: time="2025-09-03T23:26:03.405114177Z" level=info msg="TearDown network for sandbox \"52d011b4cad7ed654456da596f6707863a0ec712687d3e1eb6569eaebd270b5e\" successfully"
Sep 3 23:26:03.405949 containerd[2010]: time="2025-09-03T23:26:03.405174357Z" level=info msg="StopPodSandbox for \"52d011b4cad7ed654456da596f6707863a0ec712687d3e1eb6569eaebd270b5e\" returns successfully"
Sep 3 23:26:03.408207 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-2737af686a6c917791469a5541ed5c37c04a0e8e6badb17f13072fcd004d7560-shm.mount: Deactivated successfully.
Sep 3 23:26:03.408391 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-52d011b4cad7ed654456da596f6707863a0ec712687d3e1eb6569eaebd270b5e-shm.mount: Deactivated successfully.
Sep 3 23:26:03.564372 kubelet[3572]: I0903 23:26:03.564286 3572 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/ef0f3dd1-581a-45d0-9060-b33d0e52f0d1-clustermesh-secrets\") pod \"ef0f3dd1-581a-45d0-9060-b33d0e52f0d1\" (UID: \"ef0f3dd1-581a-45d0-9060-b33d0e52f0d1\") "
Sep 3 23:26:03.564372 kubelet[3572]: I0903 23:26:03.564353 3572 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/cc57e784-a000-4411-8358-c633deb8fbb7-cilium-config-path\") pod \"cc57e784-a000-4411-8358-c633deb8fbb7\" (UID: \"cc57e784-a000-4411-8358-c633deb8fbb7\") "
Sep 3 23:26:03.567118 kubelet[3572]: I0903 23:26:03.564396 3572 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/ef0f3dd1-581a-45d0-9060-b33d0e52f0d1-cilium-run\") pod \"ef0f3dd1-581a-45d0-9060-b33d0e52f0d1\" (UID: \"ef0f3dd1-581a-45d0-9060-b33d0e52f0d1\") "
Sep 3 23:26:03.567118 kubelet[3572]: I0903 23:26:03.564434 3572 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/ef0f3dd1-581a-45d0-9060-b33d0e52f0d1-host-proc-sys-kernel\") pod \"ef0f3dd1-581a-45d0-9060-b33d0e52f0d1\" (UID: \"ef0f3dd1-581a-45d0-9060-b33d0e52f0d1\") "
Sep 3 23:26:03.567118 kubelet[3572]: I0903 23:26:03.564474 3572 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/ef0f3dd1-581a-45d0-9060-b33d0e52f0d1-hubble-tls\") pod \"ef0f3dd1-581a-45d0-9060-b33d0e52f0d1\" (UID: \"ef0f3dd1-581a-45d0-9060-b33d0e52f0d1\") "
Sep 3 23:26:03.567118 kubelet[3572]: I0903 23:26:03.564507 3572 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/ef0f3dd1-581a-45d0-9060-b33d0e52f0d1-hostproc\") pod \"ef0f3dd1-581a-45d0-9060-b33d0e52f0d1\" (UID: \"ef0f3dd1-581a-45d0-9060-b33d0e52f0d1\") "
Sep 3 23:26:03.567118 kubelet[3572]: I0903 23:26:03.564538 3572 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ef0f3dd1-581a-45d0-9060-b33d0e52f0d1-xtables-lock\") pod \"ef0f3dd1-581a-45d0-9060-b33d0e52f0d1\" (UID: \"ef0f3dd1-581a-45d0-9060-b33d0e52f0d1\") "
Sep 3 23:26:03.567118 kubelet[3572]: I0903 23:26:03.564573 3572 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dtc7g\" (UniqueName: \"kubernetes.io/projected/cc57e784-a000-4411-8358-c633deb8fbb7-kube-api-access-dtc7g\") pod \"cc57e784-a000-4411-8358-c633deb8fbb7\" (UID: \"cc57e784-a000-4411-8358-c633deb8fbb7\") "
Sep 3 23:26:03.567438 kubelet[3572]: I0903 23:26:03.564615 3572 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gxln4\" (UniqueName: \"kubernetes.io/projected/ef0f3dd1-581a-45d0-9060-b33d0e52f0d1-kube-api-access-gxln4\") pod \"ef0f3dd1-581a-45d0-9060-b33d0e52f0d1\" (UID: \"ef0f3dd1-581a-45d0-9060-b33d0e52f0d1\") "
Sep 3 23:26:03.567438 kubelet[3572]: I0903 23:26:03.564678 3572 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/ef0f3dd1-581a-45d0-9060-b33d0e52f0d1-bpf-maps\") pod \"ef0f3dd1-581a-45d0-9060-b33d0e52f0d1\" (UID: \"ef0f3dd1-581a-45d0-9060-b33d0e52f0d1\") "
Sep 3 23:26:03.567438 kubelet[3572]: I0903 23:26:03.564735 3572 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ef0f3dd1-581a-45d0-9060-b33d0e52f0d1-cilium-config-path\") pod \"ef0f3dd1-581a-45d0-9060-b33d0e52f0d1\" (UID: \"ef0f3dd1-581a-45d0-9060-b33d0e52f0d1\") "
Sep 3 23:26:03.567438 kubelet[3572]: I0903 23:26:03.564778 3572 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ef0f3dd1-581a-45d0-9060-b33d0e52f0d1-lib-modules\") pod \"ef0f3dd1-581a-45d0-9060-b33d0e52f0d1\" (UID: \"ef0f3dd1-581a-45d0-9060-b33d0e52f0d1\") "
Sep 3 23:26:03.567438 kubelet[3572]: I0903 23:26:03.564812 3572 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ef0f3dd1-581a-45d0-9060-b33d0e52f0d1-etc-cni-netd\") pod \"ef0f3dd1-581a-45d0-9060-b33d0e52f0d1\" (UID: \"ef0f3dd1-581a-45d0-9060-b33d0e52f0d1\") "
Sep 3 23:26:03.567438 kubelet[3572]: I0903 23:26:03.564850 3572 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/ef0f3dd1-581a-45d0-9060-b33d0e52f0d1-cilium-cgroup\") pod \"ef0f3dd1-581a-45d0-9060-b33d0e52f0d1\" (UID: \"ef0f3dd1-581a-45d0-9060-b33d0e52f0d1\") "
Sep 3 23:26:03.567943 kubelet[3572]: I0903 23:26:03.567040 3572 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ef0f3dd1-581a-45d0-9060-b33d0e52f0d1-hostproc" (OuterVolumeSpecName: "hostproc") pod "ef0f3dd1-581a-45d0-9060-b33d0e52f0d1" (UID: "ef0f3dd1-581a-45d0-9060-b33d0e52f0d1"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 3 23:26:03.567943 kubelet[3572]: I0903 23:26:03.564881 3572 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/ef0f3dd1-581a-45d0-9060-b33d0e52f0d1-cni-path\") pod \"ef0f3dd1-581a-45d0-9060-b33d0e52f0d1\" (UID: \"ef0f3dd1-581a-45d0-9060-b33d0e52f0d1\") "
Sep 3 23:26:03.567943 kubelet[3572]: I0903 23:26:03.567831 3572 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/ef0f3dd1-581a-45d0-9060-b33d0e52f0d1-host-proc-sys-net\") pod \"ef0f3dd1-581a-45d0-9060-b33d0e52f0d1\" (UID: \"ef0f3dd1-581a-45d0-9060-b33d0e52f0d1\") "
Sep 3 23:26:03.567943 kubelet[3572]: I0903 23:26:03.567954 3572 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/ef0f3dd1-581a-45d0-9060-b33d0e52f0d1-hostproc\") on node \"ip-172-31-24-220\" DevicePath \"\""
Sep 3 23:26:03.568398 kubelet[3572]: I0903 23:26:03.568011 3572 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ef0f3dd1-581a-45d0-9060-b33d0e52f0d1-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "ef0f3dd1-581a-45d0-9060-b33d0e52f0d1" (UID: "ef0f3dd1-581a-45d0-9060-b33d0e52f0d1"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 3 23:26:03.568398 kubelet[3572]: I0903 23:26:03.568063 3572 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ef0f3dd1-581a-45d0-9060-b33d0e52f0d1-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "ef0f3dd1-581a-45d0-9060-b33d0e52f0d1" (UID: "ef0f3dd1-581a-45d0-9060-b33d0e52f0d1"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 3 23:26:03.573972 kubelet[3572]: I0903 23:26:03.573726 3572 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ef0f3dd1-581a-45d0-9060-b33d0e52f0d1-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "ef0f3dd1-581a-45d0-9060-b33d0e52f0d1" (UID: "ef0f3dd1-581a-45d0-9060-b33d0e52f0d1"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 3 23:26:03.575086 kubelet[3572]: I0903 23:26:03.573851 3572 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ef0f3dd1-581a-45d0-9060-b33d0e52f0d1-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "ef0f3dd1-581a-45d0-9060-b33d0e52f0d1" (UID: "ef0f3dd1-581a-45d0-9060-b33d0e52f0d1"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 3 23:26:03.583593 kubelet[3572]: I0903 23:26:03.581060 3572 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ef0f3dd1-581a-45d0-9060-b33d0e52f0d1-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "ef0f3dd1-581a-45d0-9060-b33d0e52f0d1" (UID: "ef0f3dd1-581a-45d0-9060-b33d0e52f0d1"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Sep 3 23:26:03.583593 kubelet[3572]: I0903 23:26:03.581446 3572 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ef0f3dd1-581a-45d0-9060-b33d0e52f0d1-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "ef0f3dd1-581a-45d0-9060-b33d0e52f0d1" (UID: "ef0f3dd1-581a-45d0-9060-b33d0e52f0d1"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 3 23:26:03.583593 kubelet[3572]: I0903 23:26:03.581494 3572 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ef0f3dd1-581a-45d0-9060-b33d0e52f0d1-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "ef0f3dd1-581a-45d0-9060-b33d0e52f0d1" (UID: "ef0f3dd1-581a-45d0-9060-b33d0e52f0d1"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 3 23:26:03.586245 kubelet[3572]: I0903 23:26:03.586180 3572 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ef0f3dd1-581a-45d0-9060-b33d0e52f0d1-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "ef0f3dd1-581a-45d0-9060-b33d0e52f0d1" (UID: "ef0f3dd1-581a-45d0-9060-b33d0e52f0d1"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 3 23:26:03.586496 kubelet[3572]: I0903 23:26:03.586453 3572 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ef0f3dd1-581a-45d0-9060-b33d0e52f0d1-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "ef0f3dd1-581a-45d0-9060-b33d0e52f0d1" (UID: "ef0f3dd1-581a-45d0-9060-b33d0e52f0d1"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 3 23:26:03.586658 kubelet[3572]: I0903 23:26:03.586632 3572 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ef0f3dd1-581a-45d0-9060-b33d0e52f0d1-cni-path" (OuterVolumeSpecName: "cni-path") pod "ef0f3dd1-581a-45d0-9060-b33d0e52f0d1" (UID: "ef0f3dd1-581a-45d0-9060-b33d0e52f0d1"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 3 23:26:03.591280 kubelet[3572]: E0903 23:26:03.591169 3572 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Sep 3 23:26:03.600159 kubelet[3572]: I0903 23:26:03.600067 3572 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ef0f3dd1-581a-45d0-9060-b33d0e52f0d1-kube-api-access-gxln4" (OuterVolumeSpecName: "kube-api-access-gxln4") pod "ef0f3dd1-581a-45d0-9060-b33d0e52f0d1" (UID: "ef0f3dd1-581a-45d0-9060-b33d0e52f0d1"). InnerVolumeSpecName "kube-api-access-gxln4". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Sep 3 23:26:03.601416 kubelet[3572]: I0903 23:26:03.601278 3572 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ef0f3dd1-581a-45d0-9060-b33d0e52f0d1-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "ef0f3dd1-581a-45d0-9060-b33d0e52f0d1" (UID: "ef0f3dd1-581a-45d0-9060-b33d0e52f0d1"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Sep 3 23:26:03.621570 kubelet[3572]: I0903 23:26:03.621491 3572 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cc57e784-a000-4411-8358-c633deb8fbb7-kube-api-access-dtc7g" (OuterVolumeSpecName: "kube-api-access-dtc7g") pod "cc57e784-a000-4411-8358-c633deb8fbb7" (UID: "cc57e784-a000-4411-8358-c633deb8fbb7"). InnerVolumeSpecName "kube-api-access-dtc7g". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Sep 3 23:26:03.626011 kubelet[3572]: I0903 23:26:03.625928 3572 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ef0f3dd1-581a-45d0-9060-b33d0e52f0d1-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "ef0f3dd1-581a-45d0-9060-b33d0e52f0d1" (UID: "ef0f3dd1-581a-45d0-9060-b33d0e52f0d1"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Sep 3 23:26:03.629207 kubelet[3572]: I0903 23:26:03.629138 3572 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cc57e784-a000-4411-8358-c633deb8fbb7-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "cc57e784-a000-4411-8358-c633deb8fbb7" (UID: "cc57e784-a000-4411-8358-c633deb8fbb7"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Sep 3 23:26:03.668857 kubelet[3572]: I0903 23:26:03.668675 3572 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ef0f3dd1-581a-45d0-9060-b33d0e52f0d1-cilium-config-path\") on node \"ip-172-31-24-220\" DevicePath \"\""
Sep 3 23:26:03.669205 kubelet[3572]: I0903 23:26:03.668964 3572 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ef0f3dd1-581a-45d0-9060-b33d0e52f0d1-lib-modules\") on node \"ip-172-31-24-220\" DevicePath \"\""
Sep 3 23:26:03.669205 kubelet[3572]: I0903 23:26:03.668991 3572 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ef0f3dd1-581a-45d0-9060-b33d0e52f0d1-etc-cni-netd\") on node \"ip-172-31-24-220\" DevicePath \"\""
Sep 3 23:26:03.670088 kubelet[3572]: I0903 23:26:03.669762 3572 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/ef0f3dd1-581a-45d0-9060-b33d0e52f0d1-cilium-cgroup\") on node \"ip-172-31-24-220\" DevicePath \"\""
Sep 3 23:26:03.670088 kubelet[3572]: I0903 23:26:03.669846 3572 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/ef0f3dd1-581a-45d0-9060-b33d0e52f0d1-cni-path\") on node \"ip-172-31-24-220\" DevicePath \"\""
Sep 3 23:26:03.671135 kubelet[3572]: I0903 23:26:03.669870 3572 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/ef0f3dd1-581a-45d0-9060-b33d0e52f0d1-host-proc-sys-net\") on node \"ip-172-31-24-220\" DevicePath \"\""
Sep 3 23:26:03.671135 kubelet[3572]: I0903 23:26:03.671054 3572 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/ef0f3dd1-581a-45d0-9060-b33d0e52f0d1-clustermesh-secrets\") on node \"ip-172-31-24-220\" DevicePath \"\""
Sep 3 23:26:03.671135 kubelet[3572]: I0903 23:26:03.671084 3572 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/cc57e784-a000-4411-8358-c633deb8fbb7-cilium-config-path\") on node \"ip-172-31-24-220\" DevicePath \"\""
Sep 3 23:26:03.671605 kubelet[3572]: I0903 23:26:03.671106 3572 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/ef0f3dd1-581a-45d0-9060-b33d0e52f0d1-cilium-run\") on node \"ip-172-31-24-220\" DevicePath \"\""
Sep 3 23:26:03.671605 kubelet[3572]: I0903 23:26:03.671469 3572 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/ef0f3dd1-581a-45d0-9060-b33d0e52f0d1-host-proc-sys-kernel\") on node \"ip-172-31-24-220\" DevicePath \"\""
Sep 3 23:26:03.671605 kubelet[3572]: I0903 23:26:03.671523 3572 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/ef0f3dd1-581a-45d0-9060-b33d0e52f0d1-hubble-tls\") on node \"ip-172-31-24-220\" DevicePath \"\""
Sep 3 23:26:03.671605 kubelet[3572]: I0903 23:26:03.671552 3572 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ef0f3dd1-581a-45d0-9060-b33d0e52f0d1-xtables-lock\") on node \"ip-172-31-24-220\" DevicePath \"\""
Sep 3 23:26:03.672041 kubelet[3572]: I0903 23:26:03.671575 3572 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-dtc7g\" (UniqueName: \"kubernetes.io/projected/cc57e784-a000-4411-8358-c633deb8fbb7-kube-api-access-dtc7g\") on node \"ip-172-31-24-220\" DevicePath \"\""
Sep 3 23:26:03.672041 kubelet[3572]: I0903 23:26:03.671867 3572 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-gxln4\" (UniqueName: \"kubernetes.io/projected/ef0f3dd1-581a-45d0-9060-b33d0e52f0d1-kube-api-access-gxln4\") on node \"ip-172-31-24-220\" DevicePath \"\""
Sep 3 23:26:03.673025 kubelet[3572]: I0903 23:26:03.671992 3572 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/ef0f3dd1-581a-45d0-9060-b33d0e52f0d1-bpf-maps\") on node \"ip-172-31-24-220\" DevicePath \"\""
Sep 3 23:26:03.918060 kubelet[3572]: I0903 23:26:03.917188 3572 scope.go:117] "RemoveContainer" containerID="42d1fcf090df7a472fabea086a01eefdb88cb7d535c7cbae5090a7c931e3ced4"
Sep 3 23:26:03.923915 containerd[2010]: time="2025-09-03T23:26:03.923593680Z" level=info msg="RemoveContainer for \"42d1fcf090df7a472fabea086a01eefdb88cb7d535c7cbae5090a7c931e3ced4\""
Sep 3 23:26:03.942840 containerd[2010]: time="2025-09-03T23:26:03.942762492Z" level=info msg="RemoveContainer for \"42d1fcf090df7a472fabea086a01eefdb88cb7d535c7cbae5090a7c931e3ced4\" returns successfully"
Sep 3 23:26:03.943019 systemd[1]: Removed slice kubepods-besteffort-podcc57e784_a000_4411_8358_c633deb8fbb7.slice - libcontainer container kubepods-besteffort-podcc57e784_a000_4411_8358_c633deb8fbb7.slice.
Sep 3 23:26:03.950637 kubelet[3572]: I0903 23:26:03.950595 3572 scope.go:117] "RemoveContainer" containerID="42d1fcf090df7a472fabea086a01eefdb88cb7d535c7cbae5090a7c931e3ced4" Sep 3 23:26:03.953552 containerd[2010]: time="2025-09-03T23:26:03.953494548Z" level=error msg="ContainerStatus for \"42d1fcf090df7a472fabea086a01eefdb88cb7d535c7cbae5090a7c931e3ced4\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"42d1fcf090df7a472fabea086a01eefdb88cb7d535c7cbae5090a7c931e3ced4\": not found" Sep 3 23:26:03.954434 kubelet[3572]: E0903 23:26:03.954365 3572 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"42d1fcf090df7a472fabea086a01eefdb88cb7d535c7cbae5090a7c931e3ced4\": not found" containerID="42d1fcf090df7a472fabea086a01eefdb88cb7d535c7cbae5090a7c931e3ced4" Sep 3 23:26:03.954880 kubelet[3572]: I0903 23:26:03.954616 3572 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"42d1fcf090df7a472fabea086a01eefdb88cb7d535c7cbae5090a7c931e3ced4"} err="failed to get container status \"42d1fcf090df7a472fabea086a01eefdb88cb7d535c7cbae5090a7c931e3ced4\": rpc error: code = NotFound desc = an error occurred when try to find container \"42d1fcf090df7a472fabea086a01eefdb88cb7d535c7cbae5090a7c931e3ced4\": not found" Sep 3 23:26:03.954880 kubelet[3572]: I0903 23:26:03.954743 3572 scope.go:117] "RemoveContainer" containerID="49be781f2d25dbea5f039bf704bfa2ede7539ed1b66a7a45b3943b93fe8f4481" Sep 3 23:26:03.955689 systemd[1]: Removed slice kubepods-burstable-podef0f3dd1_581a_45d0_9060_b33d0e52f0d1.slice - libcontainer container kubepods-burstable-podef0f3dd1_581a_45d0_9060_b33d0e52f0d1.slice. Sep 3 23:26:03.955982 systemd[1]: kubepods-burstable-podef0f3dd1_581a_45d0_9060_b33d0e52f0d1.slice: Consumed 14.355s CPU time, 125.3M memory peak, 128K read from disk, 12.9M written to disk. 
Sep 3 23:26:03.962826 containerd[2010]: time="2025-09-03T23:26:03.962299308Z" level=info msg="RemoveContainer for \"49be781f2d25dbea5f039bf704bfa2ede7539ed1b66a7a45b3943b93fe8f4481\"" Sep 3 23:26:03.980082 containerd[2010]: time="2025-09-03T23:26:03.979644204Z" level=info msg="RemoveContainer for \"49be781f2d25dbea5f039bf704bfa2ede7539ed1b66a7a45b3943b93fe8f4481\" returns successfully" Sep 3 23:26:03.982946 kubelet[3572]: I0903 23:26:03.982512 3572 scope.go:117] "RemoveContainer" containerID="e12f01f018708c97d09170e22d42ae0c09031869702130fed6e2438007f66144" Sep 3 23:26:03.990479 containerd[2010]: time="2025-09-03T23:26:03.990158076Z" level=info msg="RemoveContainer for \"e12f01f018708c97d09170e22d42ae0c09031869702130fed6e2438007f66144\"" Sep 3 23:26:04.017168 containerd[2010]: time="2025-09-03T23:26:04.016767008Z" level=info msg="RemoveContainer for \"e12f01f018708c97d09170e22d42ae0c09031869702130fed6e2438007f66144\" returns successfully" Sep 3 23:26:04.020289 kubelet[3572]: I0903 23:26:04.020091 3572 scope.go:117] "RemoveContainer" containerID="6aa0d8d2a8aefdaaa28de44862c0a0f4bbda6c7689d8db2e8d9d0fb01533def1" Sep 3 23:26:04.028930 containerd[2010]: time="2025-09-03T23:26:04.028008548Z" level=info msg="RemoveContainer for \"6aa0d8d2a8aefdaaa28de44862c0a0f4bbda6c7689d8db2e8d9d0fb01533def1\"" Sep 3 23:26:04.037229 containerd[2010]: time="2025-09-03T23:26:04.037173848Z" level=info msg="RemoveContainer for \"6aa0d8d2a8aefdaaa28de44862c0a0f4bbda6c7689d8db2e8d9d0fb01533def1\" returns successfully" Sep 3 23:26:04.037767 kubelet[3572]: I0903 23:26:04.037722 3572 scope.go:117] "RemoveContainer" containerID="b9e82594dbb67c85b341bf7412d59d7ef2cead984e804c435d1589ec1683b633" Sep 3 23:26:04.040847 containerd[2010]: time="2025-09-03T23:26:04.040803020Z" level=info msg="RemoveContainer for \"b9e82594dbb67c85b341bf7412d59d7ef2cead984e804c435d1589ec1683b633\"" Sep 3 23:26:04.048097 containerd[2010]: time="2025-09-03T23:26:04.048004916Z" level=info msg="RemoveContainer for 
\"b9e82594dbb67c85b341bf7412d59d7ef2cead984e804c435d1589ec1683b633\" returns successfully" Sep 3 23:26:04.048583 kubelet[3572]: I0903 23:26:04.048327 3572 scope.go:117] "RemoveContainer" containerID="2b691e1cdd131479420e7f384fdeacd93d8aa6188cd640067b481e4e67d75d60" Sep 3 23:26:04.051009 containerd[2010]: time="2025-09-03T23:26:04.050958128Z" level=info msg="RemoveContainer for \"2b691e1cdd131479420e7f384fdeacd93d8aa6188cd640067b481e4e67d75d60\"" Sep 3 23:26:04.057678 containerd[2010]: time="2025-09-03T23:26:04.057596660Z" level=info msg="RemoveContainer for \"2b691e1cdd131479420e7f384fdeacd93d8aa6188cd640067b481e4e67d75d60\" returns successfully" Sep 3 23:26:04.058528 kubelet[3572]: I0903 23:26:04.058448 3572 scope.go:117] "RemoveContainer" containerID="49be781f2d25dbea5f039bf704bfa2ede7539ed1b66a7a45b3943b93fe8f4481" Sep 3 23:26:04.059166 containerd[2010]: time="2025-09-03T23:26:04.059113484Z" level=error msg="ContainerStatus for \"49be781f2d25dbea5f039bf704bfa2ede7539ed1b66a7a45b3943b93fe8f4481\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"49be781f2d25dbea5f039bf704bfa2ede7539ed1b66a7a45b3943b93fe8f4481\": not found" Sep 3 23:26:04.059801 kubelet[3572]: E0903 23:26:04.059566 3572 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"49be781f2d25dbea5f039bf704bfa2ede7539ed1b66a7a45b3943b93fe8f4481\": not found" containerID="49be781f2d25dbea5f039bf704bfa2ede7539ed1b66a7a45b3943b93fe8f4481" Sep 3 23:26:04.059801 kubelet[3572]: I0903 23:26:04.059626 3572 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"49be781f2d25dbea5f039bf704bfa2ede7539ed1b66a7a45b3943b93fe8f4481"} err="failed to get container status \"49be781f2d25dbea5f039bf704bfa2ede7539ed1b66a7a45b3943b93fe8f4481\": rpc error: code = NotFound desc = an error occurred when try to find container 
\"49be781f2d25dbea5f039bf704bfa2ede7539ed1b66a7a45b3943b93fe8f4481\": not found" Sep 3 23:26:04.059801 kubelet[3572]: I0903 23:26:04.059664 3572 scope.go:117] "RemoveContainer" containerID="e12f01f018708c97d09170e22d42ae0c09031869702130fed6e2438007f66144" Sep 3 23:26:04.060297 containerd[2010]: time="2025-09-03T23:26:04.060249392Z" level=error msg="ContainerStatus for \"e12f01f018708c97d09170e22d42ae0c09031869702130fed6e2438007f66144\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e12f01f018708c97d09170e22d42ae0c09031869702130fed6e2438007f66144\": not found" Sep 3 23:26:04.060872 kubelet[3572]: E0903 23:26:04.060834 3572 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e12f01f018708c97d09170e22d42ae0c09031869702130fed6e2438007f66144\": not found" containerID="e12f01f018708c97d09170e22d42ae0c09031869702130fed6e2438007f66144" Sep 3 23:26:04.061211 kubelet[3572]: I0903 23:26:04.061040 3572 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e12f01f018708c97d09170e22d42ae0c09031869702130fed6e2438007f66144"} err="failed to get container status \"e12f01f018708c97d09170e22d42ae0c09031869702130fed6e2438007f66144\": rpc error: code = NotFound desc = an error occurred when try to find container \"e12f01f018708c97d09170e22d42ae0c09031869702130fed6e2438007f66144\": not found" Sep 3 23:26:04.061211 kubelet[3572]: I0903 23:26:04.061083 3572 scope.go:117] "RemoveContainer" containerID="6aa0d8d2a8aefdaaa28de44862c0a0f4bbda6c7689d8db2e8d9d0fb01533def1" Sep 3 23:26:04.061578 containerd[2010]: time="2025-09-03T23:26:04.061513400Z" level=error msg="ContainerStatus for \"6aa0d8d2a8aefdaaa28de44862c0a0f4bbda6c7689d8db2e8d9d0fb01533def1\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container 
\"6aa0d8d2a8aefdaaa28de44862c0a0f4bbda6c7689d8db2e8d9d0fb01533def1\": not found" Sep 3 23:26:04.062145 kubelet[3572]: E0903 23:26:04.062083 3572 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"6aa0d8d2a8aefdaaa28de44862c0a0f4bbda6c7689d8db2e8d9d0fb01533def1\": not found" containerID="6aa0d8d2a8aefdaaa28de44862c0a0f4bbda6c7689d8db2e8d9d0fb01533def1" Sep 3 23:26:04.062300 kubelet[3572]: I0903 23:26:04.062142 3572 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"6aa0d8d2a8aefdaaa28de44862c0a0f4bbda6c7689d8db2e8d9d0fb01533def1"} err="failed to get container status \"6aa0d8d2a8aefdaaa28de44862c0a0f4bbda6c7689d8db2e8d9d0fb01533def1\": rpc error: code = NotFound desc = an error occurred when try to find container \"6aa0d8d2a8aefdaaa28de44862c0a0f4bbda6c7689d8db2e8d9d0fb01533def1\": not found" Sep 3 23:26:04.062300 kubelet[3572]: I0903 23:26:04.062180 3572 scope.go:117] "RemoveContainer" containerID="b9e82594dbb67c85b341bf7412d59d7ef2cead984e804c435d1589ec1683b633" Sep 3 23:26:04.062697 containerd[2010]: time="2025-09-03T23:26:04.062644160Z" level=error msg="ContainerStatus for \"b9e82594dbb67c85b341bf7412d59d7ef2cead984e804c435d1589ec1683b633\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b9e82594dbb67c85b341bf7412d59d7ef2cead984e804c435d1589ec1683b633\": not found" Sep 3 23:26:04.063321 kubelet[3572]: E0903 23:26:04.063285 3572 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b9e82594dbb67c85b341bf7412d59d7ef2cead984e804c435d1589ec1683b633\": not found" containerID="b9e82594dbb67c85b341bf7412d59d7ef2cead984e804c435d1589ec1683b633" Sep 3 23:26:04.063624 kubelet[3572]: I0903 23:26:04.063457 3572 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"containerd","ID":"b9e82594dbb67c85b341bf7412d59d7ef2cead984e804c435d1589ec1683b633"} err="failed to get container status \"b9e82594dbb67c85b341bf7412d59d7ef2cead984e804c435d1589ec1683b633\": rpc error: code = NotFound desc = an error occurred when try to find container \"b9e82594dbb67c85b341bf7412d59d7ef2cead984e804c435d1589ec1683b633\": not found" Sep 3 23:26:04.063624 kubelet[3572]: I0903 23:26:04.063498 3572 scope.go:117] "RemoveContainer" containerID="2b691e1cdd131479420e7f384fdeacd93d8aa6188cd640067b481e4e67d75d60" Sep 3 23:26:04.064196 containerd[2010]: time="2025-09-03T23:26:04.064147040Z" level=error msg="ContainerStatus for \"2b691e1cdd131479420e7f384fdeacd93d8aa6188cd640067b481e4e67d75d60\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"2b691e1cdd131479420e7f384fdeacd93d8aa6188cd640067b481e4e67d75d60\": not found" Sep 3 23:26:04.064565 kubelet[3572]: E0903 23:26:04.064533 3572 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"2b691e1cdd131479420e7f384fdeacd93d8aa6188cd640067b481e4e67d75d60\": not found" containerID="2b691e1cdd131479420e7f384fdeacd93d8aa6188cd640067b481e4e67d75d60" Sep 3 23:26:04.064770 kubelet[3572]: I0903 23:26:04.064734 3572 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"2b691e1cdd131479420e7f384fdeacd93d8aa6188cd640067b481e4e67d75d60"} err="failed to get container status \"2b691e1cdd131479420e7f384fdeacd93d8aa6188cd640067b481e4e67d75d60\": rpc error: code = NotFound desc = an error occurred when try to find container \"2b691e1cdd131479420e7f384fdeacd93d8aa6188cd640067b481e4e67d75d60\": not found" Sep 3 23:26:04.189399 systemd[1]: var-lib-kubelet-pods-cc57e784\x2da000\x2d4411\x2d8358\x2dc633deb8fbb7-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2ddtc7g.mount: Deactivated successfully. 
Sep 3 23:26:04.189607 systemd[1]: var-lib-kubelet-pods-ef0f3dd1\x2d581a\x2d45d0\x2d9060\x2db33d0e52f0d1-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dgxln4.mount: Deactivated successfully. Sep 3 23:26:04.189738 systemd[1]: var-lib-kubelet-pods-ef0f3dd1\x2d581a\x2d45d0\x2d9060\x2db33d0e52f0d1-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Sep 3 23:26:04.189862 systemd[1]: var-lib-kubelet-pods-ef0f3dd1\x2d581a\x2d45d0\x2d9060\x2db33d0e52f0d1-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Sep 3 23:26:04.963932 sshd[5136]: Connection closed by 139.178.89.65 port 38766 Sep 3 23:26:04.964233 sshd-session[5134]: pam_unix(sshd:session): session closed for user core Sep 3 23:26:04.972354 systemd-logind[1975]: Session 28 logged out. Waiting for processes to exit. Sep 3 23:26:04.973875 systemd[1]: sshd@27-172.31.24.220:22-139.178.89.65:38766.service: Deactivated successfully. Sep 3 23:26:04.981279 systemd[1]: session-28.scope: Deactivated successfully. Sep 3 23:26:04.981976 systemd[1]: session-28.scope: Consumed 1.895s CPU time, 23.6M memory peak. Sep 3 23:26:04.999711 systemd-logind[1975]: Removed session 28. Sep 3 23:26:05.003240 systemd[1]: Started sshd@28-172.31.24.220:22-139.178.89.65:38774.service - OpenSSH per-connection server daemon (139.178.89.65:38774). Sep 3 23:26:05.202915 sshd[5288]: Accepted publickey for core from 139.178.89.65 port 38774 ssh2: RSA SHA256:8eQAyPE99YHHVtDm+V4mP5sHyPbVNBHa6xDGC+ww79Y Sep 3 23:26:05.205420 sshd-session[5288]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 3 23:26:05.215649 systemd-logind[1975]: New session 29 of user core. Sep 3 23:26:05.218265 systemd[1]: Started session-29.scope - Session 29 of User core. 
Sep 3 23:26:05.246586 ntpd[1969]: Deleting interface #12 lxc_health, fe80::207a:4eff:feb0:8fa6%8#123, interface stats: received=0, sent=0, dropped=0, active_time=87 secs Sep 3 23:26:05.247100 ntpd[1969]: 3 Sep 23:26:05 ntpd[1969]: Deleting interface #12 lxc_health, fe80::207a:4eff:feb0:8fa6%8#123, interface stats: received=0, sent=0, dropped=0, active_time=87 secs Sep 3 23:26:05.316786 kubelet[3572]: I0903 23:26:05.316732 3572 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cc57e784-a000-4411-8358-c633deb8fbb7" path="/var/lib/kubelet/pods/cc57e784-a000-4411-8358-c633deb8fbb7/volumes" Sep 3 23:26:05.319264 kubelet[3572]: I0903 23:26:05.319188 3572 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ef0f3dd1-581a-45d0-9060-b33d0e52f0d1" path="/var/lib/kubelet/pods/ef0f3dd1-581a-45d0-9060-b33d0e52f0d1/volumes" Sep 3 23:26:05.696940 kubelet[3572]: I0903 23:26:05.694076 3572 setters.go:602] "Node became not ready" node="ip-172-31-24-220" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-03T23:26:05Z","lastTransitionTime":"2025-09-03T23:26:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Sep 3 23:26:07.070944 sshd[5290]: Connection closed by 139.178.89.65 port 38774 Sep 3 23:26:07.071769 sshd-session[5288]: pam_unix(sshd:session): session closed for user core Sep 3 23:26:07.081751 systemd-logind[1975]: Session 29 logged out. Waiting for processes to exit. Sep 3 23:26:07.085707 systemd[1]: sshd@28-172.31.24.220:22-139.178.89.65:38774.service: Deactivated successfully. Sep 3 23:26:07.092842 systemd[1]: session-29.scope: Deactivated successfully. Sep 3 23:26:07.094577 systemd[1]: session-29.scope: Consumed 1.608s CPU time, 23.5M memory peak. 
Sep 3 23:26:07.104725 kubelet[3572]: I0903 23:26:07.102987 3572 memory_manager.go:355] "RemoveStaleState removing state" podUID="ef0f3dd1-581a-45d0-9060-b33d0e52f0d1" containerName="cilium-agent" Sep 3 23:26:07.104725 kubelet[3572]: I0903 23:26:07.103033 3572 memory_manager.go:355] "RemoveStaleState removing state" podUID="cc57e784-a000-4411-8358-c633deb8fbb7" containerName="cilium-operator" Sep 3 23:26:07.128096 systemd-logind[1975]: Removed session 29. Sep 3 23:26:07.136570 systemd[1]: Started sshd@29-172.31.24.220:22-139.178.89.65:38786.service - OpenSSH per-connection server daemon (139.178.89.65:38786). Sep 3 23:26:07.163406 systemd[1]: Created slice kubepods-burstable-podb2242b52_49e2_4303_b2a9_c62bb095e60f.slice - libcontainer container kubepods-burstable-podb2242b52_49e2_4303_b2a9_c62bb095e60f.slice. Sep 3 23:26:07.211760 kubelet[3572]: I0903 23:26:07.209028 3572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b2242b52-49e2-4303-b2a9-c62bb095e60f-cilium-run\") pod \"cilium-7tz27\" (UID: \"b2242b52-49e2-4303-b2a9-c62bb095e60f\") " pod="kube-system/cilium-7tz27" Sep 3 23:26:07.211760 kubelet[3572]: I0903 23:26:07.209101 3572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b2242b52-49e2-4303-b2a9-c62bb095e60f-clustermesh-secrets\") pod \"cilium-7tz27\" (UID: \"b2242b52-49e2-4303-b2a9-c62bb095e60f\") " pod="kube-system/cilium-7tz27" Sep 3 23:26:07.211760 kubelet[3572]: I0903 23:26:07.209151 3572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b2242b52-49e2-4303-b2a9-c62bb095e60f-cni-path\") pod \"cilium-7tz27\" (UID: \"b2242b52-49e2-4303-b2a9-c62bb095e60f\") " pod="kube-system/cilium-7tz27" Sep 3 23:26:07.211760 kubelet[3572]: I0903 23:26:07.209189 3572 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b2242b52-49e2-4303-b2a9-c62bb095e60f-bpf-maps\") pod \"cilium-7tz27\" (UID: \"b2242b52-49e2-4303-b2a9-c62bb095e60f\") " pod="kube-system/cilium-7tz27" Sep 3 23:26:07.211760 kubelet[3572]: I0903 23:26:07.209245 3572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b2242b52-49e2-4303-b2a9-c62bb095e60f-host-proc-sys-kernel\") pod \"cilium-7tz27\" (UID: \"b2242b52-49e2-4303-b2a9-c62bb095e60f\") " pod="kube-system/cilium-7tz27" Sep 3 23:26:07.211760 kubelet[3572]: I0903 23:26:07.209293 3572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b2242b52-49e2-4303-b2a9-c62bb095e60f-etc-cni-netd\") pod \"cilium-7tz27\" (UID: \"b2242b52-49e2-4303-b2a9-c62bb095e60f\") " pod="kube-system/cilium-7tz27" Sep 3 23:26:07.212236 kubelet[3572]: I0903 23:26:07.209332 3572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b2242b52-49e2-4303-b2a9-c62bb095e60f-cilium-config-path\") pod \"cilium-7tz27\" (UID: \"b2242b52-49e2-4303-b2a9-c62bb095e60f\") " pod="kube-system/cilium-7tz27" Sep 3 23:26:07.212236 kubelet[3572]: I0903 23:26:07.209375 3572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b2242b52-49e2-4303-b2a9-c62bb095e60f-lib-modules\") pod \"cilium-7tz27\" (UID: \"b2242b52-49e2-4303-b2a9-c62bb095e60f\") " pod="kube-system/cilium-7tz27" Sep 3 23:26:07.212236 kubelet[3572]: I0903 23:26:07.209417 3572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: 
\"kubernetes.io/host-path/b2242b52-49e2-4303-b2a9-c62bb095e60f-xtables-lock\") pod \"cilium-7tz27\" (UID: \"b2242b52-49e2-4303-b2a9-c62bb095e60f\") " pod="kube-system/cilium-7tz27" Sep 3 23:26:07.212236 kubelet[3572]: I0903 23:26:07.209454 3572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gqhjn\" (UniqueName: \"kubernetes.io/projected/b2242b52-49e2-4303-b2a9-c62bb095e60f-kube-api-access-gqhjn\") pod \"cilium-7tz27\" (UID: \"b2242b52-49e2-4303-b2a9-c62bb095e60f\") " pod="kube-system/cilium-7tz27" Sep 3 23:26:07.212236 kubelet[3572]: I0903 23:26:07.209520 3572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/b2242b52-49e2-4303-b2a9-c62bb095e60f-cilium-ipsec-secrets\") pod \"cilium-7tz27\" (UID: \"b2242b52-49e2-4303-b2a9-c62bb095e60f\") " pod="kube-system/cilium-7tz27" Sep 3 23:26:07.212482 kubelet[3572]: I0903 23:26:07.209555 3572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b2242b52-49e2-4303-b2a9-c62bb095e60f-host-proc-sys-net\") pod \"cilium-7tz27\" (UID: \"b2242b52-49e2-4303-b2a9-c62bb095e60f\") " pod="kube-system/cilium-7tz27" Sep 3 23:26:07.212482 kubelet[3572]: I0903 23:26:07.209594 3572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b2242b52-49e2-4303-b2a9-c62bb095e60f-hostproc\") pod \"cilium-7tz27\" (UID: \"b2242b52-49e2-4303-b2a9-c62bb095e60f\") " pod="kube-system/cilium-7tz27" Sep 3 23:26:07.212482 kubelet[3572]: I0903 23:26:07.209644 3572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b2242b52-49e2-4303-b2a9-c62bb095e60f-cilium-cgroup\") pod \"cilium-7tz27\" (UID: 
\"b2242b52-49e2-4303-b2a9-c62bb095e60f\") " pod="kube-system/cilium-7tz27" Sep 3 23:26:07.212482 kubelet[3572]: I0903 23:26:07.209710 3572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b2242b52-49e2-4303-b2a9-c62bb095e60f-hubble-tls\") pod \"cilium-7tz27\" (UID: \"b2242b52-49e2-4303-b2a9-c62bb095e60f\") " pod="kube-system/cilium-7tz27" Sep 3 23:26:07.397461 sshd[5300]: Accepted publickey for core from 139.178.89.65 port 38786 ssh2: RSA SHA256:8eQAyPE99YHHVtDm+V4mP5sHyPbVNBHa6xDGC+ww79Y Sep 3 23:26:07.398821 sshd-session[5300]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 3 23:26:07.408594 systemd-logind[1975]: New session 30 of user core. Sep 3 23:26:07.419205 systemd[1]: Started session-30.scope - Session 30 of User core. Sep 3 23:26:07.477160 containerd[2010]: time="2025-09-03T23:26:07.477096421Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-7tz27,Uid:b2242b52-49e2-4303-b2a9-c62bb095e60f,Namespace:kube-system,Attempt:0,}" Sep 3 23:26:07.519751 containerd[2010]: time="2025-09-03T23:26:07.519661213Z" level=info msg="connecting to shim 1dcf4e64ce6358b055ab01cb176d86da5117a53d47e5f82141c942745bd6326e" address="unix:///run/containerd/s/e8a25b0990dc027219fa25e7be1b1a6aa7ca24758e254f86c11bdcf46036c23d" namespace=k8s.io protocol=ttrpc version=3 Sep 3 23:26:07.539870 sshd[5306]: Connection closed by 139.178.89.65 port 38786 Sep 3 23:26:07.541167 sshd-session[5300]: pam_unix(sshd:session): session closed for user core Sep 3 23:26:07.549354 systemd[1]: session-30.scope: Deactivated successfully. Sep 3 23:26:07.551189 systemd[1]: sshd@29-172.31.24.220:22-139.178.89.65:38786.service: Deactivated successfully. Sep 3 23:26:07.560959 systemd-logind[1975]: Session 30 logged out. Waiting for processes to exit. Sep 3 23:26:07.583542 systemd-logind[1975]: Removed session 30. 
Sep 3 23:26:07.592218 systemd[1]: Started cri-containerd-1dcf4e64ce6358b055ab01cb176d86da5117a53d47e5f82141c942745bd6326e.scope - libcontainer container 1dcf4e64ce6358b055ab01cb176d86da5117a53d47e5f82141c942745bd6326e. Sep 3 23:26:07.596862 systemd[1]: Started sshd@30-172.31.24.220:22-139.178.89.65:38796.service - OpenSSH per-connection server daemon (139.178.89.65:38796). Sep 3 23:26:07.687334 containerd[2010]: time="2025-09-03T23:26:07.687149534Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-7tz27,Uid:b2242b52-49e2-4303-b2a9-c62bb095e60f,Namespace:kube-system,Attempt:0,} returns sandbox id \"1dcf4e64ce6358b055ab01cb176d86da5117a53d47e5f82141c942745bd6326e\"" Sep 3 23:26:07.698239 containerd[2010]: time="2025-09-03T23:26:07.698078582Z" level=info msg="CreateContainer within sandbox \"1dcf4e64ce6358b055ab01cb176d86da5117a53d47e5f82141c942745bd6326e\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 3 23:26:07.717246 containerd[2010]: time="2025-09-03T23:26:07.717194126Z" level=info msg="Container d9ae7d481658b3c4a51e8f06be993ad35173a703fe4f8ead01f43196e50f03c5: CDI devices from CRI Config.CDIDevices: []" Sep 3 23:26:07.730302 containerd[2010]: time="2025-09-03T23:26:07.730124834Z" level=info msg="CreateContainer within sandbox \"1dcf4e64ce6358b055ab01cb176d86da5117a53d47e5f82141c942745bd6326e\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"d9ae7d481658b3c4a51e8f06be993ad35173a703fe4f8ead01f43196e50f03c5\"" Sep 3 23:26:07.731258 containerd[2010]: time="2025-09-03T23:26:07.731151542Z" level=info msg="StartContainer for \"d9ae7d481658b3c4a51e8f06be993ad35173a703fe4f8ead01f43196e50f03c5\"" Sep 3 23:26:07.734815 containerd[2010]: time="2025-09-03T23:26:07.734640230Z" level=info msg="connecting to shim d9ae7d481658b3c4a51e8f06be993ad35173a703fe4f8ead01f43196e50f03c5" address="unix:///run/containerd/s/e8a25b0990dc027219fa25e7be1b1a6aa7ca24758e254f86c11bdcf46036c23d" protocol=ttrpc version=3 Sep 3 
23:26:07.771188 systemd[1]: Started cri-containerd-d9ae7d481658b3c4a51e8f06be993ad35173a703fe4f8ead01f43196e50f03c5.scope - libcontainer container d9ae7d481658b3c4a51e8f06be993ad35173a703fe4f8ead01f43196e50f03c5. Sep 3 23:26:07.835822 containerd[2010]: time="2025-09-03T23:26:07.835758687Z" level=info msg="StartContainer for \"d9ae7d481658b3c4a51e8f06be993ad35173a703fe4f8ead01f43196e50f03c5\" returns successfully" Sep 3 23:26:07.855260 systemd[1]: cri-containerd-d9ae7d481658b3c4a51e8f06be993ad35173a703fe4f8ead01f43196e50f03c5.scope: Deactivated successfully. Sep 3 23:26:07.859495 sshd[5347]: Accepted publickey for core from 139.178.89.65 port 38796 ssh2: RSA SHA256:8eQAyPE99YHHVtDm+V4mP5sHyPbVNBHa6xDGC+ww79Y Sep 3 23:26:07.861689 containerd[2010]: time="2025-09-03T23:26:07.861614727Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d9ae7d481658b3c4a51e8f06be993ad35173a703fe4f8ead01f43196e50f03c5\" id:\"d9ae7d481658b3c4a51e8f06be993ad35173a703fe4f8ead01f43196e50f03c5\" pid:5373 exited_at:{seconds:1756941967 nanos:860387175}" Sep 3 23:26:07.861987 containerd[2010]: time="2025-09-03T23:26:07.861306987Z" level=info msg="received exit event container_id:\"d9ae7d481658b3c4a51e8f06be993ad35173a703fe4f8ead01f43196e50f03c5\" id:\"d9ae7d481658b3c4a51e8f06be993ad35173a703fe4f8ead01f43196e50f03c5\" pid:5373 exited_at:{seconds:1756941967 nanos:860387175}" Sep 3 23:26:07.864780 sshd-session[5347]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 3 23:26:07.880496 systemd-logind[1975]: New session 31 of user core. Sep 3 23:26:07.887237 systemd[1]: Started session-31.scope - Session 31 of User core. 
Sep 3 23:26:07.973747 containerd[2010]: time="2025-09-03T23:26:07.973536676Z" level=info msg="CreateContainer within sandbox \"1dcf4e64ce6358b055ab01cb176d86da5117a53d47e5f82141c942745bd6326e\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Sep 3 23:26:07.993047 containerd[2010]: time="2025-09-03T23:26:07.990103132Z" level=info msg="Container b8090d73d131ff7908ea393e6b60e5b1b9e0e97596dbe0221c1257819111c75f: CDI devices from CRI Config.CDIDevices: []" Sep 3 23:26:08.007937 containerd[2010]: time="2025-09-03T23:26:08.006612336Z" level=info msg="CreateContainer within sandbox \"1dcf4e64ce6358b055ab01cb176d86da5117a53d47e5f82141c942745bd6326e\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"b8090d73d131ff7908ea393e6b60e5b1b9e0e97596dbe0221c1257819111c75f\"" Sep 3 23:26:08.009604 containerd[2010]: time="2025-09-03T23:26:08.009537132Z" level=info msg="StartContainer for \"b8090d73d131ff7908ea393e6b60e5b1b9e0e97596dbe0221c1257819111c75f\"" Sep 3 23:26:08.017621 containerd[2010]: time="2025-09-03T23:26:08.017546316Z" level=info msg="connecting to shim b8090d73d131ff7908ea393e6b60e5b1b9e0e97596dbe0221c1257819111c75f" address="unix:///run/containerd/s/e8a25b0990dc027219fa25e7be1b1a6aa7ca24758e254f86c11bdcf46036c23d" protocol=ttrpc version=3 Sep 3 23:26:08.072184 systemd[1]: Started cri-containerd-b8090d73d131ff7908ea393e6b60e5b1b9e0e97596dbe0221c1257819111c75f.scope - libcontainer container b8090d73d131ff7908ea393e6b60e5b1b9e0e97596dbe0221c1257819111c75f. Sep 3 23:26:08.178389 containerd[2010]: time="2025-09-03T23:26:08.178305661Z" level=info msg="StartContainer for \"b8090d73d131ff7908ea393e6b60e5b1b9e0e97596dbe0221c1257819111c75f\" returns successfully" Sep 3 23:26:08.207919 systemd[1]: cri-containerd-b8090d73d131ff7908ea393e6b60e5b1b9e0e97596dbe0221c1257819111c75f.scope: Deactivated successfully. 
Sep 3 23:26:08.209917 containerd[2010]: time="2025-09-03T23:26:08.209436409Z" level=info msg="received exit event container_id:\"b8090d73d131ff7908ea393e6b60e5b1b9e0e97596dbe0221c1257819111c75f\" id:\"b8090d73d131ff7908ea393e6b60e5b1b9e0e97596dbe0221c1257819111c75f\" pid:5421 exited_at:{seconds:1756941968 nanos:209059849}"
Sep 3 23:26:08.213576 containerd[2010]: time="2025-09-03T23:26:08.213502813Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b8090d73d131ff7908ea393e6b60e5b1b9e0e97596dbe0221c1257819111c75f\" id:\"b8090d73d131ff7908ea393e6b60e5b1b9e0e97596dbe0221c1257819111c75f\" pid:5421 exited_at:{seconds:1756941968 nanos:209059849}"
Sep 3 23:26:08.593268 kubelet[3572]: E0903 23:26:08.593171 3572 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Sep 3 23:26:08.988480 containerd[2010]: time="2025-09-03T23:26:08.988411409Z" level=info msg="CreateContainer within sandbox \"1dcf4e64ce6358b055ab01cb176d86da5117a53d47e5f82141c942745bd6326e\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Sep 3 23:26:09.018952 containerd[2010]: time="2025-09-03T23:26:09.016118281Z" level=info msg="Container eed359ebe376a810282c35c803330cee8059be173aa94e8500d486d3a06dd09e: CDI devices from CRI Config.CDIDevices: []"
Sep 3 23:26:09.042105 containerd[2010]: time="2025-09-03T23:26:09.041946889Z" level=info msg="CreateContainer within sandbox \"1dcf4e64ce6358b055ab01cb176d86da5117a53d47e5f82141c942745bd6326e\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"eed359ebe376a810282c35c803330cee8059be173aa94e8500d486d3a06dd09e\""
Sep 3 23:26:09.043759 containerd[2010]: time="2025-09-03T23:26:09.043669513Z" level=info msg="StartContainer for \"eed359ebe376a810282c35c803330cee8059be173aa94e8500d486d3a06dd09e\""
Sep 3 23:26:09.046909 containerd[2010]: time="2025-09-03T23:26:09.046833013Z" level=info msg="connecting to shim eed359ebe376a810282c35c803330cee8059be173aa94e8500d486d3a06dd09e" address="unix:///run/containerd/s/e8a25b0990dc027219fa25e7be1b1a6aa7ca24758e254f86c11bdcf46036c23d" protocol=ttrpc version=3
Sep 3 23:26:09.094209 systemd[1]: Started cri-containerd-eed359ebe376a810282c35c803330cee8059be173aa94e8500d486d3a06dd09e.scope - libcontainer container eed359ebe376a810282c35c803330cee8059be173aa94e8500d486d3a06dd09e.
Sep 3 23:26:09.184073 systemd[1]: cri-containerd-eed359ebe376a810282c35c803330cee8059be173aa94e8500d486d3a06dd09e.scope: Deactivated successfully.
Sep 3 23:26:09.187747 containerd[2010]: time="2025-09-03T23:26:09.187583882Z" level=info msg="received exit event container_id:\"eed359ebe376a810282c35c803330cee8059be173aa94e8500d486d3a06dd09e\" id:\"eed359ebe376a810282c35c803330cee8059be173aa94e8500d486d3a06dd09e\" pid:5469 exited_at:{seconds:1756941969 nanos:187099442}"
Sep 3 23:26:09.190256 containerd[2010]: time="2025-09-03T23:26:09.190201106Z" level=info msg="TaskExit event in podsandbox handler container_id:\"eed359ebe376a810282c35c803330cee8059be173aa94e8500d486d3a06dd09e\" id:\"eed359ebe376a810282c35c803330cee8059be173aa94e8500d486d3a06dd09e\" pid:5469 exited_at:{seconds:1756941969 nanos:187099442}"
Sep 3 23:26:09.191239 containerd[2010]: time="2025-09-03T23:26:09.191190626Z" level=info msg="StartContainer for \"eed359ebe376a810282c35c803330cee8059be173aa94e8500d486d3a06dd09e\" returns successfully"
Sep 3 23:26:09.234587 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-eed359ebe376a810282c35c803330cee8059be173aa94e8500d486d3a06dd09e-rootfs.mount: Deactivated successfully.
Sep 3 23:26:09.992294 containerd[2010]: time="2025-09-03T23:26:09.990316902Z" level=info msg="CreateContainer within sandbox \"1dcf4e64ce6358b055ab01cb176d86da5117a53d47e5f82141c942745bd6326e\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Sep 3 23:26:10.013315 containerd[2010]: time="2025-09-03T23:26:10.013249742Z" level=info msg="Container 5aa8717cff56d0da084b218a141998748af5be6c3511786329646faddd8c1a72: CDI devices from CRI Config.CDIDevices: []"
Sep 3 23:26:10.030831 containerd[2010]: time="2025-09-03T23:26:10.030770402Z" level=info msg="CreateContainer within sandbox \"1dcf4e64ce6358b055ab01cb176d86da5117a53d47e5f82141c942745bd6326e\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"5aa8717cff56d0da084b218a141998748af5be6c3511786329646faddd8c1a72\""
Sep 3 23:26:10.033461 containerd[2010]: time="2025-09-03T23:26:10.033414086Z" level=info msg="StartContainer for \"5aa8717cff56d0da084b218a141998748af5be6c3511786329646faddd8c1a72\""
Sep 3 23:26:10.036370 containerd[2010]: time="2025-09-03T23:26:10.036162638Z" level=info msg="connecting to shim 5aa8717cff56d0da084b218a141998748af5be6c3511786329646faddd8c1a72" address="unix:///run/containerd/s/e8a25b0990dc027219fa25e7be1b1a6aa7ca24758e254f86c11bdcf46036c23d" protocol=ttrpc version=3
Sep 3 23:26:10.085211 systemd[1]: Started cri-containerd-5aa8717cff56d0da084b218a141998748af5be6c3511786329646faddd8c1a72.scope - libcontainer container 5aa8717cff56d0da084b218a141998748af5be6c3511786329646faddd8c1a72.
Sep 3 23:26:10.142125 systemd[1]: cri-containerd-5aa8717cff56d0da084b218a141998748af5be6c3511786329646faddd8c1a72.scope: Deactivated successfully.
Sep 3 23:26:10.145851 containerd[2010]: time="2025-09-03T23:26:10.145797758Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5aa8717cff56d0da084b218a141998748af5be6c3511786329646faddd8c1a72\" id:\"5aa8717cff56d0da084b218a141998748af5be6c3511786329646faddd8c1a72\" pid:5514 exited_at:{seconds:1756941970 nanos:144859646}"
Sep 3 23:26:10.148274 containerd[2010]: time="2025-09-03T23:26:10.148057886Z" level=info msg="received exit event container_id:\"5aa8717cff56d0da084b218a141998748af5be6c3511786329646faddd8c1a72\" id:\"5aa8717cff56d0da084b218a141998748af5be6c3511786329646faddd8c1a72\" pid:5514 exited_at:{seconds:1756941970 nanos:144859646}"
Sep 3 23:26:10.162658 containerd[2010]: time="2025-09-03T23:26:10.162610527Z" level=info msg="StartContainer for \"5aa8717cff56d0da084b218a141998748af5be6c3511786329646faddd8c1a72\" returns successfully"
Sep 3 23:26:10.194471 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5aa8717cff56d0da084b218a141998748af5be6c3511786329646faddd8c1a72-rootfs.mount: Deactivated successfully.
Sep 3 23:26:11.008920 containerd[2010]: time="2025-09-03T23:26:11.007147827Z" level=info msg="CreateContainer within sandbox \"1dcf4e64ce6358b055ab01cb176d86da5117a53d47e5f82141c942745bd6326e\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Sep 3 23:26:11.031298 containerd[2010]: time="2025-09-03T23:26:11.031235955Z" level=info msg="Container c6a784f574e513023afe838d0e09bb2e2d145762666056edb548a9e696527e02: CDI devices from CRI Config.CDIDevices: []"
Sep 3 23:26:11.045346 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3670432723.mount: Deactivated successfully.
Sep 3 23:26:11.056077 containerd[2010]: time="2025-09-03T23:26:11.055881963Z" level=info msg="CreateContainer within sandbox \"1dcf4e64ce6358b055ab01cb176d86da5117a53d47e5f82141c942745bd6326e\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"c6a784f574e513023afe838d0e09bb2e2d145762666056edb548a9e696527e02\""
Sep 3 23:26:11.057497 containerd[2010]: time="2025-09-03T23:26:11.057440175Z" level=info msg="StartContainer for \"c6a784f574e513023afe838d0e09bb2e2d145762666056edb548a9e696527e02\""
Sep 3 23:26:11.061168 containerd[2010]: time="2025-09-03T23:26:11.060985587Z" level=info msg="connecting to shim c6a784f574e513023afe838d0e09bb2e2d145762666056edb548a9e696527e02" address="unix:///run/containerd/s/e8a25b0990dc027219fa25e7be1b1a6aa7ca24758e254f86c11bdcf46036c23d" protocol=ttrpc version=3
Sep 3 23:26:11.108205 systemd[1]: Started cri-containerd-c6a784f574e513023afe838d0e09bb2e2d145762666056edb548a9e696527e02.scope - libcontainer container c6a784f574e513023afe838d0e09bb2e2d145762666056edb548a9e696527e02.
Sep 3 23:26:11.193717 containerd[2010]: time="2025-09-03T23:26:11.193656844Z" level=info msg="StartContainer for \"c6a784f574e513023afe838d0e09bb2e2d145762666056edb548a9e696527e02\" returns successfully"
Sep 3 23:26:11.327574 containerd[2010]: time="2025-09-03T23:26:11.327387232Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c6a784f574e513023afe838d0e09bb2e2d145762666056edb548a9e696527e02\" id:\"0e6e0fec284ec92c4cc71174e60c3c98738dd184b984d0efb4eed6236b7b574b\" pid:5580 exited_at:{seconds:1756941971 nanos:326418652}"
Sep 3 23:26:12.062933 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce))
Sep 3 23:26:12.501029 containerd[2010]: time="2025-09-03T23:26:12.500880246Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c6a784f574e513023afe838d0e09bb2e2d145762666056edb548a9e696527e02\" id:\"8fa2dd3effee2d4c6f92aa3a1182f55fb0a2ae6e6d123b10383af5bb7dba302a\" pid:5657 exit_status:1 exited_at:{seconds:1756941972 nanos:500139342}"
Sep 3 23:26:14.764598 containerd[2010]: time="2025-09-03T23:26:14.764327313Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c6a784f574e513023afe838d0e09bb2e2d145762666056edb548a9e696527e02\" id:\"3e91ca050f1b13c8351bf2736031059ae2ec03690042f25e098180e5df0d4778\" pid:5764 exit_status:1 exited_at:{seconds:1756941974 nanos:763172721}"
Sep 3 23:26:16.590487 systemd-networkd[1821]: lxc_health: Link UP
Sep 3 23:26:16.601432 (udev-worker)[6085]: Network interface NamePolicy= disabled on kernel command line.
Sep 3 23:26:16.614064 systemd-networkd[1821]: lxc_health: Gained carrier
Sep 3 23:26:17.068338 containerd[2010]: time="2025-09-03T23:26:17.068197425Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c6a784f574e513023afe838d0e09bb2e2d145762666056edb548a9e696527e02\" id:\"d6fac0b67912df274b6915f28c335d2422b3ab36f89aad612ad698e203d8541b\" pid:6116 exited_at:{seconds:1756941977 nanos:67086285}"
Sep 3 23:26:17.543951 kubelet[3572]: I0903 23:26:17.542826 3572 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-7tz27" podStartSLOduration=10.542805455 podStartE2EDuration="10.542805455s" podCreationTimestamp="2025-09-03 23:26:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-03 23:26:12.118309636 +0000 UTC m=+129.120892530" watchObservedRunningTime="2025-09-03 23:26:17.542805455 +0000 UTC m=+134.545388325"
Sep 3 23:26:17.670733 systemd-networkd[1821]: lxc_health: Gained IPv6LL
Sep 3 23:26:19.343333 containerd[2010]: time="2025-09-03T23:26:19.343263852Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c6a784f574e513023afe838d0e09bb2e2d145762666056edb548a9e696527e02\" id:\"d505b65ebb1f1365ae5b8ccf048e245e85fc11f7e4f08340a89f8b782b4b37f2\" pid:6144 exited_at:{seconds:1756941979 nanos:341872608}"
Sep 3 23:26:20.245925 ntpd[1969]: Listen normally on 15 lxc_health [fe80::b027:92ff:fee2:4448%14]:123
Sep 3 23:26:20.246510 ntpd[1969]: 3 Sep 23:26:20 ntpd[1969]: Listen normally on 15 lxc_health [fe80::b027:92ff:fee2:4448%14]:123
Sep 3 23:26:21.590721 containerd[2010]: time="2025-09-03T23:26:21.590196519Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c6a784f574e513023afe838d0e09bb2e2d145762666056edb548a9e696527e02\" id:\"db646aec4b2f77fa8b473d51dcd8741e4bab74b699c810af387002560f824a89\" pid:6171 exited_at:{seconds:1756941981 nanos:589361307}"
Sep 3 23:26:23.868851 containerd[2010]: time="2025-09-03T23:26:23.868788103Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c6a784f574e513023afe838d0e09bb2e2d145762666056edb548a9e696527e02\" id:\"bb0641863ac50dd6971e3a7c51d4286ec907a59f179759b3ba5ded759f077113\" pid:6196 exited_at:{seconds:1756941983 nanos:868376215}"
Sep 3 23:26:23.905908 sshd[5398]: Connection closed by 139.178.89.65 port 38796
Sep 3 23:26:23.908228 sshd-session[5347]: pam_unix(sshd:session): session closed for user core
Sep 3 23:26:23.916220 systemd[1]: sshd@30-172.31.24.220:22-139.178.89.65:38796.service: Deactivated successfully.
Sep 3 23:26:23.923776 systemd[1]: session-31.scope: Deactivated successfully.
Sep 3 23:26:23.931715 systemd-logind[1975]: Session 31 logged out. Waiting for processes to exit.
Sep 3 23:26:23.935712 systemd-logind[1975]: Removed session 31.
Sep 3 23:26:37.665063 systemd[1]: cri-containerd-3a7b96bc7c02fd1535a7f8cc866788c905c2eccbce9ed6779189997a22e843ff.scope: Deactivated successfully.
Sep 3 23:26:37.665661 systemd[1]: cri-containerd-3a7b96bc7c02fd1535a7f8cc866788c905c2eccbce9ed6779189997a22e843ff.scope: Consumed 4.750s CPU time, 54.3M memory peak.
Sep 3 23:26:37.669648 containerd[2010]: time="2025-09-03T23:26:37.668194243Z" level=info msg="received exit event container_id:\"3a7b96bc7c02fd1535a7f8cc866788c905c2eccbce9ed6779189997a22e843ff\" id:\"3a7b96bc7c02fd1535a7f8cc866788c905c2eccbce9ed6779189997a22e843ff\" pid:3148 exit_status:1 exited_at:{seconds:1756941997 nanos:664600555}"
Sep 3 23:26:37.669648 containerd[2010]: time="2025-09-03T23:26:37.668382463Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3a7b96bc7c02fd1535a7f8cc866788c905c2eccbce9ed6779189997a22e843ff\" id:\"3a7b96bc7c02fd1535a7f8cc866788c905c2eccbce9ed6779189997a22e843ff\" pid:3148 exit_status:1 exited_at:{seconds:1756941997 nanos:664600555}"
Sep 3 23:26:37.715548 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3a7b96bc7c02fd1535a7f8cc866788c905c2eccbce9ed6779189997a22e843ff-rootfs.mount: Deactivated successfully.
Sep 3 23:26:38.102808 kubelet[3572]: I0903 23:26:38.102364 3572 scope.go:117] "RemoveContainer" containerID="3a7b96bc7c02fd1535a7f8cc866788c905c2eccbce9ed6779189997a22e843ff"
Sep 3 23:26:38.108179 containerd[2010]: time="2025-09-03T23:26:38.108085901Z" level=info msg="CreateContainer within sandbox \"faa16eb544f596b69cd9bd01dae9cced4dccbee0feedda8852f5cc11e5e33f0d\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Sep 3 23:26:38.129931 containerd[2010]: time="2025-09-03T23:26:38.127197725Z" level=info msg="Container 7bb181a105ea837522ccc3c4f986ce9ba338b08e67ee303d361fa85ce09bf38d: CDI devices from CRI Config.CDIDevices: []"
Sep 3 23:26:38.148526 containerd[2010]: time="2025-09-03T23:26:38.148447602Z" level=info msg="CreateContainer within sandbox \"faa16eb544f596b69cd9bd01dae9cced4dccbee0feedda8852f5cc11e5e33f0d\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"7bb181a105ea837522ccc3c4f986ce9ba338b08e67ee303d361fa85ce09bf38d\""
Sep 3 23:26:38.149554 containerd[2010]: time="2025-09-03T23:26:38.149451522Z" level=info msg="StartContainer for \"7bb181a105ea837522ccc3c4f986ce9ba338b08e67ee303d361fa85ce09bf38d\""
Sep 3 23:26:38.151774 containerd[2010]: time="2025-09-03T23:26:38.151699770Z" level=info msg="connecting to shim 7bb181a105ea837522ccc3c4f986ce9ba338b08e67ee303d361fa85ce09bf38d" address="unix:///run/containerd/s/ac7b12b8f40b7b4e4306c3531d5b5667c0d9aa00abca50d0e6228682e67d0ecf" protocol=ttrpc version=3
Sep 3 23:26:38.201217 systemd[1]: Started cri-containerd-7bb181a105ea837522ccc3c4f986ce9ba338b08e67ee303d361fa85ce09bf38d.scope - libcontainer container 7bb181a105ea837522ccc3c4f986ce9ba338b08e67ee303d361fa85ce09bf38d.
Sep 3 23:26:38.296310 containerd[2010]: time="2025-09-03T23:26:38.296238654Z" level=info msg="StartContainer for \"7bb181a105ea837522ccc3c4f986ce9ba338b08e67ee303d361fa85ce09bf38d\" returns successfully"
Sep 3 23:26:43.410235 systemd[1]: cri-containerd-aaf93d613375e01a2833143fb31d4721bc43f89cf7f4530f4030a2601781814b.scope: Deactivated successfully.
Sep 3 23:26:43.410769 systemd[1]: cri-containerd-aaf93d613375e01a2833143fb31d4721bc43f89cf7f4530f4030a2601781814b.scope: Consumed 4.311s CPU time, 20.9M memory peak.
Sep 3 23:26:43.417616 containerd[2010]: time="2025-09-03T23:26:43.417551220Z" level=info msg="received exit event container_id:\"aaf93d613375e01a2833143fb31d4721bc43f89cf7f4530f4030a2601781814b\" id:\"aaf93d613375e01a2833143fb31d4721bc43f89cf7f4530f4030a2601781814b\" pid:3141 exit_status:1 exited_at:{seconds:1756942003 nanos:417105348}"
Sep 3 23:26:43.418380 containerd[2010]: time="2025-09-03T23:26:43.418020744Z" level=info msg="TaskExit event in podsandbox handler container_id:\"aaf93d613375e01a2833143fb31d4721bc43f89cf7f4530f4030a2601781814b\" id:\"aaf93d613375e01a2833143fb31d4721bc43f89cf7f4530f4030a2601781814b\" pid:3141 exit_status:1 exited_at:{seconds:1756942003 nanos:417105348}"
Sep 3 23:26:43.461127 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-aaf93d613375e01a2833143fb31d4721bc43f89cf7f4530f4030a2601781814b-rootfs.mount: Deactivated successfully.
Sep 3 23:26:44.127914 kubelet[3572]: I0903 23:26:44.127275 3572 scope.go:117] "RemoveContainer" containerID="aaf93d613375e01a2833143fb31d4721bc43f89cf7f4530f4030a2601781814b"
Sep 3 23:26:44.131255 containerd[2010]: time="2025-09-03T23:26:44.131182979Z" level=info msg="CreateContainer within sandbox \"182c765267c1268b0106aef8a380e87dcdb20128861e50a66ce783d8f913cf18\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Sep 3 23:26:44.151824 containerd[2010]: time="2025-09-03T23:26:44.151167047Z" level=info msg="Container a0cc34f6a9a8505f29fa7215afe8c1a128bba997ff4c7ef89c7038a62e5572a2: CDI devices from CRI Config.CDIDevices: []"
Sep 3 23:26:44.173106 containerd[2010]: time="2025-09-03T23:26:44.173005943Z" level=info msg="CreateContainer within sandbox \"182c765267c1268b0106aef8a380e87dcdb20128861e50a66ce783d8f913cf18\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"a0cc34f6a9a8505f29fa7215afe8c1a128bba997ff4c7ef89c7038a62e5572a2\""
Sep 3 23:26:44.175610 containerd[2010]: time="2025-09-03T23:26:44.174010115Z" level=info msg="StartContainer for \"a0cc34f6a9a8505f29fa7215afe8c1a128bba997ff4c7ef89c7038a62e5572a2\""
Sep 3 23:26:44.176375 containerd[2010]: time="2025-09-03T23:26:44.176307491Z" level=info msg="connecting to shim a0cc34f6a9a8505f29fa7215afe8c1a128bba997ff4c7ef89c7038a62e5572a2" address="unix:///run/containerd/s/0b05caec46934bf2989b85a05025a842bbe2e7b82d22eeea32d4bc5951c8dc16" protocol=ttrpc version=3
Sep 3 23:26:44.217173 systemd[1]: Started cri-containerd-a0cc34f6a9a8505f29fa7215afe8c1a128bba997ff4c7ef89c7038a62e5572a2.scope - libcontainer container a0cc34f6a9a8505f29fa7215afe8c1a128bba997ff4c7ef89c7038a62e5572a2.
Sep 3 23:26:44.302275 containerd[2010]: time="2025-09-03T23:26:44.302205288Z" level=info msg="StartContainer for \"a0cc34f6a9a8505f29fa7215afe8c1a128bba997ff4c7ef89c7038a62e5572a2\" returns successfully"
Sep 3 23:26:45.694749 kubelet[3572]: E0903 23:26:45.694684 3572 request.go:1332] Unexpected error when reading response body: context deadline exceeded
Sep 3 23:26:45.697251 kubelet[3572]: E0903 23:26:45.694781 3572 controller.go:195] "Failed to update lease" err="unexpected error when reading response body. Please retry. Original error: context deadline exceeded"
Sep 3 23:26:55.696147 kubelet[3572]: E0903 23:26:55.695947 3572 controller.go:195] "Failed to update lease" err="Put \"https://172.31.24.220:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-24-220?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"