Sep 4 23:44:53.230529 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083]
Sep 4 23:44:53.230575 kernel: Linux version 6.6.103-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.43 p3) 2.43.1) #1 SMP PREEMPT Thu Sep 4 22:21:25 -00 2025
Sep 4 23:44:53.230600 kernel: KASLR disabled due to lack of seed
Sep 4 23:44:53.230617 kernel: efi: EFI v2.7 by EDK II
Sep 4 23:44:53.230633 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7a731a98 MEMRESERVE=0x78557598
Sep 4 23:44:53.230649 kernel: secureboot: Secure boot disabled
Sep 4 23:44:53.230667 kernel: ACPI: Early table checksum verification disabled
Sep 4 23:44:53.230682 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON)
Sep 4 23:44:53.230698 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001 01000013)
Sep 4 23:44:53.230713 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001)
Sep 4 23:44:53.230734 kernel: ACPI: DSDT 0x0000000078640000 00159D (v02 AMAZON AMZNDSDT 00000001 INTL 20160527)
Sep 4 23:44:53.230750 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Sep 4 23:44:53.230766 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001)
Sep 4 23:44:53.230782 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001)
Sep 4 23:44:53.230800 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001)
Sep 4 23:44:53.230821 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Sep 4 23:44:53.230838 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001)
Sep 4 23:44:53.230855 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001)
Sep 4 23:44:53.230871 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200
Sep 4 23:44:53.230888 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200')
Sep 4 23:44:53.230904 kernel: printk: bootconsole [uart0] enabled
Sep 4 23:44:53.230920 kernel: NUMA: Failed to initialise from firmware
Sep 4 23:44:53.230937 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff]
Sep 4 23:44:53.230954 kernel: NUMA: NODE_DATA [mem 0x4b583f800-0x4b5844fff]
Sep 4 23:44:53.230971 kernel: Zone ranges:
Sep 4 23:44:53.230987 kernel:   DMA      [mem 0x0000000040000000-0x00000000ffffffff]
Sep 4 23:44:53.231007 kernel:   DMA32    empty
Sep 4 23:44:53.231024 kernel:   Normal   [mem 0x0000000100000000-0x00000004b5ffffff]
Sep 4 23:44:53.231040 kernel: Movable zone start for each node
Sep 4 23:44:53.231057 kernel: Early memory node ranges
Sep 4 23:44:53.233152 kernel:   node   0: [mem 0x0000000040000000-0x000000007862ffff]
Sep 4 23:44:53.233175 kernel:   node   0: [mem 0x0000000078630000-0x000000007863ffff]
Sep 4 23:44:53.233192 kernel:   node   0: [mem 0x0000000078640000-0x00000000786effff]
Sep 4 23:44:53.233209 kernel:   node   0: [mem 0x00000000786f0000-0x000000007872ffff]
Sep 4 23:44:53.233225 kernel:   node   0: [mem 0x0000000078730000-0x000000007bbfffff]
Sep 4 23:44:53.233242 kernel:   node   0: [mem 0x000000007bc00000-0x000000007bfdffff]
Sep 4 23:44:53.233258 kernel:   node   0: [mem 0x000000007bfe0000-0x000000007fffffff]
Sep 4 23:44:53.233274 kernel:   node   0: [mem 0x0000000400000000-0x00000004b5ffffff]
Sep 4 23:44:53.233302 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000004b5ffffff]
Sep 4 23:44:53.233319 kernel: On node 0, zone Normal: 8192 pages in unavailable ranges
Sep 4 23:44:53.233343 kernel: psci: probing for conduit method from ACPI.
Sep 4 23:44:53.233360 kernel: psci: PSCIv1.0 detected in firmware.
Sep 4 23:44:53.233378 kernel: psci: Using standard PSCI v0.2 function IDs
Sep 4 23:44:53.233399 kernel: psci: Trusted OS migration not required
Sep 4 23:44:53.233416 kernel: psci: SMC Calling Convention v1.1
Sep 4 23:44:53.233433 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000001)
Sep 4 23:44:53.233451 kernel: percpu: Embedded 31 pages/cpu s86632 r8192 d32152 u126976
Sep 4 23:44:53.233468 kernel: pcpu-alloc: s86632 r8192 d32152 u126976 alloc=31*4096
Sep 4 23:44:53.233486 kernel: pcpu-alloc: [0] 0 [0] 1
Sep 4 23:44:53.233503 kernel: Detected PIPT I-cache on CPU0
Sep 4 23:44:53.233520 kernel: CPU features: detected: GIC system register CPU interface
Sep 4 23:44:53.233537 kernel: CPU features: detected: Spectre-v2
Sep 4 23:44:53.233554 kernel: CPU features: detected: Spectre-v3a
Sep 4 23:44:53.233571 kernel: CPU features: detected: Spectre-BHB
Sep 4 23:44:53.233593 kernel: CPU features: detected: ARM erratum 1742098
Sep 4 23:44:53.233610 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923
Sep 4 23:44:53.233627 kernel: alternatives: applying boot alternatives
Sep 4 23:44:53.233647 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=0304960b24e314f6095f7d8ad705a9bc0a9a4a34f7817da10ea634466a73d86e
Sep 4 23:44:53.233667 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 4 23:44:53.233684 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Sep 4 23:44:53.233702 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Sep 4 23:44:53.233719 kernel: Fallback order for Node 0: 0
Sep 4 23:44:53.233736 kernel: Built 1 zonelists, mobility grouping on. Total pages: 991872
Sep 4 23:44:53.233753 kernel: Policy zone: Normal
Sep 4 23:44:53.233770 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 4 23:44:53.233792 kernel: software IO TLB: area num 2.
Sep 4 23:44:53.233809 kernel: software IO TLB: mapped [mem 0x000000007c000000-0x0000000080000000] (64MB)
Sep 4 23:44:53.233827 kernel: Memory: 3821112K/4030464K available (10368K kernel code, 2186K rwdata, 8104K rodata, 38400K init, 897K bss, 209352K reserved, 0K cma-reserved)
Sep 4 23:44:53.233845 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Sep 4 23:44:53.233862 kernel: rcu: Preemptible hierarchical RCU implementation.
Sep 4 23:44:53.233881 kernel: rcu: RCU event tracing is enabled.
Sep 4 23:44:53.233898 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Sep 4 23:44:53.233916 kernel: Trampoline variant of Tasks RCU enabled.
Sep 4 23:44:53.233933 kernel: Tracing variant of Tasks RCU enabled.
Sep 4 23:44:53.233950 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 4 23:44:53.233967 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Sep 4 23:44:53.233989 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Sep 4 23:44:53.234006 kernel: GICv3: 96 SPIs implemented
Sep 4 23:44:53.234023 kernel: GICv3: 0 Extended SPIs implemented
Sep 4 23:44:53.234040 kernel: Root IRQ handler: gic_handle_irq
Sep 4 23:44:53.234057 kernel: GICv3: GICv3 features: 16 PPIs
Sep 4 23:44:53.234113 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000
Sep 4 23:44:53.234131 kernel: ITS [mem 0x10080000-0x1009ffff]
Sep 4 23:44:53.234149 kernel: ITS@0x0000000010080000: allocated 8192 Devices @4000b0000 (indirect, esz 8, psz 64K, shr 1)
Sep 4 23:44:53.234167 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @4000c0000 (flat, esz 8, psz 64K, shr 1)
Sep 4 23:44:53.234184 kernel: GICv3: using LPI property table @0x00000004000d0000
Sep 4 23:44:53.234201 kernel: ITS: Using hypervisor restricted LPI range [128]
Sep 4 23:44:53.234218 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000004000e0000
Sep 4 23:44:53.234242 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Sep 4 23:44:53.234260 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt).
Sep 4 23:44:53.234277 kernel: clocksource: arch_sys_counter: mask: 0x1ffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns
Sep 4 23:44:53.234295 kernel: sched_clock: 57 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns
Sep 4 23:44:53.234312 kernel: Console: colour dummy device 80x25
Sep 4 23:44:53.234330 kernel: printk: console [tty1] enabled
Sep 4 23:44:53.234348 kernel: ACPI: Core revision 20230628
Sep 4 23:44:53.234366 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 166.66 BogoMIPS (lpj=83333)
Sep 4 23:44:53.234384 kernel: pid_max: default: 32768 minimum: 301
Sep 4 23:44:53.234402 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Sep 4 23:44:53.234425 kernel: landlock: Up and running.
Sep 4 23:44:53.234442 kernel: SELinux: Initializing.
Sep 4 23:44:53.234460 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 4 23:44:53.234478 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 4 23:44:53.234495 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Sep 4 23:44:53.234513 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Sep 4 23:44:53.234531 kernel: rcu: Hierarchical SRCU implementation.
Sep 4 23:44:53.234548 kernel: rcu: Max phase no-delay instances is 400.
Sep 4 23:44:53.234566 kernel: Platform MSI: ITS@0x10080000 domain created
Sep 4 23:44:53.234588 kernel: PCI/MSI: ITS@0x10080000 domain created
Sep 4 23:44:53.234606 kernel: Remapping and enabling EFI services.
Sep 4 23:44:53.234623 kernel: smp: Bringing up secondary CPUs ...
Sep 4 23:44:53.234640 kernel: Detected PIPT I-cache on CPU1
Sep 4 23:44:53.234658 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000
Sep 4 23:44:53.234676 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000004000f0000
Sep 4 23:44:53.234693 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083]
Sep 4 23:44:53.234711 kernel: smp: Brought up 1 node, 2 CPUs
Sep 4 23:44:53.234729 kernel: SMP: Total of 2 processors activated.
Sep 4 23:44:53.234751 kernel: CPU features: detected: 32-bit EL0 Support
Sep 4 23:44:53.234769 kernel: CPU features: detected: 32-bit EL1 Support
Sep 4 23:44:53.234798 kernel: CPU features: detected: CRC32 instructions
Sep 4 23:44:53.234820 kernel: CPU: All CPU(s) started at EL1
Sep 4 23:44:53.234839 kernel: alternatives: applying system-wide alternatives
Sep 4 23:44:53.234857 kernel: devtmpfs: initialized
Sep 4 23:44:53.234875 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 4 23:44:53.234893 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Sep 4 23:44:53.234912 kernel: pinctrl core: initialized pinctrl subsystem
Sep 4 23:44:53.234935 kernel: SMBIOS 3.0.0 present.
Sep 4 23:44:53.234953 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018
Sep 4 23:44:53.234972 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 4 23:44:53.234990 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Sep 4 23:44:53.235009 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Sep 4 23:44:53.235028 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Sep 4 23:44:53.235046 kernel: audit: initializing netlink subsys (disabled)
Sep 4 23:44:53.235087 kernel: audit: type=2000 audit(0.221:1): state=initialized audit_enabled=0 res=1
Sep 4 23:44:53.235109 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 4 23:44:53.235127 kernel: cpuidle: using governor menu
Sep 4 23:44:53.235159 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Sep 4 23:44:53.235184 kernel: ASID allocator initialised with 65536 entries
Sep 4 23:44:53.235203 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 4 23:44:53.235222 kernel: Serial: AMBA PL011 UART driver
Sep 4 23:44:53.235244 kernel: Modules: 17728 pages in range for non-PLT usage
Sep 4 23:44:53.235264 kernel: Modules: 509248 pages in range for PLT usage
Sep 4 23:44:53.235290 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Sep 4 23:44:53.235309 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Sep 4 23:44:53.235327 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Sep 4 23:44:53.235346 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Sep 4 23:44:53.235364 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Sep 4 23:44:53.235382 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Sep 4 23:44:53.235401 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Sep 4 23:44:53.235419 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Sep 4 23:44:53.235437 kernel: ACPI: Added _OSI(Module Device)
Sep 4 23:44:53.235460 kernel: ACPI: Added _OSI(Processor Device)
Sep 4 23:44:53.235479 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 4 23:44:53.235497 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Sep 4 23:44:53.235515 kernel: ACPI: Interpreter enabled
Sep 4 23:44:53.235533 kernel: ACPI: Using GIC for interrupt routing
Sep 4 23:44:53.235551 kernel: ACPI: MCFG table detected, 1 entries
Sep 4 23:44:53.235571 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-0f])
Sep 4 23:44:53.235898 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Sep 4 23:44:53.238248 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Sep 4 23:44:53.238531 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Sep 4 23:44:53.238736 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x20ffffff] reserved by PNP0C02:00
Sep 4 23:44:53.238936 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x20ffffff] for [bus 00-0f]
Sep 4 23:44:53.238962 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io 0x0000-0xffff window]
Sep 4 23:44:53.238981 kernel: acpiphp: Slot [1] registered
Sep 4 23:44:53.238999 kernel: acpiphp: Slot [2] registered
Sep 4 23:44:53.239017 kernel: acpiphp: Slot [3] registered
Sep 4 23:44:53.239035 kernel: acpiphp: Slot [4] registered
Sep 4 23:44:53.240914 kernel: acpiphp: Slot [5] registered
Sep 4 23:44:53.240953 kernel: acpiphp: Slot [6] registered
Sep 4 23:44:53.240972 kernel: acpiphp: Slot [7] registered
Sep 4 23:44:53.240991 kernel: acpiphp: Slot [8] registered
Sep 4 23:44:53.241009 kernel: acpiphp: Slot [9] registered
Sep 4 23:44:53.241027 kernel: acpiphp: Slot [10] registered
Sep 4 23:44:53.241045 kernel: acpiphp: Slot [11] registered
Sep 4 23:44:53.241084 kernel: acpiphp: Slot [12] registered
Sep 4 23:44:53.241106 kernel: acpiphp: Slot [13] registered
Sep 4 23:44:53.241133 kernel: acpiphp: Slot [14] registered
Sep 4 23:44:53.241151 kernel: acpiphp: Slot [15] registered
Sep 4 23:44:53.241170 kernel: acpiphp: Slot [16] registered
Sep 4 23:44:53.241188 kernel: acpiphp: Slot [17] registered
Sep 4 23:44:53.241206 kernel: acpiphp: Slot [18] registered
Sep 4 23:44:53.241224 kernel: acpiphp: Slot [19] registered
Sep 4 23:44:53.241243 kernel: acpiphp: Slot [20] registered
Sep 4 23:44:53.241261 kernel: acpiphp: Slot [21] registered
Sep 4 23:44:53.241279 kernel: acpiphp: Slot [22] registered
Sep 4 23:44:53.241297 kernel: acpiphp: Slot [23] registered
Sep 4 23:44:53.241319 kernel: acpiphp: Slot [24] registered
Sep 4 23:44:53.241338 kernel: acpiphp: Slot [25] registered
Sep 4 23:44:53.241356 kernel: acpiphp: Slot [26] registered
Sep 4 23:44:53.241375 kernel: acpiphp: Slot [27] registered
Sep 4 23:44:53.241393 kernel: acpiphp: Slot [28] registered
Sep 4 23:44:53.241411 kernel: acpiphp: Slot [29] registered
Sep 4 23:44:53.241429 kernel: acpiphp: Slot [30] registered
Sep 4 23:44:53.241447 kernel: acpiphp: Slot [31] registered
Sep 4 23:44:53.241465 kernel: PCI host bridge to bus 0000:00
Sep 4 23:44:53.241714 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window]
Sep 4 23:44:53.241902 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Sep 4 23:44:53.242145 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window]
Sep 4 23:44:53.242336 kernel: pci_bus 0000:00: root bus resource [bus 00-0f]
Sep 4 23:44:53.242571 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000
Sep 4 23:44:53.242804 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003
Sep 4 23:44:53.243026 kernel: pci 0000:00:01.0: reg 0x10: [mem 0x80118000-0x80118fff]
Sep 4 23:44:53.243344 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
Sep 4 23:44:53.243559 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80114000-0x80117fff]
Sep 4 23:44:53.243766 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold
Sep 4 23:44:53.243983 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
Sep 4 23:44:53.244263 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80110000-0x80113fff]
Sep 4 23:44:53.244473 kernel: pci 0000:00:05.0: reg 0x18: [mem 0x80000000-0x800fffff pref]
Sep 4 23:44:53.244763 kernel: pci 0000:00:05.0: reg 0x20: [mem 0x80100000-0x8010ffff]
Sep 4 23:44:53.244979 kernel: pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold
Sep 4 23:44:53.245215 kernel: pci 0000:00:05.0: BAR 2: assigned [mem 0x80000000-0x800fffff pref]
Sep 4 23:44:53.245423 kernel: pci 0000:00:05.0: BAR 4: assigned [mem 0x80100000-0x8010ffff]
Sep 4 23:44:53.245631 kernel: pci 0000:00:04.0: BAR 0: assigned [mem 0x80110000-0x80113fff]
Sep 4 23:44:53.245836 kernel: pci 0000:00:05.0: BAR 0: assigned [mem 0x80114000-0x80117fff]
Sep 4 23:44:53.246051 kernel: pci 0000:00:01.0: BAR 0: assigned [mem 0x80118000-0x80118fff]
Sep 4 23:44:53.246273 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window]
Sep 4 23:44:53.246458 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Sep 4 23:44:53.246642 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window]
Sep 4 23:44:53.246667 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Sep 4 23:44:53.246686 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Sep 4 23:44:53.246706 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Sep 4 23:44:53.246724 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Sep 4 23:44:53.246743 kernel: iommu: Default domain type: Translated
Sep 4 23:44:53.246767 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Sep 4 23:44:53.246786 kernel: efivars: Registered efivars operations
Sep 4 23:44:53.246804 kernel: vgaarb: loaded
Sep 4 23:44:53.246822 kernel: clocksource: Switched to clocksource arch_sys_counter
Sep 4 23:44:53.246840 kernel: VFS: Disk quotas dquot_6.6.0
Sep 4 23:44:53.246859 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 4 23:44:53.246877 kernel: pnp: PnP ACPI init
Sep 4 23:44:53.247235 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved
Sep 4 23:44:53.247271 kernel: pnp: PnP ACPI: found 1 devices
Sep 4 23:44:53.247291 kernel: NET: Registered PF_INET protocol family
Sep 4 23:44:53.247310 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Sep 4 23:44:53.247329 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Sep 4 23:44:53.247347 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 4 23:44:53.247366 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Sep 4 23:44:53.247384 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Sep 4 23:44:53.247403 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Sep 4 23:44:53.247421 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 4 23:44:53.247444 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 4 23:44:53.247463 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 4 23:44:53.247481 kernel: PCI: CLS 0 bytes, default 64
Sep 4 23:44:53.247499 kernel: kvm [1]: HYP mode not available
Sep 4 23:44:53.247518 kernel: Initialise system trusted keyrings
Sep 4 23:44:53.247537 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Sep 4 23:44:53.247555 kernel: Key type asymmetric registered
Sep 4 23:44:53.247573 kernel: Asymmetric key parser 'x509' registered
Sep 4 23:44:53.247591 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Sep 4 23:44:53.247614 kernel: io scheduler mq-deadline registered
Sep 4 23:44:53.247632 kernel: io scheduler kyber registered
Sep 4 23:44:53.247651 kernel: io scheduler bfq registered
Sep 4 23:44:53.247868 kernel: pl061_gpio ARMH0061:00: PL061 GPIO chip registered
Sep 4 23:44:53.247895 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Sep 4 23:44:53.247914 kernel: ACPI: button: Power Button [PWRB]
Sep 4 23:44:53.247932 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input1
Sep 4 23:44:53.247950 kernel: ACPI: button: Sleep Button [SLPB]
Sep 4 23:44:53.247969 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 4 23:44:53.247994 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37
Sep 4 23:44:53.248242 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012)
Sep 4 23:44:53.248269 kernel: printk: console [ttyS0] disabled
Sep 4 23:44:53.248288 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A
Sep 4 23:44:53.248306 kernel: printk: console [ttyS0] enabled
Sep 4 23:44:53.248324 kernel: printk: bootconsole [uart0] disabled
Sep 4 23:44:53.248342 kernel: thunder_xcv, ver 1.0
Sep 4 23:44:53.248360 kernel: thunder_bgx, ver 1.0
Sep 4 23:44:53.248378 kernel: nicpf, ver 1.0
Sep 4 23:44:53.248403 kernel: nicvf, ver 1.0
Sep 4 23:44:53.248606 kernel: rtc-efi rtc-efi.0: registered as rtc0
Sep 4 23:44:53.248794 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-09-04T23:44:52 UTC (1757029492)
Sep 4 23:44:53.248820 kernel: hid: raw HID events driver (C) Jiri Kosina
Sep 4 23:44:53.248838 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 counters available
Sep 4 23:44:53.248857 kernel: watchdog: Delayed init of the lockup detector failed: -19
Sep 4 23:44:53.248875 kernel: watchdog: Hard watchdog permanently disabled
Sep 4 23:44:53.248894 kernel: NET: Registered PF_INET6 protocol family
Sep 4 23:44:53.248918 kernel: Segment Routing with IPv6
Sep 4 23:44:53.248936 kernel: In-situ OAM (IOAM) with IPv6
Sep 4 23:44:53.248954 kernel: NET: Registered PF_PACKET protocol family
Sep 4 23:44:53.248972 kernel: Key type dns_resolver registered
Sep 4 23:44:53.248990 kernel: registered taskstats version 1
Sep 4 23:44:53.249008 kernel: Loading compiled-in X.509 certificates
Sep 4 23:44:53.249027 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.103-flatcar: 83306acb9da7bc81cc6aa49a1c622f78672939c0'
Sep 4 23:44:53.249045 kernel: Key type .fscrypt registered
Sep 4 23:44:53.249165 kernel: Key type fscrypt-provisioning registered
Sep 4 23:44:53.249198 kernel: ima: No TPM chip found, activating TPM-bypass!
Sep 4 23:44:53.249217 kernel: ima: Allocated hash algorithm: sha1
Sep 4 23:44:53.249235 kernel: ima: No architecture policies found
Sep 4 23:44:53.249253 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Sep 4 23:44:53.249271 kernel: clk: Disabling unused clocks
Sep 4 23:44:53.249289 kernel: Freeing unused kernel memory: 38400K
Sep 4 23:44:53.249307 kernel: Run /init as init process
Sep 4 23:44:53.249325 kernel:   with arguments:
Sep 4 23:44:53.249344 kernel:     /init
Sep 4 23:44:53.249366 kernel:   with environment:
Sep 4 23:44:53.249384 kernel:     HOME=/
Sep 4 23:44:53.249402 kernel:     TERM=linux
Sep 4 23:44:53.249420 kernel:     BOOT_IMAGE=/flatcar/vmlinuz-a
Sep 4 23:44:53.249440 systemd[1]: Successfully made /usr/ read-only.
Sep 4 23:44:53.249466 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Sep 4 23:44:53.249487 systemd[1]: Detected virtualization amazon.
Sep 4 23:44:53.249512 systemd[1]: Detected architecture arm64.
Sep 4 23:44:53.249532 systemd[1]: Running in initrd.
Sep 4 23:44:53.249551 systemd[1]: No hostname configured, using default hostname.
Sep 4 23:44:53.249572 systemd[1]: Hostname set to .
Sep 4 23:44:53.249592 systemd[1]: Initializing machine ID from VM UUID.
Sep 4 23:44:53.249612 systemd[1]: Queued start job for default target initrd.target.
Sep 4 23:44:53.249632 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 4 23:44:53.249652 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 4 23:44:53.249673 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Sep 4 23:44:53.249698 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 4 23:44:53.249719 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Sep 4 23:44:53.249740 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Sep 4 23:44:53.249762 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Sep 4 23:44:53.249783 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Sep 4 23:44:53.249803 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 4 23:44:53.249828 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 4 23:44:53.249848 systemd[1]: Reached target paths.target - Path Units.
Sep 4 23:44:53.249868 systemd[1]: Reached target slices.target - Slice Units.
Sep 4 23:44:53.249888 systemd[1]: Reached target swap.target - Swaps.
Sep 4 23:44:53.249908 systemd[1]: Reached target timers.target - Timer Units.
Sep 4 23:44:53.249928 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Sep 4 23:44:53.249948 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 4 23:44:53.249969 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Sep 4 23:44:53.249989 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Sep 4 23:44:53.250013 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 4 23:44:53.250034 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 4 23:44:53.250054 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 4 23:44:53.250102 systemd[1]: Reached target sockets.target - Socket Units.
Sep 4 23:44:53.250125 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Sep 4 23:44:53.250146 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 4 23:44:53.250166 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Sep 4 23:44:53.250186 systemd[1]: Starting systemd-fsck-usr.service...
Sep 4 23:44:53.250207 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 4 23:44:53.250233 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 4 23:44:53.250253 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 4 23:44:53.250273 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Sep 4 23:44:53.250294 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 4 23:44:53.250315 systemd[1]: Finished systemd-fsck-usr.service.
Sep 4 23:44:53.250378 systemd-journald[250]: Collecting audit messages is disabled.
Sep 4 23:44:53.250422 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 4 23:44:53.250444 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 4 23:44:53.250470 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 4 23:44:53.250490 systemd-journald[250]: Journal started
Sep 4 23:44:53.250527 systemd-journald[250]: Runtime Journal (/run/log/journal/ec26565d59003201a1a6aebf0cf213b7) is 8M, max 75.3M, 67.3M free.
Sep 4 23:44:53.228962 systemd-modules-load[252]: Inserted module 'overlay'
Sep 4 23:44:53.262084 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 4 23:44:53.272129 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Sep 4 23:44:53.275371 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 4 23:44:53.282103 kernel: Bridge firewalling registered
Sep 4 23:44:53.279862 systemd-modules-load[252]: Inserted module 'br_netfilter'
Sep 4 23:44:53.284683 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 4 23:44:53.291697 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 4 23:44:53.292763 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 4 23:44:53.302513 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 4 23:44:53.337491 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 4 23:44:53.353727 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 4 23:44:53.362026 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 4 23:44:53.375515 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Sep 4 23:44:53.379897 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 4 23:44:53.403393 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 4 23:44:53.429120 dracut-cmdline[287]: dracut-dracut-053
Sep 4 23:44:53.441315 dracut-cmdline[287]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=0304960b24e314f6095f7d8ad705a9bc0a9a4a34f7817da10ea634466a73d86e
Sep 4 23:44:53.490730 systemd-resolved[289]: Positive Trust Anchors:
Sep 4 23:44:53.490758 systemd-resolved[289]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 4 23:44:53.490817 systemd-resolved[289]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 4 23:44:53.587115 kernel: SCSI subsystem initialized
Sep 4 23:44:53.597102 kernel: Loading iSCSI transport class v2.0-870.
Sep 4 23:44:53.608109 kernel: iscsi: registered transport (tcp)
Sep 4 23:44:53.629871 kernel: iscsi: registered transport (qla4xxx)
Sep 4 23:44:53.629957 kernel: QLogic iSCSI HBA Driver
Sep 4 23:44:53.719107 kernel: random: crng init done
Sep 4 23:44:53.719465 systemd-resolved[289]: Defaulting to hostname 'linux'.
Sep 4 23:44:53.724265 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 4 23:44:53.731467 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 4 23:44:53.755120 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Sep 4 23:44:53.767348 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Sep 4 23:44:53.804180 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Sep 4 23:44:53.804268 kernel: device-mapper: uevent: version 1.0.3
Sep 4 23:44:53.804296 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Sep 4 23:44:53.869126 kernel: raid6: neonx8 gen() 6541 MB/s
Sep 4 23:44:53.886098 kernel: raid6: neonx4 gen() 6496 MB/s
Sep 4 23:44:53.903097 kernel: raid6: neonx2 gen() 5412 MB/s
Sep 4 23:44:53.920097 kernel: raid6: neonx1 gen() 3940 MB/s
Sep 4 23:44:53.937096 kernel: raid6: int64x8 gen() 3602 MB/s
Sep 4 23:44:53.955098 kernel: raid6: int64x4 gen() 3695 MB/s
Sep 4 23:44:53.973099 kernel: raid6: int64x2 gen() 3593 MB/s
Sep 4 23:44:53.991338 kernel: raid6: int64x1 gen() 2764 MB/s
Sep 4 23:44:53.991372 kernel: raid6: using algorithm neonx8 gen() 6541 MB/s
Sep 4 23:44:54.010342 kernel: raid6: .... xor() 4814 MB/s, rmw enabled
Sep 4 23:44:54.010400 kernel: raid6: using neon recovery algorithm
Sep 4 23:44:54.018951 kernel: xor: measuring software checksum speed
Sep 4 23:44:54.019013 kernel: 8regs : 12939 MB/sec
Sep 4 23:44:54.020097 kernel: 32regs : 11521 MB/sec
Sep 4 23:44:54.022267 kernel: arm64_neon : 8985 MB/sec
Sep 4 23:44:54.022302 kernel: xor: using function: 8regs (12939 MB/sec)
Sep 4 23:44:54.105129 kernel: Btrfs loaded, zoned=no, fsverity=no
Sep 4 23:44:54.124250 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Sep 4 23:44:54.136377 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 4 23:44:54.185214 systemd-udevd[471]: Using default interface naming scheme 'v255'.
Sep 4 23:44:54.195264 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 4 23:44:54.219406 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Sep 4 23:44:54.247481 dracut-pre-trigger[485]: rd.md=0: removing MD RAID activation
Sep 4 23:44:54.305424 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 4 23:44:54.322337 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 4 23:44:54.436994 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 4 23:44:54.462093 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Sep 4 23:44:54.502608 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Sep 4 23:44:54.509740 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 4 23:44:54.513462 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 4 23:44:54.522184 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 4 23:44:54.536408 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Sep 4 23:44:54.569191 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Sep 4 23:44:54.674356 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Sep 4 23:44:54.674435 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012)
Sep 4 23:44:54.681202 kernel: ena 0000:00:05.0: ENA device version: 0.10
Sep 4 23:44:54.682499 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
Sep 4 23:44:54.681985 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 4 23:44:54.684370 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 4 23:44:54.693450 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 4 23:44:54.704734 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 4 23:44:54.705045 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 4 23:44:54.731228 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80114000, mac addr 06:38:34:f4:88:fb
Sep 4 23:44:54.713089 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Sep 4 23:44:54.730336 (udev-worker)[532]: Network interface NamePolicy= disabled on kernel command line.
Sep 4 23:44:54.741521 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 4 23:44:54.757633 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35
Sep 4 23:44:54.765239 kernel: nvme nvme0: pci function 0000:00:04.0
Sep 4 23:44:54.781116 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Sep 4 23:44:54.781137 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 4 23:44:54.797479 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 4 23:44:54.808517 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Sep 4 23:44:54.808584 kernel: GPT:9289727 != 16777215
Sep 4 23:44:54.808610 kernel: GPT:Alternate GPT header not at the end of the disk.
Sep 4 23:44:54.809390 kernel: GPT:9289727 != 16777215
Sep 4 23:44:54.810487 kernel: GPT: Use GNU Parted to correct GPT errors.
Sep 4 23:44:54.812111 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Sep 4 23:44:54.829896 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 4 23:44:54.914099 kernel: BTRFS: device fsid 74a5374f-334b-4c07-8952-82f9f0ad22ae devid 1 transid 36 /dev/nvme0n1p3 scanned by (udev-worker) (538)
Sep 4 23:44:54.927103 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 scanned by (udev-worker) (518)
Sep 4 23:44:55.025956 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM.
Sep 4 23:44:55.070609 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT.
Sep 4 23:44:55.092411 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A.
Sep 4 23:44:55.098553 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A.
Sep 4 23:44:55.123346 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Sep 4 23:44:55.137953 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Sep 4 23:44:55.150555 disk-uuid[665]: Primary Header is updated.
Sep 4 23:44:55.150555 disk-uuid[665]: Secondary Entries is updated.
Sep 4 23:44:55.150555 disk-uuid[665]: Secondary Header is updated.
Sep 4 23:44:55.166184 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Sep 4 23:44:56.182105 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Sep 4 23:44:56.182176 disk-uuid[667]: The operation has completed successfully.
Sep 4 23:44:56.376228 systemd[1]: disk-uuid.service: Deactivated successfully.
Sep 4 23:44:56.378242 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Sep 4 23:44:56.466383 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Sep 4 23:44:56.492566 sh[927]: Success
Sep 4 23:44:56.513105 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Sep 4 23:44:56.616628 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Sep 4 23:44:56.627727 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Sep 4 23:44:56.635773 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Sep 4 23:44:56.675734 kernel: BTRFS info (device dm-0): first mount of filesystem 74a5374f-334b-4c07-8952-82f9f0ad22ae
Sep 4 23:44:56.675807 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Sep 4 23:44:56.675846 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Sep 4 23:44:56.679116 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Sep 4 23:44:56.679172 kernel: BTRFS info (device dm-0): using free space tree
Sep 4 23:44:56.792101 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Sep 4 23:44:56.812620 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Sep 4 23:44:56.817670 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Sep 4 23:44:56.837477 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Sep 4 23:44:56.844172 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Sep 4 23:44:56.888195 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 6280ecaa-ba8f-4e5e-8483-db3a07084cf9
Sep 4 23:44:56.888270 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Sep 4 23:44:56.889643 kernel: BTRFS info (device nvme0n1p6): using free space tree
Sep 4 23:44:56.906103 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Sep 4 23:44:56.915157 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 6280ecaa-ba8f-4e5e-8483-db3a07084cf9
Sep 4 23:44:56.923562 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Sep 4 23:44:56.937436 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Sep 4 23:44:57.057475 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 4 23:44:57.072433 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 4 23:44:57.138894 systemd-networkd[1117]: lo: Link UP
Sep 4 23:44:57.138912 systemd-networkd[1117]: lo: Gained carrier
Sep 4 23:44:57.142723 systemd-networkd[1117]: Enumeration completed
Sep 4 23:44:57.142917 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 4 23:44:57.143886 systemd-networkd[1117]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 4 23:44:57.143895 systemd-networkd[1117]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 4 23:44:57.155769 systemd[1]: Reached target network.target - Network.
Sep 4 23:44:57.156406 systemd-networkd[1117]: eth0: Link UP
Sep 4 23:44:57.156415 systemd-networkd[1117]: eth0: Gained carrier
Sep 4 23:44:57.156433 systemd-networkd[1117]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 4 23:44:57.192202 systemd-networkd[1117]: eth0: DHCPv4 address 172.31.17.142/20, gateway 172.31.16.1 acquired from 172.31.16.1
Sep 4 23:44:57.350356 ignition[1033]: Ignition 2.20.0
Sep 4 23:44:57.350387 ignition[1033]: Stage: fetch-offline
Sep 4 23:44:57.355415 ignition[1033]: no configs at "/usr/lib/ignition/base.d"
Sep 4 23:44:57.355449 ignition[1033]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Sep 4 23:44:57.356002 ignition[1033]: Ignition finished successfully
Sep 4 23:44:57.368768 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 4 23:44:57.383396 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Sep 4 23:44:57.422587 ignition[1127]: Ignition 2.20.0
Sep 4 23:44:57.422609 ignition[1127]: Stage: fetch
Sep 4 23:44:57.423262 ignition[1127]: no configs at "/usr/lib/ignition/base.d"
Sep 4 23:44:57.423289 ignition[1127]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Sep 4 23:44:57.423583 ignition[1127]: PUT http://169.254.169.254/latest/api/token: attempt #1
Sep 4 23:44:57.439118 ignition[1127]: PUT result: OK
Sep 4 23:44:57.442386 ignition[1127]: parsed url from cmdline: ""
Sep 4 23:44:57.442412 ignition[1127]: no config URL provided
Sep 4 23:44:57.442431 ignition[1127]: reading system config file "/usr/lib/ignition/user.ign"
Sep 4 23:44:57.442459 ignition[1127]: no config at "/usr/lib/ignition/user.ign"
Sep 4 23:44:57.442492 ignition[1127]: PUT http://169.254.169.254/latest/api/token: attempt #1
Sep 4 23:44:57.452753 ignition[1127]: PUT result: OK
Sep 4 23:44:57.453024 ignition[1127]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1
Sep 4 23:44:57.463190 ignition[1127]: GET result: OK
Sep 4 23:44:57.465180 ignition[1127]: parsing config with SHA512: 4e67b69a2fdf3592107df40bf7df70041009c87bade9025bb723c8f472e85085db09dd24e8c211877b0d8505ad30adab6b0a7d515068ceb8b299d0ee1599cb79
Sep 4 23:44:57.475456 unknown[1127]: fetched base config from "system"
Sep 4 23:44:57.475478 unknown[1127]: fetched base config from "system"
Sep 4 23:44:57.475509 unknown[1127]: fetched user config from "aws"
Sep 4 23:44:57.480298 ignition[1127]: fetch: fetch complete
Sep 4 23:44:57.483825 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Sep 4 23:44:57.480312 ignition[1127]: fetch: fetch passed
Sep 4 23:44:57.480412 ignition[1127]: Ignition finished successfully
Sep 4 23:44:57.497093 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Sep 4 23:44:57.535944 ignition[1134]: Ignition 2.20.0
Sep 4 23:44:57.535979 ignition[1134]: Stage: kargs
Sep 4 23:44:57.536688 ignition[1134]: no configs at "/usr/lib/ignition/base.d"
Sep 4 23:44:57.536716 ignition[1134]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Sep 4 23:44:57.536895 ignition[1134]: PUT http://169.254.169.254/latest/api/token: attempt #1
Sep 4 23:44:57.545025 ignition[1134]: PUT result: OK
Sep 4 23:44:57.557977 ignition[1134]: kargs: kargs passed
Sep 4 23:44:57.558333 ignition[1134]: Ignition finished successfully
Sep 4 23:44:57.564009 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Sep 4 23:44:57.575532 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Sep 4 23:44:57.602372 ignition[1140]: Ignition 2.20.0
Sep 4 23:44:57.602400 ignition[1140]: Stage: disks
Sep 4 23:44:57.604488 ignition[1140]: no configs at "/usr/lib/ignition/base.d"
Sep 4 23:44:57.604519 ignition[1140]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Sep 4 23:44:57.605218 ignition[1140]: PUT http://169.254.169.254/latest/api/token: attempt #1
Sep 4 23:44:57.614613 ignition[1140]: PUT result: OK
Sep 4 23:44:57.620262 ignition[1140]: disks: disks passed
Sep 4 23:44:57.620559 ignition[1140]: Ignition finished successfully
Sep 4 23:44:57.625885 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Sep 4 23:44:57.626498 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Sep 4 23:44:57.636992 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Sep 4 23:44:57.637227 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 4 23:44:57.637287 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 4 23:44:57.637341 systemd[1]: Reached target basic.target - Basic System.
Sep 4 23:44:57.661449 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Sep 4 23:44:57.712943 systemd-fsck[1148]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Sep 4 23:44:57.721503 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Sep 4 23:44:57.739355 systemd[1]: Mounting sysroot.mount - /sysroot...
Sep 4 23:44:57.844127 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 22b06923-f972-4753-b92e-d6b25ef15ca3 r/w with ordered data mode. Quota mode: none.
Sep 4 23:44:57.845829 systemd[1]: Mounted sysroot.mount - /sysroot.
Sep 4 23:44:57.852173 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Sep 4 23:44:57.878275 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 4 23:44:57.889119 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Sep 4 23:44:57.893378 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Sep 4 23:44:57.893466 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Sep 4 23:44:57.894138 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 4 23:44:57.918231 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 scanned by mount (1167)
Sep 4 23:44:57.918296 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 6280ecaa-ba8f-4e5e-8483-db3a07084cf9
Sep 4 23:44:57.923796 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Sep 4 23:44:57.923864 kernel: BTRFS info (device nvme0n1p6): using free space tree
Sep 4 23:44:57.936604 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Sep 4 23:44:57.946516 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Sep 4 23:44:57.955409 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Sep 4 23:44:57.961686 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 4 23:44:58.392762 initrd-setup-root[1191]: cut: /sysroot/etc/passwd: No such file or directory
Sep 4 23:44:58.434871 initrd-setup-root[1198]: cut: /sysroot/etc/group: No such file or directory
Sep 4 23:44:58.446791 initrd-setup-root[1205]: cut: /sysroot/etc/shadow: No such file or directory
Sep 4 23:44:58.456252 initrd-setup-root[1212]: cut: /sysroot/etc/gshadow: No such file or directory
Sep 4 23:44:58.786524 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Sep 4 23:44:58.798238 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Sep 4 23:44:58.812635 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Sep 4 23:44:58.830987 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Sep 4 23:44:58.836241 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 6280ecaa-ba8f-4e5e-8483-db3a07084cf9
Sep 4 23:44:58.872230 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Sep 4 23:44:58.879470 ignition[1279]: INFO : Ignition 2.20.0
Sep 4 23:44:58.879470 ignition[1279]: INFO : Stage: mount
Sep 4 23:44:58.879470 ignition[1279]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 4 23:44:58.879470 ignition[1279]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Sep 4 23:44:58.879470 ignition[1279]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Sep 4 23:44:58.898104 ignition[1279]: INFO : PUT result: OK
Sep 4 23:44:58.898104 ignition[1279]: INFO : mount: mount passed
Sep 4 23:44:58.898104 ignition[1279]: INFO : Ignition finished successfully
Sep 4 23:44:58.894394 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Sep 4 23:44:58.914055 systemd[1]: Starting ignition-files.service - Ignition (files)...
Sep 4 23:44:58.954045 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 4 23:44:58.975238 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/nvme0n1p6 scanned by mount (1292)
Sep 4 23:44:58.980941 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 6280ecaa-ba8f-4e5e-8483-db3a07084cf9
Sep 4 23:44:58.981021 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Sep 4 23:44:58.981051 kernel: BTRFS info (device nvme0n1p6): using free space tree
Sep 4 23:44:58.986090 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Sep 4 23:44:58.990200 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 4 23:44:59.024464 ignition[1309]: INFO : Ignition 2.20.0
Sep 4 23:44:59.024464 ignition[1309]: INFO : Stage: files
Sep 4 23:44:59.033729 ignition[1309]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 4 23:44:59.033729 ignition[1309]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Sep 4 23:44:59.033729 ignition[1309]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Sep 4 23:44:59.042786 ignition[1309]: INFO : PUT result: OK
Sep 4 23:44:59.040495 systemd-networkd[1117]: eth0: Gained IPv6LL
Sep 4 23:44:59.050645 ignition[1309]: DEBUG : files: compiled without relabeling support, skipping
Sep 4 23:44:59.055024 ignition[1309]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Sep 4 23:44:59.055024 ignition[1309]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Sep 4 23:44:59.066430 ignition[1309]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Sep 4 23:44:59.070529 ignition[1309]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Sep 4 23:44:59.074853 unknown[1309]: wrote ssh authorized keys file for user: core
Sep 4 23:44:59.078146 ignition[1309]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Sep 4 23:44:59.088525 ignition[1309]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
Sep 4 23:44:59.088525 ignition[1309]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz: attempt #1
Sep 4 23:44:59.171805 ignition[1309]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Sep 4 23:44:59.448134 ignition[1309]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
Sep 4 23:44:59.448134 ignition[1309]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Sep 4 23:44:59.448134 ignition[1309]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1
Sep 4 23:44:59.534003 ignition[1309]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Sep 4 23:44:59.674449 ignition[1309]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Sep 4 23:44:59.679544 ignition[1309]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Sep 4 23:44:59.679544 ignition[1309]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Sep 4 23:44:59.679544 ignition[1309]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Sep 4 23:44:59.679544 ignition[1309]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Sep 4 23:44:59.679544 ignition[1309]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 4 23:44:59.679544 ignition[1309]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 4 23:44:59.679544 ignition[1309]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 4 23:44:59.679544 ignition[1309]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 4 23:44:59.679544 ignition[1309]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Sep 4 23:44:59.679544 ignition[1309]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Sep 4 23:44:59.679544 ignition[1309]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Sep 4 23:44:59.679544 ignition[1309]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Sep 4 23:44:59.679544 ignition[1309]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Sep 4 23:44:59.679544 ignition[1309]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-arm64.raw: attempt #1
Sep 4 23:44:59.973377 ignition[1309]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Sep 4 23:45:00.356177 ignition[1309]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Sep 4 23:45:00.356177 ignition[1309]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Sep 4 23:45:00.372424 ignition[1309]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 4 23:45:00.372424 ignition[1309]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 4 23:45:00.372424 ignition[1309]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Sep 4 23:45:00.372424 ignition[1309]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service"
Sep 4 23:45:00.372424 ignition[1309]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service"
Sep 4 23:45:00.372424 ignition[1309]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json"
Sep 4 23:45:00.372424 ignition[1309]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json"
Sep 4 23:45:00.372424 ignition[1309]: INFO : files: files passed
Sep 4 23:45:00.372424 ignition[1309]: INFO : Ignition finished successfully
Sep 4 23:45:00.392976 systemd[1]: Finished ignition-files.service - Ignition (files).
Sep 4 23:45:00.439486 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Sep 4 23:45:00.453410 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Sep 4 23:45:00.460995 systemd[1]: ignition-quench.service: Deactivated successfully.
Sep 4 23:45:00.464201 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Sep 4 23:45:00.491825 initrd-setup-root-after-ignition[1337]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 4 23:45:00.491825 initrd-setup-root-after-ignition[1337]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Sep 4 23:45:00.505002 initrd-setup-root-after-ignition[1341]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 4 23:45:00.503098 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 4 23:45:00.520276 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Sep 4 23:45:00.535527 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Sep 4 23:45:00.595780 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Sep 4 23:45:00.595987 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Sep 4 23:45:00.598502 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Sep 4 23:45:00.598903 systemd[1]: Reached target initrd.target - Initrd Default Target.
Sep 4 23:45:00.608742 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Sep 4 23:45:00.624476 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Sep 4 23:45:00.654586 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 4 23:45:00.674463 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Sep 4 23:45:00.698212 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Sep 4 23:45:00.701562 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 4 23:45:00.704826 systemd[1]: Stopped target timers.target - Timer Units.
Sep 4 23:45:00.707282 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Sep 4 23:45:00.707511 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 4 23:45:00.713805 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Sep 4 23:45:00.719407 systemd[1]: Stopped target basic.target - Basic System.
Sep 4 23:45:00.721895 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Sep 4 23:45:00.724797 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 4 23:45:00.727969 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Sep 4 23:45:00.731089 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Sep 4 23:45:00.733911 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 4 23:45:00.737242 systemd[1]: Stopped target sysinit.target - System Initialization.
Sep 4 23:45:00.740033 systemd[1]: Stopped target local-fs.target - Local File Systems.
Sep 4 23:45:00.740738 systemd[1]: Stopped target swap.target - Swaps.
Sep 4 23:45:00.741161 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Sep 4 23:45:00.741384 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Sep 4 23:45:00.741953 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Sep 4 23:45:00.742751 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 4 23:45:00.743152 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Sep 4 23:45:00.752590 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 4 23:45:00.752790 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Sep 4 23:45:00.753028 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Sep 4 23:45:00.764913 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Sep 4 23:45:00.765193 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 4 23:45:00.771487 systemd[1]: ignition-files.service: Deactivated successfully.
Sep 4 23:45:00.771690 systemd[1]: Stopped ignition-files.service - Ignition (files).
Sep 4 23:45:00.804747 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Sep 4 23:45:00.811819 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Sep 4 23:45:00.812094 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 4 23:45:00.824400 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Sep 4 23:45:00.826692 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Sep 4 23:45:00.826939 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 4 23:45:00.834266 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Sep 4 23:45:00.835173 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 4 23:45:00.869355 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Sep 4 23:45:00.870129 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Sep 4 23:45:00.903474 ignition[1362]: INFO : Ignition 2.20.0
Sep 4 23:45:00.903474 ignition[1362]: INFO : Stage: umount
Sep 4 23:45:00.903474 ignition[1362]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 4 23:45:00.903474 ignition[1362]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Sep 4 23:45:00.903474 ignition[1362]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Sep 4 23:45:00.917228 ignition[1362]: INFO : PUT result: OK
Sep 4 23:45:00.904083 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Sep 4 23:45:00.922452 ignition[1362]: INFO : umount: umount passed
Sep 4 23:45:00.922452 ignition[1362]: INFO : Ignition finished successfully
Sep 4 23:45:00.932957 systemd[1]: ignition-mount.service: Deactivated successfully.
Sep 4 23:45:00.933535 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Sep 4 23:45:00.942993 systemd[1]: ignition-disks.service: Deactivated successfully.
Sep 4 23:45:00.944235 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Sep 4 23:45:00.948396 systemd[1]: ignition-kargs.service: Deactivated successfully.
Sep 4 23:45:00.948503 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Sep 4 23:45:00.951173 systemd[1]: ignition-fetch.service: Deactivated successfully.
Sep 4 23:45:00.951257 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Sep 4 23:45:00.953928 systemd[1]: Stopped target network.target - Network.
Sep 4 23:45:00.956265 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Sep 4 23:45:00.956355 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 4 23:45:00.959407 systemd[1]: Stopped target paths.target - Path Units.
Sep 4 23:45:00.963939 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Sep 4 23:45:00.973715 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 4 23:45:00.977217 systemd[1]: Stopped target slices.target - Slice Units.
Sep 4 23:45:00.979489 systemd[1]: Stopped target sockets.target - Socket Units.
Sep 4 23:45:00.985471 systemd[1]: iscsid.socket: Deactivated successfully.
Sep 4 23:45:00.985559 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Sep 4 23:45:00.988105 systemd[1]: iscsiuio.socket: Deactivated successfully.
Sep 4 23:45:00.988174 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 4 23:45:00.990849 systemd[1]: ignition-setup.service: Deactivated successfully.
Sep 4 23:45:00.990933 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Sep 4 23:45:00.993624 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Sep 4 23:45:00.993702 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Sep 4 23:45:00.996692 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Sep 4 23:45:00.999453 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Sep 4 23:45:01.005798 systemd[1]: sysroot-boot.service: Deactivated successfully.
Sep 4 23:45:01.006014 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Sep 4 23:45:01.054029 systemd[1]: systemd-resolved.service: Deactivated successfully.
Sep 4 23:45:01.056633 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Sep 4 23:45:01.066460 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Sep 4 23:45:01.071793 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Sep 4 23:45:01.071970 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Sep 4 23:45:01.077312 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Sep 4 23:45:01.077519 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 4 23:45:01.090982 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Sep 4 23:45:01.091889 systemd[1]: systemd-networkd.service: Deactivated successfully.
Sep 4 23:45:01.092145 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Sep 4 23:45:01.101618 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Sep 4 23:45:01.102662 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Sep 4 23:45:01.102774 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Sep 4 23:45:01.125264 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Sep 4 23:45:01.128116 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Sep 4 23:45:01.128262 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 4 23:45:01.132057 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Sep 4 23:45:01.132200 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Sep 4 23:45:01.149504 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Sep 4 23:45:01.149626 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Sep 4 23:45:01.152455 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 4 23:45:01.160430 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Sep 4 23:45:01.185053 systemd[1]: network-cleanup.service: Deactivated successfully.
Sep 4 23:45:01.185371 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Sep 4 23:45:01.202462 systemd[1]: systemd-udevd.service: Deactivated successfully.
Sep 4 23:45:01.202849 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 4 23:45:01.212408 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Sep 4 23:45:01.212534 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Sep 4 23:45:01.215444 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Sep 4 23:45:01.215531 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 4 23:45:01.218400 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Sep 4 23:45:01.218529 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Sep 4 23:45:01.226023 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Sep 4 23:45:01.226186 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Sep 4 23:45:01.247599 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 4 23:45:01.247740 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 4 23:45:01.260468 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Sep 4 23:45:01.263485 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Sep 4 23:45:01.263637 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 4 23:45:01.271904 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 4 23:45:01.272037 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 4 23:45:01.305625 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Sep 4 23:45:01.306113 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Sep 4 23:45:01.315156 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Sep 4 23:45:01.328562 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Sep 4 23:45:01.347200 systemd[1]: Switching root.
Sep 4 23:45:01.418168 systemd-journald[250]: Journal stopped
Sep 4 23:45:04.144656 systemd-journald[250]: Received SIGTERM from PID 1 (systemd).
Sep 4 23:45:04.144785 kernel: SELinux: policy capability network_peer_controls=1
Sep 4 23:45:04.144830 kernel: SELinux: policy capability open_perms=1
Sep 4 23:45:04.144862 kernel: SELinux: policy capability extended_socket_class=1
Sep 4 23:45:04.144891 kernel: SELinux: policy capability always_check_network=0
Sep 4 23:45:04.144920 kernel: SELinux: policy capability cgroup_seclabel=1
Sep 4 23:45:04.144953 kernel: SELinux: policy capability nnp_nosuid_transition=1
Sep 4 23:45:04.144985 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Sep 4 23:45:04.145024 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Sep 4 23:45:04.145056 kernel: audit: type=1403 audit(1757029501.825:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Sep 4 23:45:04.145139 systemd[1]: Successfully loaded SELinux policy in 75.482ms.
Sep 4 23:45:04.145187 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 27.788ms.
Sep 4 23:45:04.145223 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Sep 4 23:45:04.145253 systemd[1]: Detected virtualization amazon.
Sep 4 23:45:04.145284 systemd[1]: Detected architecture arm64.
Sep 4 23:45:04.145318 systemd[1]: Detected first boot.
Sep 4 23:45:04.145349 systemd[1]: Initializing machine ID from VM UUID.
Sep 4 23:45:04.145378 zram_generator::config[1406]: No configuration found.
Sep 4 23:45:04.145409 kernel: NET: Registered PF_VSOCK protocol family
Sep 4 23:45:04.145439 systemd[1]: Populated /etc with preset unit settings.
Sep 4 23:45:04.145471 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Sep 4 23:45:04.145503 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Sep 4 23:45:04.145534 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Sep 4 23:45:04.145570 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Sep 4 23:45:04.145602 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Sep 4 23:45:04.145634 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Sep 4 23:45:04.145663 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Sep 4 23:45:04.145694 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Sep 4 23:45:04.145723 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Sep 4 23:45:04.145763 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Sep 4 23:45:04.145796 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Sep 4 23:45:04.145825 systemd[1]: Created slice user.slice - User and Session Slice.
Sep 4 23:45:04.145858 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 4 23:45:04.145890 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 4 23:45:04.145919 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Sep 4 23:45:04.145948 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Sep 4 23:45:04.145979 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Sep 4 23:45:04.146011 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 4 23:45:04.146042 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Sep 4 23:45:04.146121 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 4 23:45:04.146157 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Sep 4 23:45:04.146193 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Sep 4 23:45:04.146224 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Sep 4 23:45:04.146255 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Sep 4 23:45:04.146286 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 4 23:45:04.146317 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 4 23:45:04.146348 systemd[1]: Reached target slices.target - Slice Units.
Sep 4 23:45:04.146378 systemd[1]: Reached target swap.target - Swaps.
Sep 4 23:45:04.146409 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Sep 4 23:45:04.146442 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Sep 4 23:45:04.146471 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Sep 4 23:45:04.146500 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 4 23:45:04.146531 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 4 23:45:04.146560 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 4 23:45:04.146588 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Sep 4 23:45:04.146618 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Sep 4 23:45:04.146647 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Sep 4 23:45:04.146678 systemd[1]: Mounting media.mount - External Media Directory...
Sep 4 23:45:04.146713 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Sep 4 23:45:04.146742 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Sep 4 23:45:04.146773 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Sep 4 23:45:04.146803 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Sep 4 23:45:04.146833 systemd[1]: Reached target machines.target - Containers.
Sep 4 23:45:04.146863 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Sep 4 23:45:04.146895 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 4 23:45:04.146925 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 4 23:45:04.146959 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Sep 4 23:45:04.147000 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 4 23:45:04.147030 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 4 23:45:04.147060 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 4 23:45:04.147129 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Sep 4 23:45:04.147160 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 4 23:45:04.147189 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Sep 4 23:45:04.147220 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Sep 4 23:45:04.147249 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Sep 4 23:45:04.147283 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Sep 4 23:45:04.147312 systemd[1]: Stopped systemd-fsck-usr.service.
Sep 4 23:45:04.147342 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 4 23:45:04.147371 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 4 23:45:04.147398 kernel: fuse: init (API version 7.39)
Sep 4 23:45:04.147428 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 4 23:45:04.147457 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Sep 4 23:45:04.147486 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Sep 4 23:45:04.147520 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Sep 4 23:45:04.147559 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 4 23:45:04.147590 systemd[1]: verity-setup.service: Deactivated successfully.
Sep 4 23:45:04.147621 systemd[1]: Stopped verity-setup.service.
Sep 4 23:45:04.147652 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Sep 4 23:45:04.147685 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Sep 4 23:45:04.147715 systemd[1]: Mounted media.mount - External Media Directory.
Sep 4 23:45:04.147747 kernel: loop: module loaded
Sep 4 23:45:04.147775 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Sep 4 23:45:04.147803 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Sep 4 23:45:04.147831 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Sep 4 23:45:04.147860 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 4 23:45:04.147888 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Sep 4 23:45:04.147916 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Sep 4 23:45:04.147948 kernel: ACPI: bus type drm_connector registered
Sep 4 23:45:04.147976 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 4 23:45:04.148007 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 4 23:45:04.148041 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 4 23:45:04.148091 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 4 23:45:04.148129 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 4 23:45:04.148159 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 4 23:45:04.148189 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Sep 4 23:45:04.148218 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Sep 4 23:45:04.148247 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 4 23:45:04.148276 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 4 23:45:04.148304 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 4 23:45:04.148333 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Sep 4 23:45:04.148362 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Sep 4 23:45:04.148399 systemd[1]: Reached target network-pre.target - Preparation for Network.
Sep 4 23:45:04.148430 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Sep 4 23:45:04.148460 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Sep 4 23:45:04.148489 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Sep 4 23:45:04.148520 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 4 23:45:04.148593 systemd-journald[1496]: Collecting audit messages is disabled.
Sep 4 23:45:04.148651 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Sep 4 23:45:04.148687 systemd-journald[1496]: Journal started
Sep 4 23:45:04.148736 systemd-journald[1496]: Runtime Journal (/run/log/journal/ec26565d59003201a1a6aebf0cf213b7) is 8M, max 75.3M, 67.3M free.
Sep 4 23:45:03.261458 systemd[1]: Queued start job for default target multi-user.target.
Sep 4 23:45:03.274405 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6.
Sep 4 23:45:03.275334 systemd[1]: systemd-journald.service: Deactivated successfully.
Sep 4 23:45:04.157135 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Sep 4 23:45:04.178934 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Sep 4 23:45:04.183118 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 4 23:45:04.203131 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Sep 4 23:45:04.203249 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 4 23:45:04.216636 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Sep 4 23:45:04.220352 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 4 23:45:04.231995 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 4 23:45:04.251586 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Sep 4 23:45:04.251690 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 4 23:45:04.253976 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Sep 4 23:45:04.257738 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Sep 4 23:45:04.263962 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Sep 4 23:45:04.274832 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Sep 4 23:45:04.284267 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Sep 4 23:45:04.322472 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Sep 4 23:45:04.340345 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 4 23:45:04.352123 kernel: loop0: detected capacity change from 0 to 207008
Sep 4 23:45:04.372380 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Sep 4 23:45:04.386495 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Sep 4 23:45:04.406791 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Sep 4 23:45:04.415480 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Sep 4 23:45:04.422638 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Sep 4 23:45:04.470335 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 4 23:45:04.488611 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Sep 4 23:45:04.486676 udevadm[1550]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Sep 4 23:45:04.499772 systemd-journald[1496]: Time spent on flushing to /var/log/journal/ec26565d59003201a1a6aebf0cf213b7 is 103.504ms for 927 entries.
Sep 4 23:45:04.499772 systemd-journald[1496]: System Journal (/var/log/journal/ec26565d59003201a1a6aebf0cf213b7) is 8M, max 195.6M, 187.6M free.
Sep 4 23:45:04.622422 systemd-journald[1496]: Received client request to flush runtime journal.
Sep 4 23:45:04.622550 kernel: loop1: detected capacity change from 0 to 113512
Sep 4 23:45:04.516832 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Sep 4 23:45:04.523198 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Sep 4 23:45:04.566585 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Sep 4 23:45:04.580558 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 4 23:45:04.628848 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Sep 4 23:45:04.647632 systemd-tmpfiles[1558]: ACLs are not supported, ignoring.
Sep 4 23:45:04.647674 systemd-tmpfiles[1558]: ACLs are not supported, ignoring.
Sep 4 23:45:04.662825 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 4 23:45:04.699185 kernel: loop2: detected capacity change from 0 to 53784
Sep 4 23:45:04.830111 kernel: loop3: detected capacity change from 0 to 123192
Sep 4 23:45:04.948117 kernel: loop4: detected capacity change from 0 to 207008
Sep 4 23:45:04.981108 kernel: loop5: detected capacity change from 0 to 113512
Sep 4 23:45:05.001119 kernel: loop6: detected capacity change from 0 to 53784
Sep 4 23:45:05.026133 kernel: loop7: detected capacity change from 0 to 123192
Sep 4 23:45:05.037002 (sd-merge)[1566]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'.
Sep 4 23:45:05.038843 (sd-merge)[1566]: Merged extensions into '/usr'.
Sep 4 23:45:05.054773 systemd[1]: Reload requested from client PID 1522 ('systemd-sysext') (unit systemd-sysext.service)...
Sep 4 23:45:05.054814 systemd[1]: Reloading...
Sep 4 23:45:05.181103 zram_generator::config[1592]: No configuration found.
Sep 4 23:45:05.577552 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 4 23:45:05.622053 ldconfig[1518]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Sep 4 23:45:05.765798 systemd[1]: Reloading finished in 710 ms.
Sep 4 23:45:05.796029 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Sep 4 23:45:05.800249 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Sep 4 23:45:05.804694 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Sep 4 23:45:05.823825 systemd[1]: Starting ensure-sysext.service...
Sep 4 23:45:05.838574 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 4 23:45:05.847433 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 4 23:45:05.887293 systemd[1]: Reload requested from client PID 1647 ('systemctl') (unit ensure-sysext.service)...
Sep 4 23:45:05.887319 systemd[1]: Reloading...
Sep 4 23:45:05.933747 systemd-tmpfiles[1648]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Sep 4 23:45:05.940506 systemd-tmpfiles[1648]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Sep 4 23:45:05.945800 systemd-udevd[1649]: Using default interface naming scheme 'v255'.
Sep 4 23:45:05.946361 systemd-tmpfiles[1648]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Sep 4 23:45:05.946985 systemd-tmpfiles[1648]: ACLs are not supported, ignoring.
Sep 4 23:45:05.947217 systemd-tmpfiles[1648]: ACLs are not supported, ignoring.
Sep 4 23:45:05.980679 systemd-tmpfiles[1648]: Detected autofs mount point /boot during canonicalization of boot.
Sep 4 23:45:05.980713 systemd-tmpfiles[1648]: Skipping /boot
Sep 4 23:45:06.084667 systemd-tmpfiles[1648]: Detected autofs mount point /boot during canonicalization of boot.
Sep 4 23:45:06.084701 systemd-tmpfiles[1648]: Skipping /boot
Sep 4 23:45:06.158109 zram_generator::config[1699]: No configuration found.
Sep 4 23:45:06.355356 (udev-worker)[1701]: Network interface NamePolicy= disabled on kernel command line.
Sep 4 23:45:06.592023 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 4 23:45:06.733106 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 36 scanned by (udev-worker) (1704)
Sep 4 23:45:06.844949 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Sep 4 23:45:06.845863 systemd[1]: Reloading finished in 957 ms.
Sep 4 23:45:06.867610 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 4 23:45:06.908947 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 4 23:45:06.976361 systemd[1]: Finished ensure-sysext.service.
Sep 4 23:45:07.041487 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Sep 4 23:45:07.047376 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Sep 4 23:45:07.075494 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Sep 4 23:45:07.096428 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Sep 4 23:45:07.104635 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 4 23:45:07.119471 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Sep 4 23:45:07.129376 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 4 23:45:07.140497 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 4 23:45:07.151475 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 4 23:45:07.164698 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 4 23:45:07.168141 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 4 23:45:07.176578 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Sep 4 23:45:07.181336 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 4 23:45:07.186477 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Sep 4 23:45:07.200463 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 4 23:45:07.212444 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 4 23:45:07.216509 systemd[1]: Reached target time-set.target - System Time Set.
Sep 4 23:45:07.224598 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Sep 4 23:45:07.231560 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 4 23:45:07.242113 lvm[1849]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Sep 4 23:45:07.243222 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 4 23:45:07.247254 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 4 23:45:07.338213 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Sep 4 23:45:07.354176 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Sep 4 23:45:07.372925 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 4 23:45:07.373505 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 4 23:45:07.389325 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 4 23:45:07.389771 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 4 23:45:07.390770 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 4 23:45:07.391295 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 4 23:45:07.392027 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Sep 4 23:45:07.415745 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 4 23:45:07.433612 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Sep 4 23:45:07.433804 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 4 23:45:07.434903 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 4 23:45:07.438267 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Sep 4 23:45:07.452987 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Sep 4 23:45:07.455242 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Sep 4 23:45:07.484308 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Sep 4 23:45:07.490799 lvm[1887]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Sep 4 23:45:07.491792 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Sep 4 23:45:07.492477 augenrules[1893]: No rules
Sep 4 23:45:07.498406 systemd[1]: audit-rules.service: Deactivated successfully.
Sep 4 23:45:07.501179 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Sep 4 23:45:07.530906 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Sep 4 23:45:07.558688 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Sep 4 23:45:07.588848 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 4 23:45:07.602463 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Sep 4 23:45:07.725494 systemd-networkd[1864]: lo: Link UP
Sep 4 23:45:07.726038 systemd-networkd[1864]: lo: Gained carrier
Sep 4 23:45:07.729682 systemd-networkd[1864]: Enumeration completed
Sep 4 23:45:07.730126 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 4 23:45:07.735912 systemd-networkd[1864]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 4 23:45:07.735921 systemd-networkd[1864]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 4 23:45:07.740763 systemd-networkd[1864]: eth0: Link UP
Sep 4 23:45:07.741350 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Sep 4 23:45:07.745006 systemd-resolved[1865]: Positive Trust Anchors:
Sep 4 23:45:07.745031 systemd-resolved[1865]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 4 23:45:07.745115 systemd-resolved[1865]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 4 23:45:07.749314 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Sep 4 23:45:07.755041 systemd-networkd[1864]: eth0: Gained carrier
Sep 4 23:45:07.756181 systemd-networkd[1864]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 4 23:45:07.769207 systemd-networkd[1864]: eth0: DHCPv4 address 172.31.17.142/20, gateway 172.31.16.1 acquired from 172.31.16.1
Sep 4 23:45:07.771317 systemd-resolved[1865]: Defaulting to hostname 'linux'.
Sep 4 23:45:07.775535 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 4 23:45:07.778735 systemd[1]: Reached target network.target - Network.
Sep 4 23:45:07.782942 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 4 23:45:07.785923 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 4 23:45:07.792261 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Sep 4 23:45:07.798171 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Sep 4 23:45:07.801849 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Sep 4 23:45:07.804904 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Sep 4 23:45:07.808160 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Sep 4 23:45:07.811985 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Sep 4 23:45:07.812056 systemd[1]: Reached target paths.target - Path Units. Sep 4 23:45:07.817823 systemd[1]: Reached target timers.target - Timer Units. Sep 4 23:45:07.822019 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Sep 4 23:45:07.829655 systemd[1]: Starting docker.socket - Docker Socket for the API... Sep 4 23:45:07.837583 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Sep 4 23:45:07.841507 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Sep 4 23:45:07.845023 systemd[1]: Reached target ssh-access.target - SSH Access Available. Sep 4 23:45:07.866215 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Sep 4 23:45:07.869692 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Sep 4 23:45:07.876184 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Sep 4 23:45:07.879721 systemd[1]: Listening on docker.socket - Docker Socket for the API. Sep 4 23:45:07.883834 systemd[1]: Reached target sockets.target - Socket Units. Sep 4 23:45:07.887036 systemd[1]: Reached target basic.target - Basic System. Sep 4 23:45:07.889728 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Sep 4 23:45:07.889792 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. 
Sep 4 23:45:07.892250 systemd[1]: Starting containerd.service - containerd container runtime... Sep 4 23:45:07.899496 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Sep 4 23:45:07.915439 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Sep 4 23:45:07.923273 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Sep 4 23:45:07.929832 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Sep 4 23:45:07.933255 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Sep 4 23:45:07.943492 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Sep 4 23:45:07.952853 systemd[1]: Started ntpd.service - Network Time Service. Sep 4 23:45:07.960290 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Sep 4 23:45:07.967405 systemd[1]: Starting setup-oem.service - Setup OEM... Sep 4 23:45:07.976466 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Sep 4 23:45:07.985657 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Sep 4 23:45:07.999505 systemd[1]: Starting systemd-logind.service - User Login Management... Sep 4 23:45:08.004476 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Sep 4 23:45:08.007772 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Sep 4 23:45:08.013520 systemd[1]: Starting update-engine.service - Update Engine... Sep 4 23:45:08.023385 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Sep 4 23:45:08.076539 jq[1920]: false Sep 4 23:45:08.090990 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. 
Sep 4 23:45:08.093212 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Sep 4 23:45:08.166641 tar[1939]: linux-arm64/LICENSE Sep 4 23:45:08.167505 tar[1939]: linux-arm64/helm Sep 4 23:45:08.174774 jq[1932]: true Sep 4 23:45:08.192213 systemd[1]: motdgen.service: Deactivated successfully. Sep 4 23:45:08.192729 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Sep 4 23:45:08.199102 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Sep 4 23:45:08.199702 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Sep 4 23:45:08.250225 extend-filesystems[1921]: Found loop4 Sep 4 23:45:08.250225 extend-filesystems[1921]: Found loop5 Sep 4 23:45:08.250225 extend-filesystems[1921]: Found loop6 Sep 4 23:45:08.250225 extend-filesystems[1921]: Found loop7 Sep 4 23:45:08.250225 extend-filesystems[1921]: Found nvme0n1 Sep 4 23:45:08.250225 extend-filesystems[1921]: Found nvme0n1p1 Sep 4 23:45:08.250225 extend-filesystems[1921]: Found nvme0n1p2 Sep 4 23:45:08.250225 extend-filesystems[1921]: Found nvme0n1p3 Sep 4 23:45:08.250225 extend-filesystems[1921]: Found usr Sep 4 23:45:08.250225 extend-filesystems[1921]: Found nvme0n1p4 Sep 4 23:45:08.250225 extend-filesystems[1921]: Found nvme0n1p6 Sep 4 23:45:08.250225 extend-filesystems[1921]: Found nvme0n1p7 Sep 4 23:45:08.447594 jq[1950]: true Sep 4 23:45:08.447815 extend-filesystems[1921]: Found nvme0n1p9 Sep 4 23:45:08.447815 extend-filesystems[1921]: Checking size of /dev/nvme0n1p9 Sep 4 23:45:08.516234 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks Sep 4 23:45:08.263010 (ntainerd)[1951]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Sep 4 23:45:08.271417 dbus-daemon[1919]: [system] SELinux support is enabled Sep 4 23:45:08.517540 coreos-metadata[1918]: Sep 04 23:45:08.480 INFO Putting 
http://169.254.169.254/latest/api/token: Attempt #1 Sep 4 23:45:08.517540 coreos-metadata[1918]: Sep 04 23:45:08.506 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Sep 4 23:45:08.529906 ntpd[1923]: 4 Sep 23:45:08 ntpd[1923]: ntpd 4.2.8p17@1.4004-o Thu Sep 4 21:39:02 UTC 2025 (1): Starting Sep 4 23:45:08.529906 ntpd[1923]: 4 Sep 23:45:08 ntpd[1923]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Sep 4 23:45:08.529906 ntpd[1923]: 4 Sep 23:45:08 ntpd[1923]: ---------------------------------------------------- Sep 4 23:45:08.529906 ntpd[1923]: 4 Sep 23:45:08 ntpd[1923]: ntp-4 is maintained by Network Time Foundation, Sep 4 23:45:08.529906 ntpd[1923]: 4 Sep 23:45:08 ntpd[1923]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Sep 4 23:45:08.529906 ntpd[1923]: 4 Sep 23:45:08 ntpd[1923]: corporation. Support and training for ntp-4 are Sep 4 23:45:08.529906 ntpd[1923]: 4 Sep 23:45:08 ntpd[1923]: available at https://www.nwtime.org/support Sep 4 23:45:08.529906 ntpd[1923]: 4 Sep 23:45:08 ntpd[1923]: ---------------------------------------------------- Sep 4 23:45:08.529906 ntpd[1923]: 4 Sep 23:45:08 ntpd[1923]: proto: precision = 0.096 usec (-23) Sep 4 23:45:08.530888 extend-filesystems[1921]: Resized partition /dev/nvme0n1p9 Sep 4 23:45:08.548205 update_engine[1929]: I20250904 23:45:08.459189 1929 main.cc:92] Flatcar Update Engine starting Sep 4 23:45:08.548205 update_engine[1929]: I20250904 23:45:08.469408 1929 update_check_scheduler.cc:74] Next update check in 4m9s Sep 4 23:45:08.271773 systemd[1]: Started dbus.service - D-Bus System Message Bus. 
Sep 4 23:45:08.338516 dbus-daemon[1919]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1864 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Sep 4 23:45:08.555311 coreos-metadata[1918]: Sep 04 23:45:08.519 INFO Fetch successful Sep 4 23:45:08.555311 coreos-metadata[1918]: Sep 04 23:45:08.519 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Sep 4 23:45:08.555311 coreos-metadata[1918]: Sep 04 23:45:08.524 INFO Fetch successful Sep 4 23:45:08.555311 coreos-metadata[1918]: Sep 04 23:45:08.524 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Sep 4 23:45:08.555311 coreos-metadata[1918]: Sep 04 23:45:08.527 INFO Fetch successful Sep 4 23:45:08.555311 coreos-metadata[1918]: Sep 04 23:45:08.527 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Sep 4 23:45:08.555311 coreos-metadata[1918]: Sep 04 23:45:08.535 INFO Fetch successful Sep 4 23:45:08.555311 coreos-metadata[1918]: Sep 04 23:45:08.537 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Sep 4 23:45:08.555311 coreos-metadata[1918]: Sep 04 23:45:08.547 INFO Fetch failed with 404: resource not found Sep 4 23:45:08.555311 coreos-metadata[1918]: Sep 04 23:45:08.547 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Sep 4 23:45:08.555311 coreos-metadata[1918]: Sep 04 23:45:08.549 INFO Fetch successful Sep 4 23:45:08.555311 coreos-metadata[1918]: Sep 04 23:45:08.550 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Sep 4 23:45:08.555920 ntpd[1923]: 4 Sep 23:45:08 ntpd[1923]: basedate set to 2025-08-23 Sep 4 23:45:08.555920 ntpd[1923]: 4 Sep 23:45:08 ntpd[1923]: gps base set to 2025-08-24 (week 2381) Sep 4 23:45:08.556021 extend-filesystems[1975]: resize2fs 1.47.1 
(20-May-2024) Sep 4 23:45:08.603576 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915 Sep 4 23:45:08.290895 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Sep 4 23:45:08.515616 ntpd[1923]: ntpd 4.2.8p17@1.4004-o Thu Sep 4 21:39:02 UTC 2025 (1): Starting Sep 4 23:45:08.643814 ntpd[1923]: 4 Sep 23:45:08 ntpd[1923]: Listen and drop on 0 v6wildcard [::]:123 Sep 4 23:45:08.643814 ntpd[1923]: 4 Sep 23:45:08 ntpd[1923]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Sep 4 23:45:08.643814 ntpd[1923]: 4 Sep 23:45:08 ntpd[1923]: Listen normally on 2 lo 127.0.0.1:123 Sep 4 23:45:08.643814 ntpd[1923]: 4 Sep 23:45:08 ntpd[1923]: Listen normally on 3 eth0 172.31.17.142:123 Sep 4 23:45:08.643814 ntpd[1923]: 4 Sep 23:45:08 ntpd[1923]: Listen normally on 4 lo [::1]:123 Sep 4 23:45:08.643814 ntpd[1923]: 4 Sep 23:45:08 ntpd[1923]: bind(21) AF_INET6 fe80::438:34ff:fef4:88fb%2#123 flags 0x11 failed: Cannot assign requested address Sep 4 23:45:08.643814 ntpd[1923]: 4 Sep 23:45:08 ntpd[1923]: unable to create socket on eth0 (5) for fe80::438:34ff:fef4:88fb%2#123 Sep 4 23:45:08.643814 ntpd[1923]: 4 Sep 23:45:08 ntpd[1923]: failed to init interface for address fe80::438:34ff:fef4:88fb%2 Sep 4 23:45:08.643814 ntpd[1923]: 4 Sep 23:45:08 ntpd[1923]: Listening on routing socket on fd #21 for interface updates Sep 4 23:45:08.643814 ntpd[1923]: 4 Sep 23:45:08 ntpd[1923]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Sep 4 23:45:08.643814 ntpd[1923]: 4 Sep 23:45:08 ntpd[1923]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Sep 4 23:45:08.644511 extend-filesystems[1975]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Sep 4 23:45:08.644511 extend-filesystems[1975]: old_desc_blocks = 1, new_desc_blocks = 1 Sep 4 23:45:08.644511 extend-filesystems[1975]: The filesystem on 
/dev/nvme0n1p9 is now 1489915 (4k) blocks long. Sep 4 23:45:08.683843 coreos-metadata[1918]: Sep 04 23:45:08.559 INFO Fetch successful Sep 4 23:45:08.683843 coreos-metadata[1918]: Sep 04 23:45:08.559 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Sep 4 23:45:08.683843 coreos-metadata[1918]: Sep 04 23:45:08.565 INFO Fetch successful Sep 4 23:45:08.683843 coreos-metadata[1918]: Sep 04 23:45:08.565 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Sep 4 23:45:08.683843 coreos-metadata[1918]: Sep 04 23:45:08.578 INFO Fetch successful Sep 4 23:45:08.683843 coreos-metadata[1918]: Sep 04 23:45:08.578 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Sep 4 23:45:08.683843 coreos-metadata[1918]: Sep 04 23:45:08.582 INFO Fetch successful Sep 4 23:45:08.290950 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Sep 4 23:45:08.515675 ntpd[1923]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Sep 4 23:45:08.692909 extend-filesystems[1921]: Resized filesystem in /dev/nvme0n1p9 Sep 4 23:45:08.315591 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Sep 4 23:45:08.515703 ntpd[1923]: ---------------------------------------------------- Sep 4 23:45:08.315634 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Sep 4 23:45:08.515801 ntpd[1923]: ntp-4 is maintained by Network Time Foundation, Sep 4 23:45:08.716847 bash[1994]: Updated "/home/core/.ssh/authorized_keys" Sep 4 23:45:08.354672 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Sep 4 23:45:08.515825 ntpd[1923]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Sep 4 23:45:08.364891 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... 
Sep 4 23:45:08.515845 ntpd[1923]: corporation. Support and training for ntp-4 are Sep 4 23:45:08.435185 systemd[1]: Finished setup-oem.service - Setup OEM. Sep 4 23:45:08.515864 ntpd[1923]: available at https://www.nwtime.org/support Sep 4 23:45:08.465918 systemd[1]: Started update-engine.service - Update Engine. Sep 4 23:45:08.515884 ntpd[1923]: ---------------------------------------------------- Sep 4 23:45:08.508617 systemd[1]: Started locksmithd.service - Cluster reboot manager. Sep 4 23:45:08.522478 ntpd[1923]: proto: precision = 0.096 usec (-23) Sep 4 23:45:08.634636 systemd[1]: extend-filesystems.service: Deactivated successfully. Sep 4 23:45:08.541517 ntpd[1923]: basedate set to 2025-08-23 Sep 4 23:45:08.635932 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Sep 4 23:45:08.541558 ntpd[1923]: gps base set to 2025-08-24 (week 2381) Sep 4 23:45:08.719719 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Sep 4 23:45:08.563722 ntpd[1923]: Listen and drop on 0 v6wildcard [::]:123 Sep 4 23:45:08.740930 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. 
Sep 4 23:45:08.563895 ntpd[1923]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Sep 4 23:45:08.579347 ntpd[1923]: Listen normally on 2 lo 127.0.0.1:123 Sep 4 23:45:08.579435 ntpd[1923]: Listen normally on 3 eth0 172.31.17.142:123 Sep 4 23:45:08.579507 ntpd[1923]: Listen normally on 4 lo [::1]:123 Sep 4 23:45:08.579591 ntpd[1923]: bind(21) AF_INET6 fe80::438:34ff:fef4:88fb%2#123 flags 0x11 failed: Cannot assign requested address Sep 4 23:45:08.579632 ntpd[1923]: unable to create socket on eth0 (5) for fe80::438:34ff:fef4:88fb%2#123 Sep 4 23:45:08.579661 ntpd[1923]: failed to init interface for address fe80::438:34ff:fef4:88fb%2 Sep 4 23:45:08.579728 ntpd[1923]: Listening on routing socket on fd #21 for interface updates Sep 4 23:45:08.630644 ntpd[1923]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Sep 4 23:45:08.630707 ntpd[1923]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Sep 4 23:45:08.755999 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Sep 4 23:45:08.770527 systemd[1]: Starting sshkeys.service... Sep 4 23:45:08.855997 systemd-logind[1928]: Watching system buttons on /dev/input/event0 (Power Button) Sep 4 23:45:08.860777 systemd-logind[1928]: Watching system buttons on /dev/input/event1 (Sleep Button) Sep 4 23:45:08.864247 systemd-logind[1928]: New seat seat0. Sep 4 23:45:08.867434 containerd[1951]: time="2025-09-04T23:45:08.864132013Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Sep 4 23:45:08.873582 systemd[1]: Started systemd-logind.service - User Login Management. Sep 4 23:45:08.896144 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. 
Sep 4 23:45:08.919818 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 36 scanned by (udev-worker) (1723) Sep 4 23:45:08.993431 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Sep 4 23:45:09.081928 containerd[1951]: time="2025-09-04T23:45:09.080229190Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Sep 4 23:45:09.087449 systemd-networkd[1864]: eth0: Gained IPv6LL Sep 4 23:45:09.095588 containerd[1951]: time="2025-09-04T23:45:09.093411082Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.103-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Sep 4 23:45:09.095588 containerd[1951]: time="2025-09-04T23:45:09.093494074Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Sep 4 23:45:09.095588 containerd[1951]: time="2025-09-04T23:45:09.093535246Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Sep 4 23:45:09.095588 containerd[1951]: time="2025-09-04T23:45:09.093903274Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Sep 4 23:45:09.095588 containerd[1951]: time="2025-09-04T23:45:09.093959386Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Sep 4 23:45:09.095588 containerd[1951]: time="2025-09-04T23:45:09.094168666Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Sep 4 23:45:09.095588 containerd[1951]: time="2025-09-04T23:45:09.094211902Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
type=io.containerd.snapshotter.v1 Sep 4 23:45:09.095588 containerd[1951]: time="2025-09-04T23:45:09.094650334Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 4 23:45:09.095588 containerd[1951]: time="2025-09-04T23:45:09.094694302Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Sep 4 23:45:09.095588 containerd[1951]: time="2025-09-04T23:45:09.094727578Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Sep 4 23:45:09.095588 containerd[1951]: time="2025-09-04T23:45:09.094753750Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Sep 4 23:45:09.096200 containerd[1951]: time="2025-09-04T23:45:09.094978054Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Sep 4 23:45:09.100921 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Sep 4 23:45:09.108357 containerd[1951]: time="2025-09-04T23:45:09.107119294Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Sep 4 23:45:09.108357 containerd[1951]: time="2025-09-04T23:45:09.107491054Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 4 23:45:09.108357 containerd[1951]: time="2025-09-04T23:45:09.107535886Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." 
type=io.containerd.content.v1 Sep 4 23:45:09.108357 containerd[1951]: time="2025-09-04T23:45:09.107786050Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Sep 4 23:45:09.108357 containerd[1951]: time="2025-09-04T23:45:09.107929390Z" level=info msg="metadata content store policy set" policy=shared Sep 4 23:45:09.109445 systemd[1]: Reached target network-online.target - Network is Online. Sep 4 23:45:09.124682 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. Sep 4 23:45:09.138055 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 23:45:09.152857 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Sep 4 23:45:09.161458 containerd[1951]: time="2025-09-04T23:45:09.161186866Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Sep 4 23:45:09.163114 containerd[1951]: time="2025-09-04T23:45:09.161808826Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Sep 4 23:45:09.163114 containerd[1951]: time="2025-09-04T23:45:09.161909830Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Sep 4 23:45:09.163114 containerd[1951]: time="2025-09-04T23:45:09.161955118Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Sep 4 23:45:09.163114 containerd[1951]: time="2025-09-04T23:45:09.162019990Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Sep 4 23:45:09.164606 containerd[1951]: time="2025-09-04T23:45:09.163379650Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Sep 4 23:45:09.173143 containerd[1951]: time="2025-09-04T23:45:09.169400674Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." 
type=io.containerd.runtime.v2 Sep 4 23:45:09.173143 containerd[1951]: time="2025-09-04T23:45:09.169870330Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Sep 4 23:45:09.173143 containerd[1951]: time="2025-09-04T23:45:09.170184694Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Sep 4 23:45:09.173143 containerd[1951]: time="2025-09-04T23:45:09.170277442Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Sep 4 23:45:09.173143 containerd[1951]: time="2025-09-04T23:45:09.170802886Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Sep 4 23:45:09.173143 containerd[1951]: time="2025-09-04T23:45:09.170843398Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Sep 4 23:45:09.173143 containerd[1951]: time="2025-09-04T23:45:09.170912110Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Sep 4 23:45:09.173538 containerd[1951]: time="2025-09-04T23:45:09.173175454Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Sep 4 23:45:09.173538 containerd[1951]: time="2025-09-04T23:45:09.173255194Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Sep 4 23:45:09.173538 containerd[1951]: time="2025-09-04T23:45:09.173299510Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Sep 4 23:45:09.173538 containerd[1951]: time="2025-09-04T23:45:09.173332174Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." 
type=io.containerd.service.v1 Sep 4 23:45:09.173538 containerd[1951]: time="2025-09-04T23:45:09.173362414Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Sep 4 23:45:09.173538 containerd[1951]: time="2025-09-04T23:45:09.173407714Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Sep 4 23:45:09.173538 containerd[1951]: time="2025-09-04T23:45:09.173440762Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Sep 4 23:45:09.173538 containerd[1951]: time="2025-09-04T23:45:09.173476498Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Sep 4 23:45:09.173538 containerd[1951]: time="2025-09-04T23:45:09.173511226Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Sep 4 23:45:09.173538 containerd[1951]: time="2025-09-04T23:45:09.173541862Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Sep 4 23:45:09.173960 containerd[1951]: time="2025-09-04T23:45:09.173573506Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Sep 4 23:45:09.173960 containerd[1951]: time="2025-09-04T23:45:09.173602414Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Sep 4 23:45:09.173960 containerd[1951]: time="2025-09-04T23:45:09.173634010Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Sep 4 23:45:09.173960 containerd[1951]: time="2025-09-04T23:45:09.173665546Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Sep 4 23:45:09.173960 containerd[1951]: time="2025-09-04T23:45:09.173726998Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." 
type=io.containerd.grpc.v1 Sep 4 23:45:09.173960 containerd[1951]: time="2025-09-04T23:45:09.173761378Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Sep 4 23:45:09.173960 containerd[1951]: time="2025-09-04T23:45:09.173790778Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Sep 4 23:45:09.173960 containerd[1951]: time="2025-09-04T23:45:09.173820286Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Sep 4 23:45:09.173960 containerd[1951]: time="2025-09-04T23:45:09.173856946Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Sep 4 23:45:09.173960 containerd[1951]: time="2025-09-04T23:45:09.173908990Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Sep 4 23:45:09.173960 containerd[1951]: time="2025-09-04T23:45:09.173944702Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Sep 4 23:45:09.174457 containerd[1951]: time="2025-09-04T23:45:09.173975050Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Sep 4 23:45:09.179668 containerd[1951]: time="2025-09-04T23:45:09.174574702Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Sep 4 23:45:09.179668 containerd[1951]: time="2025-09-04T23:45:09.175361434Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Sep 4 23:45:09.179668 containerd[1951]: time="2025-09-04T23:45:09.175440706Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." 
type=io.containerd.internal.v1 Sep 4 23:45:09.179668 containerd[1951]: time="2025-09-04T23:45:09.175479922Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Sep 4 23:45:09.179668 containerd[1951]: time="2025-09-04T23:45:09.175540294Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Sep 4 23:45:09.179668 containerd[1951]: time="2025-09-04T23:45:09.175575454Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Sep 4 23:45:09.179668 containerd[1951]: time="2025-09-04T23:45:09.175630858Z" level=info msg="NRI interface is disabled by configuration." Sep 4 23:45:09.179668 containerd[1951]: time="2025-09-04T23:45:09.175661830Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Sep 4 23:45:09.180175 containerd[1951]: time="2025-09-04T23:45:09.179668534Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: 
SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Sep 4 23:45:09.180175 containerd[1951]: time="2025-09-04T23:45:09.179837350Z" level=info msg="Connect containerd service" Sep 4 23:45:09.180175 containerd[1951]: time="2025-09-04T23:45:09.179955910Z" level=info msg="using legacy CRI server" Sep 4 23:45:09.180175 containerd[1951]: time="2025-09-04T23:45:09.179981374Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Sep 4 23:45:09.183912 containerd[1951]: time="2025-09-04T23:45:09.180771202Z" 
level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Sep 4 23:45:09.189113 containerd[1951]: time="2025-09-04T23:45:09.184850602Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 4 23:45:09.189113 containerd[1951]: time="2025-09-04T23:45:09.185608930Z" level=info msg="Start subscribing containerd event" Sep 4 23:45:09.189113 containerd[1951]: time="2025-09-04T23:45:09.185733418Z" level=info msg="Start recovering state" Sep 4 23:45:09.189113 containerd[1951]: time="2025-09-04T23:45:09.185900518Z" level=info msg="Start event monitor" Sep 4 23:45:09.189113 containerd[1951]: time="2025-09-04T23:45:09.185936458Z" level=info msg="Start snapshots syncer" Sep 4 23:45:09.189113 containerd[1951]: time="2025-09-04T23:45:09.185968306Z" level=info msg="Start cni network conf syncer for default" Sep 4 23:45:09.189113 containerd[1951]: time="2025-09-04T23:45:09.185992858Z" level=info msg="Start streaming server" Sep 4 23:45:09.192798 containerd[1951]: time="2025-09-04T23:45:09.192690034Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 4 23:45:09.193019 containerd[1951]: time="2025-09-04T23:45:09.192834826Z" level=info msg=serving... address=/run/containerd/containerd.sock Sep 4 23:45:09.197810 containerd[1951]: time="2025-09-04T23:45:09.196177138Z" level=info msg="containerd successfully booted in 0.339920s" Sep 4 23:45:09.208110 systemd[1]: Started containerd.service - containerd container runtime. Sep 4 23:45:09.227878 systemd[1]: Started systemd-hostnamed.service - Hostname Service. 
Sep 4 23:45:09.248656 dbus-daemon[1919]: [system] Successfully activated service 'org.freedesktop.hostname1' Sep 4 23:45:09.254406 dbus-daemon[1919]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.5' (uid=0 pid=1967 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Sep 4 23:45:09.261567 systemd[1]: Starting polkit.service - Authorization Manager... Sep 4 23:45:09.305757 coreos-metadata[2012]: Sep 04 23:45:09.304 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Sep 4 23:45:09.313094 coreos-metadata[2012]: Sep 04 23:45:09.309 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Sep 4 23:45:09.314258 coreos-metadata[2012]: Sep 04 23:45:09.313 INFO Fetch successful Sep 4 23:45:09.314258 coreos-metadata[2012]: Sep 04 23:45:09.314 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Sep 4 23:45:09.324058 coreos-metadata[2012]: Sep 04 23:45:09.322 INFO Fetch successful Sep 4 23:45:09.327451 unknown[2012]: wrote ssh authorized keys file for user: core Sep 4 23:45:09.416104 update-ssh-keys[2070]: Updated "/home/core/.ssh/authorized_keys" Sep 4 23:45:09.420470 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Sep 4 23:45:09.425059 polkitd[2051]: Started polkitd version 121 Sep 4 23:45:09.439436 systemd[1]: Finished sshkeys.service. Sep 4 23:45:09.445889 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Sep 4 23:45:09.466103 amazon-ssm-agent[2036]: Initializing new seelog logger Sep 4 23:45:09.472109 amazon-ssm-agent[2036]: New Seelog Logger Creation Complete Sep 4 23:45:09.472109 amazon-ssm-agent[2036]: 2025/09/04 23:45:09 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Sep 4 23:45:09.472109 amazon-ssm-agent[2036]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. 
Sep 4 23:45:09.474100 amazon-ssm-agent[2036]: 2025/09/04 23:45:09 processing appconfig overrides Sep 4 23:45:09.476700 polkitd[2051]: Loading rules from directory /etc/polkit-1/rules.d Sep 4 23:45:09.477175 polkitd[2051]: Loading rules from directory /usr/share/polkit-1/rules.d Sep 4 23:45:09.479380 amazon-ssm-agent[2036]: 2025/09/04 23:45:09 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Sep 4 23:45:09.479380 amazon-ssm-agent[2036]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Sep 4 23:45:09.479380 amazon-ssm-agent[2036]: 2025/09/04 23:45:09 processing appconfig overrides Sep 4 23:45:09.479380 amazon-ssm-agent[2036]: 2025/09/04 23:45:09 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Sep 4 23:45:09.479380 amazon-ssm-agent[2036]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Sep 4 23:45:09.479380 amazon-ssm-agent[2036]: 2025/09/04 23:45:09 processing appconfig overrides Sep 4 23:45:09.481883 polkitd[2051]: Finished loading, compiling and executing 2 rules Sep 4 23:45:09.485402 amazon-ssm-agent[2036]: 2025-09-04 23:45:09 INFO Proxy environment variables: Sep 4 23:45:09.486552 dbus-daemon[1919]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Sep 4 23:45:09.487029 systemd[1]: Started polkit.service - Authorization Manager. Sep 4 23:45:09.487567 polkitd[2051]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Sep 4 23:45:09.509187 amazon-ssm-agent[2036]: 2025/09/04 23:45:09 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Sep 4 23:45:09.509187 amazon-ssm-agent[2036]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Sep 4 23:45:09.509187 amazon-ssm-agent[2036]: 2025/09/04 23:45:09 processing appconfig overrides Sep 4 23:45:09.553830 systemd-hostnamed[1967]: Hostname set to (transient) Sep 4 23:45:09.554050 systemd-resolved[1865]: System hostname changed to 'ip-172-31-17-142'. 
Sep 4 23:45:09.591119 amazon-ssm-agent[2036]: 2025-09-04 23:45:09 INFO no_proxy: Sep 4 23:45:09.621787 locksmithd[1983]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 4 23:45:09.690019 amazon-ssm-agent[2036]: 2025-09-04 23:45:09 INFO https_proxy: Sep 4 23:45:09.800178 amazon-ssm-agent[2036]: 2025-09-04 23:45:09 INFO http_proxy: Sep 4 23:45:09.898609 amazon-ssm-agent[2036]: 2025-09-04 23:45:09 INFO Checking if agent identity type OnPrem can be assumed Sep 4 23:45:09.996493 amazon-ssm-agent[2036]: 2025-09-04 23:45:09 INFO Checking if agent identity type EC2 can be assumed Sep 4 23:45:10.095248 amazon-ssm-agent[2036]: 2025-09-04 23:45:09 INFO Agent will take identity from EC2 Sep 4 23:45:10.196212 amazon-ssm-agent[2036]: 2025-09-04 23:45:09 INFO [amazon-ssm-agent] using named pipe channel for IPC Sep 4 23:45:10.299103 amazon-ssm-agent[2036]: 2025-09-04 23:45:09 INFO [amazon-ssm-agent] using named pipe channel for IPC Sep 4 23:45:10.399862 amazon-ssm-agent[2036]: 2025-09-04 23:45:09 INFO [amazon-ssm-agent] using named pipe channel for IPC Sep 4 23:45:10.500463 amazon-ssm-agent[2036]: 2025-09-04 23:45:09 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0 Sep 4 23:45:10.600993 amazon-ssm-agent[2036]: 2025-09-04 23:45:09 INFO [amazon-ssm-agent] OS: linux, Arch: arm64 Sep 4 23:45:10.705096 amazon-ssm-agent[2036]: 2025-09-04 23:45:09 INFO [amazon-ssm-agent] Starting Core Agent Sep 4 23:45:10.804230 amazon-ssm-agent[2036]: 2025-09-04 23:45:09 INFO [amazon-ssm-agent] registrar detected. Attempting registration Sep 4 23:45:10.856618 tar[1939]: linux-arm64/README.md Sep 4 23:45:10.881203 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. 
Sep 4 23:45:10.904514 amazon-ssm-agent[2036]: 2025-09-04 23:45:09 INFO [Registrar] Starting registrar module Sep 4 23:45:11.006674 amazon-ssm-agent[2036]: 2025-09-04 23:45:09 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration Sep 4 23:45:11.143646 sshd_keygen[1966]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 4 23:45:11.188500 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Sep 4 23:45:11.208284 systemd[1]: Starting issuegen.service - Generate /run/issue... Sep 4 23:45:11.220610 systemd[1]: Started sshd@0-172.31.17.142:22-139.178.89.65:58682.service - OpenSSH per-connection server daemon (139.178.89.65:58682). Sep 4 23:45:11.250870 systemd[1]: issuegen.service: Deactivated successfully. Sep 4 23:45:11.253114 systemd[1]: Finished issuegen.service - Generate /run/issue. Sep 4 23:45:11.269631 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Sep 4 23:45:11.314825 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Sep 4 23:45:11.333701 systemd[1]: Started getty@tty1.service - Getty on tty1. Sep 4 23:45:11.347665 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Sep 4 23:45:11.352191 systemd[1]: Reached target getty.target - Login Prompts. Sep 4 23:45:11.496931 amazon-ssm-agent[2036]: 2025-09-04 23:45:11 INFO [EC2Identity] EC2 registration was successful. Sep 4 23:45:11.505221 sshd[2157]: Accepted publickey for core from 139.178.89.65 port 58682 ssh2: RSA SHA256:wIWgASbTOEOY+RJ7C8r7IgT0+0t1r6NIK7DD/aRqsNo Sep 4 23:45:11.510330 sshd-session[2157]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 23:45:11.517861 ntpd[1923]: Listen normally on 6 eth0 [fe80::438:34ff:fef4:88fb%2]:123 Sep 4 23:45:11.518364 ntpd[1923]: 4 Sep 23:45:11 ntpd[1923]: Listen normally on 6 eth0 [fe80::438:34ff:fef4:88fb%2]:123 Sep 4 23:45:11.529057 systemd[1]: Created slice user-500.slice - User Slice of UID 500. 
Sep 4 23:45:11.538949 amazon-ssm-agent[2036]: 2025-09-04 23:45:11 INFO [CredentialRefresher] credentialRefresher has started Sep 4 23:45:11.538949 amazon-ssm-agent[2036]: 2025-09-04 23:45:11 INFO [CredentialRefresher] Starting credentials refresher loop Sep 4 23:45:11.538949 amazon-ssm-agent[2036]: 2025-09-04 23:45:11 INFO EC2RoleProvider Successfully connected with instance profile role credentials Sep 4 23:45:11.539901 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Sep 4 23:45:11.564430 systemd-logind[1928]: New session 1 of user core. Sep 4 23:45:11.581220 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Sep 4 23:45:11.596975 amazon-ssm-agent[2036]: 2025-09-04 23:45:11 INFO [CredentialRefresher] Next credential rotation will be in 32.4499834877 minutes Sep 4 23:45:11.597813 systemd[1]: Starting user@500.service - User Manager for UID 500... Sep 4 23:45:11.628454 (systemd)[2168]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 4 23:45:11.635931 systemd-logind[1928]: New session c1 of user core. Sep 4 23:45:11.939453 systemd[2168]: Queued start job for default target default.target. Sep 4 23:45:11.949457 systemd[2168]: Created slice app.slice - User Application Slice. Sep 4 23:45:11.949523 systemd[2168]: Reached target paths.target - Paths. Sep 4 23:45:11.949725 systemd[2168]: Reached target timers.target - Timers. Sep 4 23:45:11.952395 systemd[2168]: Starting dbus.socket - D-Bus User Message Bus Socket... Sep 4 23:45:11.983824 systemd[2168]: Listening on dbus.socket - D-Bus User Message Bus Socket. Sep 4 23:45:11.984093 systemd[2168]: Reached target sockets.target - Sockets. Sep 4 23:45:11.984384 systemd[2168]: Reached target basic.target - Basic System. Sep 4 23:45:11.984557 systemd[1]: Started user@500.service - User Manager for UID 500. Sep 4 23:45:11.987627 systemd[2168]: Reached target default.target - Main User Target. 
Sep 4 23:45:11.987698 systemd[2168]: Startup finished in 338ms. Sep 4 23:45:11.996339 systemd[1]: Started session-1.scope - Session 1 of User core. Sep 4 23:45:12.105398 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 23:45:12.112900 systemd[1]: Reached target multi-user.target - Multi-User System. Sep 4 23:45:12.113813 (kubelet)[2182]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 4 23:45:12.125242 systemd[1]: Startup finished in 1.098s (kernel) + 9.004s (initrd) + 10.374s (userspace) = 20.477s. Sep 4 23:45:12.183308 systemd[1]: Started sshd@1-172.31.17.142:22-139.178.89.65:39040.service - OpenSSH per-connection server daemon (139.178.89.65:39040). Sep 4 23:45:12.377692 sshd[2189]: Accepted publickey for core from 139.178.89.65 port 39040 ssh2: RSA SHA256:wIWgASbTOEOY+RJ7C8r7IgT0+0t1r6NIK7DD/aRqsNo Sep 4 23:45:12.380815 sshd-session[2189]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 23:45:12.391617 systemd-logind[1928]: New session 2 of user core. Sep 4 23:45:12.397400 systemd[1]: Started session-2.scope - Session 2 of User core. Sep 4 23:45:12.525755 sshd[2196]: Connection closed by 139.178.89.65 port 39040 Sep 4 23:45:12.525025 sshd-session[2189]: pam_unix(sshd:session): session closed for user core Sep 4 23:45:12.530719 systemd[1]: sshd@1-172.31.17.142:22-139.178.89.65:39040.service: Deactivated successfully. Sep 4 23:45:12.534840 systemd[1]: session-2.scope: Deactivated successfully. Sep 4 23:45:12.541196 systemd-logind[1928]: Session 2 logged out. Waiting for processes to exit. Sep 4 23:45:12.564881 systemd-logind[1928]: Removed session 2. Sep 4 23:45:12.575623 systemd[1]: Started sshd@2-172.31.17.142:22-139.178.89.65:39056.service - OpenSSH per-connection server daemon (139.178.89.65:39056). 
Sep 4 23:45:12.581540 amazon-ssm-agent[2036]: 2025-09-04 23:45:12 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Sep 4 23:45:12.682118 amazon-ssm-agent[2036]: 2025-09-04 23:45:12 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2204) started Sep 4 23:45:12.770226 sshd[2202]: Accepted publickey for core from 139.178.89.65 port 39056 ssh2: RSA SHA256:wIWgASbTOEOY+RJ7C8r7IgT0+0t1r6NIK7DD/aRqsNo Sep 4 23:45:12.774187 sshd-session[2202]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 23:45:12.784990 amazon-ssm-agent[2036]: 2025-09-04 23:45:12 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Sep 4 23:45:12.789692 systemd-logind[1928]: New session 3 of user core. Sep 4 23:45:12.802360 systemd[1]: Started session-3.scope - Session 3 of User core. Sep 4 23:45:12.928532 sshd[2211]: Connection closed by 139.178.89.65 port 39056 Sep 4 23:45:12.929417 sshd-session[2202]: pam_unix(sshd:session): session closed for user core Sep 4 23:45:12.935666 systemd[1]: sshd@2-172.31.17.142:22-139.178.89.65:39056.service: Deactivated successfully. Sep 4 23:45:12.940629 systemd[1]: session-3.scope: Deactivated successfully. Sep 4 23:45:12.945668 systemd-logind[1928]: Session 3 logged out. Waiting for processes to exit. Sep 4 23:45:12.948220 systemd-logind[1928]: Removed session 3. Sep 4 23:45:12.975649 systemd[1]: Started sshd@3-172.31.17.142:22-139.178.89.65:39072.service - OpenSSH per-connection server daemon (139.178.89.65:39072). 
Sep 4 23:45:13.210476 sshd[2222]: Accepted publickey for core from 139.178.89.65 port 39072 ssh2: RSA SHA256:wIWgASbTOEOY+RJ7C8r7IgT0+0t1r6NIK7DD/aRqsNo Sep 4 23:45:13.212784 sshd-session[2222]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 23:45:13.223998 systemd-logind[1928]: New session 4 of user core. Sep 4 23:45:13.230391 systemd[1]: Started session-4.scope - Session 4 of User core. Sep 4 23:45:13.361808 sshd[2224]: Connection closed by 139.178.89.65 port 39072 Sep 4 23:45:13.362349 sshd-session[2222]: pam_unix(sshd:session): session closed for user core Sep 4 23:45:13.369468 systemd-logind[1928]: Session 4 logged out. Waiting for processes to exit. Sep 4 23:45:13.370866 systemd[1]: sshd@3-172.31.17.142:22-139.178.89.65:39072.service: Deactivated successfully. Sep 4 23:45:13.374819 systemd[1]: session-4.scope: Deactivated successfully. Sep 4 23:45:13.380467 systemd-logind[1928]: Removed session 4. Sep 4 23:45:13.406512 systemd[1]: Started sshd@4-172.31.17.142:22-139.178.89.65:39084.service - OpenSSH per-connection server daemon (139.178.89.65:39084). Sep 4 23:45:13.436578 kubelet[2182]: E0904 23:45:13.436495 2182 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 4 23:45:13.442163 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 4 23:45:13.442659 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 4 23:45:13.443655 systemd[1]: kubelet.service: Consumed 1.510s CPU time, 258.5M memory peak. 
Sep 4 23:45:13.605790 sshd[2230]: Accepted publickey for core from 139.178.89.65 port 39084 ssh2: RSA SHA256:wIWgASbTOEOY+RJ7C8r7IgT0+0t1r6NIK7DD/aRqsNo Sep 4 23:45:13.607922 sshd-session[2230]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 23:45:13.618464 systemd-logind[1928]: New session 5 of user core. Sep 4 23:45:13.627364 systemd[1]: Started session-5.scope - Session 5 of User core. Sep 4 23:45:13.747948 sudo[2234]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Sep 4 23:45:13.748642 sudo[2234]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 4 23:45:13.767831 sudo[2234]: pam_unix(sudo:session): session closed for user root Sep 4 23:45:13.791458 sshd[2233]: Connection closed by 139.178.89.65 port 39084 Sep 4 23:45:13.792572 sshd-session[2230]: pam_unix(sshd:session): session closed for user core Sep 4 23:45:13.800666 systemd[1]: sshd@4-172.31.17.142:22-139.178.89.65:39084.service: Deactivated successfully. Sep 4 23:45:13.804023 systemd[1]: session-5.scope: Deactivated successfully. Sep 4 23:45:13.805678 systemd-logind[1928]: Session 5 logged out. Waiting for processes to exit. Sep 4 23:45:13.807547 systemd-logind[1928]: Removed session 5. Sep 4 23:45:13.831729 systemd[1]: Started sshd@5-172.31.17.142:22-139.178.89.65:39092.service - OpenSSH per-connection server daemon (139.178.89.65:39092). Sep 4 23:45:14.014115 sshd[2240]: Accepted publickey for core from 139.178.89.65 port 39092 ssh2: RSA SHA256:wIWgASbTOEOY+RJ7C8r7IgT0+0t1r6NIK7DD/aRqsNo Sep 4 23:45:14.016887 sshd-session[2240]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 23:45:14.026521 systemd-logind[1928]: New session 6 of user core. Sep 4 23:45:14.037420 systemd[1]: Started session-6.scope - Session 6 of User core. 
Sep 4 23:45:14.142752 sudo[2244]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Sep 4 23:45:14.143558 sudo[2244]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 4 23:45:14.149682 sudo[2244]: pam_unix(sudo:session): session closed for user root Sep 4 23:45:14.159635 sudo[2243]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Sep 4 23:45:14.160327 sudo[2243]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 4 23:45:14.181277 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 4 23:45:14.236643 augenrules[2266]: No rules Sep 4 23:45:14.239364 systemd[1]: audit-rules.service: Deactivated successfully. Sep 4 23:45:14.239871 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 4 23:45:14.242430 sudo[2243]: pam_unix(sudo:session): session closed for user root Sep 4 23:45:14.266766 sshd[2242]: Connection closed by 139.178.89.65 port 39092 Sep 4 23:45:14.267735 sshd-session[2240]: pam_unix(sshd:session): session closed for user core Sep 4 23:45:14.274737 systemd[1]: sshd@5-172.31.17.142:22-139.178.89.65:39092.service: Deactivated successfully. Sep 4 23:45:14.278035 systemd[1]: session-6.scope: Deactivated successfully. Sep 4 23:45:14.280150 systemd-logind[1928]: Session 6 logged out. Waiting for processes to exit. Sep 4 23:45:14.282464 systemd-logind[1928]: Removed session 6. Sep 4 23:45:14.307577 systemd[1]: Started sshd@6-172.31.17.142:22-139.178.89.65:39106.service - OpenSSH per-connection server daemon (139.178.89.65:39106). 
Sep 4 23:45:14.489194 sshd[2275]: Accepted publickey for core from 139.178.89.65 port 39106 ssh2: RSA SHA256:wIWgASbTOEOY+RJ7C8r7IgT0+0t1r6NIK7DD/aRqsNo Sep 4 23:45:14.491610 sshd-session[2275]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 23:45:14.499658 systemd-logind[1928]: New session 7 of user core. Sep 4 23:45:14.510348 systemd[1]: Started session-7.scope - Session 7 of User core. Sep 4 23:45:14.614452 sudo[2278]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 4 23:45:14.615249 sudo[2278]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 4 23:45:15.192577 systemd[1]: Starting docker.service - Docker Application Container Engine... Sep 4 23:45:15.205691 (dockerd)[2295]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Sep 4 23:45:15.609991 dockerd[2295]: time="2025-09-04T23:45:15.609176658Z" level=info msg="Starting up" Sep 4 23:45:15.743107 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport1934402106-merged.mount: Deactivated successfully. Sep 4 23:45:15.833796 dockerd[2295]: time="2025-09-04T23:45:15.833456023Z" level=info msg="Loading containers: start." Sep 4 23:45:16.104136 kernel: Initializing XFRM netlink socket Sep 4 23:45:16.135502 (udev-worker)[2319]: Network interface NamePolicy= disabled on kernel command line. Sep 4 23:45:16.234097 systemd-networkd[1864]: docker0: Link UP Sep 4 23:45:16.270413 dockerd[2295]: time="2025-09-04T23:45:16.270342710Z" level=info msg="Loading containers: done." 
Sep 4 23:45:16.297932 dockerd[2295]: time="2025-09-04T23:45:16.297836630Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Sep 4 23:45:16.298308 dockerd[2295]: time="2025-09-04T23:45:16.298004390Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1 Sep 4 23:45:16.298433 dockerd[2295]: time="2025-09-04T23:45:16.298403841Z" level=info msg="Daemon has completed initialization" Sep 4 23:45:16.353218 dockerd[2295]: time="2025-09-04T23:45:16.353121607Z" level=info msg="API listen on /run/docker.sock" Sep 4 23:45:16.354458 systemd[1]: Started docker.service - Docker Application Container Engine. Sep 4 23:45:16.738715 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck182905555-merged.mount: Deactivated successfully. Sep 4 23:45:17.709558 containerd[1951]: time="2025-09-04T23:45:17.709442114Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.8\"" Sep 4 23:45:18.355931 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3850356250.mount: Deactivated successfully. 
Sep 4 23:45:19.733521 containerd[1951]: time="2025-09-04T23:45:19.733441234Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 23:45:19.735597 containerd[1951]: time="2025-09-04T23:45:19.735509195Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.8: active requests=0, bytes read=26328357" Sep 4 23:45:19.737345 containerd[1951]: time="2025-09-04T23:45:19.736408324Z" level=info msg="ImageCreate event name:\"sha256:61d628eec7e2101b908b4476f1e8e620490a9e8754184860c8eed25183acaa8a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 23:45:19.741959 containerd[1951]: time="2025-09-04T23:45:19.741897852Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:6e1a2f9b24f69ee77d0c0edaf32b31fdbb5e1a613f4476272197e6e1e239050b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 23:45:19.748645 containerd[1951]: time="2025-09-04T23:45:19.748558360Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.8\" with image id \"sha256:61d628eec7e2101b908b4476f1e8e620490a9e8754184860c8eed25183acaa8a\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.8\", repo digest \"registry.k8s.io/kube-apiserver@sha256:6e1a2f9b24f69ee77d0c0edaf32b31fdbb5e1a613f4476272197e6e1e239050b\", size \"26325157\" in 2.039029312s" Sep 4 23:45:19.748870 containerd[1951]: time="2025-09-04T23:45:19.748839061Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.8\" returns image reference \"sha256:61d628eec7e2101b908b4476f1e8e620490a9e8754184860c8eed25183acaa8a\"" Sep 4 23:45:19.751404 containerd[1951]: time="2025-09-04T23:45:19.751357884Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.8\"" Sep 4 23:45:21.277764 containerd[1951]: time="2025-09-04T23:45:21.277701757Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.8\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 23:45:21.281108 containerd[1951]: time="2025-09-04T23:45:21.281013780Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.8: active requests=0, bytes read=22528552" Sep 4 23:45:21.281574 containerd[1951]: time="2025-09-04T23:45:21.281506962Z" level=info msg="ImageCreate event name:\"sha256:f17de36e40fc7cc372be0021b2c58ad61f05d3ebe4d430551bc5e4cd9ed2a061\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 23:45:21.287980 containerd[1951]: time="2025-09-04T23:45:21.287017716Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:8788ccd28ceed9e2e5f8fc31375ef5771df8ea6e518b362c9a06f3cc709cd6c7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 23:45:21.289444 containerd[1951]: time="2025-09-04T23:45:21.289386080Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.8\" with image id \"sha256:f17de36e40fc7cc372be0021b2c58ad61f05d3ebe4d430551bc5e4cd9ed2a061\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.8\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:8788ccd28ceed9e2e5f8fc31375ef5771df8ea6e518b362c9a06f3cc709cd6c7\", size \"24065666\" in 1.537800562s" Sep 4 23:45:21.289865 containerd[1951]: time="2025-09-04T23:45:21.289442028Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.8\" returns image reference \"sha256:f17de36e40fc7cc372be0021b2c58ad61f05d3ebe4d430551bc5e4cd9ed2a061\"" Sep 4 23:45:21.290245 containerd[1951]: time="2025-09-04T23:45:21.290195236Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.8\"" Sep 4 23:45:22.531111 containerd[1951]: time="2025-09-04T23:45:22.531029872Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 23:45:22.533095 containerd[1951]: time="2025-09-04T23:45:22.532974772Z" level=info 
msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.8: active requests=0, bytes read=17483527" Sep 4 23:45:22.533689 containerd[1951]: time="2025-09-04T23:45:22.533612399Z" level=info msg="ImageCreate event name:\"sha256:fe86d26bce3ccd5f0c4057c205b63fde1c8c752778025aea4605ffc3b0f80211\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 23:45:22.540108 containerd[1951]: time="2025-09-04T23:45:22.539030106Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:43c58bcbd1c7812dd19f8bfa5ae11093ebefd28699453ce86fc710869e155cd4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 23:45:22.543430 containerd[1951]: time="2025-09-04T23:45:22.543371392Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.8\" with image id \"sha256:fe86d26bce3ccd5f0c4057c205b63fde1c8c752778025aea4605ffc3b0f80211\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.8\", repo digest \"registry.k8s.io/kube-scheduler@sha256:43c58bcbd1c7812dd19f8bfa5ae11093ebefd28699453ce86fc710869e155cd4\", size \"19020659\" in 1.253008504s" Sep 4 23:45:22.543527 containerd[1951]: time="2025-09-04T23:45:22.543429633Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.8\" returns image reference \"sha256:fe86d26bce3ccd5f0c4057c205b63fde1c8c752778025aea4605ffc3b0f80211\"" Sep 4 23:45:22.544309 containerd[1951]: time="2025-09-04T23:45:22.544250123Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.8\"" Sep 4 23:45:23.679313 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Sep 4 23:45:23.692569 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 23:45:23.901389 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2761662403.mount: Deactivated successfully. Sep 4 23:45:24.133752 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Sep 4 23:45:24.144892 (kubelet)[2562]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 4 23:45:24.237575 kubelet[2562]: E0904 23:45:24.237329 2562 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 4 23:45:24.248345 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 4 23:45:24.248704 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 4 23:45:24.250368 systemd[1]: kubelet.service: Consumed 361ms CPU time, 106.7M memory peak. Sep 4 23:45:24.642482 containerd[1951]: time="2025-09-04T23:45:24.642422888Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 23:45:24.645099 containerd[1951]: time="2025-09-04T23:45:24.644128028Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.8: active requests=0, bytes read=27376724" Sep 4 23:45:24.645099 containerd[1951]: time="2025-09-04T23:45:24.644468926Z" level=info msg="ImageCreate event name:\"sha256:2cf30e39f99f8f4ee1a736a4f3175cc2d8d3f58936d8fa83ec5523658fdc7b8b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 23:45:24.649553 containerd[1951]: time="2025-09-04T23:45:24.649490796Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:adc1335b480ddd833aac3b0bd20f68ff0f3c3cf7a0bd337933b006d9f5cec40a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 23:45:24.651053 containerd[1951]: time="2025-09-04T23:45:24.651007454Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.8\" with image id 
\"sha256:2cf30e39f99f8f4ee1a736a4f3175cc2d8d3f58936d8fa83ec5523658fdc7b8b\", repo tag \"registry.k8s.io/kube-proxy:v1.32.8\", repo digest \"registry.k8s.io/kube-proxy@sha256:adc1335b480ddd833aac3b0bd20f68ff0f3c3cf7a0bd337933b006d9f5cec40a\", size \"27375743\" in 2.10670053s" Sep 4 23:45:24.651244 containerd[1951]: time="2025-09-04T23:45:24.651211508Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.8\" returns image reference \"sha256:2cf30e39f99f8f4ee1a736a4f3175cc2d8d3f58936d8fa83ec5523658fdc7b8b\"" Sep 4 23:45:24.652118 containerd[1951]: time="2025-09-04T23:45:24.652047678Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Sep 4 23:45:25.150329 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2105466633.mount: Deactivated successfully. Sep 4 23:45:26.291201 containerd[1951]: time="2025-09-04T23:45:26.291137593Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 23:45:26.293338 containerd[1951]: time="2025-09-04T23:45:26.293264300Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951622" Sep 4 23:45:26.294105 containerd[1951]: time="2025-09-04T23:45:26.293802625Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 23:45:26.299793 containerd[1951]: time="2025-09-04T23:45:26.299710921Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 23:45:26.302307 containerd[1951]: time="2025-09-04T23:45:26.302247201Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag 
\"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.650115901s" Sep 4 23:45:26.302737 containerd[1951]: time="2025-09-04T23:45:26.302452371Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" Sep 4 23:45:26.303771 containerd[1951]: time="2025-09-04T23:45:26.303556635Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Sep 4 23:45:26.790887 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount36968525.mount: Deactivated successfully. Sep 4 23:45:26.804383 containerd[1951]: time="2025-09-04T23:45:26.804289128Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 23:45:26.806567 containerd[1951]: time="2025-09-04T23:45:26.806209848Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268703" Sep 4 23:45:26.809416 containerd[1951]: time="2025-09-04T23:45:26.808785075Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 23:45:26.814117 containerd[1951]: time="2025-09-04T23:45:26.814036499Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 23:45:26.815975 containerd[1951]: time="2025-09-04T23:45:26.815919532Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest 
\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 512.284606ms" Sep 4 23:45:26.816189 containerd[1951]: time="2025-09-04T23:45:26.816154298Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Sep 4 23:45:26.817208 containerd[1951]: time="2025-09-04T23:45:26.816987130Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Sep 4 23:45:27.408629 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3844451261.mount: Deactivated successfully. Sep 4 23:45:30.018313 containerd[1951]: time="2025-09-04T23:45:30.018211368Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 23:45:30.049304 containerd[1951]: time="2025-09-04T23:45:30.049207355Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=67943165" Sep 4 23:45:30.093218 containerd[1951]: time="2025-09-04T23:45:30.092644715Z" level=info msg="ImageCreate event name:\"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 23:45:30.115255 containerd[1951]: time="2025-09-04T23:45:30.115162880Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 23:45:30.117709 containerd[1951]: time="2025-09-04T23:45:30.117632995Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"67941650\" in 3.30032797s" Sep 4 23:45:30.118126 
containerd[1951]: time="2025-09-04T23:45:30.117910754Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\""
Sep 4 23:45:34.427746 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Sep 4 23:45:34.435578 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 4 23:45:34.835533 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 4 23:45:34.841546 (kubelet)[2707]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 4 23:45:34.929888 kubelet[2707]: E0904 23:45:34.929598 2707 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 4 23:45:34.936512 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 4 23:45:34.936851 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 4 23:45:34.937861 systemd[1]: kubelet.service: Consumed 309ms CPU time, 107.1M memory peak.
Sep 4 23:45:36.728817 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 4 23:45:36.729330 systemd[1]: kubelet.service: Consumed 309ms CPU time, 107.1M memory peak.
Sep 4 23:45:36.759376 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 4 23:45:36.810121 systemd[1]: Reload requested from client PID 2723 ('systemctl') (unit session-7.scope)...
Sep 4 23:45:36.810367 systemd[1]: Reloading...
Sep 4 23:45:37.141211 zram_generator::config[2775]: No configuration found.
Sep 4 23:45:37.418240 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 4 23:45:37.681549 systemd[1]: Reloading finished in 870 ms.
Sep 4 23:45:37.785447 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 4 23:45:37.790438 (kubelet)[2823]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Sep 4 23:45:37.800404 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 4 23:45:37.802000 systemd[1]: kubelet.service: Deactivated successfully.
Sep 4 23:45:37.802621 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 4 23:45:37.802735 systemd[1]: kubelet.service: Consumed 282ms CPU time, 96.1M memory peak.
Sep 4 23:45:37.814724 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 4 23:45:38.165435 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 4 23:45:38.178716 (kubelet)[2835]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Sep 4 23:45:38.262354 kubelet[2835]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 4 23:45:38.262843 kubelet[2835]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Sep 4 23:45:38.262939 kubelet[2835]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 4 23:45:38.263285 kubelet[2835]: I0904 23:45:38.263222 2835 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 4 23:45:39.344172 kubelet[2835]: I0904 23:45:39.342860 2835 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Sep 4 23:45:39.344172 kubelet[2835]: I0904 23:45:39.342919 2835 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 4 23:45:39.344172 kubelet[2835]: I0904 23:45:39.343494 2835 server.go:954] "Client rotation is on, will bootstrap in background" Sep 4 23:45:39.391820 kubelet[2835]: E0904 23:45:39.391739 2835 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.31.17.142:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.17.142:6443: connect: connection refused" logger="UnhandledError" Sep 4 23:45:39.395681 kubelet[2835]: I0904 23:45:39.395379 2835 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 4 23:45:39.409125 kubelet[2835]: E0904 23:45:39.407796 2835 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 4 23:45:39.409125 kubelet[2835]: I0904 23:45:39.407948 2835 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Sep 4 23:45:39.417094 kubelet[2835]: I0904 23:45:39.417031 2835 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 4 23:45:39.418881 kubelet[2835]: I0904 23:45:39.418787 2835 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 4 23:45:39.419450 kubelet[2835]: I0904 23:45:39.419126 2835 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-17-142","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 4 23:45:39.419930 kubelet[2835]: I0904 23:45:39.419893 2835 topology_manager.go:138] "Creating topology manager with none 
policy" Sep 4 23:45:39.420166 kubelet[2835]: I0904 23:45:39.420138 2835 container_manager_linux.go:304] "Creating device plugin manager" Sep 4 23:45:39.420708 kubelet[2835]: I0904 23:45:39.420663 2835 state_mem.go:36] "Initialized new in-memory state store" Sep 4 23:45:39.427600 kubelet[2835]: I0904 23:45:39.427550 2835 kubelet.go:446] "Attempting to sync node with API server" Sep 4 23:45:39.427825 kubelet[2835]: I0904 23:45:39.427801 2835 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 4 23:45:39.427954 kubelet[2835]: I0904 23:45:39.427935 2835 kubelet.go:352] "Adding apiserver pod source" Sep 4 23:45:39.428105 kubelet[2835]: I0904 23:45:39.428049 2835 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 4 23:45:39.435380 kubelet[2835]: W0904 23:45:39.435276 2835 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.17.142:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-17-142&limit=500&resourceVersion=0": dial tcp 172.31.17.142:6443: connect: connection refused Sep 4 23:45:39.435558 kubelet[2835]: E0904 23:45:39.435398 2835 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.17.142:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-17-142&limit=500&resourceVersion=0\": dial tcp 172.31.17.142:6443: connect: connection refused" logger="UnhandledError" Sep 4 23:45:39.436300 kubelet[2835]: W0904 23:45:39.436198 2835 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.17.142:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.17.142:6443: connect: connection refused Sep 4 23:45:39.436300 kubelet[2835]: E0904 23:45:39.436302 2835 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch 
*v1.Service: failed to list *v1.Service: Get \"https://172.31.17.142:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.17.142:6443: connect: connection refused" logger="UnhandledError" Sep 4 23:45:39.436581 kubelet[2835]: I0904 23:45:39.436522 2835 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Sep 4 23:45:39.439138 kubelet[2835]: I0904 23:45:39.437658 2835 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 4 23:45:39.439138 kubelet[2835]: W0904 23:45:39.437928 2835 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Sep 4 23:45:39.440894 kubelet[2835]: I0904 23:45:39.440806 2835 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 4 23:45:39.442206 kubelet[2835]: I0904 23:45:39.440937 2835 server.go:1287] "Started kubelet" Sep 4 23:45:39.454671 kubelet[2835]: E0904 23:45:39.454099 2835 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.17.142:6443/api/v1/namespaces/default/events\": dial tcp 172.31.17.142:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-17-142.1862390e963d7e08 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-17-142,UID:ip-172-31-17-142,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-17-142,},FirstTimestamp:2025-09-04 23:45:39.440852488 +0000 UTC m=+1.254209693,LastTimestamp:2025-09-04 23:45:39.440852488 +0000 UTC m=+1.254209693,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-17-142,}" Sep 4 23:45:39.455969 kubelet[2835]: I0904 23:45:39.455898 2835 
fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 4 23:45:39.457257 kubelet[2835]: I0904 23:45:39.457023 2835 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 4 23:45:39.457983 kubelet[2835]: I0904 23:45:39.457934 2835 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 4 23:45:39.458356 kubelet[2835]: I0904 23:45:39.458296 2835 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Sep 4 23:45:39.460474 kubelet[2835]: I0904 23:45:39.460418 2835 server.go:479] "Adding debug handlers to kubelet server" Sep 4 23:45:39.463363 kubelet[2835]: I0904 23:45:39.463294 2835 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 4 23:45:39.464433 kubelet[2835]: I0904 23:45:39.464358 2835 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 4 23:45:39.466122 kubelet[2835]: E0904 23:45:39.465799 2835 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-17-142\" not found" Sep 4 23:45:39.469369 kubelet[2835]: I0904 23:45:39.468660 2835 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 4 23:45:39.469369 kubelet[2835]: I0904 23:45:39.468797 2835 reconciler.go:26] "Reconciler: start to sync state" Sep 4 23:45:39.470416 kubelet[2835]: E0904 23:45:39.470226 2835 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.17.142:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-17-142?timeout=10s\": dial tcp 172.31.17.142:6443: connect: connection refused" interval="200ms" Sep 4 23:45:39.471470 kubelet[2835]: W0904 23:45:39.471247 2835 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get 
"https://172.31.17.142:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.17.142:6443: connect: connection refused Sep 4 23:45:39.473158 kubelet[2835]: E0904 23:45:39.471531 2835 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.17.142:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.17.142:6443: connect: connection refused" logger="UnhandledError" Sep 4 23:45:39.473158 kubelet[2835]: E0904 23:45:39.471847 2835 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.17.142:6443/api/v1/namespaces/default/events\": dial tcp 172.31.17.142:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-17-142.1862390e963d7e08 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-17-142,UID:ip-172-31-17-142,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-17-142,},FirstTimestamp:2025-09-04 23:45:39.440852488 +0000 UTC m=+1.254209693,LastTimestamp:2025-09-04 23:45:39.440852488 +0000 UTC m=+1.254209693,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-17-142,}" Sep 4 23:45:39.473158 kubelet[2835]: E0904 23:45:39.472309 2835 kubelet.go:1555] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 4 23:45:39.473158 kubelet[2835]: I0904 23:45:39.472889 2835 factory.go:221] Registration of the systemd container factory successfully Sep 4 23:45:39.473158 kubelet[2835]: I0904 23:45:39.473024 2835 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 4 23:45:39.477257 kubelet[2835]: I0904 23:45:39.477218 2835 factory.go:221] Registration of the containerd container factory successfully Sep 4 23:45:39.509126 kubelet[2835]: I0904 23:45:39.509055 2835 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 4 23:45:39.509126 kubelet[2835]: I0904 23:45:39.509117 2835 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 4 23:45:39.510345 kubelet[2835]: I0904 23:45:39.509156 2835 state_mem.go:36] "Initialized new in-memory state store" Sep 4 23:45:39.513158 kubelet[2835]: I0904 23:45:39.512840 2835 policy_none.go:49] "None policy: Start" Sep 4 23:45:39.513158 kubelet[2835]: I0904 23:45:39.512887 2835 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 4 23:45:39.513158 kubelet[2835]: I0904 23:45:39.512915 2835 state_mem.go:35] "Initializing new in-memory state store" Sep 4 23:45:39.524754 kubelet[2835]: I0904 23:45:39.523540 2835 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 4 23:45:39.529264 kubelet[2835]: I0904 23:45:39.528238 2835 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Sep 4 23:45:39.528338 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. 
Sep 4 23:45:39.530294 kubelet[2835]: I0904 23:45:39.529912 2835 status_manager.go:227] "Starting to sync pod status with apiserver" Sep 4 23:45:39.530294 kubelet[2835]: I0904 23:45:39.529996 2835 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Sep 4 23:45:39.530294 kubelet[2835]: I0904 23:45:39.530014 2835 kubelet.go:2382] "Starting kubelet main sync loop" Sep 4 23:45:39.530294 kubelet[2835]: E0904 23:45:39.530160 2835 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 4 23:45:39.538643 kubelet[2835]: W0904 23:45:39.536645 2835 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.17.142:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.17.142:6443: connect: connection refused Sep 4 23:45:39.538643 kubelet[2835]: E0904 23:45:39.536722 2835 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.17.142:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.17.142:6443: connect: connection refused" logger="UnhandledError" Sep 4 23:45:39.552701 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Sep 4 23:45:39.563688 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Sep 4 23:45:39.567802 kubelet[2835]: E0904 23:45:39.567749 2835 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-17-142\" not found" Sep 4 23:45:39.576803 kubelet[2835]: I0904 23:45:39.576662 2835 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 4 23:45:39.577095 kubelet[2835]: I0904 23:45:39.577015 2835 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 4 23:45:39.577221 kubelet[2835]: I0904 23:45:39.577057 2835 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 4 23:45:39.578104 kubelet[2835]: I0904 23:45:39.577770 2835 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 4 23:45:39.580051 kubelet[2835]: E0904 23:45:39.579693 2835 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Sep 4 23:45:39.580051 kubelet[2835]: E0904 23:45:39.579917 2835 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-17-142\" not found" Sep 4 23:45:39.589794 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Sep 4 23:45:39.656437 systemd[1]: Created slice kubepods-burstable-pod499cb7640afccba841314472827a81c8.slice - libcontainer container kubepods-burstable-pod499cb7640afccba841314472827a81c8.slice. 
Sep 4 23:45:39.671664 kubelet[2835]: I0904 23:45:39.670629 2835 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8d0ebe4629871cbe86226a7b05e98693-kubeconfig\") pod \"kube-scheduler-ip-172-31-17-142\" (UID: \"8d0ebe4629871cbe86226a7b05e98693\") " pod="kube-system/kube-scheduler-ip-172-31-17-142" Sep 4 23:45:39.671664 kubelet[2835]: I0904 23:45:39.670699 2835 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/499cb7640afccba841314472827a81c8-k8s-certs\") pod \"kube-apiserver-ip-172-31-17-142\" (UID: \"499cb7640afccba841314472827a81c8\") " pod="kube-system/kube-apiserver-ip-172-31-17-142" Sep 4 23:45:39.671664 kubelet[2835]: I0904 23:45:39.670758 2835 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/499cb7640afccba841314472827a81c8-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-17-142\" (UID: \"499cb7640afccba841314472827a81c8\") " pod="kube-system/kube-apiserver-ip-172-31-17-142" Sep 4 23:45:39.671664 kubelet[2835]: I0904 23:45:39.670809 2835 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/cd0fa01963e616a300a91a327ead4960-k8s-certs\") pod \"kube-controller-manager-ip-172-31-17-142\" (UID: \"cd0fa01963e616a300a91a327ead4960\") " pod="kube-system/kube-controller-manager-ip-172-31-17-142" Sep 4 23:45:39.671664 kubelet[2835]: I0904 23:45:39.670852 2835 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/cd0fa01963e616a300a91a327ead4960-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-17-142\" (UID: 
\"cd0fa01963e616a300a91a327ead4960\") " pod="kube-system/kube-controller-manager-ip-172-31-17-142" Sep 4 23:45:39.672129 kubelet[2835]: I0904 23:45:39.670890 2835 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/499cb7640afccba841314472827a81c8-ca-certs\") pod \"kube-apiserver-ip-172-31-17-142\" (UID: \"499cb7640afccba841314472827a81c8\") " pod="kube-system/kube-apiserver-ip-172-31-17-142" Sep 4 23:45:39.672129 kubelet[2835]: I0904 23:45:39.670946 2835 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/cd0fa01963e616a300a91a327ead4960-ca-certs\") pod \"kube-controller-manager-ip-172-31-17-142\" (UID: \"cd0fa01963e616a300a91a327ead4960\") " pod="kube-system/kube-controller-manager-ip-172-31-17-142" Sep 4 23:45:39.672129 kubelet[2835]: I0904 23:45:39.670982 2835 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/cd0fa01963e616a300a91a327ead4960-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-17-142\" (UID: \"cd0fa01963e616a300a91a327ead4960\") " pod="kube-system/kube-controller-manager-ip-172-31-17-142" Sep 4 23:45:39.672129 kubelet[2835]: I0904 23:45:39.671018 2835 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/cd0fa01963e616a300a91a327ead4960-kubeconfig\") pod \"kube-controller-manager-ip-172-31-17-142\" (UID: \"cd0fa01963e616a300a91a327ead4960\") " pod="kube-system/kube-controller-manager-ip-172-31-17-142" Sep 4 23:45:39.672129 kubelet[2835]: E0904 23:45:39.671584 2835 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.17.142:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-17-142?timeout=10s\": 
dial tcp 172.31.17.142:6443: connect: connection refused" interval="400ms" Sep 4 23:45:39.683012 kubelet[2835]: E0904 23:45:39.682843 2835 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-17-142\" not found" node="ip-172-31-17-142" Sep 4 23:45:39.684192 kubelet[2835]: I0904 23:45:39.683559 2835 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-17-142" Sep 4 23:45:39.685695 kubelet[2835]: E0904 23:45:39.684601 2835 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.17.142:6443/api/v1/nodes\": dial tcp 172.31.17.142:6443: connect: connection refused" node="ip-172-31-17-142" Sep 4 23:45:39.690934 systemd[1]: Created slice kubepods-burstable-pod8d0ebe4629871cbe86226a7b05e98693.slice - libcontainer container kubepods-burstable-pod8d0ebe4629871cbe86226a7b05e98693.slice. Sep 4 23:45:39.695824 kubelet[2835]: E0904 23:45:39.695737 2835 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-17-142\" not found" node="ip-172-31-17-142" Sep 4 23:45:39.700622 systemd[1]: Created slice kubepods-burstable-podcd0fa01963e616a300a91a327ead4960.slice - libcontainer container kubepods-burstable-podcd0fa01963e616a300a91a327ead4960.slice. 
Sep 4 23:45:39.705703 kubelet[2835]: E0904 23:45:39.705612 2835 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-17-142\" not found" node="ip-172-31-17-142" Sep 4 23:45:39.887830 kubelet[2835]: I0904 23:45:39.887314 2835 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-17-142" Sep 4 23:45:39.887830 kubelet[2835]: E0904 23:45:39.887741 2835 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.17.142:6443/api/v1/nodes\": dial tcp 172.31.17.142:6443: connect: connection refused" node="ip-172-31-17-142" Sep 4 23:45:39.987852 containerd[1951]: time="2025-09-04T23:45:39.987769974Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-17-142,Uid:499cb7640afccba841314472827a81c8,Namespace:kube-system,Attempt:0,}" Sep 4 23:45:39.997697 containerd[1951]: time="2025-09-04T23:45:39.997627308Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-17-142,Uid:8d0ebe4629871cbe86226a7b05e98693,Namespace:kube-system,Attempt:0,}" Sep 4 23:45:40.007609 containerd[1951]: time="2025-09-04T23:45:40.007226333Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-17-142,Uid:cd0fa01963e616a300a91a327ead4960,Namespace:kube-system,Attempt:0,}" Sep 4 23:45:40.073378 kubelet[2835]: E0904 23:45:40.073314 2835 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.17.142:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-17-142?timeout=10s\": dial tcp 172.31.17.142:6443: connect: connection refused" interval="800ms" Sep 4 23:45:40.291869 kubelet[2835]: I0904 23:45:40.291405 2835 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-17-142" Sep 4 23:45:40.292016 kubelet[2835]: E0904 23:45:40.291908 2835 kubelet_node_status.go:107] "Unable to register node with API server" err="Post 
\"https://172.31.17.142:6443/api/v1/nodes\": dial tcp 172.31.17.142:6443: connect: connection refused" node="ip-172-31-17-142" Sep 4 23:45:40.524959 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2369350941.mount: Deactivated successfully. Sep 4 23:45:40.531577 containerd[1951]: time="2025-09-04T23:45:40.531479613Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 4 23:45:40.535353 containerd[1951]: time="2025-09-04T23:45:40.535298385Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173" Sep 4 23:45:40.538433 containerd[1951]: time="2025-09-04T23:45:40.538254118Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 4 23:45:40.542356 containerd[1951]: time="2025-09-04T23:45:40.542035960Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 4 23:45:40.543809 containerd[1951]: time="2025-09-04T23:45:40.543583786Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 4 23:45:40.544937 containerd[1951]: time="2025-09-04T23:45:40.544867947Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 4 23:45:40.546196 containerd[1951]: time="2025-09-04T23:45:40.545033798Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 4 23:45:40.552097 containerd[1951]: time="2025-09-04T23:45:40.551379065Z" level=info msg="ImageCreate event 
name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 4 23:45:40.555149 containerd[1951]: time="2025-09-04T23:45:40.555105500Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 567.206474ms" Sep 4 23:45:40.560102 kubelet[2835]: W0904 23:45:40.559526 2835 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.17.142:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.17.142:6443: connect: connection refused Sep 4 23:45:40.560102 kubelet[2835]: E0904 23:45:40.559622 2835 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.17.142:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.17.142:6443: connect: connection refused" logger="UnhandledError" Sep 4 23:45:40.562979 containerd[1951]: time="2025-09-04T23:45:40.562916159Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 565.170208ms" Sep 4 23:45:40.564630 containerd[1951]: time="2025-09-04T23:45:40.564567368Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag 
\"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 557.233653ms" Sep 4 23:45:40.783678 containerd[1951]: time="2025-09-04T23:45:40.783471735Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 23:45:40.783930 containerd[1951]: time="2025-09-04T23:45:40.783601736Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 23:45:40.784223 containerd[1951]: time="2025-09-04T23:45:40.783764730Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 23:45:40.784580 containerd[1951]: time="2025-09-04T23:45:40.784442396Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 23:45:40.792017 containerd[1951]: time="2025-09-04T23:45:40.788617387Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 23:45:40.792197 containerd[1951]: time="2025-09-04T23:45:40.791945618Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 23:45:40.792515 containerd[1951]: time="2025-09-04T23:45:40.791988180Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 23:45:40.793484 containerd[1951]: time="2025-09-04T23:45:40.792518797Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 23:45:40.794085 containerd[1951]: time="2025-09-04T23:45:40.793818446Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 23:45:40.802707 containerd[1951]: time="2025-09-04T23:45:40.793760457Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 23:45:40.803774 containerd[1951]: time="2025-09-04T23:45:40.803468689Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 23:45:40.804423 containerd[1951]: time="2025-09-04T23:45:40.804323300Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 23:45:40.831396 systemd[1]: Started cri-containerd-e69946da84e7a1c1bfcac7ea4334110b63ec0309426a83a9ab38863c4d413774.scope - libcontainer container e69946da84e7a1c1bfcac7ea4334110b63ec0309426a83a9ab38863c4d413774. Sep 4 23:45:40.836522 kubelet[2835]: W0904 23:45:40.835279 2835 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.17.142:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.17.142:6443: connect: connection refused Sep 4 23:45:40.836522 kubelet[2835]: E0904 23:45:40.835404 2835 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.17.142:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.17.142:6443: connect: connection refused" logger="UnhandledError" Sep 4 23:45:40.870417 systemd[1]: Started cri-containerd-65e8a193a5671132ad9bb748953899afbc03665d512665dec799729a5500c117.scope - libcontainer container 65e8a193a5671132ad9bb748953899afbc03665d512665dec799729a5500c117. 
Sep 4 23:45:40.874669 kubelet[2835]: E0904 23:45:40.874582 2835 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.17.142:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-17-142?timeout=10s\": dial tcp 172.31.17.142:6443: connect: connection refused" interval="1.6s" Sep 4 23:45:40.888422 systemd[1]: Started cri-containerd-83184b4a2cee77edc4d2cdae475d910243201e6590d659d0268dae7e678ff4e1.scope - libcontainer container 83184b4a2cee77edc4d2cdae475d910243201e6590d659d0268dae7e678ff4e1. Sep 4 23:45:40.896630 kubelet[2835]: W0904 23:45:40.896534 2835 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.17.142:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-17-142&limit=500&resourceVersion=0": dial tcp 172.31.17.142:6443: connect: connection refused Sep 4 23:45:40.897139 kubelet[2835]: E0904 23:45:40.896960 2835 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.17.142:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-17-142&limit=500&resourceVersion=0\": dial tcp 172.31.17.142:6443: connect: connection refused" logger="UnhandledError" Sep 4 23:45:40.957751 kubelet[2835]: W0904 23:45:40.957130 2835 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.17.142:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.17.142:6443: connect: connection refused Sep 4 23:45:40.957751 kubelet[2835]: E0904 23:45:40.957234 2835 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.17.142:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.17.142:6443: connect: connection refused" 
logger="UnhandledError" Sep 4 23:45:41.018803 containerd[1951]: time="2025-09-04T23:45:41.018562676Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-17-142,Uid:499cb7640afccba841314472827a81c8,Namespace:kube-system,Attempt:0,} returns sandbox id \"e69946da84e7a1c1bfcac7ea4334110b63ec0309426a83a9ab38863c4d413774\"" Sep 4 23:45:41.020152 containerd[1951]: time="2025-09-04T23:45:41.019806245Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-17-142,Uid:8d0ebe4629871cbe86226a7b05e98693,Namespace:kube-system,Attempt:0,} returns sandbox id \"65e8a193a5671132ad9bb748953899afbc03665d512665dec799729a5500c117\"" Sep 4 23:45:41.022032 containerd[1951]: time="2025-09-04T23:45:41.021705090Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-17-142,Uid:cd0fa01963e616a300a91a327ead4960,Namespace:kube-system,Attempt:0,} returns sandbox id \"83184b4a2cee77edc4d2cdae475d910243201e6590d659d0268dae7e678ff4e1\"" Sep 4 23:45:41.030149 containerd[1951]: time="2025-09-04T23:45:41.029465348Z" level=info msg="CreateContainer within sandbox \"65e8a193a5671132ad9bb748953899afbc03665d512665dec799729a5500c117\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 4 23:45:41.031263 containerd[1951]: time="2025-09-04T23:45:41.031212257Z" level=info msg="CreateContainer within sandbox \"e69946da84e7a1c1bfcac7ea4334110b63ec0309426a83a9ab38863c4d413774\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 4 23:45:41.045896 containerd[1951]: time="2025-09-04T23:45:41.045764190Z" level=info msg="CreateContainer within sandbox \"83184b4a2cee77edc4d2cdae475d910243201e6590d659d0268dae7e678ff4e1\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 4 23:45:41.053045 containerd[1951]: time="2025-09-04T23:45:41.052444172Z" level=info msg="CreateContainer within sandbox 
\"65e8a193a5671132ad9bb748953899afbc03665d512665dec799729a5500c117\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"026e25abb85025e9de1af2cc8d5f3388979d82aea21dd518eb7b966384e26058\"" Sep 4 23:45:41.053819 containerd[1951]: time="2025-09-04T23:45:41.053682195Z" level=info msg="StartContainer for \"026e25abb85025e9de1af2cc8d5f3388979d82aea21dd518eb7b966384e26058\"" Sep 4 23:45:41.067552 containerd[1951]: time="2025-09-04T23:45:41.067356465Z" level=info msg="CreateContainer within sandbox \"e69946da84e7a1c1bfcac7ea4334110b63ec0309426a83a9ab38863c4d413774\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"7a45cfb4597eb87776c470977e7951f34b5c69aec7475f29828670d877450d60\"" Sep 4 23:45:41.068901 containerd[1951]: time="2025-09-04T23:45:41.068794844Z" level=info msg="StartContainer for \"7a45cfb4597eb87776c470977e7951f34b5c69aec7475f29828670d877450d60\"" Sep 4 23:45:41.072204 containerd[1951]: time="2025-09-04T23:45:41.072149464Z" level=info msg="CreateContainer within sandbox \"83184b4a2cee77edc4d2cdae475d910243201e6590d659d0268dae7e678ff4e1\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"eac19e0f8b9a6eaffa6aad7eb15337fca7926133cc6f9a2199e9c78f308dd806\"" Sep 4 23:45:41.073312 containerd[1951]: time="2025-09-04T23:45:41.073266442Z" level=info msg="StartContainer for \"eac19e0f8b9a6eaffa6aad7eb15337fca7926133cc6f9a2199e9c78f308dd806\"" Sep 4 23:45:41.095483 kubelet[2835]: I0904 23:45:41.095047 2835 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-17-142" Sep 4 23:45:41.096338 kubelet[2835]: E0904 23:45:41.096247 2835 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.17.142:6443/api/v1/nodes\": dial tcp 172.31.17.142:6443: connect: connection refused" node="ip-172-31-17-142" Sep 4 23:45:41.132083 systemd[1]: Started cri-containerd-026e25abb85025e9de1af2cc8d5f3388979d82aea21dd518eb7b966384e26058.scope - 
libcontainer container 026e25abb85025e9de1af2cc8d5f3388979d82aea21dd518eb7b966384e26058. Sep 4 23:45:41.158876 systemd[1]: Started cri-containerd-7a45cfb4597eb87776c470977e7951f34b5c69aec7475f29828670d877450d60.scope - libcontainer container 7a45cfb4597eb87776c470977e7951f34b5c69aec7475f29828670d877450d60. Sep 4 23:45:41.173369 systemd[1]: Started cri-containerd-eac19e0f8b9a6eaffa6aad7eb15337fca7926133cc6f9a2199e9c78f308dd806.scope - libcontainer container eac19e0f8b9a6eaffa6aad7eb15337fca7926133cc6f9a2199e9c78f308dd806. Sep 4 23:45:41.278222 containerd[1951]: time="2025-09-04T23:45:41.278102767Z" level=info msg="StartContainer for \"026e25abb85025e9de1af2cc8d5f3388979d82aea21dd518eb7b966384e26058\" returns successfully" Sep 4 23:45:41.294451 containerd[1951]: time="2025-09-04T23:45:41.294293230Z" level=info msg="StartContainer for \"7a45cfb4597eb87776c470977e7951f34b5c69aec7475f29828670d877450d60\" returns successfully" Sep 4 23:45:41.316810 containerd[1951]: time="2025-09-04T23:45:41.316199318Z" level=info msg="StartContainer for \"eac19e0f8b9a6eaffa6aad7eb15337fca7926133cc6f9a2199e9c78f308dd806\" returns successfully" Sep 4 23:45:41.554675 kubelet[2835]: E0904 23:45:41.554618 2835 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-17-142\" not found" node="ip-172-31-17-142" Sep 4 23:45:41.566951 kubelet[2835]: E0904 23:45:41.566832 2835 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-17-142\" not found" node="ip-172-31-17-142" Sep 4 23:45:41.569809 kubelet[2835]: E0904 23:45:41.569517 2835 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-17-142\" not found" node="ip-172-31-17-142" Sep 4 23:45:42.572678 kubelet[2835]: E0904 23:45:42.571705 2835 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node 
\"ip-172-31-17-142\" not found" node="ip-172-31-17-142" Sep 4 23:45:42.572678 kubelet[2835]: E0904 23:45:42.572358 2835 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-17-142\" not found" node="ip-172-31-17-142" Sep 4 23:45:42.698997 kubelet[2835]: I0904 23:45:42.698961 2835 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-17-142" Sep 4 23:45:43.573995 kubelet[2835]: E0904 23:45:43.573728 2835 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-17-142\" not found" node="ip-172-31-17-142" Sep 4 23:45:43.576240 kubelet[2835]: E0904 23:45:43.575805 2835 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-17-142\" not found" node="ip-172-31-17-142" Sep 4 23:45:44.441728 kubelet[2835]: I0904 23:45:44.441397 2835 apiserver.go:52] "Watching apiserver" Sep 4 23:45:44.569559 kubelet[2835]: I0904 23:45:44.569524 2835 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 4 23:45:44.739979 kubelet[2835]: E0904 23:45:44.739653 2835 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-17-142\" not found" node="ip-172-31-17-142" Sep 4 23:45:44.849369 kubelet[2835]: I0904 23:45:44.848957 2835 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-17-142" Sep 4 23:45:44.849369 kubelet[2835]: E0904 23:45:44.849023 2835 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ip-172-31-17-142\": node \"ip-172-31-17-142\" not found" Sep 4 23:45:44.873635 kubelet[2835]: I0904 23:45:44.873592 2835 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-17-142" Sep 4 23:45:44.969007 kubelet[2835]: E0904 23:45:44.968939 2835 kubelet.go:3196] "Failed creating a mirror 
pod" err="pods \"kube-apiserver-ip-172-31-17-142\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ip-172-31-17-142" Sep 4 23:45:44.969007 kubelet[2835]: I0904 23:45:44.968993 2835 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-17-142" Sep 4 23:45:44.979853 kubelet[2835]: E0904 23:45:44.979510 2835 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ip-172-31-17-142\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ip-172-31-17-142" Sep 4 23:45:44.979853 kubelet[2835]: I0904 23:45:44.979558 2835 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-17-142" Sep 4 23:45:44.987089 kubelet[2835]: E0904 23:45:44.986788 2835 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-17-142\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ip-172-31-17-142" Sep 4 23:45:46.661756 systemd[1]: Reload requested from client PID 3115 ('systemctl') (unit session-7.scope)... Sep 4 23:45:46.661781 systemd[1]: Reloading... Sep 4 23:45:46.877130 zram_generator::config[3166]: No configuration found. Sep 4 23:45:47.096283 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 4 23:45:47.354184 systemd[1]: Reloading finished in 691 ms. Sep 4 23:45:47.411105 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 23:45:47.425170 systemd[1]: kubelet.service: Deactivated successfully. Sep 4 23:45:47.425678 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
Sep 4 23:45:47.425773 systemd[1]: kubelet.service: Consumed 2.002s CPU time, 132.5M memory peak. Sep 4 23:45:47.433276 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 23:45:47.774000 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 23:45:47.792965 (kubelet)[3221]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 4 23:45:47.910597 kubelet[3221]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 4 23:45:47.913277 kubelet[3221]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 4 23:45:47.913277 kubelet[3221]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Sep 4 23:45:47.913869 kubelet[3221]: I0904 23:45:47.913438 3221 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 4 23:45:47.923328 sudo[3232]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Sep 4 23:45:47.923990 sudo[3232]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Sep 4 23:45:47.933867 kubelet[3221]: I0904 23:45:47.933501 3221 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Sep 4 23:45:47.933867 kubelet[3221]: I0904 23:45:47.933562 3221 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 4 23:45:47.936430 kubelet[3221]: I0904 23:45:47.936371 3221 server.go:954] "Client rotation is on, will bootstrap in background" Sep 4 23:45:47.939977 kubelet[3221]: I0904 23:45:47.939906 3221 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Sep 4 23:45:47.952530 kubelet[3221]: I0904 23:45:47.952195 3221 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 4 23:45:47.964273 kubelet[3221]: E0904 23:45:47.964210 3221 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 4 23:45:47.964273 kubelet[3221]: I0904 23:45:47.964270 3221 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Sep 4 23:45:47.974109 kubelet[3221]: I0904 23:45:47.972219 3221 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 4 23:45:47.974109 kubelet[3221]: I0904 23:45:47.972661 3221 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 4 23:45:47.974109 kubelet[3221]: I0904 23:45:47.972718 3221 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-17-142","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 4 23:45:47.974109 kubelet[3221]: I0904 23:45:47.973220 3221 topology_manager.go:138] "Creating topology manager with none 
policy" Sep 4 23:45:47.974488 kubelet[3221]: I0904 23:45:47.973242 3221 container_manager_linux.go:304] "Creating device plugin manager" Sep 4 23:45:47.974488 kubelet[3221]: I0904 23:45:47.973342 3221 state_mem.go:36] "Initialized new in-memory state store" Sep 4 23:45:47.974488 kubelet[3221]: I0904 23:45:47.973584 3221 kubelet.go:446] "Attempting to sync node with API server" Sep 4 23:45:47.974488 kubelet[3221]: I0904 23:45:47.973607 3221 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 4 23:45:47.974488 kubelet[3221]: I0904 23:45:47.973639 3221 kubelet.go:352] "Adding apiserver pod source" Sep 4 23:45:47.974488 kubelet[3221]: I0904 23:45:47.973667 3221 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 4 23:45:47.982376 kubelet[3221]: I0904 23:45:47.982334 3221 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Sep 4 23:45:47.983322 kubelet[3221]: I0904 23:45:47.983294 3221 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 4 23:45:47.984756 kubelet[3221]: I0904 23:45:47.984724 3221 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 4 23:45:47.984927 kubelet[3221]: I0904 23:45:47.984908 3221 server.go:1287] "Started kubelet" Sep 4 23:45:47.998092 kubelet[3221]: I0904 23:45:47.995554 3221 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 4 23:45:48.013882 kubelet[3221]: I0904 23:45:48.013030 3221 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 4 23:45:48.014311 kubelet[3221]: E0904 23:45:48.014274 3221 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-17-142\" not found" Sep 4 23:45:48.017093 kubelet[3221]: I0904 23:45:48.015602 3221 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 4 23:45:48.021414 kubelet[3221]: I0904 23:45:48.021368 3221 reconciler.go:26] "Reconciler: start 
to sync state" Sep 4 23:45:48.026414 kubelet[3221]: I0904 23:45:48.026250 3221 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Sep 4 23:45:48.036019 kubelet[3221]: I0904 23:45:48.035919 3221 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 4 23:45:48.036871 kubelet[3221]: I0904 23:45:48.036807 3221 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 4 23:45:48.048946 kubelet[3221]: I0904 23:45:48.037712 3221 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 4 23:45:48.052281 kubelet[3221]: I0904 23:45:48.052024 3221 server.go:479] "Adding debug handlers to kubelet server" Sep 4 23:45:48.112272 kubelet[3221]: I0904 23:45:48.111770 3221 factory.go:221] Registration of the containerd container factory successfully Sep 4 23:45:48.112272 kubelet[3221]: I0904 23:45:48.111963 3221 factory.go:221] Registration of the systemd container factory successfully Sep 4 23:45:48.113501 kubelet[3221]: E0904 23:45:48.113166 3221 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 4 23:45:48.119479 kubelet[3221]: I0904 23:45:48.119368 3221 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 4 23:45:48.135465 kubelet[3221]: E0904 23:45:48.115377 3221 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-17-142\" not found" Sep 4 23:45:48.148251 kubelet[3221]: I0904 23:45:48.148037 3221 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv4" Sep 4 23:45:48.152668 kubelet[3221]: I0904 23:45:48.152576 3221 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Sep 4 23:45:48.152668 kubelet[3221]: I0904 23:45:48.152629 3221 status_manager.go:227] "Starting to sync pod status with apiserver" Sep 4 23:45:48.152668 kubelet[3221]: I0904 23:45:48.152662 3221 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Sep 4 23:45:48.152668 kubelet[3221]: I0904 23:45:48.152677 3221 kubelet.go:2382] "Starting kubelet main sync loop" Sep 4 23:45:48.153171 kubelet[3221]: E0904 23:45:48.152755 3221 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 4 23:45:48.257120 kubelet[3221]: E0904 23:45:48.255670 3221 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 4 23:45:48.331601 kubelet[3221]: I0904 23:45:48.331481 3221 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 4 23:45:48.331979 kubelet[3221]: I0904 23:45:48.331949 3221 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 4 23:45:48.332161 kubelet[3221]: I0904 23:45:48.332142 3221 state_mem.go:36] "Initialized new in-memory state store" Sep 4 23:45:48.332540 kubelet[3221]: I0904 23:45:48.332512 3221 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 4 23:45:48.332712 kubelet[3221]: I0904 23:45:48.332671 3221 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 4 23:45:48.332806 kubelet[3221]: I0904 23:45:48.332789 3221 policy_none.go:49] "None policy: Start" Sep 4 23:45:48.332911 kubelet[3221]: I0904 23:45:48.332892 3221 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 4 23:45:48.333011 kubelet[3221]: I0904 23:45:48.332993 3221 state_mem.go:35] "Initializing new in-memory state store" Sep 4 23:45:48.334010 kubelet[3221]: I0904 
23:45:48.333978 3221 state_mem.go:75] "Updated machine memory state"
Sep 4 23:45:48.343881 kubelet[3221]: I0904 23:45:48.343845 3221 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Sep 4 23:45:48.344379 kubelet[3221]: I0904 23:45:48.344345 3221 eviction_manager.go:189] "Eviction manager: starting control loop"
Sep 4 23:45:48.345666 kubelet[3221]: I0904 23:45:48.345416 3221 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Sep 4 23:45:48.348340 kubelet[3221]: I0904 23:45:48.347415 3221 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Sep 4 23:45:48.351226 kubelet[3221]: E0904 23:45:48.349676 3221 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Sep 4 23:45:48.458046 kubelet[3221]: I0904 23:45:48.457546 3221 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-17-142"
Sep 4 23:45:48.459884 kubelet[3221]: I0904 23:45:48.458812 3221 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-17-142"
Sep 4 23:45:48.462056 kubelet[3221]: I0904 23:45:48.459690 3221 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-17-142"
Sep 4 23:45:48.485418 kubelet[3221]: I0904 23:45:48.485366 3221 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-17-142"
Sep 4 23:45:48.505903 kubelet[3221]: I0904 23:45:48.504263 3221 kubelet_node_status.go:124] "Node was previously registered" node="ip-172-31-17-142"
Sep 4 23:45:48.505903 kubelet[3221]: I0904 23:45:48.504385 3221 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-17-142"
Sep 4 23:45:48.537479 kubelet[3221]: I0904 23:45:48.536770 3221 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/499cb7640afccba841314472827a81c8-ca-certs\") pod \"kube-apiserver-ip-172-31-17-142\" (UID: \"499cb7640afccba841314472827a81c8\") " pod="kube-system/kube-apiserver-ip-172-31-17-142"
Sep 4 23:45:48.537479 kubelet[3221]: I0904 23:45:48.536844 3221 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/cd0fa01963e616a300a91a327ead4960-kubeconfig\") pod \"kube-controller-manager-ip-172-31-17-142\" (UID: \"cd0fa01963e616a300a91a327ead4960\") " pod="kube-system/kube-controller-manager-ip-172-31-17-142"
Sep 4 23:45:48.537479 kubelet[3221]: I0904 23:45:48.536886 3221 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/cd0fa01963e616a300a91a327ead4960-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-17-142\" (UID: \"cd0fa01963e616a300a91a327ead4960\") " pod="kube-system/kube-controller-manager-ip-172-31-17-142"
Sep 4 23:45:48.537479 kubelet[3221]: I0904 23:45:48.536932 3221 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8d0ebe4629871cbe86226a7b05e98693-kubeconfig\") pod \"kube-scheduler-ip-172-31-17-142\" (UID: \"8d0ebe4629871cbe86226a7b05e98693\") " pod="kube-system/kube-scheduler-ip-172-31-17-142"
Sep 4 23:45:48.537479 kubelet[3221]: I0904 23:45:48.536969 3221 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/499cb7640afccba841314472827a81c8-k8s-certs\") pod \"kube-apiserver-ip-172-31-17-142\" (UID: \"499cb7640afccba841314472827a81c8\") " pod="kube-system/kube-apiserver-ip-172-31-17-142"
Sep 4 23:45:48.537822 kubelet[3221]: I0904 23:45:48.537010 3221 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/499cb7640afccba841314472827a81c8-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-17-142\" (UID: \"499cb7640afccba841314472827a81c8\") " pod="kube-system/kube-apiserver-ip-172-31-17-142"
Sep 4 23:45:48.539162 kubelet[3221]: I0904 23:45:48.539122 3221 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/cd0fa01963e616a300a91a327ead4960-ca-certs\") pod \"kube-controller-manager-ip-172-31-17-142\" (UID: \"cd0fa01963e616a300a91a327ead4960\") " pod="kube-system/kube-controller-manager-ip-172-31-17-142"
Sep 4 23:45:48.539686 kubelet[3221]: I0904 23:45:48.539332 3221 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/cd0fa01963e616a300a91a327ead4960-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-17-142\" (UID: \"cd0fa01963e616a300a91a327ead4960\") " pod="kube-system/kube-controller-manager-ip-172-31-17-142"
Sep 4 23:45:48.539686 kubelet[3221]: I0904 23:45:48.539393 3221 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/cd0fa01963e616a300a91a327ead4960-k8s-certs\") pod \"kube-controller-manager-ip-172-31-17-142\" (UID: \"cd0fa01963e616a300a91a327ead4960\") " pod="kube-system/kube-controller-manager-ip-172-31-17-142"
Sep 4 23:45:48.879284 sudo[3232]: pam_unix(sudo:session): session closed for user root
Sep 4 23:45:48.980513 kubelet[3221]: I0904 23:45:48.980135 3221 apiserver.go:52] "Watching apiserver"
Sep 4 23:45:49.022254 kubelet[3221]: I0904 23:45:49.022172 3221 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Sep 4 23:45:49.233038 kubelet[3221]: I0904 23:45:49.232957 3221 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-17-142" podStartSLOduration=1.23293341 podStartE2EDuration="1.23293341s" podCreationTimestamp="2025-09-04 23:45:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-04 23:45:49.217890726 +0000 UTC m=+1.411906412" watchObservedRunningTime="2025-09-04 23:45:49.23293341 +0000 UTC m=+1.426949072"
Sep 4 23:45:49.251880 kubelet[3221]: I0904 23:45:49.251579 3221 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-17-142" podStartSLOduration=1.251556786 podStartE2EDuration="1.251556786s" podCreationTimestamp="2025-09-04 23:45:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-04 23:45:49.234939534 +0000 UTC m=+1.428955220" watchObservedRunningTime="2025-09-04 23:45:49.251556786 +0000 UTC m=+1.445572460"
Sep 4 23:45:49.266252 kubelet[3221]: I0904 23:45:49.265536 3221 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-17-142"
Sep 4 23:45:49.267347 kubelet[3221]: I0904 23:45:49.267283 3221 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-17-142"
Sep 4 23:45:49.277085 kubelet[3221]: E0904 23:45:49.276989 3221 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-17-142\" already exists" pod="kube-system/kube-scheduler-ip-172-31-17-142"
Sep 4 23:45:49.280765 kubelet[3221]: E0904 23:45:49.280421 3221 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-17-142\" already exists" pod="kube-system/kube-apiserver-ip-172-31-17-142"
Sep 4 23:45:49.298447 kubelet[3221]: I0904 23:45:49.297781 3221 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-17-142" podStartSLOduration=1.297758298 podStartE2EDuration="1.297758298s" podCreationTimestamp="2025-09-04 23:45:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-04 23:45:49.253206906 +0000 UTC m=+1.447222592" watchObservedRunningTime="2025-09-04 23:45:49.297758298 +0000 UTC m=+1.491773972"
Sep 4 23:45:51.902605 sudo[2278]: pam_unix(sudo:session): session closed for user root
Sep 4 23:45:51.925996 sshd[2277]: Connection closed by 139.178.89.65 port 39106
Sep 4 23:45:51.926900 sshd-session[2275]: pam_unix(sshd:session): session closed for user core
Sep 4 23:45:51.932740 systemd-logind[1928]: Session 7 logged out. Waiting for processes to exit.
Sep 4 23:45:51.933745 systemd[1]: sshd@6-172.31.17.142:22-139.178.89.65:39106.service: Deactivated successfully.
Sep 4 23:45:51.939925 systemd[1]: session-7.scope: Deactivated successfully.
Sep 4 23:45:51.941479 systemd[1]: session-7.scope: Consumed 10.695s CPU time, 265.5M memory peak.
Sep 4 23:45:51.946289 systemd-logind[1928]: Removed session 7.
Sep 4 23:45:53.648999 update_engine[1929]: I20250904 23:45:53.648880 1929 update_attempter.cc:509] Updating boot flags...
Sep 4 23:45:53.750387 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 36 scanned by (udev-worker) (3308)
Sep 4 23:45:54.086142 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 36 scanned by (udev-worker) (3307)
Sep 4 23:45:54.701854 kubelet[3221]: I0904 23:45:54.701797 3221 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Sep 4 23:45:54.705140 containerd[1951]: time="2025-09-04T23:45:54.705052345Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Sep 4 23:45:54.706287 kubelet[3221]: I0904 23:45:54.706242 3221 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Sep 4 23:45:55.482200 systemd[1]: Created slice kubepods-besteffort-pod11c8e7d9_d6f5_44b7_b931_53ca3cec82e1.slice - libcontainer container kubepods-besteffort-pod11c8e7d9_d6f5_44b7_b931_53ca3cec82e1.slice.
Sep 4 23:45:55.490018 kubelet[3221]: I0904 23:45:55.489647 3221 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/11c8e7d9-d6f5-44b7-b931-53ca3cec82e1-kube-proxy\") pod \"kube-proxy-gmxxt\" (UID: \"11c8e7d9-d6f5-44b7-b931-53ca3cec82e1\") " pod="kube-system/kube-proxy-gmxxt"
Sep 4 23:45:55.493347 kubelet[3221]: I0904 23:45:55.493303 3221 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/11c8e7d9-d6f5-44b7-b931-53ca3cec82e1-xtables-lock\") pod \"kube-proxy-gmxxt\" (UID: \"11c8e7d9-d6f5-44b7-b931-53ca3cec82e1\") " pod="kube-system/kube-proxy-gmxxt"
Sep 4 23:45:55.494601 kubelet[3221]: I0904 23:45:55.494556 3221 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/11c8e7d9-d6f5-44b7-b931-53ca3cec82e1-lib-modules\") pod \"kube-proxy-gmxxt\" (UID: \"11c8e7d9-d6f5-44b7-b931-53ca3cec82e1\") " pod="kube-system/kube-proxy-gmxxt"
Sep 4 23:45:55.494861 kubelet[3221]: I0904 23:45:55.494811 3221 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d6trx\" (UniqueName: \"kubernetes.io/projected/11c8e7d9-d6f5-44b7-b931-53ca3cec82e1-kube-api-access-d6trx\") pod \"kube-proxy-gmxxt\" (UID: \"11c8e7d9-d6f5-44b7-b931-53ca3cec82e1\") " pod="kube-system/kube-proxy-gmxxt"
Sep 4 23:45:55.524436 systemd[1]: Created slice kubepods-burstable-pod715e28ad_7110_413f_a3ae_80efb70c2168.slice - libcontainer container kubepods-burstable-pod715e28ad_7110_413f_a3ae_80efb70c2168.slice.
Sep 4 23:45:55.590433 systemd[1]: Created slice kubepods-besteffort-pod8805b2d1_bdf7_45a8_a336_297fc8e02399.slice - libcontainer container kubepods-besteffort-pod8805b2d1_bdf7_45a8_a336_297fc8e02399.slice.
Sep 4 23:45:55.596740 kubelet[3221]: I0904 23:45:55.595234 3221 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/715e28ad-7110-413f-a3ae-80efb70c2168-xtables-lock\") pod \"cilium-559f9\" (UID: \"715e28ad-7110-413f-a3ae-80efb70c2168\") " pod="kube-system/cilium-559f9"
Sep 4 23:45:55.596740 kubelet[3221]: I0904 23:45:55.595297 3221 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/715e28ad-7110-413f-a3ae-80efb70c2168-host-proc-sys-kernel\") pod \"cilium-559f9\" (UID: \"715e28ad-7110-413f-a3ae-80efb70c2168\") " pod="kube-system/cilium-559f9"
Sep 4 23:45:55.596740 kubelet[3221]: I0904 23:45:55.595338 3221 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/715e28ad-7110-413f-a3ae-80efb70c2168-cilium-run\") pod \"cilium-559f9\" (UID: \"715e28ad-7110-413f-a3ae-80efb70c2168\") " pod="kube-system/cilium-559f9"
Sep 4 23:45:55.596740 kubelet[3221]: I0904 23:45:55.595376 3221 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/715e28ad-7110-413f-a3ae-80efb70c2168-host-proc-sys-net\") pod \"cilium-559f9\" (UID: \"715e28ad-7110-413f-a3ae-80efb70c2168\") " pod="kube-system/cilium-559f9"
Sep 4 23:45:55.596740 kubelet[3221]: I0904 23:45:55.595421 3221 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/715e28ad-7110-413f-a3ae-80efb70c2168-etc-cni-netd\") pod \"cilium-559f9\" (UID: \"715e28ad-7110-413f-a3ae-80efb70c2168\") " pod="kube-system/cilium-559f9"
Sep 4 23:45:55.596740 kubelet[3221]: I0904 23:45:55.595460 3221 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/715e28ad-7110-413f-a3ae-80efb70c2168-hubble-tls\") pod \"cilium-559f9\" (UID: \"715e28ad-7110-413f-a3ae-80efb70c2168\") " pod="kube-system/cilium-559f9"
Sep 4 23:45:55.597235 kubelet[3221]: I0904 23:45:55.595497 3221 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/715e28ad-7110-413f-a3ae-80efb70c2168-cilium-cgroup\") pod \"cilium-559f9\" (UID: \"715e28ad-7110-413f-a3ae-80efb70c2168\") " pod="kube-system/cilium-559f9"
Sep 4 23:45:55.597235 kubelet[3221]: I0904 23:45:55.595550 3221 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/715e28ad-7110-413f-a3ae-80efb70c2168-cni-path\") pod \"cilium-559f9\" (UID: \"715e28ad-7110-413f-a3ae-80efb70c2168\") " pod="kube-system/cilium-559f9"
Sep 4 23:45:55.597235 kubelet[3221]: I0904 23:45:55.595585 3221 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/715e28ad-7110-413f-a3ae-80efb70c2168-cilium-config-path\") pod \"cilium-559f9\" (UID: \"715e28ad-7110-413f-a3ae-80efb70c2168\") " pod="kube-system/cilium-559f9"
Sep 4 23:45:55.597235 kubelet[3221]: I0904 23:45:55.595622 3221 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ftnmx\" (UniqueName: \"kubernetes.io/projected/715e28ad-7110-413f-a3ae-80efb70c2168-kube-api-access-ftnmx\") pod \"cilium-559f9\" (UID: \"715e28ad-7110-413f-a3ae-80efb70c2168\") " pod="kube-system/cilium-559f9"
Sep 4 23:45:55.597235 kubelet[3221]: I0904 23:45:55.595659 3221 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/715e28ad-7110-413f-a3ae-80efb70c2168-bpf-maps\") pod \"cilium-559f9\" (UID: \"715e28ad-7110-413f-a3ae-80efb70c2168\") " pod="kube-system/cilium-559f9"
Sep 4 23:45:55.597235 kubelet[3221]: I0904 23:45:55.595695 3221 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/715e28ad-7110-413f-a3ae-80efb70c2168-hostproc\") pod \"cilium-559f9\" (UID: \"715e28ad-7110-413f-a3ae-80efb70c2168\") " pod="kube-system/cilium-559f9"
Sep 4 23:45:55.597548 kubelet[3221]: I0904 23:45:55.595728 3221 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/715e28ad-7110-413f-a3ae-80efb70c2168-lib-modules\") pod \"cilium-559f9\" (UID: \"715e28ad-7110-413f-a3ae-80efb70c2168\") " pod="kube-system/cilium-559f9"
Sep 4 23:45:55.597548 kubelet[3221]: I0904 23:45:55.595806 3221 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/715e28ad-7110-413f-a3ae-80efb70c2168-clustermesh-secrets\") pod \"cilium-559f9\" (UID: \"715e28ad-7110-413f-a3ae-80efb70c2168\") " pod="kube-system/cilium-559f9"
Sep 4 23:45:55.698104 kubelet[3221]: I0904 23:45:55.696136 3221 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c2g9m\" (UniqueName: \"kubernetes.io/projected/8805b2d1-bdf7-45a8-a336-297fc8e02399-kube-api-access-c2g9m\") pod \"cilium-operator-6c4d7847fc-s8c28\" (UID: \"8805b2d1-bdf7-45a8-a336-297fc8e02399\") " pod="kube-system/cilium-operator-6c4d7847fc-s8c28"
Sep 4 23:45:55.698104 kubelet[3221]: I0904 23:45:55.696391 3221 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8805b2d1-bdf7-45a8-a336-297fc8e02399-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-s8c28\" (UID: \"8805b2d1-bdf7-45a8-a336-297fc8e02399\") " pod="kube-system/cilium-operator-6c4d7847fc-s8c28"
Sep 4 23:45:55.811515 containerd[1951]: time="2025-09-04T23:45:55.810447122Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-gmxxt,Uid:11c8e7d9-d6f5-44b7-b931-53ca3cec82e1,Namespace:kube-system,Attempt:0,}"
Sep 4 23:45:55.833110 containerd[1951]: time="2025-09-04T23:45:55.832449014Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-559f9,Uid:715e28ad-7110-413f-a3ae-80efb70c2168,Namespace:kube-system,Attempt:0,}"
Sep 4 23:45:55.856846 containerd[1951]: time="2025-09-04T23:45:55.856719411Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 4 23:45:55.857186 containerd[1951]: time="2025-09-04T23:45:55.857118951Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 4 23:45:55.857562 containerd[1951]: time="2025-09-04T23:45:55.857507559Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 4 23:45:55.858908 containerd[1951]: time="2025-09-04T23:45:55.858735039Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 4 23:45:55.885397 containerd[1951]: time="2025-09-04T23:45:55.884052939Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 4 23:45:55.885397 containerd[1951]: time="2025-09-04T23:45:55.885138375Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 4 23:45:55.885397 containerd[1951]: time="2025-09-04T23:45:55.885166083Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 4 23:45:55.885397 containerd[1951]: time="2025-09-04T23:45:55.885322599Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 4 23:45:55.907741 containerd[1951]: time="2025-09-04T23:45:55.907268367Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-s8c28,Uid:8805b2d1-bdf7-45a8-a336-297fc8e02399,Namespace:kube-system,Attempt:0,}"
Sep 4 23:45:55.911424 systemd[1]: Started cri-containerd-1b1d051d8d2c4a8a116e04687574f2c89fd688c4b1177cf2171d80f302bbff20.scope - libcontainer container 1b1d051d8d2c4a8a116e04687574f2c89fd688c4b1177cf2171d80f302bbff20.
Sep 4 23:45:55.935395 systemd[1]: Started cri-containerd-7a6c8b68822318f6da77d5c57ce799f131fa8060c43e81c889821f4182cf692f.scope - libcontainer container 7a6c8b68822318f6da77d5c57ce799f131fa8060c43e81c889821f4182cf692f.
Sep 4 23:45:55.979746 containerd[1951]: time="2025-09-04T23:45:55.975623187Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 4 23:45:55.979746 containerd[1951]: time="2025-09-04T23:45:55.975800727Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 4 23:45:55.979746 containerd[1951]: time="2025-09-04T23:45:55.975841299Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 4 23:45:55.979746 containerd[1951]: time="2025-09-04T23:45:55.977158419Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 4 23:45:56.020767 containerd[1951]: time="2025-09-04T23:45:56.020400551Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-gmxxt,Uid:11c8e7d9-d6f5-44b7-b931-53ca3cec82e1,Namespace:kube-system,Attempt:0,} returns sandbox id \"1b1d051d8d2c4a8a116e04687574f2c89fd688c4b1177cf2171d80f302bbff20\""
Sep 4 23:45:56.029451 containerd[1951]: time="2025-09-04T23:45:56.029391191Z" level=info msg="CreateContainer within sandbox \"1b1d051d8d2c4a8a116e04687574f2c89fd688c4b1177cf2171d80f302bbff20\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Sep 4 23:45:56.046486 systemd[1]: Started cri-containerd-6b4cff6236572f1c4d382473d6eb8fc890aa0a5407d197eb8846b416d0379391.scope - libcontainer container 6b4cff6236572f1c4d382473d6eb8fc890aa0a5407d197eb8846b416d0379391.
Sep 4 23:45:56.053622 containerd[1951]: time="2025-09-04T23:45:56.052485131Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-559f9,Uid:715e28ad-7110-413f-a3ae-80efb70c2168,Namespace:kube-system,Attempt:0,} returns sandbox id \"7a6c8b68822318f6da77d5c57ce799f131fa8060c43e81c889821f4182cf692f\""
Sep 4 23:45:56.065136 containerd[1951]: time="2025-09-04T23:45:56.061560492Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Sep 4 23:45:56.082669 containerd[1951]: time="2025-09-04T23:45:56.082590012Z" level=info msg="CreateContainer within sandbox \"1b1d051d8d2c4a8a116e04687574f2c89fd688c4b1177cf2171d80f302bbff20\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"46670cb0a2cfcc27eb772987debe1295295b43f63686660193d85cd349051903\""
Sep 4 23:45:56.086362 containerd[1951]: time="2025-09-04T23:45:56.085661196Z" level=info msg="StartContainer for \"46670cb0a2cfcc27eb772987debe1295295b43f63686660193d85cd349051903\""
Sep 4 23:45:56.141904 containerd[1951]: time="2025-09-04T23:45:56.141838188Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-s8c28,Uid:8805b2d1-bdf7-45a8-a336-297fc8e02399,Namespace:kube-system,Attempt:0,} returns sandbox id \"6b4cff6236572f1c4d382473d6eb8fc890aa0a5407d197eb8846b416d0379391\""
Sep 4 23:45:56.171364 systemd[1]: Started cri-containerd-46670cb0a2cfcc27eb772987debe1295295b43f63686660193d85cd349051903.scope - libcontainer container 46670cb0a2cfcc27eb772987debe1295295b43f63686660193d85cd349051903.
Sep 4 23:45:56.239098 containerd[1951]: time="2025-09-04T23:45:56.237104268Z" level=info msg="StartContainer for \"46670cb0a2cfcc27eb772987debe1295295b43f63686660193d85cd349051903\" returns successfully"
Sep 4 23:45:59.213478 kubelet[3221]: I0904 23:45:59.213324 3221 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-gmxxt" podStartSLOduration=4.213299391 podStartE2EDuration="4.213299391s" podCreationTimestamp="2025-09-04 23:45:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-04 23:45:56.316681489 +0000 UTC m=+8.510697175" watchObservedRunningTime="2025-09-04 23:45:59.213299391 +0000 UTC m=+11.407315077"
Sep 4 23:46:04.828196 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount725603406.mount: Deactivated successfully.
Sep 4 23:46:07.599642 containerd[1951]: time="2025-09-04T23:46:07.599557573Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 23:46:07.601872 containerd[1951]: time="2025-09-04T23:46:07.601462465Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710"
Sep 4 23:46:07.606097 containerd[1951]: time="2025-09-04T23:46:07.604096693Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 23:46:07.607605 containerd[1951]: time="2025-09-04T23:46:07.607541497Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 11.545913817s"
Sep 4 23:46:07.607722 containerd[1951]: time="2025-09-04T23:46:07.607604257Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\""
Sep 4 23:46:07.611502 containerd[1951]: time="2025-09-04T23:46:07.611438269Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Sep 4 23:46:07.613573 containerd[1951]: time="2025-09-04T23:46:07.613522297Z" level=info msg="CreateContainer within sandbox \"7a6c8b68822318f6da77d5c57ce799f131fa8060c43e81c889821f4182cf692f\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Sep 4 23:46:07.643174 containerd[1951]: time="2025-09-04T23:46:07.643118617Z" level=info msg="CreateContainer within sandbox \"7a6c8b68822318f6da77d5c57ce799f131fa8060c43e81c889821f4182cf692f\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"dff86cad6e9d3bdc8452cab1cfe6938a5c188ed777d45c4439786fbaea19faf0\""
Sep 4 23:46:07.644594 containerd[1951]: time="2025-09-04T23:46:07.644518921Z" level=info msg="StartContainer for \"dff86cad6e9d3bdc8452cab1cfe6938a5c188ed777d45c4439786fbaea19faf0\""
Sep 4 23:46:07.705387 systemd[1]: Started cri-containerd-dff86cad6e9d3bdc8452cab1cfe6938a5c188ed777d45c4439786fbaea19faf0.scope - libcontainer container dff86cad6e9d3bdc8452cab1cfe6938a5c188ed777d45c4439786fbaea19faf0.
Sep 4 23:46:07.755382 containerd[1951]: time="2025-09-04T23:46:07.755304266Z" level=info msg="StartContainer for \"dff86cad6e9d3bdc8452cab1cfe6938a5c188ed777d45c4439786fbaea19faf0\" returns successfully"
Sep 4 23:46:07.784703 systemd[1]: cri-containerd-dff86cad6e9d3bdc8452cab1cfe6938a5c188ed777d45c4439786fbaea19faf0.scope: Deactivated successfully.
Sep 4 23:46:08.504127 containerd[1951]: time="2025-09-04T23:46:08.504004897Z" level=info msg="shim disconnected" id=dff86cad6e9d3bdc8452cab1cfe6938a5c188ed777d45c4439786fbaea19faf0 namespace=k8s.io
Sep 4 23:46:08.504127 containerd[1951]: time="2025-09-04T23:46:08.504094069Z" level=warning msg="cleaning up after shim disconnected" id=dff86cad6e9d3bdc8452cab1cfe6938a5c188ed777d45c4439786fbaea19faf0 namespace=k8s.io
Sep 4 23:46:08.504435 containerd[1951]: time="2025-09-04T23:46:08.504149233Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 4 23:46:08.633351 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-dff86cad6e9d3bdc8452cab1cfe6938a5c188ed777d45c4439786fbaea19faf0-rootfs.mount: Deactivated successfully.
Sep 4 23:46:09.309033 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3845196531.mount: Deactivated successfully.
Sep 4 23:46:09.353295 containerd[1951]: time="2025-09-04T23:46:09.353209658Z" level=info msg="CreateContainer within sandbox \"7a6c8b68822318f6da77d5c57ce799f131fa8060c43e81c889821f4182cf692f\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Sep 4 23:46:09.441107 containerd[1951]: time="2025-09-04T23:46:09.437389922Z" level=info msg="CreateContainer within sandbox \"7a6c8b68822318f6da77d5c57ce799f131fa8060c43e81c889821f4182cf692f\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"2e5b2eb63318dd520df518dd1d77e5508dcb37302f7cce2e4cf02611711d83eb\""
Sep 4 23:46:09.446114 containerd[1951]: time="2025-09-04T23:46:09.444776258Z" level=info msg="StartContainer for \"2e5b2eb63318dd520df518dd1d77e5508dcb37302f7cce2e4cf02611711d83eb\""
Sep 4 23:46:09.544393 systemd[1]: Started cri-containerd-2e5b2eb63318dd520df518dd1d77e5508dcb37302f7cce2e4cf02611711d83eb.scope - libcontainer container 2e5b2eb63318dd520df518dd1d77e5508dcb37302f7cce2e4cf02611711d83eb.
Sep 4 23:46:09.627342 containerd[1951]: time="2025-09-04T23:46:09.626604699Z" level=info msg="StartContainer for \"2e5b2eb63318dd520df518dd1d77e5508dcb37302f7cce2e4cf02611711d83eb\" returns successfully"
Sep 4 23:46:09.674406 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Sep 4 23:46:09.675099 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Sep 4 23:46:09.675476 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables...
Sep 4 23:46:09.685804 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 4 23:46:09.693303 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Sep 4 23:46:09.695377 systemd[1]: cri-containerd-2e5b2eb63318dd520df518dd1d77e5508dcb37302f7cce2e4cf02611711d83eb.scope: Deactivated successfully.
Sep 4 23:46:09.748959 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 4 23:46:09.777924 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2e5b2eb63318dd520df518dd1d77e5508dcb37302f7cce2e4cf02611711d83eb-rootfs.mount: Deactivated successfully.
Sep 4 23:46:09.809491 containerd[1951]: time="2025-09-04T23:46:09.809302948Z" level=info msg="shim disconnected" id=2e5b2eb63318dd520df518dd1d77e5508dcb37302f7cce2e4cf02611711d83eb namespace=k8s.io
Sep 4 23:46:09.809491 containerd[1951]: time="2025-09-04T23:46:09.809380528Z" level=warning msg="cleaning up after shim disconnected" id=2e5b2eb63318dd520df518dd1d77e5508dcb37302f7cce2e4cf02611711d83eb namespace=k8s.io
Sep 4 23:46:09.809491 containerd[1951]: time="2025-09-04T23:46:09.809404552Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 4 23:46:09.842129 containerd[1951]: time="2025-09-04T23:46:09.841792696Z" level=warning msg="cleanup warnings time=\"2025-09-04T23:46:09Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Sep 4 23:46:10.266809 containerd[1951]: time="2025-09-04T23:46:10.266726462Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 23:46:10.269584 containerd[1951]: time="2025-09-04T23:46:10.269500874Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306"
Sep 4 23:46:10.273018 containerd[1951]: time="2025-09-04T23:46:10.272945882Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 23:46:10.278191 containerd[1951]: time="2025-09-04T23:46:10.277980506Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 2.666478265s"
Sep 4 23:46:10.278191 containerd[1951]: time="2025-09-04T23:46:10.278038142Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\""
Sep 4 23:46:10.282406 containerd[1951]: time="2025-09-04T23:46:10.281621006Z" level=info msg="CreateContainer within sandbox \"6b4cff6236572f1c4d382473d6eb8fc890aa0a5407d197eb8846b416d0379391\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Sep 4 23:46:10.318573 containerd[1951]: time="2025-09-04T23:46:10.318506714Z" level=info msg="CreateContainer within sandbox \"6b4cff6236572f1c4d382473d6eb8fc890aa0a5407d197eb8846b416d0379391\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"8473e2fca71eb93120d9a4b65c4c34d14964526d9ba822a915bf2de32251491d\""
Sep 4 23:46:10.320112 containerd[1951]: time="2025-09-04T23:46:10.319173962Z" level=info msg="StartContainer for \"8473e2fca71eb93120d9a4b65c4c34d14964526d9ba822a915bf2de32251491d\""
Sep 4 23:46:10.375757 containerd[1951]: time="2025-09-04T23:46:10.375140175Z" level=info msg="CreateContainer within sandbox \"7a6c8b68822318f6da77d5c57ce799f131fa8060c43e81c889821f4182cf692f\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Sep 4 23:46:10.379236 systemd[1]: Started cri-containerd-8473e2fca71eb93120d9a4b65c4c34d14964526d9ba822a915bf2de32251491d.scope - libcontainer container 8473e2fca71eb93120d9a4b65c4c34d14964526d9ba822a915bf2de32251491d.
Sep 4 23:46:10.421951 containerd[1951]: time="2025-09-04T23:46:10.421871091Z" level=info msg="CreateContainer within sandbox \"7a6c8b68822318f6da77d5c57ce799f131fa8060c43e81c889821f4182cf692f\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"a6d092037ec2b87145a178d933d493471962d587431979a97954140621b54395\""
Sep 4 23:46:10.423099 containerd[1951]: time="2025-09-04T23:46:10.422987451Z" level=info msg="StartContainer for \"a6d092037ec2b87145a178d933d493471962d587431979a97954140621b54395\""
Sep 4 23:46:10.487157 containerd[1951]: time="2025-09-04T23:46:10.486405147Z" level=info msg="StartContainer for \"8473e2fca71eb93120d9a4b65c4c34d14964526d9ba822a915bf2de32251491d\" returns successfully"
Sep 4 23:46:10.493057 systemd[1]: Started cri-containerd-a6d092037ec2b87145a178d933d493471962d587431979a97954140621b54395.scope - libcontainer container a6d092037ec2b87145a178d933d493471962d587431979a97954140621b54395.
Sep 4 23:46:10.570465 containerd[1951]: time="2025-09-04T23:46:10.570323752Z" level=info msg="StartContainer for \"a6d092037ec2b87145a178d933d493471962d587431979a97954140621b54395\" returns successfully"
Sep 4 23:46:10.582014 systemd[1]: cri-containerd-a6d092037ec2b87145a178d933d493471962d587431979a97954140621b54395.scope: Deactivated successfully.
Sep 4 23:46:10.731725 containerd[1951]: time="2025-09-04T23:46:10.731589736Z" level=info msg="shim disconnected" id=a6d092037ec2b87145a178d933d493471962d587431979a97954140621b54395 namespace=k8s.io
Sep 4 23:46:10.731725 containerd[1951]: time="2025-09-04T23:46:10.731666536Z" level=warning msg="cleaning up after shim disconnected" id=a6d092037ec2b87145a178d933d493471962d587431979a97954140621b54395 namespace=k8s.io
Sep 4 23:46:10.731725 containerd[1951]: time="2025-09-04T23:46:10.731691112Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 4 23:46:11.383105 containerd[1951]: time="2025-09-04T23:46:11.381962236Z" level=info msg="CreateContainer within sandbox \"7a6c8b68822318f6da77d5c57ce799f131fa8060c43e81c889821f4182cf692f\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Sep 4 23:46:11.411991 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4168878726.mount: Deactivated successfully.
Sep 4 23:46:11.416863 containerd[1951]: time="2025-09-04T23:46:11.412536868Z" level=info msg="CreateContainer within sandbox \"7a6c8b68822318f6da77d5c57ce799f131fa8060c43e81c889821f4182cf692f\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"13e5399f06d97aa002dc6ff138ec0d465b9b8f741c391f9de9ba6e1fca340d4e\""
Sep 4 23:46:11.416863 containerd[1951]: time="2025-09-04T23:46:11.415100344Z" level=info msg="StartContainer for \"13e5399f06d97aa002dc6ff138ec0d465b9b8f741c391f9de9ba6e1fca340d4e\""
Sep 4 23:46:11.515409 systemd[1]: Started cri-containerd-13e5399f06d97aa002dc6ff138ec0d465b9b8f741c391f9de9ba6e1fca340d4e.scope - libcontainer container 13e5399f06d97aa002dc6ff138ec0d465b9b8f741c391f9de9ba6e1fca340d4e.
Sep 4 23:46:11.631853 containerd[1951]: time="2025-09-04T23:46:11.631783325Z" level=info msg="StartContainer for \"13e5399f06d97aa002dc6ff138ec0d465b9b8f741c391f9de9ba6e1fca340d4e\" returns successfully"
Sep 4 23:46:11.637261 systemd[1]: cri-containerd-13e5399f06d97aa002dc6ff138ec0d465b9b8f741c391f9de9ba6e1fca340d4e.scope: Deactivated successfully.
Sep 4 23:46:11.707323 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-13e5399f06d97aa002dc6ff138ec0d465b9b8f741c391f9de9ba6e1fca340d4e-rootfs.mount: Deactivated successfully.
Sep 4 23:46:11.716445 containerd[1951]: time="2025-09-04T23:46:11.716357273Z" level=info msg="shim disconnected" id=13e5399f06d97aa002dc6ff138ec0d465b9b8f741c391f9de9ba6e1fca340d4e namespace=k8s.io
Sep 4 23:46:11.717042 containerd[1951]: time="2025-09-04T23:46:11.716766089Z" level=warning msg="cleaning up after shim disconnected" id=13e5399f06d97aa002dc6ff138ec0d465b9b8f741c391f9de9ba6e1fca340d4e namespace=k8s.io
Sep 4 23:46:11.717042 containerd[1951]: time="2025-09-04T23:46:11.716798333Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 4 23:46:11.764755 kubelet[3221]: I0904 23:46:11.764296 3221 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-s8c28" podStartSLOduration=2.631944532 podStartE2EDuration="16.764272974s" podCreationTimestamp="2025-09-04 23:45:55 +0000 UTC" firstStartedPulling="2025-09-04 23:45:56.146611248 +0000 UTC m=+8.340626898" lastFinishedPulling="2025-09-04 23:46:10.27893969 +0000 UTC m=+22.472955340" observedRunningTime="2025-09-04 23:46:11.582090233 +0000 UTC m=+23.776105907" watchObservedRunningTime="2025-09-04 23:46:11.764272974 +0000 UTC m=+23.958288648"
Sep 4 23:46:12.413577 containerd[1951]: time="2025-09-04T23:46:12.411361049Z" level=info msg="CreateContainer within sandbox \"7a6c8b68822318f6da77d5c57ce799f131fa8060c43e81c889821f4182cf692f\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Sep 4 23:46:12.460485
systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount343678562.mount: Deactivated successfully. Sep 4 23:46:12.462620 containerd[1951]: time="2025-09-04T23:46:12.462575777Z" level=info msg="CreateContainer within sandbox \"7a6c8b68822318f6da77d5c57ce799f131fa8060c43e81c889821f4182cf692f\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"857e498b0084830a81e6362b3dc9af0603682f1095d8adb53a4c9da681605c3f\"" Sep 4 23:46:12.464111 containerd[1951]: time="2025-09-04T23:46:12.463942313Z" level=info msg="StartContainer for \"857e498b0084830a81e6362b3dc9af0603682f1095d8adb53a4c9da681605c3f\"" Sep 4 23:46:12.557040 systemd[1]: Started cri-containerd-857e498b0084830a81e6362b3dc9af0603682f1095d8adb53a4c9da681605c3f.scope - libcontainer container 857e498b0084830a81e6362b3dc9af0603682f1095d8adb53a4c9da681605c3f. Sep 4 23:46:12.617254 containerd[1951]: time="2025-09-04T23:46:12.616532454Z" level=info msg="StartContainer for \"857e498b0084830a81e6362b3dc9af0603682f1095d8adb53a4c9da681605c3f\" returns successfully" Sep 4 23:46:12.774392 kubelet[3221]: I0904 23:46:12.773980 3221 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Sep 4 23:46:12.862464 systemd[1]: Created slice kubepods-burstable-podc1541a0a_d772_448b_b3e1_3bee7042bfa8.slice - libcontainer container kubepods-burstable-podc1541a0a_d772_448b_b3e1_3bee7042bfa8.slice. Sep 4 23:46:12.878828 systemd[1]: Created slice kubepods-burstable-poda54789a2_b0bc_4139_acee_7ca583559e46.slice - libcontainer container kubepods-burstable-poda54789a2_b0bc_4139_acee_7ca583559e46.slice. 
Sep 4 23:46:12.933357 kubelet[3221]: I0904 23:46:12.933227 3221 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sfl2f\" (UniqueName: \"kubernetes.io/projected/a54789a2-b0bc-4139-acee-7ca583559e46-kube-api-access-sfl2f\") pod \"coredns-668d6bf9bc-86rml\" (UID: \"a54789a2-b0bc-4139-acee-7ca583559e46\") " pod="kube-system/coredns-668d6bf9bc-86rml" Sep 4 23:46:12.933603 kubelet[3221]: I0904 23:46:12.933313 3221 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c1541a0a-d772-448b-b3e1-3bee7042bfa8-config-volume\") pod \"coredns-668d6bf9bc-zlbnm\" (UID: \"c1541a0a-d772-448b-b3e1-3bee7042bfa8\") " pod="kube-system/coredns-668d6bf9bc-zlbnm" Sep 4 23:46:12.933685 kubelet[3221]: I0904 23:46:12.933631 3221 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a54789a2-b0bc-4139-acee-7ca583559e46-config-volume\") pod \"coredns-668d6bf9bc-86rml\" (UID: \"a54789a2-b0bc-4139-acee-7ca583559e46\") " pod="kube-system/coredns-668d6bf9bc-86rml" Sep 4 23:46:12.934108 kubelet[3221]: I0904 23:46:12.933770 3221 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-26tbf\" (UniqueName: \"kubernetes.io/projected/c1541a0a-d772-448b-b3e1-3bee7042bfa8-kube-api-access-26tbf\") pod \"coredns-668d6bf9bc-zlbnm\" (UID: \"c1541a0a-d772-448b-b3e1-3bee7042bfa8\") " pod="kube-system/coredns-668d6bf9bc-zlbnm" Sep 4 23:46:13.172915 containerd[1951]: time="2025-09-04T23:46:13.172683509Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-zlbnm,Uid:c1541a0a-d772-448b-b3e1-3bee7042bfa8,Namespace:kube-system,Attempt:0,}" Sep 4 23:46:13.186413 containerd[1951]: time="2025-09-04T23:46:13.186112289Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-668d6bf9bc-86rml,Uid:a54789a2-b0bc-4139-acee-7ca583559e46,Namespace:kube-system,Attempt:0,}" Sep 4 23:46:15.724321 (udev-worker)[4197]: Network interface NamePolicy= disabled on kernel command line. Sep 4 23:46:15.725261 (udev-worker)[4231]: Network interface NamePolicy= disabled on kernel command line. Sep 4 23:46:15.730056 systemd-networkd[1864]: cilium_host: Link UP Sep 4 23:46:15.730869 systemd-networkd[1864]: cilium_net: Link UP Sep 4 23:46:15.731708 systemd-networkd[1864]: cilium_net: Gained carrier Sep 4 23:46:15.732038 systemd-networkd[1864]: cilium_host: Gained carrier Sep 4 23:46:15.914743 systemd-networkd[1864]: cilium_vxlan: Link UP Sep 4 23:46:15.914766 systemd-networkd[1864]: cilium_vxlan: Gained carrier Sep 4 23:46:16.287357 systemd-networkd[1864]: cilium_host: Gained IPv6LL Sep 4 23:46:16.474196 kernel: NET: Registered PF_ALG protocol family Sep 4 23:46:16.480384 systemd-networkd[1864]: cilium_net: Gained IPv6LL Sep 4 23:46:17.568362 systemd-networkd[1864]: cilium_vxlan: Gained IPv6LL Sep 4 23:46:17.823946 (udev-worker)[4237]: Network interface NamePolicy= disabled on kernel command line. 
Sep 4 23:46:17.830155 systemd-networkd[1864]: lxc_health: Link UP Sep 4 23:46:17.856535 systemd-networkd[1864]: lxc_health: Gained carrier Sep 4 23:46:17.911242 kubelet[3221]: I0904 23:46:17.911124 3221 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-559f9" podStartSLOduration=11.360160103 podStartE2EDuration="22.911098524s" podCreationTimestamp="2025-09-04 23:45:55 +0000 UTC" firstStartedPulling="2025-09-04 23:45:56.058381008 +0000 UTC m=+8.252396670" lastFinishedPulling="2025-09-04 23:46:07.609319429 +0000 UTC m=+19.803335091" observedRunningTime="2025-09-04 23:46:13.445868478 +0000 UTC m=+25.639884176" watchObservedRunningTime="2025-09-04 23:46:17.911098524 +0000 UTC m=+30.105114210" Sep 4 23:46:18.299682 systemd-networkd[1864]: lxc36634db2cee8: Link UP Sep 4 23:46:18.306705 kernel: eth0: renamed from tmpf45a9 Sep 4 23:46:18.317046 systemd-networkd[1864]: lxc36634db2cee8: Gained carrier Sep 4 23:46:18.322089 systemd-networkd[1864]: lxca843171d207c: Link UP Sep 4 23:46:18.334131 (udev-worker)[4236]: Network interface NamePolicy= disabled on kernel command line. 
Sep 4 23:46:18.340978 kernel: eth0: renamed from tmp46ce1 Sep 4 23:46:18.347503 systemd-networkd[1864]: lxca843171d207c: Gained carrier Sep 4 23:46:19.807319 systemd-networkd[1864]: lxc36634db2cee8: Gained IPv6LL Sep 4 23:46:19.935420 systemd-networkd[1864]: lxc_health: Gained IPv6LL Sep 4 23:46:19.936499 systemd-networkd[1864]: lxca843171d207c: Gained IPv6LL Sep 4 23:46:22.517881 ntpd[1923]: Listen normally on 7 cilium_host 192.168.0.216:123 Sep 4 23:46:22.518021 ntpd[1923]: Listen normally on 8 cilium_net [fe80::24b1:75ff:fe4f:451d%4]:123 Sep 4 23:46:22.518510 ntpd[1923]: 4 Sep 23:46:22 ntpd[1923]: Listen normally on 7 cilium_host 192.168.0.216:123 Sep 4 23:46:22.518510 ntpd[1923]: 4 Sep 23:46:22 ntpd[1923]: Listen normally on 8 cilium_net [fe80::24b1:75ff:fe4f:451d%4]:123 Sep 4 23:46:22.518510 ntpd[1923]: 4 Sep 23:46:22 ntpd[1923]: Listen normally on 9 cilium_host [fe80::88dc:8dff:fed5:5768%5]:123 Sep 4 23:46:22.518510 ntpd[1923]: 4 Sep 23:46:22 ntpd[1923]: Listen normally on 10 cilium_vxlan [fe80::78db:87ff:fe6d:b029%6]:123 Sep 4 23:46:22.518510 ntpd[1923]: 4 Sep 23:46:22 ntpd[1923]: Listen normally on 11 lxc_health [fe80::48ef:c9ff:fea3:92df%8]:123 Sep 4 23:46:22.518180 ntpd[1923]: Listen normally on 9 cilium_host [fe80::88dc:8dff:fed5:5768%5]:123 Sep 4 23:46:22.518810 ntpd[1923]: 4 Sep 23:46:22 ntpd[1923]: Listen normally on 12 lxc36634db2cee8 [fe80::4005:c4ff:fe25:af41%10]:123 Sep 4 23:46:22.518810 ntpd[1923]: 4 Sep 23:46:22 ntpd[1923]: Listen normally on 13 lxca843171d207c [fe80::a860:6fff:fee3:c94f%12]:123 Sep 4 23:46:22.518303 ntpd[1923]: Listen normally on 10 cilium_vxlan [fe80::78db:87ff:fe6d:b029%6]:123 Sep 4 23:46:22.518375 ntpd[1923]: Listen normally on 11 lxc_health [fe80::48ef:c9ff:fea3:92df%8]:123 Sep 4 23:46:22.518536 ntpd[1923]: Listen normally on 12 lxc36634db2cee8 [fe80::4005:c4ff:fe25:af41%10]:123 Sep 4 23:46:22.518612 ntpd[1923]: Listen normally on 13 lxca843171d207c [fe80::a860:6fff:fee3:c94f%12]:123 Sep 4 23:46:25.381702 systemd[1]: 
Started sshd@7-172.31.17.142:22-139.178.89.65:59056.service - OpenSSH per-connection server daemon (139.178.89.65:59056). Sep 4 23:46:25.580498 sshd[4604]: Accepted publickey for core from 139.178.89.65 port 59056 ssh2: RSA SHA256:wIWgASbTOEOY+RJ7C8r7IgT0+0t1r6NIK7DD/aRqsNo Sep 4 23:46:25.583334 sshd-session[4604]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 23:46:25.592740 systemd-logind[1928]: New session 8 of user core. Sep 4 23:46:25.603201 systemd[1]: Started session-8.scope - Session 8 of User core. Sep 4 23:46:25.957445 sshd[4606]: Connection closed by 139.178.89.65 port 59056 Sep 4 23:46:25.958793 sshd-session[4604]: pam_unix(sshd:session): session closed for user core Sep 4 23:46:25.967825 systemd[1]: sshd@7-172.31.17.142:22-139.178.89.65:59056.service: Deactivated successfully. Sep 4 23:46:25.968421 systemd-logind[1928]: Session 8 logged out. Waiting for processes to exit. Sep 4 23:46:25.974891 systemd[1]: session-8.scope: Deactivated successfully. Sep 4 23:46:25.985122 systemd-logind[1928]: Removed session 8. Sep 4 23:46:26.721191 containerd[1951]: time="2025-09-04T23:46:26.720452636Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 23:46:26.721191 containerd[1951]: time="2025-09-04T23:46:26.720550100Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 23:46:26.721191 containerd[1951]: time="2025-09-04T23:46:26.720577952Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 23:46:26.722444 containerd[1951]: time="2025-09-04T23:46:26.722006420Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 23:46:26.780913 systemd[1]: Started cri-containerd-f45a9abf05c121d967c6ced216553c07500874d4c8e1efd667a9ca836256ff92.scope - libcontainer container f45a9abf05c121d967c6ced216553c07500874d4c8e1efd667a9ca836256ff92. Sep 4 23:46:26.878934 containerd[1951]: time="2025-09-04T23:46:26.878709093Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 23:46:26.881086 containerd[1951]: time="2025-09-04T23:46:26.879034341Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 23:46:26.881086 containerd[1951]: time="2025-09-04T23:46:26.879115449Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 23:46:26.881086 containerd[1951]: time="2025-09-04T23:46:26.879270897Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 23:46:26.956376 systemd[1]: run-containerd-runc-k8s.io-46ce106ef1447f0c1edb50c45fb761592128964331a31ee3352efc3a787376c3-runc.n34lGc.mount: Deactivated successfully. Sep 4 23:46:26.967444 containerd[1951]: time="2025-09-04T23:46:26.967360641Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-zlbnm,Uid:c1541a0a-d772-448b-b3e1-3bee7042bfa8,Namespace:kube-system,Attempt:0,} returns sandbox id \"f45a9abf05c121d967c6ced216553c07500874d4c8e1efd667a9ca836256ff92\"" Sep 4 23:46:26.977415 systemd[1]: Started cri-containerd-46ce106ef1447f0c1edb50c45fb761592128964331a31ee3352efc3a787376c3.scope - libcontainer container 46ce106ef1447f0c1edb50c45fb761592128964331a31ee3352efc3a787376c3. 
Sep 4 23:46:26.988414 containerd[1951]: time="2025-09-04T23:46:26.988316721Z" level=info msg="CreateContainer within sandbox \"f45a9abf05c121d967c6ced216553c07500874d4c8e1efd667a9ca836256ff92\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 4 23:46:27.016357 containerd[1951]: time="2025-09-04T23:46:27.016262609Z" level=info msg="CreateContainer within sandbox \"f45a9abf05c121d967c6ced216553c07500874d4c8e1efd667a9ca836256ff92\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"8b26da2626979ffc8e078c8ec100a4e0c15361cce873dd10b39c26faf775f267\"" Sep 4 23:46:27.018449 containerd[1951]: time="2025-09-04T23:46:27.018270233Z" level=info msg="StartContainer for \"8b26da2626979ffc8e078c8ec100a4e0c15361cce873dd10b39c26faf775f267\"" Sep 4 23:46:27.100399 systemd[1]: Started cri-containerd-8b26da2626979ffc8e078c8ec100a4e0c15361cce873dd10b39c26faf775f267.scope - libcontainer container 8b26da2626979ffc8e078c8ec100a4e0c15361cce873dd10b39c26faf775f267. Sep 4 23:46:27.110812 containerd[1951]: time="2025-09-04T23:46:27.110528262Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-86rml,Uid:a54789a2-b0bc-4139-acee-7ca583559e46,Namespace:kube-system,Attempt:0,} returns sandbox id \"46ce106ef1447f0c1edb50c45fb761592128964331a31ee3352efc3a787376c3\"" Sep 4 23:46:27.120563 containerd[1951]: time="2025-09-04T23:46:27.120493950Z" level=info msg="CreateContainer within sandbox \"46ce106ef1447f0c1edb50c45fb761592128964331a31ee3352efc3a787376c3\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 4 23:46:27.148929 containerd[1951]: time="2025-09-04T23:46:27.148741578Z" level=info msg="CreateContainer within sandbox \"46ce106ef1447f0c1edb50c45fb761592128964331a31ee3352efc3a787376c3\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"f194507ea4f5caf4d6a8cc724088f2d70db49b0cc68f8afd88a4845fc1a4556f\"" Sep 4 23:46:27.151307 containerd[1951]: time="2025-09-04T23:46:27.150797718Z" level=info 
msg="StartContainer for \"f194507ea4f5caf4d6a8cc724088f2d70db49b0cc68f8afd88a4845fc1a4556f\"" Sep 4 23:46:27.222141 containerd[1951]: time="2025-09-04T23:46:27.221628954Z" level=info msg="StartContainer for \"8b26da2626979ffc8e078c8ec100a4e0c15361cce873dd10b39c26faf775f267\" returns successfully" Sep 4 23:46:27.244458 systemd[1]: Started cri-containerd-f194507ea4f5caf4d6a8cc724088f2d70db49b0cc68f8afd88a4845fc1a4556f.scope - libcontainer container f194507ea4f5caf4d6a8cc724088f2d70db49b0cc68f8afd88a4845fc1a4556f. Sep 4 23:46:27.345139 containerd[1951]: time="2025-09-04T23:46:27.345084607Z" level=info msg="StartContainer for \"f194507ea4f5caf4d6a8cc724088f2d70db49b0cc68f8afd88a4845fc1a4556f\" returns successfully" Sep 4 23:46:27.500567 kubelet[3221]: I0904 23:46:27.500216 3221 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-86rml" podStartSLOduration=32.500159792 podStartE2EDuration="32.500159792s" podCreationTimestamp="2025-09-04 23:45:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-04 23:46:27.498802724 +0000 UTC m=+39.692818386" watchObservedRunningTime="2025-09-04 23:46:27.500159792 +0000 UTC m=+39.694175514" Sep 4 23:46:27.526671 kubelet[3221]: I0904 23:46:27.525784 3221 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-zlbnm" podStartSLOduration=32.525759608 podStartE2EDuration="32.525759608s" podCreationTimestamp="2025-09-04 23:45:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-04 23:46:27.523283456 +0000 UTC m=+39.717299130" watchObservedRunningTime="2025-09-04 23:46:27.525759608 +0000 UTC m=+39.719775270" Sep 4 23:46:31.000640 systemd[1]: Started sshd@8-172.31.17.142:22-139.178.89.65:44746.service - OpenSSH per-connection server daemon 
(139.178.89.65:44746). Sep 4 23:46:31.192918 sshd[4800]: Accepted publickey for core from 139.178.89.65 port 44746 ssh2: RSA SHA256:wIWgASbTOEOY+RJ7C8r7IgT0+0t1r6NIK7DD/aRqsNo Sep 4 23:46:31.195593 sshd-session[4800]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 23:46:31.204913 systemd-logind[1928]: New session 9 of user core. Sep 4 23:46:31.216366 systemd[1]: Started session-9.scope - Session 9 of User core. Sep 4 23:46:31.460834 sshd[4802]: Connection closed by 139.178.89.65 port 44746 Sep 4 23:46:31.461714 sshd-session[4800]: pam_unix(sshd:session): session closed for user core Sep 4 23:46:31.469588 systemd[1]: sshd@8-172.31.17.142:22-139.178.89.65:44746.service: Deactivated successfully. Sep 4 23:46:31.474821 systemd[1]: session-9.scope: Deactivated successfully. Sep 4 23:46:31.477946 systemd-logind[1928]: Session 9 logged out. Waiting for processes to exit. Sep 4 23:46:31.479922 systemd-logind[1928]: Removed session 9. Sep 4 23:46:36.512588 systemd[1]: Started sshd@9-172.31.17.142:22-139.178.89.65:44762.service - OpenSSH per-connection server daemon (139.178.89.65:44762). Sep 4 23:46:36.692676 sshd[4815]: Accepted publickey for core from 139.178.89.65 port 44762 ssh2: RSA SHA256:wIWgASbTOEOY+RJ7C8r7IgT0+0t1r6NIK7DD/aRqsNo Sep 4 23:46:36.695319 sshd-session[4815]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 23:46:36.703247 systemd-logind[1928]: New session 10 of user core. Sep 4 23:46:36.712340 systemd[1]: Started session-10.scope - Session 10 of User core. Sep 4 23:46:36.961329 sshd[4817]: Connection closed by 139.178.89.65 port 44762 Sep 4 23:46:36.962384 sshd-session[4815]: pam_unix(sshd:session): session closed for user core Sep 4 23:46:36.970251 systemd[1]: sshd@9-172.31.17.142:22-139.178.89.65:44762.service: Deactivated successfully. Sep 4 23:46:36.973764 systemd[1]: session-10.scope: Deactivated successfully. Sep 4 23:46:36.976399 systemd-logind[1928]: Session 10 logged out. 
Waiting for processes to exit. Sep 4 23:46:36.978255 systemd-logind[1928]: Removed session 10. Sep 4 23:46:42.006834 systemd[1]: Started sshd@10-172.31.17.142:22-139.178.89.65:34412.service - OpenSSH per-connection server daemon (139.178.89.65:34412). Sep 4 23:46:42.201024 sshd[4830]: Accepted publickey for core from 139.178.89.65 port 34412 ssh2: RSA SHA256:wIWgASbTOEOY+RJ7C8r7IgT0+0t1r6NIK7DD/aRqsNo Sep 4 23:46:42.203492 sshd-session[4830]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 23:46:42.212738 systemd-logind[1928]: New session 11 of user core. Sep 4 23:46:42.220380 systemd[1]: Started session-11.scope - Session 11 of User core. Sep 4 23:46:42.460644 sshd[4832]: Connection closed by 139.178.89.65 port 34412 Sep 4 23:46:42.461514 sshd-session[4830]: pam_unix(sshd:session): session closed for user core Sep 4 23:46:42.468240 systemd-logind[1928]: Session 11 logged out. Waiting for processes to exit. Sep 4 23:46:42.469585 systemd[1]: sshd@10-172.31.17.142:22-139.178.89.65:34412.service: Deactivated successfully. Sep 4 23:46:42.472969 systemd[1]: session-11.scope: Deactivated successfully. Sep 4 23:46:42.476271 systemd-logind[1928]: Removed session 11. Sep 4 23:46:47.506604 systemd[1]: Started sshd@11-172.31.17.142:22-139.178.89.65:34414.service - OpenSSH per-connection server daemon (139.178.89.65:34414). Sep 4 23:46:47.696129 sshd[4844]: Accepted publickey for core from 139.178.89.65 port 34414 ssh2: RSA SHA256:wIWgASbTOEOY+RJ7C8r7IgT0+0t1r6NIK7DD/aRqsNo Sep 4 23:46:47.699130 sshd-session[4844]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 23:46:47.708053 systemd-logind[1928]: New session 12 of user core. Sep 4 23:46:47.713342 systemd[1]: Started session-12.scope - Session 12 of User core. 
Sep 4 23:46:47.959134 sshd[4846]: Connection closed by 139.178.89.65 port 34414 Sep 4 23:46:47.958104 sshd-session[4844]: pam_unix(sshd:session): session closed for user core Sep 4 23:46:47.965014 systemd[1]: sshd@11-172.31.17.142:22-139.178.89.65:34414.service: Deactivated successfully. Sep 4 23:46:47.968506 systemd[1]: session-12.scope: Deactivated successfully. Sep 4 23:46:47.970452 systemd-logind[1928]: Session 12 logged out. Waiting for processes to exit. Sep 4 23:46:47.973702 systemd-logind[1928]: Removed session 12. Sep 4 23:46:48.000619 systemd[1]: Started sshd@12-172.31.17.142:22-139.178.89.65:34418.service - OpenSSH per-connection server daemon (139.178.89.65:34418). Sep 4 23:46:48.182005 sshd[4858]: Accepted publickey for core from 139.178.89.65 port 34418 ssh2: RSA SHA256:wIWgASbTOEOY+RJ7C8r7IgT0+0t1r6NIK7DD/aRqsNo Sep 4 23:46:48.185228 sshd-session[4858]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 23:46:48.194349 systemd-logind[1928]: New session 13 of user core. Sep 4 23:46:48.202334 systemd[1]: Started session-13.scope - Session 13 of User core. Sep 4 23:46:48.515202 sshd[4861]: Connection closed by 139.178.89.65 port 34418 Sep 4 23:46:48.519268 sshd-session[4858]: pam_unix(sshd:session): session closed for user core Sep 4 23:46:48.527935 systemd[1]: sshd@12-172.31.17.142:22-139.178.89.65:34418.service: Deactivated successfully. Sep 4 23:46:48.534849 systemd[1]: session-13.scope: Deactivated successfully. Sep 4 23:46:48.540950 systemd-logind[1928]: Session 13 logged out. Waiting for processes to exit. Sep 4 23:46:48.562788 systemd[1]: Started sshd@13-172.31.17.142:22-139.178.89.65:34422.service - OpenSSH per-connection server daemon (139.178.89.65:34422). Sep 4 23:46:48.567783 systemd-logind[1928]: Removed session 13. 
Sep 4 23:46:48.787621 sshd[4871]: Accepted publickey for core from 139.178.89.65 port 34422 ssh2: RSA SHA256:wIWgASbTOEOY+RJ7C8r7IgT0+0t1r6NIK7DD/aRqsNo Sep 4 23:46:48.790217 sshd-session[4871]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 23:46:48.799404 systemd-logind[1928]: New session 14 of user core. Sep 4 23:46:48.811329 systemd[1]: Started session-14.scope - Session 14 of User core. Sep 4 23:46:49.070199 sshd[4874]: Connection closed by 139.178.89.65 port 34422 Sep 4 23:46:49.071040 sshd-session[4871]: pam_unix(sshd:session): session closed for user core Sep 4 23:46:49.079691 systemd[1]: sshd@13-172.31.17.142:22-139.178.89.65:34422.service: Deactivated successfully. Sep 4 23:46:49.086025 systemd[1]: session-14.scope: Deactivated successfully. Sep 4 23:46:49.089647 systemd-logind[1928]: Session 14 logged out. Waiting for processes to exit. Sep 4 23:46:49.094947 systemd-logind[1928]: Removed session 14. Sep 4 23:46:54.115576 systemd[1]: Started sshd@14-172.31.17.142:22-139.178.89.65:48128.service - OpenSSH per-connection server daemon (139.178.89.65:48128). Sep 4 23:46:54.299487 sshd[4885]: Accepted publickey for core from 139.178.89.65 port 48128 ssh2: RSA SHA256:wIWgASbTOEOY+RJ7C8r7IgT0+0t1r6NIK7DD/aRqsNo Sep 4 23:46:54.301955 sshd-session[4885]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 23:46:54.310795 systemd-logind[1928]: New session 15 of user core. Sep 4 23:46:54.318359 systemd[1]: Started session-15.scope - Session 15 of User core. Sep 4 23:46:54.571290 sshd[4887]: Connection closed by 139.178.89.65 port 48128 Sep 4 23:46:54.571168 sshd-session[4885]: pam_unix(sshd:session): session closed for user core Sep 4 23:46:54.577766 systemd[1]: sshd@14-172.31.17.142:22-139.178.89.65:48128.service: Deactivated successfully. Sep 4 23:46:54.582373 systemd[1]: session-15.scope: Deactivated successfully. Sep 4 23:46:54.584902 systemd-logind[1928]: Session 15 logged out. 
Waiting for processes to exit. Sep 4 23:46:54.587130 systemd-logind[1928]: Removed session 15. Sep 4 23:46:59.614608 systemd[1]: Started sshd@15-172.31.17.142:22-139.178.89.65:48142.service - OpenSSH per-connection server daemon (139.178.89.65:48142). Sep 4 23:46:59.796956 sshd[4902]: Accepted publickey for core from 139.178.89.65 port 48142 ssh2: RSA SHA256:wIWgASbTOEOY+RJ7C8r7IgT0+0t1r6NIK7DD/aRqsNo Sep 4 23:46:59.799466 sshd-session[4902]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 23:46:59.807381 systemd-logind[1928]: New session 16 of user core. Sep 4 23:46:59.819327 systemd[1]: Started session-16.scope - Session 16 of User core. Sep 4 23:47:00.073599 sshd[4904]: Connection closed by 139.178.89.65 port 48142 Sep 4 23:47:00.074470 sshd-session[4902]: pam_unix(sshd:session): session closed for user core Sep 4 23:47:00.081420 systemd[1]: sshd@15-172.31.17.142:22-139.178.89.65:48142.service: Deactivated successfully. Sep 4 23:47:00.085936 systemd[1]: session-16.scope: Deactivated successfully. Sep 4 23:47:00.088191 systemd-logind[1928]: Session 16 logged out. Waiting for processes to exit. Sep 4 23:47:00.089912 systemd-logind[1928]: Removed session 16. Sep 4 23:47:05.115587 systemd[1]: Started sshd@16-172.31.17.142:22-139.178.89.65:56108.service - OpenSSH per-connection server daemon (139.178.89.65:56108). Sep 4 23:47:05.299528 sshd[4916]: Accepted publickey for core from 139.178.89.65 port 56108 ssh2: RSA SHA256:wIWgASbTOEOY+RJ7C8r7IgT0+0t1r6NIK7DD/aRqsNo Sep 4 23:47:05.302141 sshd-session[4916]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 23:47:05.311096 systemd-logind[1928]: New session 17 of user core. Sep 4 23:47:05.324393 systemd[1]: Started session-17.scope - Session 17 of User core. 
Sep 4 23:47:05.573781 sshd[4918]: Connection closed by 139.178.89.65 port 56108 Sep 4 23:47:05.572222 sshd-session[4916]: pam_unix(sshd:session): session closed for user core Sep 4 23:47:05.580371 systemd[1]: sshd@16-172.31.17.142:22-139.178.89.65:56108.service: Deactivated successfully. Sep 4 23:47:05.584771 systemd[1]: session-17.scope: Deactivated successfully. Sep 4 23:47:05.587221 systemd-logind[1928]: Session 17 logged out. Waiting for processes to exit. Sep 4 23:47:05.589876 systemd-logind[1928]: Removed session 17. Sep 4 23:47:05.616657 systemd[1]: Started sshd@17-172.31.17.142:22-139.178.89.65:56124.service - OpenSSH per-connection server daemon (139.178.89.65:56124). Sep 4 23:47:05.795844 sshd[4930]: Accepted publickey for core from 139.178.89.65 port 56124 ssh2: RSA SHA256:wIWgASbTOEOY+RJ7C8r7IgT0+0t1r6NIK7DD/aRqsNo Sep 4 23:47:05.798401 sshd-session[4930]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 23:47:05.806375 systemd-logind[1928]: New session 18 of user core. Sep 4 23:47:05.827633 systemd[1]: Started session-18.scope - Session 18 of User core. Sep 4 23:47:06.155304 sshd[4932]: Connection closed by 139.178.89.65 port 56124 Sep 4 23:47:06.156216 sshd-session[4930]: pam_unix(sshd:session): session closed for user core Sep 4 23:47:06.166167 systemd[1]: sshd@17-172.31.17.142:22-139.178.89.65:56124.service: Deactivated successfully. Sep 4 23:47:06.166747 systemd-logind[1928]: Session 18 logged out. Waiting for processes to exit. Sep 4 23:47:06.172278 systemd[1]: session-18.scope: Deactivated successfully. Sep 4 23:47:06.174725 systemd-logind[1928]: Removed session 18. Sep 4 23:47:06.199951 systemd[1]: Started sshd@18-172.31.17.142:22-139.178.89.65:56132.service - OpenSSH per-connection server daemon (139.178.89.65:56132). 
Sep 4 23:47:06.379559 sshd[4941]: Accepted publickey for core from 139.178.89.65 port 56132 ssh2: RSA SHA256:wIWgASbTOEOY+RJ7C8r7IgT0+0t1r6NIK7DD/aRqsNo Sep 4 23:47:06.382207 sshd-session[4941]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 23:47:06.390267 systemd-logind[1928]: New session 19 of user core. Sep 4 23:47:06.401327 systemd[1]: Started session-19.scope - Session 19 of User core. Sep 4 23:47:07.332387 sshd[4943]: Connection closed by 139.178.89.65 port 56132 Sep 4 23:47:07.333022 sshd-session[4941]: pam_unix(sshd:session): session closed for user core Sep 4 23:47:07.345450 systemd[1]: sshd@18-172.31.17.142:22-139.178.89.65:56132.service: Deactivated successfully. Sep 4 23:47:07.357191 systemd[1]: session-19.scope: Deactivated successfully. Sep 4 23:47:07.362906 systemd-logind[1928]: Session 19 logged out. Waiting for processes to exit. Sep 4 23:47:07.395082 systemd[1]: Started sshd@19-172.31.17.142:22-139.178.89.65:56142.service - OpenSSH per-connection server daemon (139.178.89.65:56142). Sep 4 23:47:07.397531 systemd-logind[1928]: Removed session 19. Sep 4 23:47:07.577659 sshd[4959]: Accepted publickey for core from 139.178.89.65 port 56142 ssh2: RSA SHA256:wIWgASbTOEOY+RJ7C8r7IgT0+0t1r6NIK7DD/aRqsNo Sep 4 23:47:07.580650 sshd-session[4959]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 23:47:07.590501 systemd-logind[1928]: New session 20 of user core. Sep 4 23:47:07.595363 systemd[1]: Started session-20.scope - Session 20 of User core. Sep 4 23:47:08.100543 sshd[4962]: Connection closed by 139.178.89.65 port 56142 Sep 4 23:47:08.100420 sshd-session[4959]: pam_unix(sshd:session): session closed for user core Sep 4 23:47:08.107621 systemd-logind[1928]: Session 20 logged out. Waiting for processes to exit. Sep 4 23:47:08.109051 systemd[1]: sshd@19-172.31.17.142:22-139.178.89.65:56142.service: Deactivated successfully. 
Sep 4 23:47:08.115212 systemd[1]: session-20.scope: Deactivated successfully. Sep 4 23:47:08.118633 systemd-logind[1928]: Removed session 20. Sep 4 23:47:08.147027 systemd[1]: Started sshd@20-172.31.17.142:22-139.178.89.65:56144.service - OpenSSH per-connection server daemon (139.178.89.65:56144). Sep 4 23:47:08.335460 sshd[4972]: Accepted publickey for core from 139.178.89.65 port 56144 ssh2: RSA SHA256:wIWgASbTOEOY+RJ7C8r7IgT0+0t1r6NIK7DD/aRqsNo Sep 4 23:47:08.338187 sshd-session[4972]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 23:47:08.347229 systemd-logind[1928]: New session 21 of user core. Sep 4 23:47:08.352350 systemd[1]: Started session-21.scope - Session 21 of User core. Sep 4 23:47:08.602180 sshd[4974]: Connection closed by 139.178.89.65 port 56144 Sep 4 23:47:08.602014 sshd-session[4972]: pam_unix(sshd:session): session closed for user core Sep 4 23:47:08.608874 systemd[1]: sshd@20-172.31.17.142:22-139.178.89.65:56144.service: Deactivated successfully. Sep 4 23:47:08.612970 systemd[1]: session-21.scope: Deactivated successfully. Sep 4 23:47:08.615159 systemd-logind[1928]: Session 21 logged out. Waiting for processes to exit. Sep 4 23:47:08.617878 systemd-logind[1928]: Removed session 21. Sep 4 23:47:13.650009 systemd[1]: Started sshd@21-172.31.17.142:22-139.178.89.65:52162.service - OpenSSH per-connection server daemon (139.178.89.65:52162). Sep 4 23:47:13.825846 sshd[4986]: Accepted publickey for core from 139.178.89.65 port 52162 ssh2: RSA SHA256:wIWgASbTOEOY+RJ7C8r7IgT0+0t1r6NIK7DD/aRqsNo Sep 4 23:47:13.828733 sshd-session[4986]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 23:47:13.838441 systemd-logind[1928]: New session 22 of user core. Sep 4 23:47:13.846350 systemd[1]: Started session-22.scope - Session 22 of User core. 
Sep 4 23:47:14.087507 sshd[4989]: Connection closed by 139.178.89.65 port 52162 Sep 4 23:47:14.091083 sshd-session[4986]: pam_unix(sshd:session): session closed for user core Sep 4 23:47:14.103344 systemd[1]: sshd@21-172.31.17.142:22-139.178.89.65:52162.service: Deactivated successfully. Sep 4 23:47:14.109994 systemd[1]: session-22.scope: Deactivated successfully. Sep 4 23:47:14.113295 systemd-logind[1928]: Session 22 logged out. Waiting for processes to exit. Sep 4 23:47:14.115662 systemd-logind[1928]: Removed session 22. Sep 4 23:47:19.136558 systemd[1]: Started sshd@22-172.31.17.142:22-139.178.89.65:52166.service - OpenSSH per-connection server daemon (139.178.89.65:52166). Sep 4 23:47:19.319663 sshd[5004]: Accepted publickey for core from 139.178.89.65 port 52166 ssh2: RSA SHA256:wIWgASbTOEOY+RJ7C8r7IgT0+0t1r6NIK7DD/aRqsNo Sep 4 23:47:19.322217 sshd-session[5004]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 23:47:19.330976 systemd-logind[1928]: New session 23 of user core. Sep 4 23:47:19.345346 systemd[1]: Started session-23.scope - Session 23 of User core. Sep 4 23:47:19.581143 sshd[5006]: Connection closed by 139.178.89.65 port 52166 Sep 4 23:47:19.581961 sshd-session[5004]: pam_unix(sshd:session): session closed for user core Sep 4 23:47:19.589996 systemd[1]: sshd@22-172.31.17.142:22-139.178.89.65:52166.service: Deactivated successfully. Sep 4 23:47:19.594029 systemd[1]: session-23.scope: Deactivated successfully. Sep 4 23:47:19.597009 systemd-logind[1928]: Session 23 logged out. Waiting for processes to exit. Sep 4 23:47:19.599396 systemd-logind[1928]: Removed session 23. Sep 4 23:47:24.625595 systemd[1]: Started sshd@23-172.31.17.142:22-139.178.89.65:45346.service - OpenSSH per-connection server daemon (139.178.89.65:45346). 
Sep 4 23:47:24.810339 sshd[5017]: Accepted publickey for core from 139.178.89.65 port 45346 ssh2: RSA SHA256:wIWgASbTOEOY+RJ7C8r7IgT0+0t1r6NIK7DD/aRqsNo Sep 4 23:47:24.812161 sshd-session[5017]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 23:47:24.820917 systemd-logind[1928]: New session 24 of user core. Sep 4 23:47:24.829361 systemd[1]: Started session-24.scope - Session 24 of User core. Sep 4 23:47:25.083453 sshd[5019]: Connection closed by 139.178.89.65 port 45346 Sep 4 23:47:25.084398 sshd-session[5017]: pam_unix(sshd:session): session closed for user core Sep 4 23:47:25.091312 systemd[1]: sshd@23-172.31.17.142:22-139.178.89.65:45346.service: Deactivated successfully. Sep 4 23:47:25.095465 systemd[1]: session-24.scope: Deactivated successfully. Sep 4 23:47:25.099195 systemd-logind[1928]: Session 24 logged out. Waiting for processes to exit. Sep 4 23:47:25.102738 systemd-logind[1928]: Removed session 24. Sep 4 23:47:30.127653 systemd[1]: Started sshd@24-172.31.17.142:22-139.178.89.65:53076.service - OpenSSH per-connection server daemon (139.178.89.65:53076). Sep 4 23:47:30.321806 sshd[5033]: Accepted publickey for core from 139.178.89.65 port 53076 ssh2: RSA SHA256:wIWgASbTOEOY+RJ7C8r7IgT0+0t1r6NIK7DD/aRqsNo Sep 4 23:47:30.324541 sshd-session[5033]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 23:47:30.333862 systemd-logind[1928]: New session 25 of user core. Sep 4 23:47:30.339495 systemd[1]: Started session-25.scope - Session 25 of User core. Sep 4 23:47:30.579566 sshd[5035]: Connection closed by 139.178.89.65 port 53076 Sep 4 23:47:30.580495 sshd-session[5033]: pam_unix(sshd:session): session closed for user core Sep 4 23:47:30.585908 systemd-logind[1928]: Session 25 logged out. Waiting for processes to exit. Sep 4 23:47:30.588262 systemd[1]: sshd@24-172.31.17.142:22-139.178.89.65:53076.service: Deactivated successfully. 
Sep 4 23:47:30.592971 systemd[1]: session-25.scope: Deactivated successfully. Sep 4 23:47:30.595521 systemd-logind[1928]: Removed session 25. Sep 4 23:47:30.620593 systemd[1]: Started sshd@25-172.31.17.142:22-139.178.89.65:53092.service - OpenSSH per-connection server daemon (139.178.89.65:53092). Sep 4 23:47:30.813689 sshd[5046]: Accepted publickey for core from 139.178.89.65 port 53092 ssh2: RSA SHA256:wIWgASbTOEOY+RJ7C8r7IgT0+0t1r6NIK7DD/aRqsNo Sep 4 23:47:30.816203 sshd-session[5046]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 23:47:30.825060 systemd-logind[1928]: New session 26 of user core. Sep 4 23:47:30.839367 systemd[1]: Started session-26.scope - Session 26 of User core. Sep 4 23:47:34.143471 containerd[1951]: time="2025-09-04T23:47:34.143128355Z" level=info msg="StopContainer for \"8473e2fca71eb93120d9a4b65c4c34d14964526d9ba822a915bf2de32251491d\" with timeout 30 (s)" Sep 4 23:47:34.146263 containerd[1951]: time="2025-09-04T23:47:34.146055491Z" level=info msg="Stop container \"8473e2fca71eb93120d9a4b65c4c34d14964526d9ba822a915bf2de32251491d\" with signal terminated" Sep 4 23:47:34.189857 containerd[1951]: time="2025-09-04T23:47:34.189797963Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 4 23:47:34.194322 systemd[1]: cri-containerd-8473e2fca71eb93120d9a4b65c4c34d14964526d9ba822a915bf2de32251491d.scope: Deactivated successfully. 
Sep 4 23:47:34.211707 containerd[1951]: time="2025-09-04T23:47:34.211626167Z" level=info msg="StopContainer for \"857e498b0084830a81e6362b3dc9af0603682f1095d8adb53a4c9da681605c3f\" with timeout 2 (s)" Sep 4 23:47:34.212599 containerd[1951]: time="2025-09-04T23:47:34.212337167Z" level=info msg="Stop container \"857e498b0084830a81e6362b3dc9af0603682f1095d8adb53a4c9da681605c3f\" with signal terminated" Sep 4 23:47:34.237109 systemd-networkd[1864]: lxc_health: Link DOWN Sep 4 23:47:34.238641 systemd-networkd[1864]: lxc_health: Lost carrier Sep 4 23:47:34.276493 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8473e2fca71eb93120d9a4b65c4c34d14964526d9ba822a915bf2de32251491d-rootfs.mount: Deactivated successfully. Sep 4 23:47:34.279123 systemd[1]: cri-containerd-857e498b0084830a81e6362b3dc9af0603682f1095d8adb53a4c9da681605c3f.scope: Deactivated successfully. Sep 4 23:47:34.280565 systemd[1]: cri-containerd-857e498b0084830a81e6362b3dc9af0603682f1095d8adb53a4c9da681605c3f.scope: Consumed 14.559s CPU time, 124.5M memory peak, 128K read from disk, 12.9M written to disk. Sep 4 23:47:34.293608 containerd[1951]: time="2025-09-04T23:47:34.293243519Z" level=info msg="shim disconnected" id=8473e2fca71eb93120d9a4b65c4c34d14964526d9ba822a915bf2de32251491d namespace=k8s.io Sep 4 23:47:34.293608 containerd[1951]: time="2025-09-04T23:47:34.293341271Z" level=warning msg="cleaning up after shim disconnected" id=8473e2fca71eb93120d9a4b65c4c34d14964526d9ba822a915bf2de32251491d namespace=k8s.io Sep 4 23:47:34.293608 containerd[1951]: time="2025-09-04T23:47:34.293362679Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 4 23:47:34.333614 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-857e498b0084830a81e6362b3dc9af0603682f1095d8adb53a4c9da681605c3f-rootfs.mount: Deactivated successfully. 
Sep 4 23:47:34.338466 containerd[1951]: time="2025-09-04T23:47:34.338286708Z" level=info msg="StopContainer for \"8473e2fca71eb93120d9a4b65c4c34d14964526d9ba822a915bf2de32251491d\" returns successfully" Sep 4 23:47:34.342111 containerd[1951]: time="2025-09-04T23:47:34.339444156Z" level=info msg="StopPodSandbox for \"6b4cff6236572f1c4d382473d6eb8fc890aa0a5407d197eb8846b416d0379391\"" Sep 4 23:47:34.342111 containerd[1951]: time="2025-09-04T23:47:34.339506976Z" level=info msg="Container to stop \"8473e2fca71eb93120d9a4b65c4c34d14964526d9ba822a915bf2de32251491d\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 4 23:47:34.346413 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-6b4cff6236572f1c4d382473d6eb8fc890aa0a5407d197eb8846b416d0379391-shm.mount: Deactivated successfully. Sep 4 23:47:34.349090 containerd[1951]: time="2025-09-04T23:47:34.348670584Z" level=info msg="shim disconnected" id=857e498b0084830a81e6362b3dc9af0603682f1095d8adb53a4c9da681605c3f namespace=k8s.io Sep 4 23:47:34.349090 containerd[1951]: time="2025-09-04T23:47:34.348952236Z" level=warning msg="cleaning up after shim disconnected" id=857e498b0084830a81e6362b3dc9af0603682f1095d8adb53a4c9da681605c3f namespace=k8s.io Sep 4 23:47:34.349090 containerd[1951]: time="2025-09-04T23:47:34.348975780Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 4 23:47:34.357514 systemd[1]: cri-containerd-6b4cff6236572f1c4d382473d6eb8fc890aa0a5407d197eb8846b416d0379391.scope: Deactivated successfully. 
Sep 4 23:47:34.380522 containerd[1951]: time="2025-09-04T23:47:34.380370528Z" level=warning msg="cleanup warnings time=\"2025-09-04T23:47:34Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Sep 4 23:47:34.389474 containerd[1951]: time="2025-09-04T23:47:34.389308884Z" level=info msg="StopContainer for \"857e498b0084830a81e6362b3dc9af0603682f1095d8adb53a4c9da681605c3f\" returns successfully" Sep 4 23:47:34.390294 containerd[1951]: time="2025-09-04T23:47:34.390129576Z" level=info msg="StopPodSandbox for \"7a6c8b68822318f6da77d5c57ce799f131fa8060c43e81c889821f4182cf692f\"" Sep 4 23:47:34.390549 containerd[1951]: time="2025-09-04T23:47:34.390474792Z" level=info msg="Container to stop \"a6d092037ec2b87145a178d933d493471962d587431979a97954140621b54395\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 4 23:47:34.390819 containerd[1951]: time="2025-09-04T23:47:34.390513984Z" level=info msg="Container to stop \"13e5399f06d97aa002dc6ff138ec0d465b9b8f741c391f9de9ba6e1fca340d4e\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 4 23:47:34.390819 containerd[1951]: time="2025-09-04T23:47:34.390755352Z" level=info msg="Container to stop \"dff86cad6e9d3bdc8452cab1cfe6938a5c188ed777d45c4439786fbaea19faf0\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 4 23:47:34.390819 containerd[1951]: time="2025-09-04T23:47:34.390781008Z" level=info msg="Container to stop \"2e5b2eb63318dd520df518dd1d77e5508dcb37302f7cce2e4cf02611711d83eb\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 4 23:47:34.391196 containerd[1951]: time="2025-09-04T23:47:34.391115004Z" level=info msg="Container to stop \"857e498b0084830a81e6362b3dc9af0603682f1095d8adb53a4c9da681605c3f\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 4 23:47:34.396602 
systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-7a6c8b68822318f6da77d5c57ce799f131fa8060c43e81c889821f4182cf692f-shm.mount: Deactivated successfully. Sep 4 23:47:34.409778 systemd[1]: cri-containerd-7a6c8b68822318f6da77d5c57ce799f131fa8060c43e81c889821f4182cf692f.scope: Deactivated successfully. Sep 4 23:47:34.428932 containerd[1951]: time="2025-09-04T23:47:34.428570352Z" level=info msg="shim disconnected" id=6b4cff6236572f1c4d382473d6eb8fc890aa0a5407d197eb8846b416d0379391 namespace=k8s.io Sep 4 23:47:34.428932 containerd[1951]: time="2025-09-04T23:47:34.428655804Z" level=warning msg="cleaning up after shim disconnected" id=6b4cff6236572f1c4d382473d6eb8fc890aa0a5407d197eb8846b416d0379391 namespace=k8s.io Sep 4 23:47:34.428932 containerd[1951]: time="2025-09-04T23:47:34.428680188Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 4 23:47:34.467240 containerd[1951]: time="2025-09-04T23:47:34.466658940Z" level=info msg="TearDown network for sandbox \"6b4cff6236572f1c4d382473d6eb8fc890aa0a5407d197eb8846b416d0379391\" successfully" Sep 4 23:47:34.467240 containerd[1951]: time="2025-09-04T23:47:34.466713324Z" level=info msg="StopPodSandbox for \"6b4cff6236572f1c4d382473d6eb8fc890aa0a5407d197eb8846b416d0379391\" returns successfully" Sep 4 23:47:34.478114 containerd[1951]: time="2025-09-04T23:47:34.477397176Z" level=info msg="shim disconnected" id=7a6c8b68822318f6da77d5c57ce799f131fa8060c43e81c889821f4182cf692f namespace=k8s.io Sep 4 23:47:34.478114 containerd[1951]: time="2025-09-04T23:47:34.477479832Z" level=warning msg="cleaning up after shim disconnected" id=7a6c8b68822318f6da77d5c57ce799f131fa8060c43e81c889821f4182cf692f namespace=k8s.io Sep 4 23:47:34.478114 containerd[1951]: time="2025-09-04T23:47:34.477562464Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 4 23:47:34.506195 containerd[1951]: time="2025-09-04T23:47:34.506112613Z" level=warning msg="cleanup warnings time=\"2025-09-04T23:47:34Z\" level=warning msg=\"failed to 
remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Sep 4 23:47:34.508533 containerd[1951]: time="2025-09-04T23:47:34.508488037Z" level=info msg="TearDown network for sandbox \"7a6c8b68822318f6da77d5c57ce799f131fa8060c43e81c889821f4182cf692f\" successfully" Sep 4 23:47:34.508533 containerd[1951]: time="2025-09-04T23:47:34.508568809Z" level=info msg="StopPodSandbox for \"7a6c8b68822318f6da77d5c57ce799f131fa8060c43e81c889821f4182cf692f\" returns successfully" Sep 4 23:47:34.555160 kubelet[3221]: I0904 23:47:34.554956 3221 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c2g9m\" (UniqueName: \"kubernetes.io/projected/8805b2d1-bdf7-45a8-a336-297fc8e02399-kube-api-access-c2g9m\") pod \"8805b2d1-bdf7-45a8-a336-297fc8e02399\" (UID: \"8805b2d1-bdf7-45a8-a336-297fc8e02399\") " Sep 4 23:47:34.555742 kubelet[3221]: I0904 23:47:34.555182 3221 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8805b2d1-bdf7-45a8-a336-297fc8e02399-cilium-config-path\") pod \"8805b2d1-bdf7-45a8-a336-297fc8e02399\" (UID: \"8805b2d1-bdf7-45a8-a336-297fc8e02399\") " Sep 4 23:47:34.560326 kubelet[3221]: I0904 23:47:34.560233 3221 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8805b2d1-bdf7-45a8-a336-297fc8e02399-kube-api-access-c2g9m" (OuterVolumeSpecName: "kube-api-access-c2g9m") pod "8805b2d1-bdf7-45a8-a336-297fc8e02399" (UID: "8805b2d1-bdf7-45a8-a336-297fc8e02399"). InnerVolumeSpecName "kube-api-access-c2g9m". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 4 23:47:34.562217 kubelet[3221]: I0904 23:47:34.562166 3221 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8805b2d1-bdf7-45a8-a336-297fc8e02399-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "8805b2d1-bdf7-45a8-a336-297fc8e02399" (UID: "8805b2d1-bdf7-45a8-a336-297fc8e02399"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 4 23:47:34.636771 kubelet[3221]: I0904 23:47:34.636513 3221 scope.go:117] "RemoveContainer" containerID="8473e2fca71eb93120d9a4b65c4c34d14964526d9ba822a915bf2de32251491d" Sep 4 23:47:34.642988 containerd[1951]: time="2025-09-04T23:47:34.642425965Z" level=info msg="RemoveContainer for \"8473e2fca71eb93120d9a4b65c4c34d14964526d9ba822a915bf2de32251491d\"" Sep 4 23:47:34.657254 containerd[1951]: time="2025-09-04T23:47:34.656114125Z" level=info msg="RemoveContainer for \"8473e2fca71eb93120d9a4b65c4c34d14964526d9ba822a915bf2de32251491d\" returns successfully" Sep 4 23:47:34.658829 kubelet[3221]: I0904 23:47:34.656947 3221 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/715e28ad-7110-413f-a3ae-80efb70c2168-host-proc-sys-net\") pod \"715e28ad-7110-413f-a3ae-80efb70c2168\" (UID: \"715e28ad-7110-413f-a3ae-80efb70c2168\") " Sep 4 23:47:34.658829 kubelet[3221]: I0904 23:47:34.656993 3221 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/715e28ad-7110-413f-a3ae-80efb70c2168-bpf-maps\") pod \"715e28ad-7110-413f-a3ae-80efb70c2168\" (UID: \"715e28ad-7110-413f-a3ae-80efb70c2168\") " Sep 4 23:47:34.658829 kubelet[3221]: I0904 23:47:34.657030 3221 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: 
\"kubernetes.io/host-path/715e28ad-7110-413f-a3ae-80efb70c2168-cilium-run\") pod \"715e28ad-7110-413f-a3ae-80efb70c2168\" (UID: \"715e28ad-7110-413f-a3ae-80efb70c2168\") " Sep 4 23:47:34.658829 kubelet[3221]: I0904 23:47:34.657093 3221 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/715e28ad-7110-413f-a3ae-80efb70c2168-cni-path\") pod \"715e28ad-7110-413f-a3ae-80efb70c2168\" (UID: \"715e28ad-7110-413f-a3ae-80efb70c2168\") " Sep 4 23:47:34.658829 kubelet[3221]: I0904 23:47:34.657142 3221 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/715e28ad-7110-413f-a3ae-80efb70c2168-host-proc-sys-kernel\") pod \"715e28ad-7110-413f-a3ae-80efb70c2168\" (UID: \"715e28ad-7110-413f-a3ae-80efb70c2168\") " Sep 4 23:47:34.658829 kubelet[3221]: I0904 23:47:34.657197 3221 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/715e28ad-7110-413f-a3ae-80efb70c2168-hubble-tls\") pod \"715e28ad-7110-413f-a3ae-80efb70c2168\" (UID: \"715e28ad-7110-413f-a3ae-80efb70c2168\") " Sep 4 23:47:34.660998 kubelet[3221]: I0904 23:47:34.657239 3221 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/715e28ad-7110-413f-a3ae-80efb70c2168-cilium-config-path\") pod \"715e28ad-7110-413f-a3ae-80efb70c2168\" (UID: \"715e28ad-7110-413f-a3ae-80efb70c2168\") " Sep 4 23:47:34.660998 kubelet[3221]: I0904 23:47:34.657277 3221 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/715e28ad-7110-413f-a3ae-80efb70c2168-xtables-lock\") pod \"715e28ad-7110-413f-a3ae-80efb70c2168\" (UID: \"715e28ad-7110-413f-a3ae-80efb70c2168\") " Sep 4 23:47:34.660998 kubelet[3221]: I0904 23:47:34.657319 3221 
reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/715e28ad-7110-413f-a3ae-80efb70c2168-clustermesh-secrets\") pod \"715e28ad-7110-413f-a3ae-80efb70c2168\" (UID: \"715e28ad-7110-413f-a3ae-80efb70c2168\") " Sep 4 23:47:34.660998 kubelet[3221]: I0904 23:47:34.657355 3221 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/715e28ad-7110-413f-a3ae-80efb70c2168-lib-modules\") pod \"715e28ad-7110-413f-a3ae-80efb70c2168\" (UID: \"715e28ad-7110-413f-a3ae-80efb70c2168\") " Sep 4 23:47:34.660998 kubelet[3221]: I0904 23:47:34.657388 3221 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/715e28ad-7110-413f-a3ae-80efb70c2168-hostproc\") pod \"715e28ad-7110-413f-a3ae-80efb70c2168\" (UID: \"715e28ad-7110-413f-a3ae-80efb70c2168\") " Sep 4 23:47:34.660998 kubelet[3221]: I0904 23:47:34.657423 3221 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/715e28ad-7110-413f-a3ae-80efb70c2168-etc-cni-netd\") pod \"715e28ad-7110-413f-a3ae-80efb70c2168\" (UID: \"715e28ad-7110-413f-a3ae-80efb70c2168\") " Sep 4 23:47:34.660720 systemd[1]: Removed slice kubepods-besteffort-pod8805b2d1_bdf7_45a8_a336_297fc8e02399.slice - libcontainer container kubepods-besteffort-pod8805b2d1_bdf7_45a8_a336_297fc8e02399.slice. 
Sep 4 23:47:34.662807 kubelet[3221]: I0904 23:47:34.657455 3221 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/715e28ad-7110-413f-a3ae-80efb70c2168-cilium-cgroup\") pod \"715e28ad-7110-413f-a3ae-80efb70c2168\" (UID: \"715e28ad-7110-413f-a3ae-80efb70c2168\") " Sep 4 23:47:34.662807 kubelet[3221]: I0904 23:47:34.657491 3221 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ftnmx\" (UniqueName: \"kubernetes.io/projected/715e28ad-7110-413f-a3ae-80efb70c2168-kube-api-access-ftnmx\") pod \"715e28ad-7110-413f-a3ae-80efb70c2168\" (UID: \"715e28ad-7110-413f-a3ae-80efb70c2168\") " Sep 4 23:47:34.662807 kubelet[3221]: I0904 23:47:34.657562 3221 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-c2g9m\" (UniqueName: \"kubernetes.io/projected/8805b2d1-bdf7-45a8-a336-297fc8e02399-kube-api-access-c2g9m\") on node \"ip-172-31-17-142\" DevicePath \"\"" Sep 4 23:47:34.662807 kubelet[3221]: I0904 23:47:34.657587 3221 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8805b2d1-bdf7-45a8-a336-297fc8e02399-cilium-config-path\") on node \"ip-172-31-17-142\" DevicePath \"\"" Sep 4 23:47:34.663855 kubelet[3221]: I0904 23:47:34.663462 3221 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/715e28ad-7110-413f-a3ae-80efb70c2168-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "715e28ad-7110-413f-a3ae-80efb70c2168" (UID: "715e28ad-7110-413f-a3ae-80efb70c2168"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 4 23:47:34.663855 kubelet[3221]: I0904 23:47:34.663555 3221 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/715e28ad-7110-413f-a3ae-80efb70c2168-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "715e28ad-7110-413f-a3ae-80efb70c2168" (UID: "715e28ad-7110-413f-a3ae-80efb70c2168"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 4 23:47:34.663855 kubelet[3221]: I0904 23:47:34.663595 3221 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/715e28ad-7110-413f-a3ae-80efb70c2168-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "715e28ad-7110-413f-a3ae-80efb70c2168" (UID: "715e28ad-7110-413f-a3ae-80efb70c2168"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 4 23:47:34.663855 kubelet[3221]: I0904 23:47:34.663630 3221 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/715e28ad-7110-413f-a3ae-80efb70c2168-cni-path" (OuterVolumeSpecName: "cni-path") pod "715e28ad-7110-413f-a3ae-80efb70c2168" (UID: "715e28ad-7110-413f-a3ae-80efb70c2168"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 4 23:47:34.663855 kubelet[3221]: I0904 23:47:34.663664 3221 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/715e28ad-7110-413f-a3ae-80efb70c2168-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "715e28ad-7110-413f-a3ae-80efb70c2168" (UID: "715e28ad-7110-413f-a3ae-80efb70c2168"). InnerVolumeSpecName "host-proc-sys-kernel". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 4 23:47:34.665116 kubelet[3221]: I0904 23:47:34.664477 3221 scope.go:117] "RemoveContainer" containerID="8473e2fca71eb93120d9a4b65c4c34d14964526d9ba822a915bf2de32251491d" Sep 4 23:47:34.665116 kubelet[3221]: I0904 23:47:34.665007 3221 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/715e28ad-7110-413f-a3ae-80efb70c2168-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "715e28ad-7110-413f-a3ae-80efb70c2168" (UID: "715e28ad-7110-413f-a3ae-80efb70c2168"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 4 23:47:34.665305 containerd[1951]: time="2025-09-04T23:47:34.664881613Z" level=error msg="ContainerStatus for \"8473e2fca71eb93120d9a4b65c4c34d14964526d9ba822a915bf2de32251491d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"8473e2fca71eb93120d9a4b65c4c34d14964526d9ba822a915bf2de32251491d\": not found" Sep 4 23:47:34.666385 kubelet[3221]: I0904 23:47:34.666324 3221 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/715e28ad-7110-413f-a3ae-80efb70c2168-hostproc" (OuterVolumeSpecName: "hostproc") pod "715e28ad-7110-413f-a3ae-80efb70c2168" (UID: "715e28ad-7110-413f-a3ae-80efb70c2168"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 4 23:47:34.666560 kubelet[3221]: I0904 23:47:34.666407 3221 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/715e28ad-7110-413f-a3ae-80efb70c2168-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "715e28ad-7110-413f-a3ae-80efb70c2168" (UID: "715e28ad-7110-413f-a3ae-80efb70c2168"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 4 23:47:34.666560 kubelet[3221]: I0904 23:47:34.666451 3221 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/715e28ad-7110-413f-a3ae-80efb70c2168-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "715e28ad-7110-413f-a3ae-80efb70c2168" (UID: "715e28ad-7110-413f-a3ae-80efb70c2168"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 4 23:47:34.666959 kubelet[3221]: E0904 23:47:34.666749 3221 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"8473e2fca71eb93120d9a4b65c4c34d14964526d9ba822a915bf2de32251491d\": not found" containerID="8473e2fca71eb93120d9a4b65c4c34d14964526d9ba822a915bf2de32251491d" Sep 4 23:47:34.666959 kubelet[3221]: I0904 23:47:34.666800 3221 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/715e28ad-7110-413f-a3ae-80efb70c2168-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "715e28ad-7110-413f-a3ae-80efb70c2168" (UID: "715e28ad-7110-413f-a3ae-80efb70c2168"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 4 23:47:34.666959 kubelet[3221]: I0904 23:47:34.666811 3221 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"8473e2fca71eb93120d9a4b65c4c34d14964526d9ba822a915bf2de32251491d"} err="failed to get container status \"8473e2fca71eb93120d9a4b65c4c34d14964526d9ba822a915bf2de32251491d\": rpc error: code = NotFound desc = an error occurred when try to find container \"8473e2fca71eb93120d9a4b65c4c34d14964526d9ba822a915bf2de32251491d\": not found" Sep 4 23:47:34.666959 kubelet[3221]: I0904 23:47:34.666924 3221 scope.go:117] "RemoveContainer" containerID="857e498b0084830a81e6362b3dc9af0603682f1095d8adb53a4c9da681605c3f" Sep 4 23:47:34.675588 containerd[1951]: time="2025-09-04T23:47:34.675421297Z" level=info msg="RemoveContainer for \"857e498b0084830a81e6362b3dc9af0603682f1095d8adb53a4c9da681605c3f\"" Sep 4 23:47:34.676478 kubelet[3221]: I0904 23:47:34.676325 3221 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/715e28ad-7110-413f-a3ae-80efb70c2168-kube-api-access-ftnmx" (OuterVolumeSpecName: "kube-api-access-ftnmx") pod "715e28ad-7110-413f-a3ae-80efb70c2168" (UID: "715e28ad-7110-413f-a3ae-80efb70c2168"). InnerVolumeSpecName "kube-api-access-ftnmx". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 4 23:47:34.679748 kubelet[3221]: I0904 23:47:34.679492 3221 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/715e28ad-7110-413f-a3ae-80efb70c2168-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "715e28ad-7110-413f-a3ae-80efb70c2168" (UID: "715e28ad-7110-413f-a3ae-80efb70c2168"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 4 23:47:34.679748 kubelet[3221]: I0904 23:47:34.679611 3221 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/715e28ad-7110-413f-a3ae-80efb70c2168-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "715e28ad-7110-413f-a3ae-80efb70c2168" (UID: "715e28ad-7110-413f-a3ae-80efb70c2168"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 4 23:47:34.684156 kubelet[3221]: I0904 23:47:34.683792 3221 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/715e28ad-7110-413f-a3ae-80efb70c2168-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "715e28ad-7110-413f-a3ae-80efb70c2168" (UID: "715e28ad-7110-413f-a3ae-80efb70c2168"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Sep 4 23:47:34.684724 containerd[1951]: time="2025-09-04T23:47:34.683835781Z" level=info msg="RemoveContainer for \"857e498b0084830a81e6362b3dc9af0603682f1095d8adb53a4c9da681605c3f\" returns successfully" Sep 4 23:47:34.685243 kubelet[3221]: I0904 23:47:34.685212 3221 scope.go:117] "RemoveContainer" containerID="13e5399f06d97aa002dc6ff138ec0d465b9b8f741c391f9de9ba6e1fca340d4e" Sep 4 23:47:34.691533 containerd[1951]: time="2025-09-04T23:47:34.691363573Z" level=info msg="RemoveContainer for \"13e5399f06d97aa002dc6ff138ec0d465b9b8f741c391f9de9ba6e1fca340d4e\"" Sep 4 23:47:34.704731 containerd[1951]: time="2025-09-04T23:47:34.704435269Z" level=info msg="RemoveContainer for \"13e5399f06d97aa002dc6ff138ec0d465b9b8f741c391f9de9ba6e1fca340d4e\" returns successfully" Sep 4 23:47:34.705283 kubelet[3221]: I0904 23:47:34.705108 3221 scope.go:117] "RemoveContainer" containerID="a6d092037ec2b87145a178d933d493471962d587431979a97954140621b54395" Sep 4 23:47:34.709108 containerd[1951]: time="2025-09-04T23:47:34.708973298Z" level=info 
msg="RemoveContainer for \"a6d092037ec2b87145a178d933d493471962d587431979a97954140621b54395\"" Sep 4 23:47:34.715575 containerd[1951]: time="2025-09-04T23:47:34.715500578Z" level=info msg="RemoveContainer for \"a6d092037ec2b87145a178d933d493471962d587431979a97954140621b54395\" returns successfully" Sep 4 23:47:34.718211 kubelet[3221]: I0904 23:47:34.718025 3221 scope.go:117] "RemoveContainer" containerID="2e5b2eb63318dd520df518dd1d77e5508dcb37302f7cce2e4cf02611711d83eb" Sep 4 23:47:34.721608 containerd[1951]: time="2025-09-04T23:47:34.721531238Z" level=info msg="RemoveContainer for \"2e5b2eb63318dd520df518dd1d77e5508dcb37302f7cce2e4cf02611711d83eb\"" Sep 4 23:47:34.727513 containerd[1951]: time="2025-09-04T23:47:34.727440254Z" level=info msg="RemoveContainer for \"2e5b2eb63318dd520df518dd1d77e5508dcb37302f7cce2e4cf02611711d83eb\" returns successfully" Sep 4 23:47:34.728168 kubelet[3221]: I0904 23:47:34.727939 3221 scope.go:117] "RemoveContainer" containerID="dff86cad6e9d3bdc8452cab1cfe6938a5c188ed777d45c4439786fbaea19faf0" Sep 4 23:47:34.730799 containerd[1951]: time="2025-09-04T23:47:34.730394198Z" level=info msg="RemoveContainer for \"dff86cad6e9d3bdc8452cab1cfe6938a5c188ed777d45c4439786fbaea19faf0\"" Sep 4 23:47:34.736367 containerd[1951]: time="2025-09-04T23:47:34.736320302Z" level=info msg="RemoveContainer for \"dff86cad6e9d3bdc8452cab1cfe6938a5c188ed777d45c4439786fbaea19faf0\" returns successfully" Sep 4 23:47:34.737220 kubelet[3221]: I0904 23:47:34.737024 3221 scope.go:117] "RemoveContainer" containerID="857e498b0084830a81e6362b3dc9af0603682f1095d8adb53a4c9da681605c3f" Sep 4 23:47:34.738321 containerd[1951]: time="2025-09-04T23:47:34.737877530Z" level=error msg="ContainerStatus for \"857e498b0084830a81e6362b3dc9af0603682f1095d8adb53a4c9da681605c3f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"857e498b0084830a81e6362b3dc9af0603682f1095d8adb53a4c9da681605c3f\": not found" Sep 4 23:47:34.738477 kubelet[3221]: 
E0904 23:47:34.738104 3221 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"857e498b0084830a81e6362b3dc9af0603682f1095d8adb53a4c9da681605c3f\": not found" containerID="857e498b0084830a81e6362b3dc9af0603682f1095d8adb53a4c9da681605c3f" Sep 4 23:47:34.738477 kubelet[3221]: I0904 23:47:34.738147 3221 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"857e498b0084830a81e6362b3dc9af0603682f1095d8adb53a4c9da681605c3f"} err="failed to get container status \"857e498b0084830a81e6362b3dc9af0603682f1095d8adb53a4c9da681605c3f\": rpc error: code = NotFound desc = an error occurred when try to find container \"857e498b0084830a81e6362b3dc9af0603682f1095d8adb53a4c9da681605c3f\": not found" Sep 4 23:47:34.738477 kubelet[3221]: I0904 23:47:34.738204 3221 scope.go:117] "RemoveContainer" containerID="13e5399f06d97aa002dc6ff138ec0d465b9b8f741c391f9de9ba6e1fca340d4e" Sep 4 23:47:34.738720 containerd[1951]: time="2025-09-04T23:47:34.738490010Z" level=error msg="ContainerStatus for \"13e5399f06d97aa002dc6ff138ec0d465b9b8f741c391f9de9ba6e1fca340d4e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"13e5399f06d97aa002dc6ff138ec0d465b9b8f741c391f9de9ba6e1fca340d4e\": not found" Sep 4 23:47:34.739370 kubelet[3221]: E0904 23:47:34.739119 3221 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"13e5399f06d97aa002dc6ff138ec0d465b9b8f741c391f9de9ba6e1fca340d4e\": not found" containerID="13e5399f06d97aa002dc6ff138ec0d465b9b8f741c391f9de9ba6e1fca340d4e" Sep 4 23:47:34.739370 kubelet[3221]: I0904 23:47:34.739170 3221 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"13e5399f06d97aa002dc6ff138ec0d465b9b8f741c391f9de9ba6e1fca340d4e"} err="failed to get container status 
\"13e5399f06d97aa002dc6ff138ec0d465b9b8f741c391f9de9ba6e1fca340d4e\": rpc error: code = NotFound desc = an error occurred when try to find container \"13e5399f06d97aa002dc6ff138ec0d465b9b8f741c391f9de9ba6e1fca340d4e\": not found" Sep 4 23:47:34.739370 kubelet[3221]: I0904 23:47:34.739205 3221 scope.go:117] "RemoveContainer" containerID="a6d092037ec2b87145a178d933d493471962d587431979a97954140621b54395" Sep 4 23:47:34.739786 containerd[1951]: time="2025-09-04T23:47:34.739524734Z" level=error msg="ContainerStatus for \"a6d092037ec2b87145a178d933d493471962d587431979a97954140621b54395\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a6d092037ec2b87145a178d933d493471962d587431979a97954140621b54395\": not found" Sep 4 23:47:34.740264 kubelet[3221]: E0904 23:47:34.740000 3221 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a6d092037ec2b87145a178d933d493471962d587431979a97954140621b54395\": not found" containerID="a6d092037ec2b87145a178d933d493471962d587431979a97954140621b54395" Sep 4 23:47:34.740264 kubelet[3221]: I0904 23:47:34.740046 3221 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a6d092037ec2b87145a178d933d493471962d587431979a97954140621b54395"} err="failed to get container status \"a6d092037ec2b87145a178d933d493471962d587431979a97954140621b54395\": rpc error: code = NotFound desc = an error occurred when try to find container \"a6d092037ec2b87145a178d933d493471962d587431979a97954140621b54395\": not found" Sep 4 23:47:34.740264 kubelet[3221]: I0904 23:47:34.740113 3221 scope.go:117] "RemoveContainer" containerID="2e5b2eb63318dd520df518dd1d77e5508dcb37302f7cce2e4cf02611711d83eb" Sep 4 23:47:34.740513 containerd[1951]: time="2025-09-04T23:47:34.740401598Z" level=error msg="ContainerStatus for \"2e5b2eb63318dd520df518dd1d77e5508dcb37302f7cce2e4cf02611711d83eb\" failed" 
error="rpc error: code = NotFound desc = an error occurred when try to find container \"2e5b2eb63318dd520df518dd1d77e5508dcb37302f7cce2e4cf02611711d83eb\": not found" Sep 4 23:47:34.740992 kubelet[3221]: E0904 23:47:34.740798 3221 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"2e5b2eb63318dd520df518dd1d77e5508dcb37302f7cce2e4cf02611711d83eb\": not found" containerID="2e5b2eb63318dd520df518dd1d77e5508dcb37302f7cce2e4cf02611711d83eb" Sep 4 23:47:34.740992 kubelet[3221]: I0904 23:47:34.740841 3221 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"2e5b2eb63318dd520df518dd1d77e5508dcb37302f7cce2e4cf02611711d83eb"} err="failed to get container status \"2e5b2eb63318dd520df518dd1d77e5508dcb37302f7cce2e4cf02611711d83eb\": rpc error: code = NotFound desc = an error occurred when try to find container \"2e5b2eb63318dd520df518dd1d77e5508dcb37302f7cce2e4cf02611711d83eb\": not found" Sep 4 23:47:34.740992 kubelet[3221]: I0904 23:47:34.740870 3221 scope.go:117] "RemoveContainer" containerID="dff86cad6e9d3bdc8452cab1cfe6938a5c188ed777d45c4439786fbaea19faf0" Sep 4 23:47:34.741256 containerd[1951]: time="2025-09-04T23:47:34.741187670Z" level=error msg="ContainerStatus for \"dff86cad6e9d3bdc8452cab1cfe6938a5c188ed777d45c4439786fbaea19faf0\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"dff86cad6e9d3bdc8452cab1cfe6938a5c188ed777d45c4439786fbaea19faf0\": not found" Sep 4 23:47:34.741734 kubelet[3221]: E0904 23:47:34.741571 3221 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"dff86cad6e9d3bdc8452cab1cfe6938a5c188ed777d45c4439786fbaea19faf0\": not found" containerID="dff86cad6e9d3bdc8452cab1cfe6938a5c188ed777d45c4439786fbaea19faf0" Sep 4 23:47:34.741734 kubelet[3221]: I0904 23:47:34.741615 3221 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"dff86cad6e9d3bdc8452cab1cfe6938a5c188ed777d45c4439786fbaea19faf0"} err="failed to get container status \"dff86cad6e9d3bdc8452cab1cfe6938a5c188ed777d45c4439786fbaea19faf0\": rpc error: code = NotFound desc = an error occurred when try to find container \"dff86cad6e9d3bdc8452cab1cfe6938a5c188ed777d45c4439786fbaea19faf0\": not found" Sep 4 23:47:34.758193 kubelet[3221]: I0904 23:47:34.758159 3221 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/715e28ad-7110-413f-a3ae-80efb70c2168-xtables-lock\") on node \"ip-172-31-17-142\" DevicePath \"\"" Sep 4 23:47:34.758676 kubelet[3221]: I0904 23:47:34.758354 3221 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/715e28ad-7110-413f-a3ae-80efb70c2168-clustermesh-secrets\") on node \"ip-172-31-17-142\" DevicePath \"\"" Sep 4 23:47:34.758676 kubelet[3221]: I0904 23:47:34.758383 3221 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/715e28ad-7110-413f-a3ae-80efb70c2168-lib-modules\") on node \"ip-172-31-17-142\" DevicePath \"\"" Sep 4 23:47:34.758676 kubelet[3221]: I0904 23:47:34.758407 3221 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/715e28ad-7110-413f-a3ae-80efb70c2168-hostproc\") on node \"ip-172-31-17-142\" DevicePath \"\"" Sep 4 23:47:34.758676 kubelet[3221]: I0904 23:47:34.758428 3221 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/715e28ad-7110-413f-a3ae-80efb70c2168-etc-cni-netd\") on node \"ip-172-31-17-142\" DevicePath \"\"" Sep 4 23:47:34.758676 kubelet[3221]: I0904 23:47:34.758451 3221 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: 
\"kubernetes.io/host-path/715e28ad-7110-413f-a3ae-80efb70c2168-cilium-cgroup\") on node \"ip-172-31-17-142\" DevicePath \"\"" Sep 4 23:47:34.758676 kubelet[3221]: I0904 23:47:34.758475 3221 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ftnmx\" (UniqueName: \"kubernetes.io/projected/715e28ad-7110-413f-a3ae-80efb70c2168-kube-api-access-ftnmx\") on node \"ip-172-31-17-142\" DevicePath \"\"" Sep 4 23:47:34.758676 kubelet[3221]: I0904 23:47:34.758497 3221 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/715e28ad-7110-413f-a3ae-80efb70c2168-host-proc-sys-net\") on node \"ip-172-31-17-142\" DevicePath \"\"" Sep 4 23:47:34.758676 kubelet[3221]: I0904 23:47:34.758519 3221 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/715e28ad-7110-413f-a3ae-80efb70c2168-bpf-maps\") on node \"ip-172-31-17-142\" DevicePath \"\"" Sep 4 23:47:34.759092 kubelet[3221]: I0904 23:47:34.758539 3221 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/715e28ad-7110-413f-a3ae-80efb70c2168-cilium-run\") on node \"ip-172-31-17-142\" DevicePath \"\"" Sep 4 23:47:34.759092 kubelet[3221]: I0904 23:47:34.758559 3221 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/715e28ad-7110-413f-a3ae-80efb70c2168-cni-path\") on node \"ip-172-31-17-142\" DevicePath \"\"" Sep 4 23:47:34.759092 kubelet[3221]: I0904 23:47:34.758608 3221 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/715e28ad-7110-413f-a3ae-80efb70c2168-host-proc-sys-kernel\") on node \"ip-172-31-17-142\" DevicePath \"\"" Sep 4 23:47:34.759092 kubelet[3221]: I0904 23:47:34.758629 3221 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: 
\"kubernetes.io/projected/715e28ad-7110-413f-a3ae-80efb70c2168-hubble-tls\") on node \"ip-172-31-17-142\" DevicePath \"\"" Sep 4 23:47:34.759092 kubelet[3221]: I0904 23:47:34.758650 3221 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/715e28ad-7110-413f-a3ae-80efb70c2168-cilium-config-path\") on node \"ip-172-31-17-142\" DevicePath \"\"" Sep 4 23:47:34.966246 systemd[1]: Removed slice kubepods-burstable-pod715e28ad_7110_413f_a3ae_80efb70c2168.slice - libcontainer container kubepods-burstable-pod715e28ad_7110_413f_a3ae_80efb70c2168.slice. Sep 4 23:47:34.966465 systemd[1]: kubepods-burstable-pod715e28ad_7110_413f_a3ae_80efb70c2168.slice: Consumed 14.732s CPU time, 125M memory peak, 128K read from disk, 12.9M written to disk. Sep 4 23:47:35.167895 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6b4cff6236572f1c4d382473d6eb8fc890aa0a5407d197eb8846b416d0379391-rootfs.mount: Deactivated successfully. Sep 4 23:47:35.168356 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7a6c8b68822318f6da77d5c57ce799f131fa8060c43e81c889821f4182cf692f-rootfs.mount: Deactivated successfully. Sep 4 23:47:35.168496 systemd[1]: var-lib-kubelet-pods-8805b2d1\x2dbdf7\x2d45a8\x2da336\x2d297fc8e02399-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dc2g9m.mount: Deactivated successfully. Sep 4 23:47:35.168640 systemd[1]: var-lib-kubelet-pods-715e28ad\x2d7110\x2d413f\x2da3ae\x2d80efb70c2168-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dftnmx.mount: Deactivated successfully. Sep 4 23:47:35.168773 systemd[1]: var-lib-kubelet-pods-715e28ad\x2d7110\x2d413f\x2da3ae\x2d80efb70c2168-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Sep 4 23:47:35.168910 systemd[1]: var-lib-kubelet-pods-715e28ad\x2d7110\x2d413f\x2da3ae\x2d80efb70c2168-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. 
Sep 4 23:47:36.062107 sshd[5048]: Connection closed by 139.178.89.65 port 53092 Sep 4 23:47:36.060687 sshd-session[5046]: pam_unix(sshd:session): session closed for user core Sep 4 23:47:36.065941 systemd[1]: sshd@25-172.31.17.142:22-139.178.89.65:53092.service: Deactivated successfully. Sep 4 23:47:36.069291 systemd[1]: session-26.scope: Deactivated successfully. Sep 4 23:47:36.069789 systemd[1]: session-26.scope: Consumed 2.506s CPU time, 25.8M memory peak. Sep 4 23:47:36.072955 systemd-logind[1928]: Session 26 logged out. Waiting for processes to exit. Sep 4 23:47:36.075816 systemd-logind[1928]: Removed session 26. Sep 4 23:47:36.103617 systemd[1]: Started sshd@26-172.31.17.142:22-139.178.89.65:53104.service - OpenSSH per-connection server daemon (139.178.89.65:53104). Sep 4 23:47:36.158713 kubelet[3221]: I0904 23:47:36.158646 3221 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="715e28ad-7110-413f-a3ae-80efb70c2168" path="/var/lib/kubelet/pods/715e28ad-7110-413f-a3ae-80efb70c2168/volumes" Sep 4 23:47:36.160901 kubelet[3221]: I0904 23:47:36.160834 3221 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8805b2d1-bdf7-45a8-a336-297fc8e02399" path="/var/lib/kubelet/pods/8805b2d1-bdf7-45a8-a336-297fc8e02399/volumes" Sep 4 23:47:36.285653 sshd[5212]: Accepted publickey for core from 139.178.89.65 port 53104 ssh2: RSA SHA256:wIWgASbTOEOY+RJ7C8r7IgT0+0t1r6NIK7DD/aRqsNo Sep 4 23:47:36.288365 sshd-session[5212]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 23:47:36.297231 systemd-logind[1928]: New session 27 of user core. Sep 4 23:47:36.304362 systemd[1]: Started session-27.scope - Session 27 of User core. 
Sep 4 23:47:36.519517 ntpd[1923]: Deleting interface #11 lxc_health, fe80::48ef:c9ff:fea3:92df%8#123, interface stats: received=0, sent=0, dropped=0, active_time=74 secs Sep 4 23:47:36.520393 ntpd[1923]: 4 Sep 23:47:36 ntpd[1923]: Deleting interface #11 lxc_health, fe80::48ef:c9ff:fea3:92df%8#123, interface stats: received=0, sent=0, dropped=0, active_time=74 secs Sep 4 23:47:37.804235 sshd[5214]: Connection closed by 139.178.89.65 port 53104 Sep 4 23:47:37.805366 sshd-session[5212]: pam_unix(sshd:session): session closed for user core Sep 4 23:47:37.815606 systemd[1]: sshd@26-172.31.17.142:22-139.178.89.65:53104.service: Deactivated successfully. Sep 4 23:47:37.822571 systemd[1]: session-27.scope: Deactivated successfully. Sep 4 23:47:37.823205 systemd[1]: session-27.scope: Consumed 1.290s CPU time, 25.6M memory peak. Sep 4 23:47:37.827167 systemd-logind[1928]: Session 27 logged out. Waiting for processes to exit. Sep 4 23:47:37.853849 systemd[1]: Started sshd@27-172.31.17.142:22-139.178.89.65:53108.service - OpenSSH per-connection server daemon (139.178.89.65:53108). Sep 4 23:47:37.858006 systemd-logind[1928]: Removed session 27. Sep 4 23:47:37.871398 kubelet[3221]: I0904 23:47:37.869550 3221 memory_manager.go:355] "RemoveStaleState removing state" podUID="8805b2d1-bdf7-45a8-a336-297fc8e02399" containerName="cilium-operator" Sep 4 23:47:37.871398 kubelet[3221]: I0904 23:47:37.869592 3221 memory_manager.go:355] "RemoveStaleState removing state" podUID="715e28ad-7110-413f-a3ae-80efb70c2168" containerName="cilium-agent" Sep 4 23:47:37.901255 systemd[1]: Created slice kubepods-burstable-pod5e009943_e860_4093_be0f_d9133e809419.slice - libcontainer container kubepods-burstable-pod5e009943_e860_4093_be0f_d9133e809419.slice. 
Sep 4 23:47:37.979905 kubelet[3221]: I0904 23:47:37.979830 3221 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5e009943-e860-4093-be0f-d9133e809419-etc-cni-netd\") pod \"cilium-dcmm8\" (UID: \"5e009943-e860-4093-be0f-d9133e809419\") " pod="kube-system/cilium-dcmm8" Sep 4 23:47:37.979905 kubelet[3221]: I0904 23:47:37.979909 3221 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/5e009943-e860-4093-be0f-d9133e809419-host-proc-sys-net\") pod \"cilium-dcmm8\" (UID: \"5e009943-e860-4093-be0f-d9133e809419\") " pod="kube-system/cilium-dcmm8" Sep 4 23:47:37.980187 kubelet[3221]: I0904 23:47:37.979967 3221 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/5e009943-e860-4093-be0f-d9133e809419-host-proc-sys-kernel\") pod \"cilium-dcmm8\" (UID: \"5e009943-e860-4093-be0f-d9133e809419\") " pod="kube-system/cilium-dcmm8" Sep 4 23:47:37.980187 kubelet[3221]: I0904 23:47:37.980012 3221 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5e009943-e860-4093-be0f-d9133e809419-cilium-config-path\") pod \"cilium-dcmm8\" (UID: \"5e009943-e860-4093-be0f-d9133e809419\") " pod="kube-system/cilium-dcmm8" Sep 4 23:47:37.980187 kubelet[3221]: I0904 23:47:37.980050 3221 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9q2n4\" (UniqueName: \"kubernetes.io/projected/5e009943-e860-4093-be0f-d9133e809419-kube-api-access-9q2n4\") pod \"cilium-dcmm8\" (UID: \"5e009943-e860-4093-be0f-d9133e809419\") " pod="kube-system/cilium-dcmm8" Sep 4 23:47:37.980187 kubelet[3221]: I0904 23:47:37.980126 3221 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5e009943-e860-4093-be0f-d9133e809419-xtables-lock\") pod \"cilium-dcmm8\" (UID: \"5e009943-e860-4093-be0f-d9133e809419\") " pod="kube-system/cilium-dcmm8" Sep 4 23:47:37.980187 kubelet[3221]: I0904 23:47:37.980166 3221 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/5e009943-e860-4093-be0f-d9133e809419-clustermesh-secrets\") pod \"cilium-dcmm8\" (UID: \"5e009943-e860-4093-be0f-d9133e809419\") " pod="kube-system/cilium-dcmm8" Sep 4 23:47:37.980420 kubelet[3221]: I0904 23:47:37.980208 3221 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/5e009943-e860-4093-be0f-d9133e809419-cilium-run\") pod \"cilium-dcmm8\" (UID: \"5e009943-e860-4093-be0f-d9133e809419\") " pod="kube-system/cilium-dcmm8" Sep 4 23:47:37.980420 kubelet[3221]: I0904 23:47:37.980248 3221 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/5e009943-e860-4093-be0f-d9133e809419-bpf-maps\") pod \"cilium-dcmm8\" (UID: \"5e009943-e860-4093-be0f-d9133e809419\") " pod="kube-system/cilium-dcmm8" Sep 4 23:47:37.980420 kubelet[3221]: I0904 23:47:37.980288 3221 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/5e009943-e860-4093-be0f-d9133e809419-hubble-tls\") pod \"cilium-dcmm8\" (UID: \"5e009943-e860-4093-be0f-d9133e809419\") " pod="kube-system/cilium-dcmm8" Sep 4 23:47:37.980420 kubelet[3221]: I0904 23:47:37.980323 3221 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: 
\"kubernetes.io/host-path/5e009943-e860-4093-be0f-d9133e809419-hostproc\") pod \"cilium-dcmm8\" (UID: \"5e009943-e860-4093-be0f-d9133e809419\") " pod="kube-system/cilium-dcmm8" Sep 4 23:47:37.980420 kubelet[3221]: I0904 23:47:37.980361 3221 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/5e009943-e860-4093-be0f-d9133e809419-cni-path\") pod \"cilium-dcmm8\" (UID: \"5e009943-e860-4093-be0f-d9133e809419\") " pod="kube-system/cilium-dcmm8" Sep 4 23:47:37.980420 kubelet[3221]: I0904 23:47:37.980394 3221 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/5e009943-e860-4093-be0f-d9133e809419-cilium-ipsec-secrets\") pod \"cilium-dcmm8\" (UID: \"5e009943-e860-4093-be0f-d9133e809419\") " pod="kube-system/cilium-dcmm8" Sep 4 23:47:37.980705 kubelet[3221]: I0904 23:47:37.980430 3221 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/5e009943-e860-4093-be0f-d9133e809419-cilium-cgroup\") pod \"cilium-dcmm8\" (UID: \"5e009943-e860-4093-be0f-d9133e809419\") " pod="kube-system/cilium-dcmm8" Sep 4 23:47:37.980705 kubelet[3221]: I0904 23:47:37.980467 3221 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5e009943-e860-4093-be0f-d9133e809419-lib-modules\") pod \"cilium-dcmm8\" (UID: \"5e009943-e860-4093-be0f-d9133e809419\") " pod="kube-system/cilium-dcmm8" Sep 4 23:47:38.065013 sshd[5224]: Accepted publickey for core from 139.178.89.65 port 53108 ssh2: RSA SHA256:wIWgASbTOEOY+RJ7C8r7IgT0+0t1r6NIK7DD/aRqsNo Sep 4 23:47:38.068269 sshd-session[5224]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 23:47:38.075962 systemd-logind[1928]: New session 28 of 
user core. Sep 4 23:47:38.084778 systemd[1]: Started session-28.scope - Session 28 of User core. Sep 4 23:47:38.212887 containerd[1951]: time="2025-09-04T23:47:38.212840559Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-dcmm8,Uid:5e009943-e860-4093-be0f-d9133e809419,Namespace:kube-system,Attempt:0,}" Sep 4 23:47:38.262406 sshd[5230]: Connection closed by 139.178.89.65 port 53108 Sep 4 23:47:38.259871 sshd-session[5224]: pam_unix(sshd:session): session closed for user core Sep 4 23:47:38.271571 systemd[1]: sshd@27-172.31.17.142:22-139.178.89.65:53108.service: Deactivated successfully. Sep 4 23:47:38.279092 systemd[1]: session-28.scope: Deactivated successfully. Sep 4 23:47:38.282791 containerd[1951]: time="2025-09-04T23:47:38.280421799Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 23:47:38.282791 containerd[1951]: time="2025-09-04T23:47:38.282130551Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 23:47:38.282791 containerd[1951]: time="2025-09-04T23:47:38.282188643Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 23:47:38.282791 containerd[1951]: time="2025-09-04T23:47:38.282359523Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 23:47:38.283805 systemd-logind[1928]: Session 28 logged out. Waiting for processes to exit. Sep 4 23:47:38.323642 systemd[1]: Started sshd@28-172.31.17.142:22-139.178.89.65:53114.service - OpenSSH per-connection server daemon (139.178.89.65:53114). Sep 4 23:47:38.327284 systemd-logind[1928]: Removed session 28. 
Sep 4 23:47:38.338867 systemd[1]: Started cri-containerd-7a127e3345e75d78bc73f43a412a48907e14dae81a0605b6afb1b8ac6a94f7fb.scope - libcontainer container 7a127e3345e75d78bc73f43a412a48907e14dae81a0605b6afb1b8ac6a94f7fb. Sep 4 23:47:38.381112 kubelet[3221]: E0904 23:47:38.380425 3221 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Sep 4 23:47:38.414578 containerd[1951]: time="2025-09-04T23:47:38.414429424Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-dcmm8,Uid:5e009943-e860-4093-be0f-d9133e809419,Namespace:kube-system,Attempt:0,} returns sandbox id \"7a127e3345e75d78bc73f43a412a48907e14dae81a0605b6afb1b8ac6a94f7fb\"" Sep 4 23:47:38.420491 containerd[1951]: time="2025-09-04T23:47:38.420416992Z" level=info msg="CreateContainer within sandbox \"7a127e3345e75d78bc73f43a412a48907e14dae81a0605b6afb1b8ac6a94f7fb\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 4 23:47:38.447949 containerd[1951]: time="2025-09-04T23:47:38.447804760Z" level=info msg="CreateContainer within sandbox \"7a127e3345e75d78bc73f43a412a48907e14dae81a0605b6afb1b8ac6a94f7fb\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"e9609da8735c32ef46f617adafb7a7dd65243871fa86a9139cb0982e3c4b6b77\"" Sep 4 23:47:38.449120 containerd[1951]: time="2025-09-04T23:47:38.448953136Z" level=info msg="StartContainer for \"e9609da8735c32ef46f617adafb7a7dd65243871fa86a9139cb0982e3c4b6b77\"" Sep 4 23:47:38.498398 systemd[1]: Started cri-containerd-e9609da8735c32ef46f617adafb7a7dd65243871fa86a9139cb0982e3c4b6b77.scope - libcontainer container e9609da8735c32ef46f617adafb7a7dd65243871fa86a9139cb0982e3c4b6b77. 
Sep 4 23:47:38.536230 sshd[5263]: Accepted publickey for core from 139.178.89.65 port 53114 ssh2: RSA SHA256:wIWgASbTOEOY+RJ7C8r7IgT0+0t1r6NIK7DD/aRqsNo Sep 4 23:47:38.538585 sshd-session[5263]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 23:47:38.554523 systemd-logind[1928]: New session 29 of user core. Sep 4 23:47:38.563423 systemd[1]: Started session-29.scope - Session 29 of User core. Sep 4 23:47:38.577619 containerd[1951]: time="2025-09-04T23:47:38.577473497Z" level=info msg="StartContainer for \"e9609da8735c32ef46f617adafb7a7dd65243871fa86a9139cb0982e3c4b6b77\" returns successfully" Sep 4 23:47:38.598949 systemd[1]: cri-containerd-e9609da8735c32ef46f617adafb7a7dd65243871fa86a9139cb0982e3c4b6b77.scope: Deactivated successfully. Sep 4 23:47:38.655090 containerd[1951]: time="2025-09-04T23:47:38.654974621Z" level=info msg="shim disconnected" id=e9609da8735c32ef46f617adafb7a7dd65243871fa86a9139cb0982e3c4b6b77 namespace=k8s.io Sep 4 23:47:38.655090 containerd[1951]: time="2025-09-04T23:47:38.655055117Z" level=warning msg="cleaning up after shim disconnected" id=e9609da8735c32ef46f617adafb7a7dd65243871fa86a9139cb0982e3c4b6b77 namespace=k8s.io Sep 4 23:47:38.655412 containerd[1951]: time="2025-09-04T23:47:38.655099025Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 4 23:47:39.685025 containerd[1951]: time="2025-09-04T23:47:39.684664110Z" level=info msg="CreateContainer within sandbox \"7a127e3345e75d78bc73f43a412a48907e14dae81a0605b6afb1b8ac6a94f7fb\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Sep 4 23:47:39.725963 containerd[1951]: time="2025-09-04T23:47:39.725885286Z" level=info msg="CreateContainer within sandbox \"7a127e3345e75d78bc73f43a412a48907e14dae81a0605b6afb1b8ac6a94f7fb\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"20c12c0c83160f1d68d4d42113fedf654c362091cc8f30fa1ed75bc0cc721b47\"" Sep 4 23:47:39.727699 containerd[1951]: 
time="2025-09-04T23:47:39.727628742Z" level=info msg="StartContainer for \"20c12c0c83160f1d68d4d42113fedf654c362091cc8f30fa1ed75bc0cc721b47\"" Sep 4 23:47:39.786382 systemd[1]: Started cri-containerd-20c12c0c83160f1d68d4d42113fedf654c362091cc8f30fa1ed75bc0cc721b47.scope - libcontainer container 20c12c0c83160f1d68d4d42113fedf654c362091cc8f30fa1ed75bc0cc721b47. Sep 4 23:47:39.842435 containerd[1951]: time="2025-09-04T23:47:39.842162227Z" level=info msg="StartContainer for \"20c12c0c83160f1d68d4d42113fedf654c362091cc8f30fa1ed75bc0cc721b47\" returns successfully" Sep 4 23:47:39.861941 systemd[1]: cri-containerd-20c12c0c83160f1d68d4d42113fedf654c362091cc8f30fa1ed75bc0cc721b47.scope: Deactivated successfully. Sep 4 23:47:39.911054 containerd[1951]: time="2025-09-04T23:47:39.910948531Z" level=info msg="shim disconnected" id=20c12c0c83160f1d68d4d42113fedf654c362091cc8f30fa1ed75bc0cc721b47 namespace=k8s.io Sep 4 23:47:39.911054 containerd[1951]: time="2025-09-04T23:47:39.911026003Z" level=warning msg="cleaning up after shim disconnected" id=20c12c0c83160f1d68d4d42113fedf654c362091cc8f30fa1ed75bc0cc721b47 namespace=k8s.io Sep 4 23:47:39.911054 containerd[1951]: time="2025-09-04T23:47:39.911048359Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 4 23:47:40.100334 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-20c12c0c83160f1d68d4d42113fedf654c362091cc8f30fa1ed75bc0cc721b47-rootfs.mount: Deactivated successfully. 
Sep 4 23:47:40.155125 kubelet[3221]: E0904 23:47:40.153033 3221 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-668d6bf9bc-zlbnm" podUID="c1541a0a-d772-448b-b3e1-3bee7042bfa8" Sep 4 23:47:40.687626 containerd[1951]: time="2025-09-04T23:47:40.687240739Z" level=info msg="CreateContainer within sandbox \"7a127e3345e75d78bc73f43a412a48907e14dae81a0605b6afb1b8ac6a94f7fb\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Sep 4 23:47:40.718453 containerd[1951]: time="2025-09-04T23:47:40.718371355Z" level=info msg="CreateContainer within sandbox \"7a127e3345e75d78bc73f43a412a48907e14dae81a0605b6afb1b8ac6a94f7fb\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"528437d35303559047c6168c004a9a16b03a2a4890959b69fd105b95f2d9fa8d\"" Sep 4 23:47:40.724337 containerd[1951]: time="2025-09-04T23:47:40.724257091Z" level=info msg="StartContainer for \"528437d35303559047c6168c004a9a16b03a2a4890959b69fd105b95f2d9fa8d\"" Sep 4 23:47:40.788617 systemd[1]: Started cri-containerd-528437d35303559047c6168c004a9a16b03a2a4890959b69fd105b95f2d9fa8d.scope - libcontainer container 528437d35303559047c6168c004a9a16b03a2a4890959b69fd105b95f2d9fa8d. Sep 4 23:47:40.872337 containerd[1951]: time="2025-09-04T23:47:40.872117456Z" level=info msg="StartContainer for \"528437d35303559047c6168c004a9a16b03a2a4890959b69fd105b95f2d9fa8d\" returns successfully" Sep 4 23:47:40.889462 systemd[1]: cri-containerd-528437d35303559047c6168c004a9a16b03a2a4890959b69fd105b95f2d9fa8d.scope: Deactivated successfully. 
Sep 4 23:47:40.944113 containerd[1951]: time="2025-09-04T23:47:40.943834868Z" level=info msg="shim disconnected" id=528437d35303559047c6168c004a9a16b03a2a4890959b69fd105b95f2d9fa8d namespace=k8s.io
Sep 4 23:47:40.944113 containerd[1951]: time="2025-09-04T23:47:40.943944596Z" level=warning msg="cleaning up after shim disconnected" id=528437d35303559047c6168c004a9a16b03a2a4890959b69fd105b95f2d9fa8d namespace=k8s.io
Sep 4 23:47:40.944737 containerd[1951]: time="2025-09-04T23:47:40.943966184Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 4 23:47:41.098264 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-528437d35303559047c6168c004a9a16b03a2a4890959b69fd105b95f2d9fa8d-rootfs.mount: Deactivated successfully.
Sep 4 23:47:41.153547 kubelet[3221]: I0904 23:47:41.153465 3221 setters.go:602] "Node became not ready" node="ip-172-31-17-142" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-04T23:47:41Z","lastTransitionTime":"2025-09-04T23:47:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Sep 4 23:47:41.702055 containerd[1951]: time="2025-09-04T23:47:41.701871152Z" level=info msg="CreateContainer within sandbox \"7a127e3345e75d78bc73f43a412a48907e14dae81a0605b6afb1b8ac6a94f7fb\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Sep 4 23:47:41.736205 containerd[1951]: time="2025-09-04T23:47:41.736139996Z" level=info msg="CreateContainer within sandbox \"7a127e3345e75d78bc73f43a412a48907e14dae81a0605b6afb1b8ac6a94f7fb\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"a43489c75c5d2d6b5466476b23f9535419bd228c0a5e522fd6e97a2573f03d1a\""
Sep 4 23:47:41.738215 containerd[1951]: time="2025-09-04T23:47:41.738113372Z" level=info msg="StartContainer for \"a43489c75c5d2d6b5466476b23f9535419bd228c0a5e522fd6e97a2573f03d1a\""
Sep 4 23:47:41.796362 systemd[1]: Started cri-containerd-a43489c75c5d2d6b5466476b23f9535419bd228c0a5e522fd6e97a2573f03d1a.scope - libcontainer container a43489c75c5d2d6b5466476b23f9535419bd228c0a5e522fd6e97a2573f03d1a.
Sep 4 23:47:41.852690 systemd[1]: cri-containerd-a43489c75c5d2d6b5466476b23f9535419bd228c0a5e522fd6e97a2573f03d1a.scope: Deactivated successfully.
Sep 4 23:47:41.860808 containerd[1951]: time="2025-09-04T23:47:41.860348325Z" level=info msg="StartContainer for \"a43489c75c5d2d6b5466476b23f9535419bd228c0a5e522fd6e97a2573f03d1a\" returns successfully"
Sep 4 23:47:41.908548 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a43489c75c5d2d6b5466476b23f9535419bd228c0a5e522fd6e97a2573f03d1a-rootfs.mount: Deactivated successfully.
Sep 4 23:47:41.917857 containerd[1951]: time="2025-09-04T23:47:41.917488677Z" level=info msg="shim disconnected" id=a43489c75c5d2d6b5466476b23f9535419bd228c0a5e522fd6e97a2573f03d1a namespace=k8s.io
Sep 4 23:47:41.917857 containerd[1951]: time="2025-09-04T23:47:41.917588013Z" level=warning msg="cleaning up after shim disconnected" id=a43489c75c5d2d6b5466476b23f9535419bd228c0a5e522fd6e97a2573f03d1a namespace=k8s.io
Sep 4 23:47:41.917857 containerd[1951]: time="2025-09-04T23:47:41.917610693Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 4 23:47:42.154230 kubelet[3221]: E0904 23:47:42.153404 3221 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-668d6bf9bc-zlbnm" podUID="c1541a0a-d772-448b-b3e1-3bee7042bfa8"
Sep 4 23:47:42.707953 containerd[1951]: time="2025-09-04T23:47:42.707649081Z" level=info msg="CreateContainer within sandbox \"7a127e3345e75d78bc73f43a412a48907e14dae81a0605b6afb1b8ac6a94f7fb\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Sep 4 23:47:42.743342 containerd[1951]: time="2025-09-04T23:47:42.743286189Z" level=info msg="CreateContainer within sandbox \"7a127e3345e75d78bc73f43a412a48907e14dae81a0605b6afb1b8ac6a94f7fb\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"03d81c8510caba2ff071dbc239fc21f4703e76035af6c561fd5f1e1ceb08f864\""
Sep 4 23:47:42.746227 containerd[1951]: time="2025-09-04T23:47:42.744516981Z" level=info msg="StartContainer for \"03d81c8510caba2ff071dbc239fc21f4703e76035af6c561fd5f1e1ceb08f864\""
Sep 4 23:47:42.746109 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2981120774.mount: Deactivated successfully.
Sep 4 23:47:42.818455 systemd[1]: Started cri-containerd-03d81c8510caba2ff071dbc239fc21f4703e76035af6c561fd5f1e1ceb08f864.scope - libcontainer container 03d81c8510caba2ff071dbc239fc21f4703e76035af6c561fd5f1e1ceb08f864.
Sep 4 23:47:42.894194 containerd[1951]: time="2025-09-04T23:47:42.894125914Z" level=info msg="StartContainer for \"03d81c8510caba2ff071dbc239fc21f4703e76035af6c561fd5f1e1ceb08f864\" returns successfully"
Sep 4 23:47:43.733893 systemd[1]: run-containerd-runc-k8s.io-03d81c8510caba2ff071dbc239fc21f4703e76035af6c561fd5f1e1ceb08f864-runc.aOupUu.mount: Deactivated successfully.
Sep 4 23:47:43.793138 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce))
Sep 4 23:47:47.967668 systemd-networkd[1864]: lxc_health: Link UP
Sep 4 23:47:47.981095 (udev-worker)[6076]: Network interface NamePolicy= disabled on kernel command line.
Sep 4 23:47:48.008956 systemd-networkd[1864]: lxc_health: Gained carrier
Sep 4 23:47:48.119509 containerd[1951]: time="2025-09-04T23:47:48.119459592Z" level=info msg="StopPodSandbox for \"6b4cff6236572f1c4d382473d6eb8fc890aa0a5407d197eb8846b416d0379391\""
Sep 4 23:47:48.122237 containerd[1951]: time="2025-09-04T23:47:48.120283032Z" level=info msg="TearDown network for sandbox \"6b4cff6236572f1c4d382473d6eb8fc890aa0a5407d197eb8846b416d0379391\" successfully"
Sep 4 23:47:48.122237 containerd[1951]: time="2025-09-04T23:47:48.120346476Z" level=info msg="StopPodSandbox for \"6b4cff6236572f1c4d382473d6eb8fc890aa0a5407d197eb8846b416d0379391\" returns successfully"
Sep 4 23:47:48.122904 containerd[1951]: time="2025-09-04T23:47:48.122847888Z" level=info msg="RemovePodSandbox for \"6b4cff6236572f1c4d382473d6eb8fc890aa0a5407d197eb8846b416d0379391\""
Sep 4 23:47:48.123049 containerd[1951]: time="2025-09-04T23:47:48.122910180Z" level=info msg="Forcibly stopping sandbox \"6b4cff6236572f1c4d382473d6eb8fc890aa0a5407d197eb8846b416d0379391\""
Sep 4 23:47:48.123049 containerd[1951]: time="2025-09-04T23:47:48.123033552Z" level=info msg="TearDown network for sandbox \"6b4cff6236572f1c4d382473d6eb8fc890aa0a5407d197eb8846b416d0379391\" successfully"
Sep 4 23:47:48.131498 containerd[1951]: time="2025-09-04T23:47:48.131407836Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6b4cff6236572f1c4d382473d6eb8fc890aa0a5407d197eb8846b416d0379391\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Sep 4 23:47:48.134041 containerd[1951]: time="2025-09-04T23:47:48.133937448Z" level=info msg="RemovePodSandbox \"6b4cff6236572f1c4d382473d6eb8fc890aa0a5407d197eb8846b416d0379391\" returns successfully"
Sep 4 23:47:48.143178 containerd[1951]: time="2025-09-04T23:47:48.141050892Z" level=info msg="StopPodSandbox for \"7a6c8b68822318f6da77d5c57ce799f131fa8060c43e81c889821f4182cf692f\""
Sep 4 23:47:48.143381 containerd[1951]: time="2025-09-04T23:47:48.143311728Z" level=info msg="TearDown network for sandbox \"7a6c8b68822318f6da77d5c57ce799f131fa8060c43e81c889821f4182cf692f\" successfully"
Sep 4 23:47:48.143451 containerd[1951]: time="2025-09-04T23:47:48.143374524Z" level=info msg="StopPodSandbox for \"7a6c8b68822318f6da77d5c57ce799f131fa8060c43e81c889821f4182cf692f\" returns successfully"
Sep 4 23:47:48.144163 containerd[1951]: time="2025-09-04T23:47:48.144100764Z" level=info msg="RemovePodSandbox for \"7a6c8b68822318f6da77d5c57ce799f131fa8060c43e81c889821f4182cf692f\""
Sep 4 23:47:48.144302 containerd[1951]: time="2025-09-04T23:47:48.144176412Z" level=info msg="Forcibly stopping sandbox \"7a6c8b68822318f6da77d5c57ce799f131fa8060c43e81c889821f4182cf692f\""
Sep 4 23:47:48.144362 containerd[1951]: time="2025-09-04T23:47:48.144305256Z" level=info msg="TearDown network for sandbox \"7a6c8b68822318f6da77d5c57ce799f131fa8060c43e81c889821f4182cf692f\" successfully"
Sep 4 23:47:48.172315 containerd[1951]: time="2025-09-04T23:47:48.171227184Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7a6c8b68822318f6da77d5c57ce799f131fa8060c43e81c889821f4182cf692f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Sep 4 23:47:48.172315 containerd[1951]: time="2025-09-04T23:47:48.171337296Z" level=info msg="RemovePodSandbox \"7a6c8b68822318f6da77d5c57ce799f131fa8060c43e81c889821f4182cf692f\" returns successfully"
Sep 4 23:47:48.249230 kubelet[3221]: I0904 23:47:48.248989 3221 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-dcmm8" podStartSLOduration=11.248963929 podStartE2EDuration="11.248963929s" podCreationTimestamp="2025-09-04 23:47:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-04 23:47:43.752334226 +0000 UTC m=+115.946349900" watchObservedRunningTime="2025-09-04 23:47:48.248963929 +0000 UTC m=+120.442979603"
Sep 4 23:47:49.215257 systemd-networkd[1864]: lxc_health: Gained IPv6LL
Sep 4 23:47:49.686936 systemd[1]: run-containerd-runc-k8s.io-03d81c8510caba2ff071dbc239fc21f4703e76035af6c561fd5f1e1ceb08f864-runc.X0evmw.mount: Deactivated successfully.
Sep 4 23:47:51.518048 ntpd[1923]: Listen normally on 14 lxc_health [fe80::61:aaff:feff:22cf%14]:123
Sep 4 23:47:51.518609 ntpd[1923]: 4 Sep 23:47:51 ntpd[1923]: Listen normally on 14 lxc_health [fe80::61:aaff:feff:22cf%14]:123
Sep 4 23:47:52.028025 systemd[1]: run-containerd-runc-k8s.io-03d81c8510caba2ff071dbc239fc21f4703e76035af6c561fd5f1e1ceb08f864-runc.9WaXiu.mount: Deactivated successfully.
Sep 4 23:47:54.419826 sshd[5310]: Connection closed by 139.178.89.65 port 53114
Sep 4 23:47:54.422825 sshd-session[5263]: pam_unix(sshd:session): session closed for user core
Sep 4 23:47:54.430878 systemd[1]: sshd@28-172.31.17.142:22-139.178.89.65:53114.service: Deactivated successfully.
Sep 4 23:47:54.438162 systemd[1]: session-29.scope: Deactivated successfully.
Sep 4 23:47:54.442666 systemd-logind[1928]: Session 29 logged out. Waiting for processes to exit.
Sep 4 23:47:54.446765 systemd-logind[1928]: Removed session 29.