Sep 9 23:44:06.161519 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083]
Sep 9 23:44:06.161562 kernel: Linux version 6.12.45-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT Tue Sep 9 22:10:22 -00 2025
Sep 9 23:44:06.161586 kernel: KASLR disabled due to lack of seed
Sep 9 23:44:06.161602 kernel: efi: EFI v2.7 by EDK II
Sep 9 23:44:06.161618 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7a731a98 MEMRESERVE=0x78557598
Sep 9 23:44:06.161633 kernel: secureboot: Secure boot disabled
Sep 9 23:44:06.161650 kernel: ACPI: Early table checksum verification disabled
Sep 9 23:44:06.161666 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON)
Sep 9 23:44:06.161681 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001 01000013)
Sep 9 23:44:06.161696 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001)
Sep 9 23:44:06.161713 kernel: ACPI: DSDT 0x0000000078640000 00159D (v02 AMAZON AMZNDSDT 00000001 INTL 20160527)
Sep 9 23:44:06.161734 kernel: ACPI: FACS 0x0000000078630000 000040
Sep 9 23:44:06.161751 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Sep 9 23:44:06.161767 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001)
Sep 9 23:44:06.161786 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001)
Sep 9 23:44:06.161803 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001)
Sep 9 23:44:06.161826 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Sep 9 23:44:06.161843 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001)
Sep 9 23:44:06.161859 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001)
Sep 9 23:44:06.161875 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200
Sep 9 23:44:06.161892 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200')
Sep 9 23:44:06.161908 kernel: printk: legacy bootconsole [uart0] enabled
Sep 9 23:44:06.161925 kernel: ACPI: Use ACPI SPCR as default console: No
Sep 9 23:44:06.161941 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff]
Sep 9 23:44:06.161958 kernel: NODE_DATA(0) allocated [mem 0x4b584ca00-0x4b5853fff]
Sep 9 23:44:06.161974 kernel: Zone ranges:
Sep 9 23:44:06.161990 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff]
Sep 9 23:44:06.162010 kernel: DMA32 empty
Sep 9 23:44:06.162026 kernel: Normal [mem 0x0000000100000000-0x00000004b5ffffff]
Sep 9 23:44:06.162042 kernel: Device empty
Sep 9 23:44:06.162057 kernel: Movable zone start for each node
Sep 9 23:44:06.162073 kernel: Early memory node ranges
Sep 9 23:44:06.162089 kernel: node 0: [mem 0x0000000040000000-0x000000007862ffff]
Sep 9 23:44:06.162105 kernel: node 0: [mem 0x0000000078630000-0x000000007863ffff]
Sep 9 23:44:06.162122 kernel: node 0: [mem 0x0000000078640000-0x00000000786effff]
Sep 9 23:44:06.162138 kernel: node 0: [mem 0x00000000786f0000-0x000000007872ffff]
Sep 9 23:44:06.162211 kernel: node 0: [mem 0x0000000078730000-0x000000007bbfffff]
Sep 9 23:44:06.162232 kernel: node 0: [mem 0x000000007bc00000-0x000000007bfdffff]
Sep 9 23:44:06.162248 kernel: node 0: [mem 0x000000007bfe0000-0x000000007fffffff]
Sep 9 23:44:06.162274 kernel: node 0: [mem 0x0000000400000000-0x00000004b5ffffff]
Sep 9 23:44:06.162297 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000004b5ffffff]
Sep 9 23:44:06.162314 kernel: On node 0, zone Normal: 8192 pages in unavailable ranges
Sep 9 23:44:06.162330 kernel: cma: Reserved 16 MiB at 0x000000007f000000 on node -1
Sep 9 23:44:06.162347 kernel: psci: probing for conduit method from ACPI.
Sep 9 23:44:06.162367 kernel: psci: PSCIv1.0 detected in firmware.
Sep 9 23:44:06.162383 kernel: psci: Using standard PSCI v0.2 function IDs
Sep 9 23:44:06.162400 kernel: psci: Trusted OS migration not required
Sep 9 23:44:06.162416 kernel: psci: SMC Calling Convention v1.1
Sep 9 23:44:06.162433 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000001)
Sep 9 23:44:06.162449 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168
Sep 9 23:44:06.162466 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096
Sep 9 23:44:06.162483 kernel: pcpu-alloc: [0] 0 [0] 1
Sep 9 23:44:06.162500 kernel: Detected PIPT I-cache on CPU0
Sep 9 23:44:06.162516 kernel: CPU features: detected: GIC system register CPU interface
Sep 9 23:44:06.162532 kernel: CPU features: detected: Spectre-v2
Sep 9 23:44:06.162552 kernel: CPU features: detected: Spectre-v3a
Sep 9 23:44:06.162569 kernel: CPU features: detected: Spectre-BHB
Sep 9 23:44:06.162586 kernel: CPU features: detected: ARM erratum 1742098
Sep 9 23:44:06.162602 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923
Sep 9 23:44:06.162619 kernel: alternatives: applying boot alternatives
Sep 9 23:44:06.162637 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=fc7b279c2d918629032c01551b74c66c198cf923a976f9b3bc0d959e7c2302db
Sep 9 23:44:06.162655 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 9 23:44:06.162672 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Sep 9 23:44:06.162689 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Sep 9 23:44:06.162705 kernel: Fallback order for Node 0: 0
Sep 9 23:44:06.162725 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1007616
Sep 9 23:44:06.162742 kernel: Policy zone: Normal
Sep 9 23:44:06.162758 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 9 23:44:06.162774 kernel: software IO TLB: area num 2.
Sep 9 23:44:06.162790 kernel: software IO TLB: mapped [mem 0x000000006c600000-0x0000000070600000] (64MB)
Sep 9 23:44:06.162807 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Sep 9 23:44:06.162823 kernel: rcu: Preemptible hierarchical RCU implementation.
Sep 9 23:44:06.162840 kernel: rcu: RCU event tracing is enabled.
Sep 9 23:44:06.162857 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Sep 9 23:44:06.162874 kernel: Trampoline variant of Tasks RCU enabled.
Sep 9 23:44:06.162891 kernel: Tracing variant of Tasks RCU enabled.
Sep 9 23:44:06.162907 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 9 23:44:06.162927 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Sep 9 23:44:06.162944 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Sep 9 23:44:06.162961 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Sep 9 23:44:06.162978 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Sep 9 23:44:06.162994 kernel: GICv3: 96 SPIs implemented
Sep 9 23:44:06.163010 kernel: GICv3: 0 Extended SPIs implemented
Sep 9 23:44:06.163026 kernel: Root IRQ handler: gic_handle_irq
Sep 9 23:44:06.163043 kernel: GICv3: GICv3 features: 16 PPIs
Sep 9 23:44:06.163059 kernel: GICv3: GICD_CTRL.DS=1, SCR_EL3.FIQ=0
Sep 9 23:44:06.163076 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000
Sep 9 23:44:06.163092 kernel: ITS [mem 0x10080000-0x1009ffff]
Sep 9 23:44:06.163109 kernel: ITS@0x0000000010080000: allocated 8192 Devices @4000f0000 (indirect, esz 8, psz 64K, shr 1)
Sep 9 23:44:06.163130 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @400100000 (flat, esz 8, psz 64K, shr 1)
Sep 9 23:44:06.163677 kernel: GICv3: using LPI property table @0x0000000400110000
Sep 9 23:44:06.163721 kernel: ITS: Using hypervisor restricted LPI range [128]
Sep 9 23:44:06.163741 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000400120000
Sep 9 23:44:06.163760 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Sep 9 23:44:06.163777 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt).
Sep 9 23:44:06.163794 kernel: clocksource: arch_sys_counter: mask: 0x1ffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns
Sep 9 23:44:06.163812 kernel: sched_clock: 57 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns
Sep 9 23:44:06.163830 kernel: Console: colour dummy device 80x25
Sep 9 23:44:06.163848 kernel: printk: legacy console [tty1] enabled
Sep 9 23:44:06.163875 kernel: ACPI: Core revision 20240827
Sep 9 23:44:06.163893 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 166.66 BogoMIPS (lpj=83333)
Sep 9 23:44:06.163910 kernel: pid_max: default: 32768 minimum: 301
Sep 9 23:44:06.163927 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Sep 9 23:44:06.163944 kernel: landlock: Up and running.
Sep 9 23:44:06.163961 kernel: SELinux: Initializing.
Sep 9 23:44:06.163978 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 9 23:44:06.163995 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 9 23:44:06.164012 kernel: rcu: Hierarchical SRCU implementation.
Sep 9 23:44:06.164034 kernel: rcu: Max phase no-delay instances is 400.
Sep 9 23:44:06.164053 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Sep 9 23:44:06.164070 kernel: Remapping and enabling EFI services.
Sep 9 23:44:06.164087 kernel: smp: Bringing up secondary CPUs ...
Sep 9 23:44:06.164131 kernel: Detected PIPT I-cache on CPU1
Sep 9 23:44:06.164176 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000
Sep 9 23:44:06.164198 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000400130000
Sep 9 23:44:06.164216 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083]
Sep 9 23:44:06.164233 kernel: smp: Brought up 1 node, 2 CPUs
Sep 9 23:44:06.164257 kernel: SMP: Total of 2 processors activated.
Sep 9 23:44:06.164286 kernel: CPU: All CPU(s) started at EL1
Sep 9 23:44:06.164304 kernel: CPU features: detected: 32-bit EL0 Support
Sep 9 23:44:06.164326 kernel: CPU features: detected: 32-bit EL1 Support
Sep 9 23:44:06.164344 kernel: CPU features: detected: CRC32 instructions
Sep 9 23:44:06.164361 kernel: alternatives: applying system-wide alternatives
Sep 9 23:44:06.164381 kernel: Memory: 3797096K/4030464K available (11136K kernel code, 2436K rwdata, 9060K rodata, 38912K init, 1038K bss, 212024K reserved, 16384K cma-reserved)
Sep 9 23:44:06.164399 kernel: devtmpfs: initialized
Sep 9 23:44:06.164421 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 9 23:44:06.164439 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Sep 9 23:44:06.164457 kernel: 17056 pages in range for non-PLT usage
Sep 9 23:44:06.164475 kernel: 508576 pages in range for PLT usage
Sep 9 23:44:06.164493 kernel: pinctrl core: initialized pinctrl subsystem
Sep 9 23:44:06.164511 kernel: SMBIOS 3.0.0 present.
Sep 9 23:44:06.164529 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018
Sep 9 23:44:06.164546 kernel: DMI: Memory slots populated: 0/0
Sep 9 23:44:06.164565 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 9 23:44:06.164604 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Sep 9 23:44:06.164625 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Sep 9 23:44:06.164644 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Sep 9 23:44:06.164662 kernel: audit: initializing netlink subsys (disabled)
Sep 9 23:44:06.164681 kernel: audit: type=2000 audit(0.238:1): state=initialized audit_enabled=0 res=1
Sep 9 23:44:06.164699 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 9 23:44:06.164718 kernel: cpuidle: using governor menu
Sep 9 23:44:06.164738 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Sep 9 23:44:06.164782 kernel: ASID allocator initialised with 65536 entries
Sep 9 23:44:06.164829 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 9 23:44:06.164879 kernel: Serial: AMBA PL011 UART driver
Sep 9 23:44:06.164916 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Sep 9 23:44:06.164939 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Sep 9 23:44:06.164958 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Sep 9 23:44:06.164976 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Sep 9 23:44:06.164994 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Sep 9 23:44:06.165011 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Sep 9 23:44:06.165030 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Sep 9 23:44:06.165055 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Sep 9 23:44:06.165072 kernel: ACPI: Added _OSI(Module Device)
Sep 9 23:44:06.165090 kernel: ACPI: Added _OSI(Processor Device)
Sep 9 23:44:06.165108 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 9 23:44:06.165126 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Sep 9 23:44:06.165143 kernel: ACPI: Interpreter enabled
Sep 9 23:44:06.165189 kernel: ACPI: Using GIC for interrupt routing
Sep 9 23:44:06.165208 kernel: ACPI: MCFG table detected, 1 entries
Sep 9 23:44:06.165226 kernel: ACPI: CPU0 has been hot-added
Sep 9 23:44:06.165251 kernel: ACPI: CPU1 has been hot-added
Sep 9 23:44:06.165269 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-0f])
Sep 9 23:44:06.165574 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Sep 9 23:44:06.165765 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Sep 9 23:44:06.165947 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Sep 9 23:44:06.166126 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x20ffffff] reserved by PNP0C02:00
Sep 9 23:44:06.166338 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x20ffffff] for [bus 00-0f]
Sep 9 23:44:06.166371 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io 0x0000-0xffff window]
Sep 9 23:44:06.166390 kernel: acpiphp: Slot [1] registered
Sep 9 23:44:06.166407 kernel: acpiphp: Slot [2] registered
Sep 9 23:44:06.166425 kernel: acpiphp: Slot [3] registered
Sep 9 23:44:06.166442 kernel: acpiphp: Slot [4] registered
Sep 9 23:44:06.166459 kernel: acpiphp: Slot [5] registered
Sep 9 23:44:06.166477 kernel: acpiphp: Slot [6] registered
Sep 9 23:44:06.166495 kernel: acpiphp: Slot [7] registered
Sep 9 23:44:06.166512 kernel: acpiphp: Slot [8] registered
Sep 9 23:44:06.166533 kernel: acpiphp: Slot [9] registered
Sep 9 23:44:06.166551 kernel: acpiphp: Slot [10] registered
Sep 9 23:44:06.166568 kernel: acpiphp: Slot [11] registered
Sep 9 23:44:06.166586 kernel: acpiphp: Slot [12] registered
Sep 9 23:44:06.166603 kernel: acpiphp: Slot [13] registered
Sep 9 23:44:06.166621 kernel: acpiphp: Slot [14] registered
Sep 9 23:44:06.166638 kernel: acpiphp: Slot [15] registered
Sep 9 23:44:06.166656 kernel: acpiphp: Slot [16] registered
Sep 9 23:44:06.166673 kernel: acpiphp: Slot [17] registered
Sep 9 23:44:06.166691 kernel: acpiphp: Slot [18] registered
Sep 9 23:44:06.166712 kernel: acpiphp: Slot [19] registered
Sep 9 23:44:06.166730 kernel: acpiphp: Slot [20] registered
Sep 9 23:44:06.166747 kernel: acpiphp: Slot [21] registered
Sep 9 23:44:06.166764 kernel: acpiphp: Slot [22] registered
Sep 9 23:44:06.166782 kernel: acpiphp: Slot [23] registered
Sep 9 23:44:06.166799 kernel: acpiphp: Slot [24] registered
Sep 9 23:44:06.166816 kernel: acpiphp: Slot [25] registered
Sep 9 23:44:06.166834 kernel: acpiphp: Slot [26] registered
Sep 9 23:44:06.166852 kernel: acpiphp: Slot [27] registered
Sep 9 23:44:06.166873 kernel: acpiphp: Slot [28] registered
Sep 9 23:44:06.166891 kernel: acpiphp: Slot [29] registered
Sep 9 23:44:06.166908 kernel: acpiphp: Slot [30] registered
Sep 9 23:44:06.166925 kernel: acpiphp: Slot [31] registered
Sep 9 23:44:06.166943 kernel: PCI host bridge to bus 0000:00
Sep 9 23:44:06.167126 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window]
Sep 9 23:44:06.167321 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Sep 9 23:44:06.167488 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window]
Sep 9 23:44:06.167661 kernel: pci_bus 0000:00: root bus resource [bus 00-0f]
Sep 9 23:44:06.167887 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000 conventional PCI endpoint
Sep 9 23:44:06.168141 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003 conventional PCI endpoint
Sep 9 23:44:06.169368 kernel: pci 0000:00:01.0: BAR 0 [mem 0x80118000-0x80118fff]
Sep 9 23:44:06.169580 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802 PCIe Root Complex Integrated Endpoint
Sep 9 23:44:06.169771 kernel: pci 0000:00:04.0: BAR 0 [mem 0x80114000-0x80117fff]
Sep 9 23:44:06.169969 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold
Sep 9 23:44:06.170656 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000 PCIe Root Complex Integrated Endpoint
Sep 9 23:44:06.170877 kernel: pci 0000:00:05.0: BAR 0 [mem 0x80110000-0x80113fff]
Sep 9 23:44:06.171070 kernel: pci 0000:00:05.0: BAR 2 [mem 0x80000000-0x800fffff pref]
Sep 9 23:44:06.171388 kernel: pci 0000:00:05.0: BAR 4 [mem 0x80100000-0x8010ffff]
Sep 9 23:44:06.171596 kernel: pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold
Sep 9 23:44:06.171794 kernel: pci 0000:00:05.0: BAR 2 [mem 0x80000000-0x800fffff pref]: assigned
Sep 9 23:44:06.171996 kernel: pci 0000:00:05.0: BAR 4 [mem 0x80100000-0x8010ffff]: assigned
Sep 9 23:44:06.172288 kernel: pci 0000:00:04.0: BAR 0 [mem 0x80110000-0x80113fff]: assigned
Sep 9 23:44:06.172484 kernel: pci 0000:00:05.0: BAR 0 [mem 0x80114000-0x80117fff]: assigned
Sep 9 23:44:06.172683 kernel: pci 0000:00:01.0: BAR 0 [mem 0x80118000-0x80118fff]: assigned
Sep 9 23:44:06.172861 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window]
Sep 9 23:44:06.173029 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Sep 9 23:44:06.175476 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window]
Sep 9 23:44:06.175534 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Sep 9 23:44:06.175554 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Sep 9 23:44:06.175572 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Sep 9 23:44:06.175591 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Sep 9 23:44:06.175610 kernel: iommu: Default domain type: Translated
Sep 9 23:44:06.175628 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Sep 9 23:44:06.175646 kernel: efivars: Registered efivars operations
Sep 9 23:44:06.175664 kernel: vgaarb: loaded
Sep 9 23:44:06.175682 kernel: clocksource: Switched to clocksource arch_sys_counter
Sep 9 23:44:06.175704 kernel: VFS: Disk quotas dquot_6.6.0
Sep 9 23:44:06.175722 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 9 23:44:06.175741 kernel: pnp: PnP ACPI init
Sep 9 23:44:06.175979 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved
Sep 9 23:44:06.176008 kernel: pnp: PnP ACPI: found 1 devices
Sep 9 23:44:06.176026 kernel: NET: Registered PF_INET protocol family
Sep 9 23:44:06.176044 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Sep 9 23:44:06.176062 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Sep 9 23:44:06.176080 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 9 23:44:06.176125 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Sep 9 23:44:06.177258 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Sep 9 23:44:06.177313 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Sep 9 23:44:06.177333 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 9 23:44:06.177352 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 9 23:44:06.177371 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 9 23:44:06.177389 kernel: PCI: CLS 0 bytes, default 64
Sep 9 23:44:06.177407 kernel: kvm [1]: HYP mode not available
Sep 9 23:44:06.177425 kernel: Initialise system trusted keyrings
Sep 9 23:44:06.177453 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Sep 9 23:44:06.177471 kernel: Key type asymmetric registered
Sep 9 23:44:06.177489 kernel: Asymmetric key parser 'x509' registered
Sep 9 23:44:06.177506 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Sep 9 23:44:06.177524 kernel: io scheduler mq-deadline registered
Sep 9 23:44:06.177542 kernel: io scheduler kyber registered
Sep 9 23:44:06.177560 kernel: io scheduler bfq registered
Sep 9 23:44:06.177828 kernel: pl061_gpio ARMH0061:00: PL061 GPIO chip registered
Sep 9 23:44:06.177869 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Sep 9 23:44:06.177888 kernel: ACPI: button: Power Button [PWRB]
Sep 9 23:44:06.177907 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input1
Sep 9 23:44:06.177925 kernel: ACPI: button: Sleep Button [SLPB]
Sep 9 23:44:06.177944 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 9 23:44:06.177963 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37
Sep 9 23:44:06.178829 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012)
Sep 9 23:44:06.178878 kernel: printk: legacy console [ttyS0] disabled
Sep 9 23:44:06.178898 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A
Sep 9 23:44:06.178930 kernel: printk: legacy console [ttyS0] enabled
Sep 9 23:44:06.178950 kernel: printk: legacy bootconsole [uart0] disabled
Sep 9 23:44:06.178968 kernel: thunder_xcv, ver 1.0
Sep 9 23:44:06.178986 kernel: thunder_bgx, ver 1.0
Sep 9 23:44:06.179005 kernel: nicpf, ver 1.0
Sep 9 23:44:06.179024 kernel: nicvf, ver 1.0
Sep 9 23:44:06.179345 kernel: rtc-efi rtc-efi.0: registered as rtc0
Sep 9 23:44:06.179570 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-09-09T23:44:05 UTC (1757461445)
Sep 9 23:44:06.179613 kernel: hid: raw HID events driver (C) Jiri Kosina
Sep 9 23:44:06.179633 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 (0,80000003) counters available
Sep 9 23:44:06.179653 kernel: NET: Registered PF_INET6 protocol family
Sep 9 23:44:06.179671 kernel: watchdog: NMI not fully supported
Sep 9 23:44:06.179689 kernel: watchdog: Hard watchdog permanently disabled
Sep 9 23:44:06.179707 kernel: Segment Routing with IPv6
Sep 9 23:44:06.179725 kernel: In-situ OAM (IOAM) with IPv6
Sep 9 23:44:06.179743 kernel: NET: Registered PF_PACKET protocol family
Sep 9 23:44:06.179760 kernel: Key type dns_resolver registered
Sep 9 23:44:06.179783 kernel: registered taskstats version 1
Sep 9 23:44:06.179801 kernel: Loading compiled-in X.509 certificates
Sep 9 23:44:06.179819 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.45-flatcar: 61217a1897415238555e2058a4e44c51622b0f87'
Sep 9 23:44:06.179838 kernel: Demotion targets for Node 0: null
Sep 9 23:44:06.179856 kernel: Key type .fscrypt registered
Sep 9 23:44:06.179873 kernel: Key type fscrypt-provisioning registered
Sep 9 23:44:06.179890 kernel: ima: No TPM chip found, activating TPM-bypass!
Sep 9 23:44:06.179908 kernel: ima: Allocated hash algorithm: sha1
Sep 9 23:44:06.179926 kernel: ima: No architecture policies found
Sep 9 23:44:06.179948 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Sep 9 23:44:06.179966 kernel: clk: Disabling unused clocks
Sep 9 23:44:06.179984 kernel: PM: genpd: Disabling unused power domains
Sep 9 23:44:06.180001 kernel: Warning: unable to open an initial console.
Sep 9 23:44:06.180019 kernel: Freeing unused kernel memory: 38912K
Sep 9 23:44:06.180038 kernel: Run /init as init process
Sep 9 23:44:06.180056 kernel: with arguments:
Sep 9 23:44:06.180073 kernel: /init
Sep 9 23:44:06.180090 kernel: with environment:
Sep 9 23:44:06.180139 kernel: HOME=/
Sep 9 23:44:06.180192 kernel: TERM=linux
Sep 9 23:44:06.180211 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Sep 9 23:44:06.180231 systemd[1]: Successfully made /usr/ read-only.
Sep 9 23:44:06.180256 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Sep 9 23:44:06.180276 systemd[1]: Detected virtualization amazon.
Sep 9 23:44:06.180295 systemd[1]: Detected architecture arm64.
Sep 9 23:44:06.180313 systemd[1]: Running in initrd.
Sep 9 23:44:06.180338 systemd[1]: No hostname configured, using default hostname.
Sep 9 23:44:06.180358 systemd[1]: Hostname set to .
Sep 9 23:44:06.180377 systemd[1]: Initializing machine ID from VM UUID.
Sep 9 23:44:06.180396 systemd[1]: Queued start job for default target initrd.target.
Sep 9 23:44:06.180415 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 9 23:44:06.180434 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 9 23:44:06.180455 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Sep 9 23:44:06.180475 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 9 23:44:06.180498 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Sep 9 23:44:06.180519 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Sep 9 23:44:06.180541 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Sep 9 23:44:06.180560 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Sep 9 23:44:06.180580 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 9 23:44:06.180599 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 9 23:44:06.180619 systemd[1]: Reached target paths.target - Path Units.
Sep 9 23:44:06.180643 systemd[1]: Reached target slices.target - Slice Units.
Sep 9 23:44:06.180663 systemd[1]: Reached target swap.target - Swaps.
Sep 9 23:44:06.180682 systemd[1]: Reached target timers.target - Timer Units.
Sep 9 23:44:06.180702 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Sep 9 23:44:06.180722 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 9 23:44:06.180742 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Sep 9 23:44:06.180761 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Sep 9 23:44:06.180780 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 9 23:44:06.180810 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 9 23:44:06.180833 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 9 23:44:06.180853 systemd[1]: Reached target sockets.target - Socket Units.
Sep 9 23:44:06.180875 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Sep 9 23:44:06.180895 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 9 23:44:06.180916 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Sep 9 23:44:06.180937 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Sep 9 23:44:06.180958 systemd[1]: Starting systemd-fsck-usr.service...
Sep 9 23:44:06.180978 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 9 23:44:06.181006 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 9 23:44:06.181027 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 9 23:44:06.181048 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Sep 9 23:44:06.181070 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 9 23:44:06.181096 systemd[1]: Finished systemd-fsck-usr.service.
Sep 9 23:44:06.181117 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 9 23:44:06.181235 systemd-journald[259]: Collecting audit messages is disabled.
Sep 9 23:44:06.181328 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Sep 9 23:44:06.181357 kernel: Bridge firewalling registered
Sep 9 23:44:06.181398 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 9 23:44:06.181421 systemd-journald[259]: Journal started
Sep 9 23:44:06.181459 systemd-journald[259]: Runtime Journal (/run/log/journal/ec2bd9d8124df4cf31c087f3c9ded02d) is 8M, max 75.3M, 67.3M free.
Sep 9 23:44:06.136308 systemd-modules-load[260]: Inserted module 'overlay'
Sep 9 23:44:06.195130 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 9 23:44:06.177825 systemd-modules-load[260]: Inserted module 'br_netfilter'
Sep 9 23:44:06.197865 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 9 23:44:06.204201 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 9 23:44:06.216682 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 9 23:44:06.227308 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 9 23:44:06.242396 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 9 23:44:06.255429 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 9 23:44:06.261764 systemd-tmpfiles[272]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Sep 9 23:44:06.269496 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 9 23:44:06.279414 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 9 23:44:06.288873 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 9 23:44:06.317763 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 9 23:44:06.330274 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 9 23:44:06.339443 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Sep 9 23:44:06.375795 dracut-cmdline[301]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=fc7b279c2d918629032c01551b74c66c198cf923a976f9b3bc0d959e7c2302db
Sep 9 23:44:06.404597 systemd-resolved[286]: Positive Trust Anchors:
Sep 9 23:44:06.406230 systemd-resolved[286]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 9 23:44:06.406623 systemd-resolved[286]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 9 23:44:06.546193 kernel: SCSI subsystem initialized
Sep 9 23:44:06.554191 kernel: Loading iSCSI transport class v2.0-870.
Sep 9 23:44:06.568198 kernel: iscsi: registered transport (tcp)
Sep 9 23:44:06.590687 kernel: iscsi: registered transport (qla4xxx)
Sep 9 23:44:06.590761 kernel: QLogic iSCSI HBA Driver
Sep 9 23:44:06.627097 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Sep 9 23:44:06.669215 kernel: random: crng init done
Sep 9 23:44:06.669531 systemd-resolved[286]: Defaulting to hostname 'linux'.
Sep 9 23:44:06.672968 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Sep 9 23:44:06.679988 systemd[1]: Reached target network-pre.target - Preparation for Network.
Sep 9 23:44:06.686409 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 9 23:44:06.692026 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 9 23:44:06.779571 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Sep 9 23:44:06.784891 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Sep 9 23:44:06.888225 kernel: raid6: neonx8 gen() 6424 MB/s
Sep 9 23:44:06.904214 kernel: raid6: neonx4 gen() 6396 MB/s
Sep 9 23:44:06.922224 kernel: raid6: neonx2 gen() 5383 MB/s
Sep 9 23:44:06.939207 kernel: raid6: neonx1 gen() 3909 MB/s
Sep 9 23:44:06.957198 kernel: raid6: int64x8 gen() 3618 MB/s
Sep 9 23:44:06.974207 kernel: raid6: int64x4 gen() 3684 MB/s
Sep 9 23:44:06.991210 kernel: raid6: int64x2 gen() 3558 MB/s
Sep 9 23:44:07.009246 kernel: raid6: int64x1 gen() 2758 MB/s
Sep 9 23:44:07.009315 kernel: raid6: using algorithm neonx8 gen() 6424 MB/s
Sep 9 23:44:07.028548 kernel: raid6: .... xor() 4728 MB/s, rmw enabled
Sep 9 23:44:07.028621 kernel: raid6: using neon recovery algorithm
Sep 9 23:44:07.037897 kernel: xor: measuring software checksum speed
Sep 9 23:44:07.037976 kernel: 8regs : 12965 MB/sec
Sep 9 23:44:07.039141 kernel: 32regs : 12878 MB/sec
Sep 9 23:44:07.041514 kernel: arm64_neon : 8764 MB/sec
Sep 9 23:44:07.041584 kernel: xor: using function: 8regs (12965 MB/sec)
Sep 9 23:44:07.136201 kernel: Btrfs loaded, zoned=no, fsverity=no
Sep 9 23:44:07.146976 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Sep 9 23:44:07.156108 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 9 23:44:07.207000 systemd-udevd[508]: Using default interface naming scheme 'v255'.
Sep 9 23:44:07.217344 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 9 23:44:07.235415 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Sep 9 23:44:07.277666 dracut-pre-trigger[518]: rd.md=0: removing MD RAID activation
Sep 9 23:44:07.325258 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 9 23:44:07.331768 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 9 23:44:07.459467 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 9 23:44:07.467139 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Sep 9 23:44:07.611108 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Sep 9 23:44:07.611202 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012)
Sep 9 23:44:07.624786 kernel: ena 0000:00:05.0: ENA device version: 0.10
Sep 9 23:44:07.625175 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35
Sep 9 23:44:07.625213 kernel: nvme nvme0: pci function 0000:00:04.0
Sep 9 23:44:07.628766 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
Sep 9 23:44:07.643224 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Sep 9 23:44:07.648215 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80114000, mac addr 06:9e:f3:c9:10:27
Sep 9 23:44:07.656595 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Sep 9 23:44:07.656669 kernel: GPT:9289727 != 16777215
Sep 9 23:44:07.656695 kernel: GPT:Alternate GPT header not at the end of the disk.
Sep 9 23:44:07.661086 kernel: GPT:9289727 != 16777215
Sep 9 23:44:07.661198 kernel: GPT: Use GNU Parted to correct GPT errors.
Sep 9 23:44:07.662579 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Sep 9 23:44:07.671110 (udev-worker)[559]: Network interface NamePolicy= disabled on kernel command line.
Sep 9 23:44:07.677067 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 9 23:44:07.678947 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 9 23:44:07.685240 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Sep 9 23:44:07.689809 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 9 23:44:07.702184 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Sep 9 23:44:07.727188 kernel: nvme nvme0: using unchecked data buffer
Sep 9 23:44:07.755751 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 9 23:44:07.911413 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM.
Sep 9 23:44:07.912130 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Sep 9 23:44:07.935890 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Sep 9 23:44:07.961919 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT.
Sep 9 23:44:07.997127 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A.
Sep 9 23:44:08.001348 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A.
Sep 9 23:44:08.006936 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 9 23:44:08.010135 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 9 23:44:08.019962 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 9 23:44:08.028877 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Sep 9 23:44:08.037583 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Sep 9 23:44:08.062698 disk-uuid[687]: Primary Header is updated.
Sep 9 23:44:08.062698 disk-uuid[687]: Secondary Entries is updated.
Sep 9 23:44:08.062698 disk-uuid[687]: Secondary Header is updated.
Sep 9 23:44:08.074307 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Sep 9 23:44:08.094184 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Sep 9 23:44:09.100400 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Sep 9 23:44:09.101323 disk-uuid[688]: The operation has completed successfully.
Sep 9 23:44:09.303814 systemd[1]: disk-uuid.service: Deactivated successfully.
Sep 9 23:44:09.305873 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Sep 9 23:44:09.368904 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Sep 9 23:44:09.408771 sh[956]: Success
Sep 9 23:44:09.433094 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Sep 9 23:44:09.433186 kernel: device-mapper: uevent: version 1.0.3
Sep 9 23:44:09.435225 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Sep 9 23:44:09.448270 kernel: device-mapper: verity: sha256 using shash "sha256-ce"
Sep 9 23:44:09.538924 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Sep 9 23:44:09.544453 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Sep 9 23:44:09.571213 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Sep 9 23:44:09.592235 kernel: BTRFS: device fsid 2bc16190-0dd5-44d6-b331-3d703f5a1d1f devid 1 transid 40 /dev/mapper/usr (254:0) scanned by mount (979)
Sep 9 23:44:09.593232 kernel: BTRFS info (device dm-0): first mount of filesystem 2bc16190-0dd5-44d6-b331-3d703f5a1d1f
Sep 9 23:44:09.595759 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Sep 9 23:44:09.717091 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Sep 9 23:44:09.717194 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Sep 9 23:44:09.717224 kernel: BTRFS info (device dm-0): enabling free space tree
Sep 9 23:44:09.740454 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Sep 9 23:44:09.742172 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Sep 9 23:44:09.742550 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Sep 9 23:44:09.743751 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Sep 9 23:44:09.750474 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Sep 9 23:44:09.821201 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (1014)
Sep 9 23:44:09.826210 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 3a7d3e29-58a5-4f0c-ac69-b528108338f5
Sep 9 23:44:09.826281 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Sep 9 23:44:09.844403 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Sep 9 23:44:09.844474 kernel: BTRFS info (device nvme0n1p6): enabling free space tree
Sep 9 23:44:09.854196 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 3a7d3e29-58a5-4f0c-ac69-b528108338f5
Sep 9 23:44:09.855553 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Sep 9 23:44:09.860904 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Sep 9 23:44:09.954335 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 9 23:44:09.963433 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 9 23:44:10.027468 systemd-networkd[1148]: lo: Link UP
Sep 9 23:44:10.027489 systemd-networkd[1148]: lo: Gained carrier
Sep 9 23:44:10.029861 systemd-networkd[1148]: Enumeration completed
Sep 9 23:44:10.030718 systemd-networkd[1148]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 9 23:44:10.030726 systemd-networkd[1148]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 9 23:44:10.031306 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 9 23:44:10.033929 systemd[1]: Reached target network.target - Network.
Sep 9 23:44:10.040874 systemd-networkd[1148]: eth0: Link UP
Sep 9 23:44:10.040881 systemd-networkd[1148]: eth0: Gained carrier
Sep 9 23:44:10.040904 systemd-networkd[1148]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 9 23:44:10.074264 systemd-networkd[1148]: eth0: DHCPv4 address 172.31.27.236/20, gateway 172.31.16.1 acquired from 172.31.16.1
Sep 9 23:44:10.455426 ignition[1075]: Ignition 2.21.0
Sep 9 23:44:10.455935 ignition[1075]: Stage: fetch-offline
Sep 9 23:44:10.456867 ignition[1075]: no configs at "/usr/lib/ignition/base.d"
Sep 9 23:44:10.456889 ignition[1075]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Sep 9 23:44:10.458203 ignition[1075]: Ignition finished successfully
Sep 9 23:44:10.466881 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 9 23:44:10.472984 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Sep 9 23:44:10.512650 ignition[1159]: Ignition 2.21.0
Sep 9 23:44:10.513226 ignition[1159]: Stage: fetch
Sep 9 23:44:10.513783 ignition[1159]: no configs at "/usr/lib/ignition/base.d"
Sep 9 23:44:10.513806 ignition[1159]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Sep 9 23:44:10.514126 ignition[1159]: PUT http://169.254.169.254/latest/api/token: attempt #1
Sep 9 23:44:10.549722 ignition[1159]: PUT result: OK
Sep 9 23:44:10.554466 ignition[1159]: parsed url from cmdline: ""
Sep 9 23:44:10.554631 ignition[1159]: no config URL provided
Sep 9 23:44:10.554652 ignition[1159]: reading system config file "/usr/lib/ignition/user.ign"
Sep 9 23:44:10.554861 ignition[1159]: no config at "/usr/lib/ignition/user.ign"
Sep 9 23:44:10.554947 ignition[1159]: PUT http://169.254.169.254/latest/api/token: attempt #1
Sep 9 23:44:10.564267 ignition[1159]: PUT result: OK
Sep 9 23:44:10.564571 ignition[1159]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1
Sep 9 23:44:10.568828 ignition[1159]: GET result: OK
Sep 9 23:44:10.569418 ignition[1159]: parsing config with SHA512: 62fc5e98d3e4cd884eb24e103a41f428bf632c82c8b4655b25327701aeac31e20c88155c619f683756b217549636e41c8b76b787112ac046bfedf1946349b4d1
Sep 9 23:44:10.581981 unknown[1159]: fetched base config from "system"
Sep 9 23:44:10.582002 unknown[1159]: fetched base config from "system"
Sep 9 23:44:10.582984 ignition[1159]: fetch: fetch complete
Sep 9 23:44:10.582014 unknown[1159]: fetched user config from "aws"
Sep 9 23:44:10.582997 ignition[1159]: fetch: fetch passed
Sep 9 23:44:10.592470 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Sep 9 23:44:10.583096 ignition[1159]: Ignition finished successfully
Sep 9 23:44:10.601446 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Sep 9 23:44:10.651126 ignition[1166]: Ignition 2.21.0
Sep 9 23:44:10.651689 ignition[1166]: Stage: kargs
Sep 9 23:44:10.652635 ignition[1166]: no configs at "/usr/lib/ignition/base.d"
Sep 9 23:44:10.652662 ignition[1166]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Sep 9 23:44:10.652816 ignition[1166]: PUT http://169.254.169.254/latest/api/token: attempt #1
Sep 9 23:44:10.664369 ignition[1166]: PUT result: OK
Sep 9 23:44:10.673126 ignition[1166]: kargs: kargs passed
Sep 9 23:44:10.674291 ignition[1166]: Ignition finished successfully
Sep 9 23:44:10.681219 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Sep 9 23:44:10.687359 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Sep 9 23:44:10.733834 ignition[1173]: Ignition 2.21.0
Sep 9 23:44:10.733869 ignition[1173]: Stage: disks
Sep 9 23:44:10.735192 ignition[1173]: no configs at "/usr/lib/ignition/base.d"
Sep 9 23:44:10.735572 ignition[1173]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Sep 9 23:44:10.735755 ignition[1173]: PUT http://169.254.169.254/latest/api/token: attempt #1
Sep 9 23:44:10.739362 ignition[1173]: PUT result: OK
Sep 9 23:44:10.754631 ignition[1173]: disks: disks passed
Sep 9 23:44:10.754829 ignition[1173]: Ignition finished successfully
Sep 9 23:44:10.757965 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Sep 9 23:44:10.764491 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Sep 9 23:44:10.767209 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Sep 9 23:44:10.775412 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 9 23:44:10.777802 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 9 23:44:10.785006 systemd[1]: Reached target basic.target - Basic System.
Sep 9 23:44:10.788684 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Sep 9 23:44:10.848353 systemd-fsck[1182]: ROOT: clean, 15/553520 files, 52789/553472 blocks
Sep 9 23:44:10.852275 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Sep 9 23:44:10.859753 systemd[1]: Mounting sysroot.mount - /sysroot...
Sep 9 23:44:10.994196 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 7cc0d7f3-e4a1-4dc4-8b58-ceece0d874c1 r/w with ordered data mode. Quota mode: none.
Sep 9 23:44:10.996649 systemd[1]: Mounted sysroot.mount - /sysroot.
Sep 9 23:44:10.997257 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Sep 9 23:44:10.999492 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 9 23:44:11.011002 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Sep 9 23:44:11.014306 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Sep 9 23:44:11.014392 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Sep 9 23:44:11.014441 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 9 23:44:11.046401 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Sep 9 23:44:11.051705 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Sep 9 23:44:11.069210 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (1201)
Sep 9 23:44:11.075908 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 3a7d3e29-58a5-4f0c-ac69-b528108338f5
Sep 9 23:44:11.075980 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Sep 9 23:44:11.084632 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Sep 9 23:44:11.084725 kernel: BTRFS info (device nvme0n1p6): enabling free space tree
Sep 9 23:44:11.087015 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 9 23:44:11.389331 systemd-networkd[1148]: eth0: Gained IPv6LL
Sep 9 23:44:11.422076 initrd-setup-root[1225]: cut: /sysroot/etc/passwd: No such file or directory
Sep 9 23:44:11.450649 initrd-setup-root[1232]: cut: /sysroot/etc/group: No such file or directory
Sep 9 23:44:11.460430 initrd-setup-root[1239]: cut: /sysroot/etc/shadow: No such file or directory
Sep 9 23:44:11.469319 initrd-setup-root[1246]: cut: /sysroot/etc/gshadow: No such file or directory
Sep 9 23:44:11.764999 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Sep 9 23:44:11.772262 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Sep 9 23:44:11.778391 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Sep 9 23:44:11.812481 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Sep 9 23:44:11.816315 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 3a7d3e29-58a5-4f0c-ac69-b528108338f5
Sep 9 23:44:11.844278 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Sep 9 23:44:11.866237 ignition[1313]: INFO : Ignition 2.21.0
Sep 9 23:44:11.866237 ignition[1313]: INFO : Stage: mount
Sep 9 23:44:11.870061 ignition[1313]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 9 23:44:11.870061 ignition[1313]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Sep 9 23:44:11.870061 ignition[1313]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Sep 9 23:44:11.878331 ignition[1313]: INFO : PUT result: OK
Sep 9 23:44:11.883556 ignition[1313]: INFO : mount: mount passed
Sep 9 23:44:11.885608 ignition[1313]: INFO : Ignition finished successfully
Sep 9 23:44:11.887889 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Sep 9 23:44:11.899333 systemd[1]: Starting ignition-files.service - Ignition (files)...
Sep 9 23:44:12.000467 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 9 23:44:12.039209 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (1326)
Sep 9 23:44:12.043888 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 3a7d3e29-58a5-4f0c-ac69-b528108338f5
Sep 9 23:44:12.044134 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Sep 9 23:44:12.051780 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Sep 9 23:44:12.051866 kernel: BTRFS info (device nvme0n1p6): enabling free space tree
Sep 9 23:44:12.055721 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 9 23:44:12.099908 ignition[1342]: INFO : Ignition 2.21.0
Sep 9 23:44:12.099908 ignition[1342]: INFO : Stage: files
Sep 9 23:44:12.103730 ignition[1342]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 9 23:44:12.106190 ignition[1342]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Sep 9 23:44:12.109079 ignition[1342]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Sep 9 23:44:12.113059 ignition[1342]: INFO : PUT result: OK
Sep 9 23:44:12.117824 ignition[1342]: DEBUG : files: compiled without relabeling support, skipping
Sep 9 23:44:12.128923 ignition[1342]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Sep 9 23:44:12.128923 ignition[1342]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Sep 9 23:44:12.140328 ignition[1342]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Sep 9 23:44:12.143767 ignition[1342]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Sep 9 23:44:12.147231 unknown[1342]: wrote ssh authorized keys file for user: core
Sep 9 23:44:12.149909 ignition[1342]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Sep 9 23:44:12.168179 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
Sep 9 23:44:12.172781 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz: attempt #1
Sep 9 23:44:12.233474 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Sep 9 23:44:12.516942 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
Sep 9 23:44:12.516942 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Sep 9 23:44:12.516942 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1
Sep 9 23:44:12.735713 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Sep 9 23:44:12.851895 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Sep 9 23:44:12.851895 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Sep 9 23:44:12.861113 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Sep 9 23:44:12.861113 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Sep 9 23:44:12.861113 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Sep 9 23:44:12.861113 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 9 23:44:12.861113 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 9 23:44:12.861113 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 9 23:44:12.861113 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 9 23:44:12.888611 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Sep 9 23:44:12.888611 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Sep 9 23:44:12.888611 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Sep 9 23:44:12.888611 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Sep 9 23:44:12.888611 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Sep 9 23:44:12.888611 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-arm64.raw: attempt #1
Sep 9 23:44:13.142912 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Sep 9 23:44:13.529698 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Sep 9 23:44:13.529698 ignition[1342]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Sep 9 23:44:13.538011 ignition[1342]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 9 23:44:13.545337 ignition[1342]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 9 23:44:13.545337 ignition[1342]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Sep 9 23:44:13.545337 ignition[1342]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service"
Sep 9 23:44:13.556417 ignition[1342]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service"
Sep 9 23:44:13.556417 ignition[1342]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json"
Sep 9 23:44:13.556417 ignition[1342]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json"
Sep 9 23:44:13.556417 ignition[1342]: INFO : files: files passed
Sep 9 23:44:13.556417 ignition[1342]: INFO : Ignition finished successfully
Sep 9 23:44:13.563180 systemd[1]: Finished ignition-files.service - Ignition (files).
Sep 9 23:44:13.578380 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Sep 9 23:44:13.583293 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Sep 9 23:44:13.608363 systemd[1]: ignition-quench.service: Deactivated successfully.
Sep 9 23:44:13.610448 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Sep 9 23:44:13.630336 initrd-setup-root-after-ignition[1372]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 9 23:44:13.630336 initrd-setup-root-after-ignition[1372]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Sep 9 23:44:13.639261 initrd-setup-root-after-ignition[1375]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 9 23:44:13.646261 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 9 23:44:13.649598 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Sep 9 23:44:13.656078 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Sep 9 23:44:13.757383 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Sep 9 23:44:13.757721 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Sep 9 23:44:13.765280 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Sep 9 23:44:13.772043 systemd[1]: Reached target initrd.target - Initrd Default Target.
Sep 9 23:44:13.775235 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Sep 9 23:44:13.780500 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Sep 9 23:44:13.835249 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 9 23:44:13.837740 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Sep 9 23:44:13.892747 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Sep 9 23:44:13.898268 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 9 23:44:13.903638 systemd[1]: Stopped target timers.target - Timer Units.
Sep 9 23:44:13.907760 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Sep 9 23:44:13.908609 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 9 23:44:13.915899 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Sep 9 23:44:13.916315 systemd[1]: Stopped target basic.target - Basic System.
Sep 9 23:44:13.922883 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Sep 9 23:44:13.926857 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 9 23:44:13.930797 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Sep 9 23:44:13.940128 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Sep 9 23:44:13.943199 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Sep 9 23:44:13.946605 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 9 23:44:13.953745 systemd[1]: Stopped target sysinit.target - System Initialization.
Sep 9 23:44:13.958791 systemd[1]: Stopped target local-fs.target - Local File Systems.
Sep 9 23:44:13.967641 systemd[1]: Stopped target swap.target - Swaps.
Sep 9 23:44:13.972486 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Sep 9 23:44:13.972759 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Sep 9 23:44:13.978094 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Sep 9 23:44:13.983036 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 9 23:44:13.986621 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Sep 9 23:44:13.991721 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 9 23:44:13.998397 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Sep 9 23:44:14.000700 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Sep 9 23:44:14.010815 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Sep 9 23:44:14.011117 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 9 23:44:14.016079 systemd[1]: ignition-files.service: Deactivated successfully.
Sep 9 23:44:14.016400 systemd[1]: Stopped ignition-files.service - Ignition (files).
Sep 9 23:44:14.028770 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Sep 9 23:44:14.033510 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Sep 9 23:44:14.034085 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 9 23:44:14.047561 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Sep 9 23:44:14.055193 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Sep 9 23:44:14.055523 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 9 23:44:14.060589 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Sep 9 23:44:14.060851 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 9 23:44:14.105366 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Sep 9 23:44:14.106615 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Sep 9 23:44:14.108250 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Sep 9 23:44:14.122898 systemd[1]: sysroot-boot.service: Deactivated successfully.
Sep 9 23:44:14.123177 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Sep 9 23:44:14.135194 ignition[1396]: INFO : Ignition 2.21.0
Sep 9 23:44:14.135194 ignition[1396]: INFO : Stage: umount
Sep 9 23:44:14.135194 ignition[1396]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 9 23:44:14.135194 ignition[1396]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Sep 9 23:44:14.151285 ignition[1396]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Sep 9 23:44:14.151285 ignition[1396]: INFO : PUT result: OK
Sep 9 23:44:14.159386 ignition[1396]: INFO : umount: umount passed
Sep 9 23:44:14.164298 ignition[1396]: INFO : Ignition finished successfully
Sep 9 23:44:14.163384 systemd[1]: ignition-mount.service: Deactivated successfully.
Sep 9 23:44:14.165302 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Sep 9 23:44:14.171021 systemd[1]: ignition-disks.service: Deactivated successfully.
Sep 9 23:44:14.171137 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Sep 9 23:44:14.171447 systemd[1]: ignition-kargs.service: Deactivated successfully.
Sep 9 23:44:14.171563 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Sep 9 23:44:14.173136 systemd[1]: ignition-fetch.service: Deactivated successfully.
Sep 9 23:44:14.173273 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Sep 9 23:44:14.187364 systemd[1]: Stopped target network.target - Network.
Sep 9 23:44:14.189538 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Sep 9 23:44:14.189661 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 9 23:44:14.193027 systemd[1]: Stopped target paths.target - Path Units.
Sep 9 23:44:14.200643 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Sep 9 23:44:14.203659 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 9 23:44:14.206581 systemd[1]: Stopped target slices.target - Slice Units.
Sep 9 23:44:14.210736 systemd[1]: Stopped target sockets.target - Socket Units.
Sep 9 23:44:14.213923 systemd[1]: iscsid.socket: Deactivated successfully.
Sep 9 23:44:14.214029 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Sep 9 23:44:14.218532 systemd[1]: iscsiuio.socket: Deactivated successfully.
Sep 9 23:44:14.218620 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 9 23:44:14.223693 systemd[1]: ignition-setup.service: Deactivated successfully.
Sep 9 23:44:14.223814 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Sep 9 23:44:14.228317 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Sep 9 23:44:14.228431 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Sep 9 23:44:14.231503 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Sep 9 23:44:14.231627 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Sep 9 23:44:14.236272 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Sep 9 23:44:14.238540 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Sep 9 23:44:14.265568 systemd[1]: systemd-resolved.service: Deactivated successfully.
Sep 9 23:44:14.272921 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Sep 9 23:44:14.303523 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Sep 9 23:44:14.304089 systemd[1]: systemd-networkd.service: Deactivated successfully.
Sep 9 23:44:14.304374 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Sep 9 23:44:14.311708 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Sep 9 23:44:14.312994 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Sep 9 23:44:14.316649 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Sep 9 23:44:14.316747 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Sep 9 23:44:14.324349 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Sep 9 23:44:14.331364 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Sep 9 23:44:14.331496 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 9 23:44:14.343400 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Sep 9 23:44:14.343526 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Sep 9 23:44:14.349538 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Sep 9 23:44:14.349645 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Sep 9 23:44:14.352444 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Sep 9 23:44:14.352560 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 9 23:44:14.361138 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 9 23:44:14.366030 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Sep 9 23:44:14.366525 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Sep 9 23:44:14.408631 systemd[1]: network-cleanup.service: Deactivated successfully.
Sep 9 23:44:14.410599 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Sep 9 23:44:14.417312 systemd[1]: systemd-udevd.service: Deactivated successfully.
Sep 9 23:44:14.418313 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 9 23:44:14.427052 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Sep 9 23:44:14.427422 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Sep 9 23:44:14.434927 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Sep 9 23:44:14.435978 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 9 23:44:14.442987 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Sep 9 23:44:14.443117 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Sep 9 23:44:14.450353 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Sep 9 23:44:14.450484 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Sep 9 23:44:14.456652 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 9 23:44:14.456792 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 9 23:44:14.467095 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Sep 9 23:44:14.475328 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Sep 9 23:44:14.475672 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Sep 9 23:44:14.485967 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Sep 9 23:44:14.486078 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 9 23:44:14.496331 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 9 23:44:14.496462 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 9 23:44:14.507933 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully.
Sep 9 23:44:14.508102 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Sep 9 23:44:14.508450 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Sep 9 23:44:14.533031 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Sep 9 23:44:14.534105 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Sep 9 23:44:14.539578 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Sep 9 23:44:14.547821 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Sep 9 23:44:14.594591 systemd[1]: Switching root.
Sep 9 23:44:14.659019 systemd-journald[259]: Journal stopped
Sep 9 23:44:17.222800 systemd-journald[259]: Received SIGTERM from PID 1 (systemd).
Sep 9 23:44:17.222931 kernel: SELinux: policy capability network_peer_controls=1
Sep 9 23:44:17.222974 kernel: SELinux: policy capability open_perms=1
Sep 9 23:44:17.223013 kernel: SELinux: policy capability extended_socket_class=1
Sep 9 23:44:17.223048 kernel: SELinux: policy capability always_check_network=0
Sep 9 23:44:17.223078 kernel: SELinux: policy capability cgroup_seclabel=1
Sep 9 23:44:17.223110 kernel: SELinux: policy capability nnp_nosuid_transition=1
Sep 9 23:44:17.223139 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Sep 9 23:44:17.223207 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Sep 9 23:44:17.223240 kernel: SELinux: policy capability userspace_initial_context=0
Sep 9 23:44:17.223281 kernel: audit: type=1403 audit(1757461455.182:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Sep 9 23:44:17.223320 systemd[1]: Successfully loaded SELinux policy in 99.984ms.
Sep 9 23:44:17.223359 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 16.426ms.
Sep 9 23:44:17.223396 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Sep 9 23:44:17.223428 systemd[1]: Detected virtualization amazon.
Sep 9 23:44:17.223459 systemd[1]: Detected architecture arm64.
Sep 9 23:44:17.223489 systemd[1]: Detected first boot.
Sep 9 23:44:17.223519 systemd[1]: Initializing machine ID from VM UUID.
Sep 9 23:44:17.223550 zram_generator::config[1443]: No configuration found.
Sep 9 23:44:17.223583 kernel: NET: Registered PF_VSOCK protocol family
Sep 9 23:44:17.223614 systemd[1]: Populated /etc with preset unit settings.
Sep 9 23:44:17.223650 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Sep 9 23:44:17.223683 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Sep 9 23:44:17.223711 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Sep 9 23:44:17.223744 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Sep 9 23:44:17.223777 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Sep 9 23:44:17.227603 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Sep 9 23:44:17.228751 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Sep 9 23:44:17.229047 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Sep 9 23:44:17.229082 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Sep 9 23:44:17.229125 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Sep 9 23:44:17.234547 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Sep 9 23:44:17.234608 systemd[1]: Created slice user.slice - User and Session Slice.
Sep 9 23:44:17.234645 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 9 23:44:17.234681 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 9 23:44:17.234714 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Sep 9 23:44:17.234748 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Sep 9 23:44:17.234781 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Sep 9 23:44:17.234823 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 9 23:44:17.234861 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Sep 9 23:44:17.234894 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 9 23:44:17.234925 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 9 23:44:17.234958 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Sep 9 23:44:17.234987 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Sep 9 23:44:17.235019 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Sep 9 23:44:17.235050 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Sep 9 23:44:17.235085 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 9 23:44:17.238348 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 9 23:44:17.238385 systemd[1]: Reached target slices.target - Slice Units.
Sep 9 23:44:17.238417 systemd[1]: Reached target swap.target - Swaps.
Sep 9 23:44:17.238446 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Sep 9 23:44:17.238475 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Sep 9 23:44:17.238504 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Sep 9 23:44:17.238534 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 9 23:44:17.238562 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 9 23:44:17.238595 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 9 23:44:17.238634 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Sep 9 23:44:17.238665 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Sep 9 23:44:17.238693 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Sep 9 23:44:17.238722 systemd[1]: Mounting media.mount - External Media Directory...
Sep 9 23:44:17.238750 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Sep 9 23:44:17.238781 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Sep 9 23:44:17.238811 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Sep 9 23:44:17.238844 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Sep 9 23:44:17.238877 systemd[1]: Reached target machines.target - Containers.
Sep 9 23:44:17.238906 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Sep 9 23:44:17.238938 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 9 23:44:17.238968 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 9 23:44:17.238998 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Sep 9 23:44:17.239029 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 9 23:44:17.239059 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 9 23:44:17.239088 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 9 23:44:17.239116 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Sep 9 23:44:17.245218 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 9 23:44:17.245298 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Sep 9 23:44:17.245333 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Sep 9 23:44:17.245365 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Sep 9 23:44:17.245396 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Sep 9 23:44:17.245430 systemd[1]: Stopped systemd-fsck-usr.service.
Sep 9 23:44:17.245462 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 9 23:44:17.245492 kernel: fuse: init (API version 7.41)
Sep 9 23:44:17.245535 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 9 23:44:17.245566 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 9 23:44:17.245598 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Sep 9 23:44:17.245631 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Sep 9 23:44:17.245661 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Sep 9 23:44:17.245690 kernel: loop: module loaded
Sep 9 23:44:17.245728 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 9 23:44:17.245764 systemd[1]: verity-setup.service: Deactivated successfully.
Sep 9 23:44:17.245796 systemd[1]: Stopped verity-setup.service.
Sep 9 23:44:17.245828 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Sep 9 23:44:17.245858 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Sep 9 23:44:17.245895 systemd[1]: Mounted media.mount - External Media Directory.
Sep 9 23:44:17.245925 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Sep 9 23:44:17.245959 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Sep 9 23:44:17.245990 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Sep 9 23:44:17.246020 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 9 23:44:17.246053 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Sep 9 23:44:17.246084 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Sep 9 23:44:17.246114 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 9 23:44:17.246188 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 9 23:44:17.246230 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 9 23:44:17.246261 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 9 23:44:17.246290 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Sep 9 23:44:17.246320 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Sep 9 23:44:17.246349 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 9 23:44:17.246378 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 9 23:44:17.246409 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 9 23:44:17.246439 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Sep 9 23:44:17.246476 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Sep 9 23:44:17.246506 kernel: ACPI: bus type drm_connector registered
Sep 9 23:44:17.246536 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Sep 9 23:44:17.246568 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Sep 9 23:44:17.246598 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 9 23:44:17.246712 systemd-journald[1529]: Collecting audit messages is disabled.
Sep 9 23:44:17.246784 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Sep 9 23:44:17.246825 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Sep 9 23:44:17.246857 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 9 23:44:17.246888 systemd-journald[1529]: Journal started
Sep 9 23:44:17.246941 systemd-journald[1529]: Runtime Journal (/run/log/journal/ec2bd9d8124df4cf31c087f3c9ded02d) is 8M, max 75.3M, 67.3M free.
Sep 9 23:44:16.531193 systemd[1]: Queued start job for default target multi-user.target.
Sep 9 23:44:16.555868 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6.
Sep 9 23:44:16.556890 systemd[1]: systemd-journald.service: Deactivated successfully.
Sep 9 23:44:17.258657 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Sep 9 23:44:17.258758 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 9 23:44:17.273186 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Sep 9 23:44:17.273295 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 9 23:44:17.284917 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 9 23:44:17.296126 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Sep 9 23:44:17.305557 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 9 23:44:17.307949 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 9 23:44:17.314595 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 9 23:44:17.320139 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Sep 9 23:44:17.324258 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Sep 9 23:44:17.327551 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Sep 9 23:44:17.330977 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Sep 9 23:44:17.338576 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Sep 9 23:44:17.342232 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Sep 9 23:44:17.401992 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Sep 9 23:44:17.406749 systemd[1]: Reached target network-pre.target - Preparation for Network.
Sep 9 23:44:17.418560 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Sep 9 23:44:17.424661 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Sep 9 23:44:17.440542 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Sep 9 23:44:17.454214 kernel: loop0: detected capacity change from 0 to 100608
Sep 9 23:44:17.470394 systemd-journald[1529]: Time spent on flushing to /var/log/journal/ec2bd9d8124df4cf31c087f3c9ded02d is 159.348ms for 939 entries.
Sep 9 23:44:17.470394 systemd-journald[1529]: System Journal (/var/log/journal/ec2bd9d8124df4cf31c087f3c9ded02d) is 8M, max 195.6M, 187.6M free.
Sep 9 23:44:17.640724 systemd-journald[1529]: Received client request to flush runtime journal.
Sep 9 23:44:17.640810 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Sep 9 23:44:17.640846 kernel: loop1: detected capacity change from 0 to 61256
Sep 9 23:44:17.507771 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 9 23:44:17.600285 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Sep 9 23:44:17.608435 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 9 23:44:17.623138 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Sep 9 23:44:17.625750 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Sep 9 23:44:17.650270 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Sep 9 23:44:17.681064 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 9 23:44:17.710919 systemd-tmpfiles[1589]: ACLs are not supported, ignoring.
Sep 9 23:44:17.710966 systemd-tmpfiles[1589]: ACLs are not supported, ignoring.
Sep 9 23:44:17.721919 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 9 23:44:17.791225 kernel: loop2: detected capacity change from 0 to 119320
Sep 9 23:44:17.908442 kernel: loop3: detected capacity change from 0 to 207008
Sep 9 23:44:17.970209 kernel: loop4: detected capacity change from 0 to 100608
Sep 9 23:44:17.993200 kernel: loop5: detected capacity change from 0 to 61256
Sep 9 23:44:18.019207 kernel: loop6: detected capacity change from 0 to 119320
Sep 9 23:44:18.036263 kernel: loop7: detected capacity change from 0 to 207008
Sep 9 23:44:18.070233 (sd-merge)[1601]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'.
Sep 9 23:44:18.071320 (sd-merge)[1601]: Merged extensions into '/usr'.
Sep 9 23:44:18.081707 systemd[1]: Reload requested from client PID 1558 ('systemd-sysext') (unit systemd-sysext.service)...
Sep 9 23:44:18.081738 systemd[1]: Reloading...
Sep 9 23:44:18.291224 zram_generator::config[1627]: No configuration found.
Sep 9 23:44:18.597342 ldconfig[1549]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Sep 9 23:44:18.788950 systemd[1]: Reloading finished in 706 ms.
Sep 9 23:44:18.813411 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Sep 9 23:44:18.820240 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Sep 9 23:44:18.823836 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Sep 9 23:44:18.839210 systemd[1]: Starting ensure-sysext.service...
Sep 9 23:44:18.847383 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 9 23:44:18.857449 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 9 23:44:18.884952 systemd[1]: Reload requested from client PID 1680 ('systemctl') (unit ensure-sysext.service)...
Sep 9 23:44:18.884985 systemd[1]: Reloading...
Sep 9 23:44:18.920432 systemd-tmpfiles[1681]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Sep 9 23:44:18.920514 systemd-tmpfiles[1681]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Sep 9 23:44:18.921248 systemd-tmpfiles[1681]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Sep 9 23:44:18.921747 systemd-tmpfiles[1681]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Sep 9 23:44:18.928406 systemd-tmpfiles[1681]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Sep 9 23:44:18.928996 systemd-tmpfiles[1681]: ACLs are not supported, ignoring.
Sep 9 23:44:18.929144 systemd-tmpfiles[1681]: ACLs are not supported, ignoring.
Sep 9 23:44:18.946695 systemd-udevd[1682]: Using default interface naming scheme 'v255'.
Sep 9 23:44:18.950046 systemd-tmpfiles[1681]: Detected autofs mount point /boot during canonicalization of boot.
Sep 9 23:44:18.950071 systemd-tmpfiles[1681]: Skipping /boot
Sep 9 23:44:18.987697 systemd-tmpfiles[1681]: Detected autofs mount point /boot during canonicalization of boot.
Sep 9 23:44:18.988321 systemd-tmpfiles[1681]: Skipping /boot
Sep 9 23:44:19.108187 zram_generator::config[1720]: No configuration found.
Sep 9 23:44:19.322976 (udev-worker)[1737]: Network interface NamePolicy= disabled on kernel command line.
Sep 9 23:44:19.777596 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Sep 9 23:44:19.778603 systemd[1]: Reloading finished in 893 ms.
Sep 9 23:44:19.821591 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 9 23:44:19.825222 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 9 23:44:19.909694 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Sep 9 23:44:19.915573 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Sep 9 23:44:19.923560 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Sep 9 23:44:19.934494 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 9 23:44:19.947486 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 9 23:44:19.983799 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Sep 9 23:44:20.041396 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 9 23:44:20.046904 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 9 23:44:20.059983 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 9 23:44:20.068623 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 9 23:44:20.071275 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 9 23:44:20.071738 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 9 23:44:20.077643 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Sep 9 23:44:20.088075 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 9 23:44:20.089078 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 9 23:44:20.090433 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 9 23:44:20.093239 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Sep 9 23:44:20.118144 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 9 23:44:20.121012 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 9 23:44:20.123525 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 9 23:44:20.123769 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 9 23:44:20.124127 systemd[1]: Reached target time-set.target - System Time Set.
Sep 9 23:44:20.137255 systemd[1]: Finished ensure-sysext.service.
Sep 9 23:44:20.148267 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Sep 9 23:44:20.154807 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Sep 9 23:44:20.183631 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Sep 9 23:44:20.202854 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 9 23:44:20.204283 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 9 23:44:20.262309 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 9 23:44:20.262695 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 9 23:44:20.265427 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 9 23:44:20.271727 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 9 23:44:20.272220 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 9 23:44:20.275686 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Sep 9 23:44:20.283236 augenrules[1929]: No rules
Sep 9 23:44:20.283266 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 9 23:44:20.286246 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 9 23:44:20.289506 systemd[1]: audit-rules.service: Deactivated successfully.
Sep 9 23:44:20.290382 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Sep 9 23:44:20.314730 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 9 23:44:20.314928 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Sep 9 23:44:20.321689 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 9 23:44:20.391545 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Sep 9 23:44:20.406474 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Sep 9 23:44:20.468056 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Sep 9 23:44:20.514888 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 9 23:44:20.522629 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Sep 9 23:44:20.640182 systemd-networkd[1894]: lo: Link UP
Sep 9 23:44:20.640196 systemd-networkd[1894]: lo: Gained carrier
Sep 9 23:44:20.642861 systemd-networkd[1894]: Enumeration completed
Sep 9 23:44:20.643038 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 9 23:44:20.646320 systemd-networkd[1894]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 9 23:44:20.646342 systemd-networkd[1894]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 9 23:44:20.650119 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Sep 9 23:44:20.651563 systemd-resolved[1895]: Positive Trust Anchors:
Sep 9 23:44:20.651589 systemd-resolved[1895]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 9 23:44:20.651656 systemd-resolved[1895]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 9 23:44:20.656828 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Sep 9 23:44:20.661808 systemd-networkd[1894]: eth0: Link UP Sep 9 23:44:20.662088 systemd-networkd[1894]: eth0: Gained carrier Sep 9 23:44:20.662139 systemd-networkd[1894]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 9 23:44:20.672268 systemd-resolved[1895]: Defaulting to hostname 'linux'. Sep 9 23:44:20.673245 systemd-networkd[1894]: eth0: DHCPv4 address 172.31.27.236/20, gateway 172.31.16.1 acquired from 172.31.16.1 Sep 9 23:44:20.677917 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 9 23:44:20.684437 systemd[1]: Reached target network.target - Network. Sep 9 23:44:20.688814 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 9 23:44:20.694471 systemd[1]: Reached target sysinit.target - System Initialization. Sep 9 23:44:20.698288 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Sep 9 23:44:20.701390 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Sep 9 23:44:20.704646 systemd[1]: Started logrotate.timer - Daily rotation of log files. Sep 9 23:44:20.707483 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Sep 9 23:44:20.710232 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Sep 9 23:44:20.712915 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Sep 9 23:44:20.712963 systemd[1]: Reached target paths.target - Path Units. Sep 9 23:44:20.714960 systemd[1]: Reached target timers.target - Timer Units. Sep 9 23:44:20.718805 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Sep 9 23:44:20.723902 systemd[1]: Starting docker.socket - Docker Socket for the API... 
Sep 9 23:44:20.729897 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Sep 9 23:44:20.733349 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Sep 9 23:44:20.736363 systemd[1]: Reached target ssh-access.target - SSH Access Available. Sep 9 23:44:20.749480 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Sep 9 23:44:20.752772 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Sep 9 23:44:20.758272 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Sep 9 23:44:20.761794 systemd[1]: Listening on docker.socket - Docker Socket for the API. Sep 9 23:44:20.765338 systemd[1]: Reached target sockets.target - Socket Units. Sep 9 23:44:20.767753 systemd[1]: Reached target basic.target - Basic System. Sep 9 23:44:20.770562 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Sep 9 23:44:20.770660 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Sep 9 23:44:20.772673 systemd[1]: Starting containerd.service - containerd container runtime... Sep 9 23:44:20.779437 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Sep 9 23:44:20.787478 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Sep 9 23:44:20.797106 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Sep 9 23:44:20.818666 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Sep 9 23:44:20.826953 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Sep 9 23:44:20.830314 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). 
Sep 9 23:44:20.835557 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Sep 9 23:44:20.844615 systemd[1]: Started ntpd.service - Network Time Service. Sep 9 23:44:20.857294 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Sep 9 23:44:20.868930 systemd[1]: Starting setup-oem.service - Setup OEM... Sep 9 23:44:20.882418 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Sep 9 23:44:20.892698 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Sep 9 23:44:20.908130 jq[1966]: false Sep 9 23:44:20.908984 systemd[1]: Starting systemd-logind.service - User Login Management... Sep 9 23:44:20.917495 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Sep 9 23:44:20.918470 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Sep 9 23:44:20.930403 systemd[1]: Starting update-engine.service - Update Engine... Sep 9 23:44:20.942791 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Sep 9 23:44:20.958070 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Sep 9 23:44:20.962830 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Sep 9 23:44:20.964432 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Sep 9 23:44:20.977971 extend-filesystems[1967]: Found /dev/nvme0n1p6 Sep 9 23:44:21.017795 (ntainerd)[1995]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Sep 9 23:44:21.033215 extend-filesystems[1967]: Found /dev/nvme0n1p9 Sep 9 23:44:21.029879 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. 
Sep 9 23:44:21.030421 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Sep 9 23:44:21.059305 extend-filesystems[1967]: Checking size of /dev/nvme0n1p9 Sep 9 23:44:21.079069 tar[1996]: linux-arm64/LICENSE Sep 9 23:44:21.087383 tar[1996]: linux-arm64/helm Sep 9 23:44:21.091844 systemd[1]: motdgen.service: Deactivated successfully. Sep 9 23:44:21.092345 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Sep 9 23:44:21.108468 jq[1985]: true Sep 9 23:44:21.168611 extend-filesystems[1967]: Resized partition /dev/nvme0n1p9 Sep 9 23:44:21.186337 extend-filesystems[2016]: resize2fs 1.47.2 (1-Jan-2025) Sep 9 23:44:21.189859 dbus-daemon[1964]: [system] SELinux support is enabled Sep 9 23:44:21.190125 systemd[1]: Started dbus.service - D-Bus System Message Bus. Sep 9 23:44:21.204069 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Sep 9 23:44:21.204120 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Sep 9 23:44:21.210398 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Sep 9 23:44:21.210449 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. 
Sep 9 23:44:21.230905 dbus-daemon[1964]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1894 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0")
Sep 9 23:44:21.238305 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks
Sep 9 23:44:21.252733 dbus-daemon[1964]: [system] Successfully activated service 'org.freedesktop.systemd1'
Sep 9 23:44:21.260937 jq[2009]: true
Sep 9 23:44:21.271678 systemd[1]: Starting systemd-hostnamed.service - Hostname Service...
Sep 9 23:44:21.284331 ntpd[1971]: ntpd 4.2.8p17@1.4004-o Tue Sep 9 21:32:21 UTC 2025 (1): Starting
Sep 9 23:44:21.284388 ntpd[1971]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Sep 9 23:44:21.284407 ntpd[1971]: ----------------------------------------------------
Sep 9 23:44:21.296783 coreos-metadata[1963]: Sep 09 23:44:21.292 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1
Sep 9 23:44:21.284424 ntpd[1971]: ntp-4 is maintained by Network Time Foundation,
Sep 9 23:44:21.284441 ntpd[1971]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Sep 9 23:44:21.284457 ntpd[1971]: corporation. Support and training for ntp-4 are
Sep 9 23:44:21.284480 ntpd[1971]: available at https://www.nwtime.org/support
Sep 9 23:44:21.284496 ntpd[1971]: ----------------------------------------------------
Sep 9 23:44:21.302409 ntpd[1971]: proto: precision = 0.096 usec (-23)
Sep 9 23:44:21.314629 ntpd[1971]: basedate set to 2025-08-28
Sep 9 23:44:21.316880 coreos-metadata[1963]: Sep 09 23:44:21.316 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1
Sep 9 23:44:21.314674 ntpd[1971]: gps base set to 2025-08-31 (week 2382)
Sep 9 23:44:21.318610 coreos-metadata[1963]: Sep 09 23:44:21.318 INFO Fetch successful
Sep 9 23:44:21.318610 coreos-metadata[1963]: Sep 09 23:44:21.318 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1
Sep 9 23:44:21.325862 coreos-metadata[1963]: Sep 09 23:44:21.323 INFO Fetch successful
Sep 9 23:44:21.325862 coreos-metadata[1963]: Sep 09 23:44:21.323 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1
Sep 9 23:44:21.326686 ntpd[1971]: Listen and drop on 0 v6wildcard [::]:123
Sep 9 23:44:21.326789 ntpd[1971]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Sep 9 23:44:21.331358 coreos-metadata[1963]: Sep 09 23:44:21.328 INFO Fetch successful
Sep 9 23:44:21.331358 coreos-metadata[1963]: Sep 09 23:44:21.328 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1
Sep 9 23:44:21.328691 ntpd[1971]: Listen normally on 2 lo 127.0.0.1:123
Sep 9 23:44:21.328825 ntpd[1971]: Listen normally on 3 eth0 172.31.27.236:123
Sep 9 23:44:21.328920 ntpd[1971]: Listen normally on 4 lo [::1]:123
Sep 9 23:44:21.330089 ntpd[1971]: bind(21) AF_INET6 fe80::49e:f3ff:fec9:1027%2#123 flags 0x11 failed: Cannot assign requested address
Sep 9 23:44:21.330188 ntpd[1971]: unable to create socket on eth0 (5) for fe80::49e:f3ff:fec9:1027%2#123
Sep 9 23:44:21.330218 ntpd[1971]: failed to init interface for address fe80::49e:f3ff:fec9:1027%2
Sep 9 23:44:21.330913 ntpd[1971]: Listening on routing socket on fd #21 for interface updates
Sep 9 23:44:21.341808 coreos-metadata[1963]: Sep 09 23:44:21.340 INFO Fetch successful
Sep 9 23:44:21.341808 coreos-metadata[1963]: Sep 09 23:44:21.340 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1
Sep 9 23:44:21.349197 coreos-metadata[1963]: Sep 09 23:44:21.342 INFO Fetch failed with 404: resource not found
Sep 9 23:44:21.349197 coreos-metadata[1963]: Sep 09 23:44:21.342 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1
Sep 9 23:44:21.348054 ntpd[1971]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Sep 9 23:44:21.349560 update_engine[1982]: I20250909 23:44:21.344818 1982 main.cc:92] Flatcar Update Engine starting
Sep 9 23:44:21.348105 ntpd[1971]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Sep 9 23:44:21.359355 coreos-metadata[1963]: Sep 09 23:44:21.353 INFO Fetch successful
Sep 9 23:44:21.359355 coreos-metadata[1963]: Sep 09 23:44:21.353 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1
Sep 9 23:44:21.359675 coreos-metadata[1963]: Sep 09 23:44:21.359 INFO Fetch successful
Sep 9 23:44:21.359675 coreos-metadata[1963]: Sep 09 23:44:21.359 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1
Sep 9 23:44:21.367327 coreos-metadata[1963]: Sep 09 23:44:21.361 INFO Fetch successful
Sep 9 23:44:21.367327 coreos-metadata[1963]: Sep 09 23:44:21.361 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1
Sep 9 23:44:21.373097 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915
Sep 9 23:44:21.373254 coreos-metadata[1963]: Sep 09 23:44:21.367 INFO Fetch successful
Sep 9 23:44:21.373254 coreos-metadata[1963]: Sep 09 23:44:21.367 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1
Sep 9 23:44:21.374341 coreos-metadata[1963]: Sep 09 23:44:21.373 INFO Fetch successful
Sep 9 23:44:21.378725 systemd[1]: Started update-engine.service - Update Engine.
Sep 9 23:44:21.379800 update_engine[1982]: I20250909 23:44:21.379691 1982 update_check_scheduler.cc:74] Next update check in 7m4s
Sep 9 23:44:21.390527 extend-filesystems[2016]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required
Sep 9 23:44:21.390527 extend-filesystems[2016]: old_desc_blocks = 1, new_desc_blocks = 1
Sep 9 23:44:21.390527 extend-filesystems[2016]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long.
Sep 9 23:44:21.398585 extend-filesystems[1967]: Resized filesystem in /dev/nvme0n1p9
Sep 9 23:44:21.417401 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Sep 9 23:44:21.422697 systemd[1]: extend-filesystems.service: Deactivated successfully.
Sep 9 23:44:21.424065 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Sep 9 23:44:21.431269 systemd[1]: Finished setup-oem.service - Setup OEM.
Sep 9 23:44:21.487328 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Sep 9 23:44:21.593666 bash[2059]: Updated "/home/core/.ssh/authorized_keys"
Sep 9 23:44:21.597298 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Sep 9 23:44:21.607553 systemd[1]: Starting sshkeys.service...
Sep 9 23:44:21.611238 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Sep 9 23:44:21.616517 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Sep 9 23:44:21.658714 systemd-logind[1977]: Watching system buttons on /dev/input/event0 (Power Button) Sep 9 23:44:21.658757 systemd-logind[1977]: Watching system buttons on /dev/input/event1 (Sleep Button) Sep 9 23:44:21.663272 systemd-logind[1977]: New seat seat0. Sep 9 23:44:21.673144 systemd[1]: Started systemd-logind.service - User Login Management. Sep 9 23:44:21.747698 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Sep 9 23:44:21.757763 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Sep 9 23:44:21.776436 locksmithd[2032]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 9 23:44:21.920255 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Sep 9 23:44:21.925376 dbus-daemon[1964]: [system] Successfully activated service 'org.freedesktop.hostname1' Sep 9 23:44:21.935035 dbus-daemon[1964]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.7' (uid=0 pid=2019 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Sep 9 23:44:21.949694 systemd[1]: Starting polkit.service - Authorization Manager... 
Sep 9 23:44:22.035653 containerd[1995]: time="2025-09-09T23:44:22Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Sep 9 23:44:22.040185 containerd[1995]: time="2025-09-09T23:44:22.037644346Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5 Sep 9 23:44:22.121387 containerd[1995]: time="2025-09-09T23:44:22.121323922Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="14.436µs" Sep 9 23:44:22.121491 containerd[1995]: time="2025-09-09T23:44:22.121380178Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Sep 9 23:44:22.121491 containerd[1995]: time="2025-09-09T23:44:22.121418194Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Sep 9 23:44:22.121695 coreos-metadata[2107]: Sep 09 23:44:22.120 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Sep 9 23:44:22.122102 containerd[1995]: time="2025-09-09T23:44:22.121701694Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Sep 9 23:44:22.122102 containerd[1995]: time="2025-09-09T23:44:22.121758694Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Sep 9 23:44:22.122102 containerd[1995]: time="2025-09-09T23:44:22.121816690Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Sep 9 23:44:22.122102 containerd[1995]: time="2025-09-09T23:44:22.121931794Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Sep 9 23:44:22.122102 containerd[1995]: time="2025-09-09T23:44:22.121955986Z" 
level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Sep 9 23:44:22.123345 coreos-metadata[2107]: Sep 09 23:44:22.123 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Sep 9 23:44:22.124880 coreos-metadata[2107]: Sep 09 23:44:22.124 INFO Fetch successful Sep 9 23:44:22.124880 coreos-metadata[2107]: Sep 09 23:44:22.124 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Sep 9 23:44:22.127787 coreos-metadata[2107]: Sep 09 23:44:22.125 INFO Fetch successful Sep 9 23:44:22.129252 containerd[1995]: time="2025-09-09T23:44:22.129117370Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Sep 9 23:44:22.130247 containerd[1995]: time="2025-09-09T23:44:22.130063474Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Sep 9 23:44:22.130542 containerd[1995]: time="2025-09-09T23:44:22.130119766Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Sep 9 23:44:22.130542 containerd[1995]: time="2025-09-09T23:44:22.130355842Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Sep 9 23:44:22.130756 unknown[2107]: wrote ssh authorized keys file for user: core Sep 9 23:44:22.138099 containerd[1995]: time="2025-09-09T23:44:22.134568886Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Sep 9 23:44:22.138099 containerd[1995]: time="2025-09-09T23:44:22.135197350Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Sep 9 
23:44:22.138099 containerd[1995]: time="2025-09-09T23:44:22.135274606Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Sep 9 23:44:22.138099 containerd[1995]: time="2025-09-09T23:44:22.135302254Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Sep 9 23:44:22.138099 containerd[1995]: time="2025-09-09T23:44:22.135371254Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Sep 9 23:44:22.138099 containerd[1995]: time="2025-09-09T23:44:22.135751114Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Sep 9 23:44:22.138099 containerd[1995]: time="2025-09-09T23:44:22.135874342Z" level=info msg="metadata content store policy set" policy=shared Sep 9 23:44:22.156992 containerd[1995]: time="2025-09-09T23:44:22.156861118Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Sep 9 23:44:22.157096 containerd[1995]: time="2025-09-09T23:44:22.157026034Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Sep 9 23:44:22.157096 containerd[1995]: time="2025-09-09T23:44:22.157063186Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Sep 9 23:44:22.157203 containerd[1995]: time="2025-09-09T23:44:22.157116202Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Sep 9 23:44:22.157203 containerd[1995]: time="2025-09-09T23:44:22.157176202Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Sep 9 23:44:22.157316 containerd[1995]: time="2025-09-09T23:44:22.157209322Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service 
type=io.containerd.service.v1 Sep 9 23:44:22.157316 containerd[1995]: time="2025-09-09T23:44:22.157269406Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Sep 9 23:44:22.157316 containerd[1995]: time="2025-09-09T23:44:22.157301722Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Sep 9 23:44:22.157433 containerd[1995]: time="2025-09-09T23:44:22.157376830Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Sep 9 23:44:22.157477 containerd[1995]: time="2025-09-09T23:44:22.157407958Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Sep 9 23:44:22.157477 containerd[1995]: time="2025-09-09T23:44:22.157461286Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Sep 9 23:44:22.157555 containerd[1995]: time="2025-09-09T23:44:22.157493122Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Sep 9 23:44:22.159177 containerd[1995]: time="2025-09-09T23:44:22.157997662Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Sep 9 23:44:22.159177 containerd[1995]: time="2025-09-09T23:44:22.158059750Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Sep 9 23:44:22.159177 containerd[1995]: time="2025-09-09T23:44:22.158095234Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Sep 9 23:44:22.159177 containerd[1995]: time="2025-09-09T23:44:22.158122426Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Sep 9 23:44:22.159177 containerd[1995]: time="2025-09-09T23:44:22.158165734Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Sep 9 
23:44:22.159177 containerd[1995]: time="2025-09-09T23:44:22.158197126Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Sep 9 23:44:22.159177 containerd[1995]: time="2025-09-09T23:44:22.158224174Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Sep 9 23:44:22.159177 containerd[1995]: time="2025-09-09T23:44:22.158253118Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Sep 9 23:44:22.159177 containerd[1995]: time="2025-09-09T23:44:22.158280970Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Sep 9 23:44:22.159177 containerd[1995]: time="2025-09-09T23:44:22.158308498Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Sep 9 23:44:22.159177 containerd[1995]: time="2025-09-09T23:44:22.158335942Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Sep 9 23:44:22.159177 containerd[1995]: time="2025-09-09T23:44:22.158712442Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Sep 9 23:44:22.159177 containerd[1995]: time="2025-09-09T23:44:22.158746150Z" level=info msg="Start snapshots syncer" Sep 9 23:44:22.159177 containerd[1995]: time="2025-09-09T23:44:22.158790562Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Sep 9 23:44:22.167054 containerd[1995]: time="2025-09-09T23:44:22.166738126Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Sep 9 23:44:22.167054 containerd[1995]: time="2025-09-09T23:44:22.166939618Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Sep 9 23:44:22.169418 containerd[1995]: time="2025-09-09T23:44:22.169320862Z" level=info 
msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
Sep 9 23:44:22.172340 containerd[1995]: time="2025-09-09T23:44:22.172246810Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
Sep 9 23:44:22.181280 containerd[1995]: time="2025-09-09T23:44:22.178255234Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
Sep 9 23:44:22.181280 containerd[1995]: time="2025-09-09T23:44:22.178385950Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
Sep 9 23:44:22.181280 containerd[1995]: time="2025-09-09T23:44:22.178425598Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
Sep 9 23:44:22.181280 containerd[1995]: time="2025-09-09T23:44:22.178482874Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
Sep 9 23:44:22.181280 containerd[1995]: time="2025-09-09T23:44:22.178512886Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
Sep 9 23:44:22.181280 containerd[1995]: time="2025-09-09T23:44:22.178567534Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
Sep 9 23:44:22.181280 containerd[1995]: time="2025-09-09T23:44:22.178658710Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
Sep 9 23:44:22.181280 containerd[1995]: time="2025-09-09T23:44:22.178691290Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
Sep 9 23:44:22.181280 containerd[1995]: time="2025-09-09T23:44:22.178765858Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
Sep 9 23:44:22.181280 containerd[1995]: time="2025-09-09T23:44:22.178866886Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Sep 9 23:44:22.181280 containerd[1995]: time="2025-09-09T23:44:22.179012878Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Sep 9 23:44:22.181280 containerd[1995]: time="2025-09-09T23:44:22.179040358Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Sep 9 23:44:22.181280 containerd[1995]: time="2025-09-09T23:44:22.179347306Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Sep 9 23:44:22.181280 containerd[1995]: time="2025-09-09T23:44:22.179420170Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
Sep 9 23:44:22.181912 containerd[1995]: time="2025-09-09T23:44:22.179453614Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
Sep 9 23:44:22.181912 containerd[1995]: time="2025-09-09T23:44:22.180277630Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
Sep 9 23:44:22.181912 containerd[1995]: time="2025-09-09T23:44:22.181202782Z" level=info msg="runtime interface created"
Sep 9 23:44:22.181912 containerd[1995]: time="2025-09-09T23:44:22.181236130Z" level=info msg="created NRI interface"
Sep 9 23:44:22.184196 containerd[1995]: time="2025-09-09T23:44:22.183201634Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
Sep 9 23:44:22.189194 containerd[1995]: time="2025-09-09T23:44:22.186420274Z" level=info msg="Connect containerd service"
Sep 9 23:44:22.189194 containerd[1995]: time="2025-09-09T23:44:22.186595438Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Sep 9 23:44:22.207892 update-ssh-keys[2151]: Updated "/home/core/.ssh/authorized_keys"
Sep 9 23:44:22.213748 containerd[1995]: time="2025-09-09T23:44:22.202612966Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Sep 9 23:44:22.209178 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Sep 9 23:44:22.222718 systemd[1]: Finished sshkeys.service.
Sep 9 23:44:22.286512 ntpd[1971]: bind(24) AF_INET6 fe80::49e:f3ff:fec9:1027%2#123 flags 0x11 failed: Cannot assign requested address
Sep 9 23:44:22.288029 ntpd[1971]: 9 Sep 23:44:22 ntpd[1971]: bind(24) AF_INET6 fe80::49e:f3ff:fec9:1027%2#123 flags 0x11 failed: Cannot assign requested address
Sep 9 23:44:22.288029 ntpd[1971]: 9 Sep 23:44:22 ntpd[1971]: unable to create socket on eth0 (6) for fe80::49e:f3ff:fec9:1027%2#123
Sep 9 23:44:22.288029 ntpd[1971]: 9 Sep 23:44:22 ntpd[1971]: failed to init interface for address fe80::49e:f3ff:fec9:1027%2
Sep 9 23:44:22.286571 ntpd[1971]: unable to create socket on eth0 (6) for fe80::49e:f3ff:fec9:1027%2#123
Sep 9 23:44:22.286598 ntpd[1971]: failed to init interface for address fe80::49e:f3ff:fec9:1027%2
Sep 9 23:44:22.400498 systemd-networkd[1894]: eth0: Gained IPv6LL
Sep 9 23:44:22.409815 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Sep 9 23:44:22.413401 systemd[1]: Reached target network-online.target - Network is Online.
Sep 9 23:44:22.423752 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent.
Sep 9 23:44:22.436626 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 9 23:44:22.443960 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Sep 9 23:44:22.524204 containerd[1995]: time="2025-09-09T23:44:22.523802520Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Sep 9 23:44:22.524204 containerd[1995]: time="2025-09-09T23:44:22.523920804Z" level=info msg=serving... address=/run/containerd/containerd.sock
Sep 9 23:44:22.524204 containerd[1995]: time="2025-09-09T23:44:22.523976916Z" level=info msg="Start subscribing containerd event"
Sep 9 23:44:22.524204 containerd[1995]: time="2025-09-09T23:44:22.524019840Z" level=info msg="Start recovering state"
Sep 9 23:44:22.542295 containerd[1995]: time="2025-09-09T23:44:22.524142252Z" level=info msg="Start event monitor"
Sep 9 23:44:22.542295 containerd[1995]: time="2025-09-09T23:44:22.541216740Z" level=info msg="Start cni network conf syncer for default"
Sep 9 23:44:22.542295 containerd[1995]: time="2025-09-09T23:44:22.541247436Z" level=info msg="Start streaming server"
Sep 9 23:44:22.542295 containerd[1995]: time="2025-09-09T23:44:22.541269348Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
Sep 9 23:44:22.542295 containerd[1995]: time="2025-09-09T23:44:22.541315728Z" level=info msg="runtime interface starting up..."
Sep 9 23:44:22.542295 containerd[1995]: time="2025-09-09T23:44:22.541331916Z" level=info msg="starting plugins..."
Sep 9 23:44:22.542295 containerd[1995]: time="2025-09-09T23:44:22.541393800Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
Sep 9 23:44:22.545728 systemd[1]: Started containerd.service - containerd container runtime.
Sep 9 23:44:22.552349 containerd[1995]: time="2025-09-09T23:44:22.545609484Z" level=info msg="containerd successfully booted in 0.512914s"
Sep 9 23:44:22.562847 polkitd[2139]: Started polkitd version 126
Sep 9 23:44:22.603307 polkitd[2139]: Loading rules from directory /etc/polkit-1/rules.d
Sep 9 23:44:22.603988 polkitd[2139]: Loading rules from directory /run/polkit-1/rules.d
Sep 9 23:44:22.604086 polkitd[2139]: Error opening rules directory: Error opening directory “/run/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4)
Sep 9 23:44:22.607065 polkitd[2139]: Loading rules from directory /usr/local/share/polkit-1/rules.d
Sep 9 23:44:22.607914 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Sep 9 23:44:22.607901 polkitd[2139]: Error opening rules directory: Error opening directory “/usr/local/share/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4)
Sep 9 23:44:22.608010 polkitd[2139]: Loading rules from directory /usr/share/polkit-1/rules.d
Sep 9 23:44:22.619961 polkitd[2139]: Finished loading, compiling and executing 2 rules
Sep 9 23:44:22.620483 systemd[1]: Started polkit.service - Authorization Manager.
Sep 9 23:44:22.623056 dbus-daemon[1964]: [system] Successfully activated service 'org.freedesktop.PolicyKit1'
Sep 9 23:44:22.625844 polkitd[2139]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Sep 9 23:44:22.685505 systemd-hostnamed[2019]: Hostname set to (transient)
Sep 9 23:44:22.685674 systemd-resolved[1895]: System hostname changed to 'ip-172-31-27-236'.
Sep 9 23:44:22.701079 amazon-ssm-agent[2173]: Initializing new seelog logger
Sep 9 23:44:22.701079 amazon-ssm-agent[2173]: New Seelog Logger Creation Complete
Sep 9 23:44:22.701600 amazon-ssm-agent[2173]: 2025/09/09 23:44:22 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Sep 9 23:44:22.701600 amazon-ssm-agent[2173]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Sep 9 23:44:22.703813 amazon-ssm-agent[2173]: 2025/09/09 23:44:22 processing appconfig overrides
Sep 9 23:44:22.706793 amazon-ssm-agent[2173]: 2025/09/09 23:44:22 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Sep 9 23:44:22.706793 amazon-ssm-agent[2173]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Sep 9 23:44:22.706793 amazon-ssm-agent[2173]: 2025/09/09 23:44:22 processing appconfig overrides
Sep 9 23:44:22.706793 amazon-ssm-agent[2173]: 2025/09/09 23:44:22 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Sep 9 23:44:22.706793 amazon-ssm-agent[2173]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Sep 9 23:44:22.706793 amazon-ssm-agent[2173]: 2025/09/09 23:44:22 processing appconfig overrides
Sep 9 23:44:22.709173 amazon-ssm-agent[2173]: 2025-09-09 23:44:22.7048 INFO Proxy environment variables:
Sep 9 23:44:22.715033 amazon-ssm-agent[2173]: 2025/09/09 23:44:22 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Sep 9 23:44:22.715208 amazon-ssm-agent[2173]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Sep 9 23:44:22.720201 amazon-ssm-agent[2173]: 2025/09/09 23:44:22 processing appconfig overrides
Sep 9 23:44:22.810183 amazon-ssm-agent[2173]: 2025-09-09 23:44:22.7048 INFO https_proxy:
Sep 9 23:44:22.913172 amazon-ssm-agent[2173]: 2025-09-09 23:44:22.7048 INFO http_proxy:
Sep 9 23:44:23.011362 amazon-ssm-agent[2173]: 2025-09-09 23:44:22.7049 INFO no_proxy:
Sep 9 23:44:23.110960 amazon-ssm-agent[2173]: 2025-09-09 23:44:22.7051 INFO Checking if agent identity type OnPrem can be assumed
Sep 9 23:44:23.190667 tar[1996]: linux-arm64/README.md
Sep 9 23:44:23.213290 amazon-ssm-agent[2173]: 2025-09-09 23:44:22.7052 INFO Checking if agent identity type EC2 can be assumed
Sep 9 23:44:23.231901 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Sep 9 23:44:23.313169 amazon-ssm-agent[2173]: 2025-09-09 23:44:22.8572 INFO Agent will take identity from EC2
Sep 9 23:44:23.397209 amazon-ssm-agent[2173]: 2025/09/09 23:44:23 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Sep 9 23:44:23.397209 amazon-ssm-agent[2173]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Sep 9 23:44:23.397372 amazon-ssm-agent[2173]: 2025/09/09 23:44:23 processing appconfig overrides
Sep 9 23:44:23.411581 amazon-ssm-agent[2173]: 2025-09-09 23:44:22.8589 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.3.0.0
Sep 9 23:44:23.441700 amazon-ssm-agent[2173]: 2025-09-09 23:44:22.8589 INFO [amazon-ssm-agent] OS: linux, Arch: arm64
Sep 9 23:44:23.441700 amazon-ssm-agent[2173]: 2025-09-09 23:44:22.8589 INFO [amazon-ssm-agent] Starting Core Agent
Sep 9 23:44:23.441700 amazon-ssm-agent[2173]: 2025-09-09 23:44:22.8589 INFO [amazon-ssm-agent] Registrar detected. Attempting registration
Sep 9 23:44:23.441700 amazon-ssm-agent[2173]: 2025-09-09 23:44:22.8589 INFO [Registrar] Starting registrar module
Sep 9 23:44:23.441700 amazon-ssm-agent[2173]: 2025-09-09 23:44:22.8649 INFO [EC2Identity] Checking disk for registration info
Sep 9 23:44:23.441700 amazon-ssm-agent[2173]: 2025-09-09 23:44:22.8650 INFO [EC2Identity] No registration info found for ec2 instance, attempting registration
Sep 9 23:44:23.442275 amazon-ssm-agent[2173]: 2025-09-09 23:44:22.8650 INFO [EC2Identity] Generating registration keypair
Sep 9 23:44:23.442275 amazon-ssm-agent[2173]: 2025-09-09 23:44:23.3376 INFO [EC2Identity] Checking write access before registering
Sep 9 23:44:23.442275 amazon-ssm-agent[2173]: 2025-09-09 23:44:23.3404 INFO [EC2Identity] Registering EC2 instance with Systems Manager
Sep 9 23:44:23.442275 amazon-ssm-agent[2173]: 2025-09-09 23:44:23.3968 INFO [EC2Identity] EC2 registration was successful.
Sep 9 23:44:23.442275 amazon-ssm-agent[2173]: 2025-09-09 23:44:23.3968 INFO [amazon-ssm-agent] Registration attempted. Resuming core agent startup.
Sep 9 23:44:23.442275 amazon-ssm-agent[2173]: 2025-09-09 23:44:23.3969 INFO [CredentialRefresher] credentialRefresher has started
Sep 9 23:44:23.442275 amazon-ssm-agent[2173]: 2025-09-09 23:44:23.3969 INFO [CredentialRefresher] Starting credentials refresher loop
Sep 9 23:44:23.442275 amazon-ssm-agent[2173]: 2025-09-09 23:44:23.4412 INFO EC2RoleProvider Successfully connected with instance profile role credentials
Sep 9 23:44:23.442275 amazon-ssm-agent[2173]: 2025-09-09 23:44:23.4415 INFO [CredentialRefresher] Credentials ready
Sep 9 23:44:23.509445 amazon-ssm-agent[2173]: 2025-09-09 23:44:23.4418 INFO [CredentialRefresher] Next credential rotation will be in 29.9999901637 minutes
Sep 9 23:44:23.958488 sshd_keygen[2010]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Sep 9 23:44:23.999337 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Sep 9 23:44:24.005142 systemd[1]: Starting issuegen.service - Generate /run/issue...
Sep 9 23:44:24.009304 systemd[1]: Started sshd@0-172.31.27.236:22-139.178.89.65:55032.service - OpenSSH per-connection server daemon (139.178.89.65:55032).
Sep 9 23:44:24.061413 systemd[1]: issuegen.service: Deactivated successfully.
Sep 9 23:44:24.061855 systemd[1]: Finished issuegen.service - Generate /run/issue.
Sep 9 23:44:24.068581 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Sep 9 23:44:24.096665 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Sep 9 23:44:24.106660 systemd[1]: Started getty@tty1.service - Getty on tty1.
Sep 9 23:44:24.112667 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Sep 9 23:44:24.115608 systemd[1]: Reached target getty.target - Login Prompts.
Sep 9 23:44:24.254505 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 9 23:44:24.261059 systemd[1]: Reached target multi-user.target - Multi-User System.
Sep 9 23:44:24.264242 systemd[1]: Startup finished in 3.741s (kernel) + 9.455s (initrd) + 9.180s (userspace) = 22.378s.
Sep 9 23:44:24.271048 (kubelet)[2232]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 9 23:44:24.275861 sshd[2217]: Accepted publickey for core from 139.178.89.65 port 55032 ssh2: RSA SHA256:qHlHyIWOCFGyLN0DNo6M0sQy+OrgAlHw4s82lYsZXi8
Sep 9 23:44:24.281284 sshd-session[2217]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 23:44:24.311722 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Sep 9 23:44:24.314706 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Sep 9 23:44:24.345798 systemd-logind[1977]: New session 1 of user core.
Sep 9 23:44:24.366359 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Sep 9 23:44:24.373626 systemd[1]: Starting user@500.service - User Manager for UID 500...
Sep 9 23:44:24.394008 (systemd)[2239]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Sep 9 23:44:24.402291 systemd-logind[1977]: New session c1 of user core.
Sep 9 23:44:24.474096 amazon-ssm-agent[2173]: 2025-09-09 23:44:24.4739 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process
Sep 9 23:44:24.577355 amazon-ssm-agent[2173]: 2025-09-09 23:44:24.4786 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2250) started
Sep 9 23:44:24.678176 amazon-ssm-agent[2173]: 2025-09-09 23:44:24.4788 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds
Sep 9 23:44:24.742709 systemd[2239]: Queued start job for default target default.target.
Sep 9 23:44:24.750785 systemd[2239]: Created slice app.slice - User Application Slice.
Sep 9 23:44:24.751245 systemd[2239]: Reached target paths.target - Paths.
Sep 9 23:44:24.751337 systemd[2239]: Reached target timers.target - Timers.
Sep 9 23:44:24.754782 systemd[2239]: Starting dbus.socket - D-Bus User Message Bus Socket...
Sep 9 23:44:24.850880 systemd[2239]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Sep 9 23:44:24.851136 systemd[2239]: Reached target sockets.target - Sockets.
Sep 9 23:44:24.851259 systemd[2239]: Reached target basic.target - Basic System.
Sep 9 23:44:24.851344 systemd[2239]: Reached target default.target - Main User Target.
Sep 9 23:44:24.851404 systemd[2239]: Startup finished in 436ms.
Sep 9 23:44:24.851615 systemd[1]: Started user@500.service - User Manager for UID 500.
Sep 9 23:44:24.866599 systemd[1]: Started session-1.scope - Session 1 of User core.
Sep 9 23:44:25.030603 systemd[1]: Started sshd@1-172.31.27.236:22-139.178.89.65:55036.service - OpenSSH per-connection server daemon (139.178.89.65:55036).
Sep 9 23:44:25.235440 sshd[2268]: Accepted publickey for core from 139.178.89.65 port 55036 ssh2: RSA SHA256:qHlHyIWOCFGyLN0DNo6M0sQy+OrgAlHw4s82lYsZXi8
Sep 9 23:44:25.238747 sshd-session[2268]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 23:44:25.249263 systemd-logind[1977]: New session 2 of user core.
Sep 9 23:44:25.256443 systemd[1]: Started session-2.scope - Session 2 of User core.
Sep 9 23:44:25.285420 ntpd[1971]: Listen normally on 7 eth0 [fe80::49e:f3ff:fec9:1027%2]:123
Sep 9 23:44:25.286277 ntpd[1971]: 9 Sep 23:44:25 ntpd[1971]: Listen normally on 7 eth0 [fe80::49e:f3ff:fec9:1027%2]:123
Sep 9 23:44:25.385202 sshd[2272]: Connection closed by 139.178.89.65 port 55036
Sep 9 23:44:25.385908 sshd-session[2268]: pam_unix(sshd:session): session closed for user core
Sep 9 23:44:25.394127 systemd[1]: sshd@1-172.31.27.236:22-139.178.89.65:55036.service: Deactivated successfully.
Sep 9 23:44:25.399496 systemd[1]: session-2.scope: Deactivated successfully.
Sep 9 23:44:25.403602 systemd-logind[1977]: Session 2 logged out. Waiting for processes to exit.
Sep 9 23:44:25.411063 kubelet[2232]: E0909 23:44:25.410972 2232 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 9 23:44:25.420851 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 9 23:44:25.421172 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 9 23:44:25.422939 systemd[1]: kubelet.service: Consumed 1.438s CPU time, 255.2M memory peak.
Sep 9 23:44:25.429660 systemd[1]: Started sshd@2-172.31.27.236:22-139.178.89.65:55046.service - OpenSSH per-connection server daemon (139.178.89.65:55046).
Sep 9 23:44:25.431407 systemd-logind[1977]: Removed session 2.
Sep 9 23:44:25.615943 sshd[2279]: Accepted publickey for core from 139.178.89.65 port 55046 ssh2: RSA SHA256:qHlHyIWOCFGyLN0DNo6M0sQy+OrgAlHw4s82lYsZXi8
Sep 9 23:44:25.618320 sshd-session[2279]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 23:44:25.626291 systemd-logind[1977]: New session 3 of user core.
Sep 9 23:44:25.647387 systemd[1]: Started session-3.scope - Session 3 of User core.
Sep 9 23:44:25.765779 sshd[2282]: Connection closed by 139.178.89.65 port 55046
Sep 9 23:44:25.764751 sshd-session[2279]: pam_unix(sshd:session): session closed for user core
Sep 9 23:44:25.771355 systemd[1]: sshd@2-172.31.27.236:22-139.178.89.65:55046.service: Deactivated successfully.
Sep 9 23:44:25.774332 systemd[1]: session-3.scope: Deactivated successfully.
Sep 9 23:44:25.775880 systemd-logind[1977]: Session 3 logged out. Waiting for processes to exit.
Sep 9 23:44:25.778800 systemd-logind[1977]: Removed session 3.
Sep 9 23:44:25.801963 systemd[1]: Started sshd@3-172.31.27.236:22-139.178.89.65:55054.service - OpenSSH per-connection server daemon (139.178.89.65:55054).
Sep 9 23:44:26.004593 sshd[2288]: Accepted publickey for core from 139.178.89.65 port 55054 ssh2: RSA SHA256:qHlHyIWOCFGyLN0DNo6M0sQy+OrgAlHw4s82lYsZXi8
Sep 9 23:44:26.007099 sshd-session[2288]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 23:44:26.014959 systemd-logind[1977]: New session 4 of user core.
Sep 9 23:44:26.023417 systemd[1]: Started session-4.scope - Session 4 of User core.
Sep 9 23:44:26.147538 sshd[2291]: Connection closed by 139.178.89.65 port 55054
Sep 9 23:44:26.148380 sshd-session[2288]: pam_unix(sshd:session): session closed for user core
Sep 9 23:44:26.154876 systemd[1]: sshd@3-172.31.27.236:22-139.178.89.65:55054.service: Deactivated successfully.
Sep 9 23:44:26.157851 systemd[1]: session-4.scope: Deactivated successfully.
Sep 9 23:44:26.159578 systemd-logind[1977]: Session 4 logged out. Waiting for processes to exit.
Sep 9 23:44:26.161949 systemd-logind[1977]: Removed session 4.
Sep 9 23:44:26.186580 systemd[1]: Started sshd@4-172.31.27.236:22-139.178.89.65:55062.service - OpenSSH per-connection server daemon (139.178.89.65:55062).
Sep 9 23:44:26.380216 sshd[2297]: Accepted publickey for core from 139.178.89.65 port 55062 ssh2: RSA SHA256:qHlHyIWOCFGyLN0DNo6M0sQy+OrgAlHw4s82lYsZXi8
Sep 9 23:44:26.383093 sshd-session[2297]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 23:44:26.390506 systemd-logind[1977]: New session 5 of user core.
Sep 9 23:44:26.406423 systemd[1]: Started session-5.scope - Session 5 of User core.
Sep 9 23:44:26.527394 sudo[2301]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Sep 9 23:44:26.528033 sudo[2301]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 9 23:44:26.545416 sudo[2301]: pam_unix(sudo:session): session closed for user root
Sep 9 23:44:26.570134 sshd[2300]: Connection closed by 139.178.89.65 port 55062
Sep 9 23:44:26.568875 sshd-session[2297]: pam_unix(sshd:session): session closed for user core
Sep 9 23:44:26.575534 systemd[1]: sshd@4-172.31.27.236:22-139.178.89.65:55062.service: Deactivated successfully.
Sep 9 23:44:26.578341 systemd[1]: session-5.scope: Deactivated successfully.
Sep 9 23:44:26.579848 systemd-logind[1977]: Session 5 logged out. Waiting for processes to exit.
Sep 9 23:44:26.584285 systemd-logind[1977]: Removed session 5.
Sep 9 23:44:26.604423 systemd[1]: Started sshd@5-172.31.27.236:22-139.178.89.65:55066.service - OpenSSH per-connection server daemon (139.178.89.65:55066).
Sep 9 23:44:26.806090 sshd[2307]: Accepted publickey for core from 139.178.89.65 port 55066 ssh2: RSA SHA256:qHlHyIWOCFGyLN0DNo6M0sQy+OrgAlHw4s82lYsZXi8
Sep 9 23:44:26.808559 sshd-session[2307]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 23:44:26.816114 systemd-logind[1977]: New session 6 of user core.
Sep 9 23:44:26.824364 systemd[1]: Started session-6.scope - Session 6 of User core.
Sep 9 23:44:26.930125 sudo[2312]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Sep 9 23:44:26.931301 sudo[2312]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 9 23:44:26.941305 sudo[2312]: pam_unix(sudo:session): session closed for user root
Sep 9 23:44:26.950617 sudo[2311]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Sep 9 23:44:26.951663 sudo[2311]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 9 23:44:26.969719 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Sep 9 23:44:27.029601 augenrules[2334]: No rules
Sep 9 23:44:27.031828 systemd[1]: audit-rules.service: Deactivated successfully.
Sep 9 23:44:27.032429 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Sep 9 23:44:27.034953 sudo[2311]: pam_unix(sudo:session): session closed for user root
Sep 9 23:44:27.059096 sshd[2310]: Connection closed by 139.178.89.65 port 55066
Sep 9 23:44:27.059819 sshd-session[2307]: pam_unix(sshd:session): session closed for user core
Sep 9 23:44:27.066954 systemd[1]: sshd@5-172.31.27.236:22-139.178.89.65:55066.service: Deactivated successfully.
Sep 9 23:44:27.070319 systemd[1]: session-6.scope: Deactivated successfully.
Sep 9 23:44:27.073365 systemd-logind[1977]: Session 6 logged out. Waiting for processes to exit.
Sep 9 23:44:27.076535 systemd-logind[1977]: Removed session 6.
Sep 9 23:44:27.098009 systemd[1]: Started sshd@6-172.31.27.236:22-139.178.89.65:55078.service - OpenSSH per-connection server daemon (139.178.89.65:55078).
Sep 9 23:44:27.296266 sshd[2343]: Accepted publickey for core from 139.178.89.65 port 55078 ssh2: RSA SHA256:qHlHyIWOCFGyLN0DNo6M0sQy+OrgAlHw4s82lYsZXi8
Sep 9 23:44:27.298580 sshd-session[2343]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 23:44:27.306533 systemd-logind[1977]: New session 7 of user core.
Sep 9 23:44:27.318409 systemd[1]: Started session-7.scope - Session 7 of User core.
Sep 9 23:44:27.420489 sudo[2347]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Sep 9 23:44:27.421094 sudo[2347]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 9 23:44:27.934786 systemd[1]: Starting docker.service - Docker Application Container Engine...
Sep 9 23:44:27.948649 (dockerd)[2366]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Sep 9 23:44:27.816998 systemd-resolved[1895]: Clock change detected. Flushing caches.
Sep 9 23:44:27.829683 systemd-journald[1529]: Time jumped backwards, rotating.
Sep 9 23:44:27.865481 dockerd[2366]: time="2025-09-09T23:44:27.865385556Z" level=info msg="Starting up"
Sep 9 23:44:27.868228 dockerd[2366]: time="2025-09-09T23:44:27.868186380Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
Sep 9 23:44:27.888672 dockerd[2366]: time="2025-09-09T23:44:27.888617052Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s
Sep 9 23:44:28.049790 dockerd[2366]: time="2025-09-09T23:44:28.049741077Z" level=info msg="Loading containers: start."
Sep 9 23:44:28.064931 kernel: Initializing XFRM netlink socket
Sep 9 23:44:28.385767 (udev-worker)[2388]: Network interface NamePolicy= disabled on kernel command line.
Sep 9 23:44:28.463168 systemd-networkd[1894]: docker0: Link UP
Sep 9 23:44:28.468006 dockerd[2366]: time="2025-09-09T23:44:28.467813735Z" level=info msg="Loading containers: done."
Sep 9 23:44:28.493372 dockerd[2366]: time="2025-09-09T23:44:28.493296515Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Sep 9 23:44:28.493522 dockerd[2366]: time="2025-09-09T23:44:28.493428359Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4
Sep 9 23:44:28.493605 dockerd[2366]: time="2025-09-09T23:44:28.493567559Z" level=info msg="Initializing buildkit"
Sep 9 23:44:28.532275 dockerd[2366]: time="2025-09-09T23:44:28.532216775Z" level=info msg="Completed buildkit initialization"
Sep 9 23:44:28.549773 dockerd[2366]: time="2025-09-09T23:44:28.549683507Z" level=info msg="Daemon has completed initialization"
Sep 9 23:44:28.550191 dockerd[2366]: time="2025-09-09T23:44:28.550004735Z" level=info msg="API listen on /run/docker.sock"
Sep 9 23:44:28.550749 systemd[1]: Started docker.service - Docker Application Container Engine.
Sep 9 23:44:29.693355 containerd[1995]: time="2025-09-09T23:44:29.693187177Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.8\""
Sep 9 23:44:30.264944 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1877471484.mount: Deactivated successfully.
Sep 9 23:44:31.627505 containerd[1995]: time="2025-09-09T23:44:31.626283230Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 23:44:31.628473 containerd[1995]: time="2025-09-09T23:44:31.628411082Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.8: active requests=0, bytes read=26328357"
Sep 9 23:44:31.628866 containerd[1995]: time="2025-09-09T23:44:31.628829666Z" level=info msg="ImageCreate event name:\"sha256:61d628eec7e2101b908b4476f1e8e620490a9e8754184860c8eed25183acaa8a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 23:44:31.634730 containerd[1995]: time="2025-09-09T23:44:31.634677002Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:6e1a2f9b24f69ee77d0c0edaf32b31fdbb5e1a613f4476272197e6e1e239050b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 23:44:31.638440 containerd[1995]: time="2025-09-09T23:44:31.638395838Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.8\" with image id \"sha256:61d628eec7e2101b908b4476f1e8e620490a9e8754184860c8eed25183acaa8a\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.8\", repo digest \"registry.k8s.io/kube-apiserver@sha256:6e1a2f9b24f69ee77d0c0edaf32b31fdbb5e1a613f4476272197e6e1e239050b\", size \"26325157\" in 1.945150761s"
Sep 9 23:44:31.638628 containerd[1995]: time="2025-09-09T23:44:31.638599130Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.8\" returns image reference \"sha256:61d628eec7e2101b908b4476f1e8e620490a9e8754184860c8eed25183acaa8a\""
Sep 9 23:44:31.639535 containerd[1995]: time="2025-09-09T23:44:31.639490262Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.8\""
Sep 9 23:44:33.028922 containerd[1995]: time="2025-09-09T23:44:33.028690885Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 23:44:33.030918 containerd[1995]: time="2025-09-09T23:44:33.030839209Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.8: active requests=0, bytes read=22528552"
Sep 9 23:44:33.031926 containerd[1995]: time="2025-09-09T23:44:33.031349617Z" level=info msg="ImageCreate event name:\"sha256:f17de36e40fc7cc372be0021b2c58ad61f05d3ebe4d430551bc5e4cd9ed2a061\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 23:44:33.041906 containerd[1995]: time="2025-09-09T23:44:33.041806609Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:8788ccd28ceed9e2e5f8fc31375ef5771df8ea6e518b362c9a06f3cc709cd6c7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 23:44:33.046397 containerd[1995]: time="2025-09-09T23:44:33.046321981Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.8\" with image id \"sha256:f17de36e40fc7cc372be0021b2c58ad61f05d3ebe4d430551bc5e4cd9ed2a061\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.8\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:8788ccd28ceed9e2e5f8fc31375ef5771df8ea6e518b362c9a06f3cc709cd6c7\", size \"24065666\" in 1.406774371s"
Sep 9 23:44:33.046397 containerd[1995]: time="2025-09-09T23:44:33.046391329Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.8\" returns image reference \"sha256:f17de36e40fc7cc372be0021b2c58ad61f05d3ebe4d430551bc5e4cd9ed2a061\""
Sep 9 23:44:33.047311 containerd[1995]: time="2025-09-09T23:44:33.047257561Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.8\""
Sep 9 23:44:34.230869 containerd[1995]: time="2025-09-09T23:44:34.230802975Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 23:44:34.232482 containerd[1995]: time="2025-09-09T23:44:34.232429419Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.8: active requests=0, bytes read=17483527"
Sep 9 23:44:34.233932 containerd[1995]: time="2025-09-09T23:44:34.233274579Z" level=info msg="ImageCreate event name:\"sha256:fe86d26bce3ccd5f0c4057c205b63fde1c8c752778025aea4605ffc3b0f80211\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 23:44:34.237767 containerd[1995]: time="2025-09-09T23:44:34.237717939Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:43c58bcbd1c7812dd19f8bfa5ae11093ebefd28699453ce86fc710869e155cd4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 23:44:34.240072 containerd[1995]: time="2025-09-09T23:44:34.239674935Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.8\" with image id \"sha256:fe86d26bce3ccd5f0c4057c205b63fde1c8c752778025aea4605ffc3b0f80211\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.8\", repo digest \"registry.k8s.io/kube-scheduler@sha256:43c58bcbd1c7812dd19f8bfa5ae11093ebefd28699453ce86fc710869e155cd4\", size \"19020659\" in 1.192359834s"
Sep 9 23:44:34.240072 containerd[1995]: time="2025-09-09T23:44:34.239724267Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.8\" returns image reference \"sha256:fe86d26bce3ccd5f0c4057c205b63fde1c8c752778025aea4605ffc3b0f80211\""
Sep 9 23:44:34.240669 containerd[1995]: time="2025-09-09T23:44:34.240589503Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.8\""
Sep 9 23:44:35.061980 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Sep 9 23:44:35.066732 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 9 23:44:35.562820 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 9 23:44:35.580428 (kubelet)[2652]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 9 23:44:35.585706 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount70367252.mount: Deactivated successfully.
Sep 9 23:44:35.689064 kubelet[2652]: E0909 23:44:35.688984 2652 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 9 23:44:35.700197 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 9 23:44:35.700508 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 9 23:44:35.702011 systemd[1]: kubelet.service: Consumed 331ms CPU time, 106.3M memory peak.
Sep 9 23:44:36.184404 containerd[1995]: time="2025-09-09T23:44:36.184339001Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 23:44:36.185715 containerd[1995]: time="2025-09-09T23:44:36.185524949Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.8: active requests=0, bytes read=27376724"
Sep 9 23:44:36.186678 containerd[1995]: time="2025-09-09T23:44:36.186622205Z" level=info msg="ImageCreate event name:\"sha256:2cf30e39f99f8f4ee1a736a4f3175cc2d8d3f58936d8fa83ec5523658fdc7b8b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 23:44:36.189939 containerd[1995]: time="2025-09-09T23:44:36.189861089Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:adc1335b480ddd833aac3b0bd20f68ff0f3c3cf7a0bd337933b006d9f5cec40a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 23:44:36.191501 containerd[1995]: time="2025-09-09T23:44:36.191171465Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.8\" with image id \"sha256:2cf30e39f99f8f4ee1a736a4f3175cc2d8d3f58936d8fa83ec5523658fdc7b8b\", repo tag \"registry.k8s.io/kube-proxy:v1.32.8\", repo digest \"registry.k8s.io/kube-proxy@sha256:adc1335b480ddd833aac3b0bd20f68ff0f3c3cf7a0bd337933b006d9f5cec40a\", size \"27375743\" in 1.95050965s"
Sep 9 23:44:36.191501 containerd[1995]: time="2025-09-09T23:44:36.191227409Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.8\" returns image reference \"sha256:2cf30e39f99f8f4ee1a736a4f3175cc2d8d3f58936d8fa83ec5523658fdc7b8b\""
Sep 9 23:44:36.191954 containerd[1995]: time="2025-09-09T23:44:36.191905049Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\""
Sep 9 23:44:36.652626 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount396221254.mount: Deactivated successfully.
Sep 9 23:44:37.740979 containerd[1995]: time="2025-09-09T23:44:37.740276541Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 23:44:37.742197 containerd[1995]: time="2025-09-09T23:44:37.742142037Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951622"
Sep 9 23:44:37.743920 containerd[1995]: time="2025-09-09T23:44:37.743203029Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 23:44:37.750815 containerd[1995]: time="2025-09-09T23:44:37.750758733Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.558795664s"
Sep 9 23:44:37.751006 containerd[1995]: time="2025-09-09T23:44:37.750977085Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\""
Sep 9 23:44:37.751208 containerd[1995]: time="2025-09-09T23:44:37.750955797Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 23:44:37.751807 containerd[1995]: time="2025-09-09T23:44:37.751749381Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Sep 9 23:44:38.194493 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1712297628.mount: Deactivated successfully.
Sep 9 23:44:38.202930 containerd[1995]: time="2025-09-09T23:44:38.202726099Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 9 23:44:38.205180 containerd[1995]: time="2025-09-09T23:44:38.205138003Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268703"
Sep 9 23:44:38.206062 containerd[1995]: time="2025-09-09T23:44:38.205993195Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 9 23:44:38.210829 containerd[1995]: time="2025-09-09T23:44:38.210746467Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 9 23:44:38.213343 containerd[1995]: time="2025-09-09T23:44:38.213232975Z" level=info msg="Pulled image
\"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 461.270354ms" Sep 9 23:44:38.213343 containerd[1995]: time="2025-09-09T23:44:38.213289471Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Sep 9 23:44:38.214226 containerd[1995]: time="2025-09-09T23:44:38.214184119Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Sep 9 23:44:38.739954 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4226846905.mount: Deactivated successfully. Sep 9 23:44:40.771229 containerd[1995]: time="2025-09-09T23:44:40.771173076Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 23:44:40.773702 containerd[1995]: time="2025-09-09T23:44:40.773645940Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=67943165" Sep 9 23:44:40.774270 containerd[1995]: time="2025-09-09T23:44:40.774202560Z" level=info msg="ImageCreate event name:\"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 23:44:40.780777 containerd[1995]: time="2025-09-09T23:44:40.780690024Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 23:44:40.783334 containerd[1995]: time="2025-09-09T23:44:40.782970972Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\", repo tag 
\"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"67941650\" in 2.568735277s" Sep 9 23:44:40.783334 containerd[1995]: time="2025-09-09T23:44:40.783044484Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\"" Sep 9 23:44:45.812038 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Sep 9 23:44:45.816187 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 9 23:44:46.155143 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 23:44:46.168371 (kubelet)[2801]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 9 23:44:46.259506 kubelet[2801]: E0909 23:44:46.259440 2801 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 9 23:44:46.264541 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 9 23:44:46.265065 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 9 23:44:46.266022 systemd[1]: kubelet.service: Consumed 295ms CPU time, 108M memory peak. Sep 9 23:44:46.416492 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 23:44:46.416860 systemd[1]: kubelet.service: Consumed 295ms CPU time, 108M memory peak. Sep 9 23:44:46.420806 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 9 23:44:46.472063 systemd[1]: Reload requested from client PID 2815 ('systemctl') (unit session-7.scope)... Sep 9 23:44:46.472100 systemd[1]: Reloading... 
Sep 9 23:44:46.720939 zram_generator::config[2863]: No configuration found.
Sep 9 23:44:47.167315 systemd[1]: Reloading finished in 694 ms.
Sep 9 23:44:47.266697 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Sep 9 23:44:47.266874 systemd[1]: kubelet.service: Failed with result 'signal'.
Sep 9 23:44:47.268341 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 9 23:44:47.268425 systemd[1]: kubelet.service: Consumed 215ms CPU time, 95M memory peak.
Sep 9 23:44:47.273845 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 9 23:44:47.595387 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 9 23:44:47.609709 (kubelet)[2924]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Sep 9 23:44:47.685266 kubelet[2924]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 9 23:44:47.685266 kubelet[2924]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Sep 9 23:44:47.685266 kubelet[2924]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 9 23:44:47.685771 kubelet[2924]: I0909 23:44:47.685404 2924 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Sep 9 23:44:49.239634 kubelet[2924]: I0909 23:44:49.239574 2924 server.go:520] "Kubelet version" kubeletVersion="v1.32.4"
Sep 9 23:44:49.239634 kubelet[2924]: I0909 23:44:49.239628 2924 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Sep 9 23:44:49.240445 kubelet[2924]: I0909 23:44:49.240148 2924 server.go:954] "Client rotation is on, will bootstrap in background"
Sep 9 23:44:49.294944 kubelet[2924]: E0909 23:44:49.294841 2924 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.31.27.236:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.27.236:6443: connect: connection refused" logger="UnhandledError"
Sep 9 23:44:49.301970 kubelet[2924]: I0909 23:44:49.301468 2924 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Sep 9 23:44:49.314200 kubelet[2924]: I0909 23:44:49.313337 2924 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Sep 9 23:44:49.326641 kubelet[2924]: I0909 23:44:49.326577 2924 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Sep 9 23:44:49.327251 kubelet[2924]: I0909 23:44:49.327181 2924 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Sep 9 23:44:49.327551 kubelet[2924]: I0909 23:44:49.327240 2924 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-27-236","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Sep 9 23:44:49.327737 kubelet[2924]: I0909 23:44:49.327691 2924 topology_manager.go:138] "Creating topology manager with none policy"
Sep 9 23:44:49.327737 kubelet[2924]: I0909 23:44:49.327713 2924 container_manager_linux.go:304] "Creating device plugin manager"
Sep 9 23:44:49.328122 kubelet[2924]: I0909 23:44:49.328079 2924 state_mem.go:36] "Initialized new in-memory state store"
Sep 9 23:44:49.335166 kubelet[2924]: I0909 23:44:49.335107 2924 kubelet.go:446] "Attempting to sync node with API server"
Sep 9 23:44:49.335326 kubelet[2924]: I0909 23:44:49.335287 2924 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
Sep 9 23:44:49.335392 kubelet[2924]: I0909 23:44:49.335351 2924 kubelet.go:352] "Adding apiserver pod source"
Sep 9 23:44:49.335392 kubelet[2924]: I0909 23:44:49.335380 2924 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Sep 9 23:44:49.338868 kubelet[2924]: W0909 23:44:49.338769 2924 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.27.236:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-27-236&limit=500&resourceVersion=0": dial tcp 172.31.27.236:6443: connect: connection refused
Sep 9 23:44:49.339056 kubelet[2924]: E0909 23:44:49.338872 2924 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.27.236:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-27-236&limit=500&resourceVersion=0\": dial tcp 172.31.27.236:6443: connect: connection refused" logger="UnhandledError"
Sep 9 23:44:49.343014 kubelet[2924]: W0909 23:44:49.342928 2924 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.27.236:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.27.236:6443: connect: connection refused
Sep 9 23:44:49.343914 kubelet[2924]: E0909 23:44:49.343340 2924 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.27.236:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.27.236:6443: connect: connection refused" logger="UnhandledError"
Sep 9 23:44:49.343914 kubelet[2924]: I0909 23:44:49.343486 2924 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1"
Sep 9 23:44:49.344819 kubelet[2924]: I0909 23:44:49.344791 2924 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Sep 9 23:44:49.345223 kubelet[2924]: W0909 23:44:49.345204 2924 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Sep 9 23:44:49.348512 kubelet[2924]: I0909 23:44:49.348027 2924 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Sep 9 23:44:49.348512 kubelet[2924]: I0909 23:44:49.348083 2924 server.go:1287] "Started kubelet"
Sep 9 23:44:49.354861 kubelet[2924]: E0909 23:44:49.354369 2924 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.27.236:6443/api/v1/namespaces/default/events\": dial tcp 172.31.27.236:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-27-236.1863c1e9c305111e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-27-236,UID:ip-172-31-27-236,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-27-236,},FirstTimestamp:2025-09-09 23:44:49.348055326 +0000 UTC m=+1.732309737,LastTimestamp:2025-09-09 23:44:49.348055326 +0000 UTC m=+1.732309737,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-27-236,}"
Sep 9 23:44:49.355097 kubelet[2924]: I0909 23:44:49.354952 2924 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Sep 9 23:44:49.357761 kubelet[2924]: I0909 23:44:49.355477 2924 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Sep 9 23:44:49.357761 kubelet[2924]: I0909 23:44:49.355595 2924 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
Sep 9 23:44:49.357761 kubelet[2924]: I0909 23:44:49.357072 2924 server.go:479] "Adding debug handlers to kubelet server"
Sep 9 23:44:49.358965 kubelet[2924]: I0909 23:44:49.358930 2924 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Sep 9 23:44:49.361848 kubelet[2924]: I0909 23:44:49.361795 2924 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Sep 9 23:44:49.362522 kubelet[2924]: I0909 23:44:49.362468 2924 volume_manager.go:297] "Starting Kubelet Volume Manager"
Sep 9 23:44:49.363561 kubelet[2924]: E0909 23:44:49.363323 2924 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-27-236\" not found"
Sep 9 23:44:49.366672 kubelet[2924]: I0909 23:44:49.366634 2924 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Sep 9 23:44:49.366808 kubelet[2924]: I0909 23:44:49.366762 2924 reconciler.go:26] "Reconciler: start to sync state"
Sep 9 23:44:49.369545 kubelet[2924]: W0909 23:44:49.369139 2924 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.27.236:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.27.236:6443: connect: connection refused
Sep 9 23:44:49.369545 kubelet[2924]: E0909 23:44:49.369233 2924 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.27.236:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.27.236:6443: connect: connection refused" logger="UnhandledError"
Sep 9 23:44:49.372721 kubelet[2924]: E0909 23:44:49.372424 2924 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.27.236:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-27-236?timeout=10s\": dial tcp 172.31.27.236:6443: connect: connection refused" interval="200ms"
Sep 9 23:44:49.378663 kubelet[2924]: I0909 23:44:49.378622 2924 factory.go:221] Registration of the containerd container factory successfully
Sep 9 23:44:49.378828 kubelet[2924]: I0909 23:44:49.378798 2924 factory.go:221] Registration of the systemd container factory successfully
Sep 9 23:44:49.379106 kubelet[2924]: I0909 23:44:49.379072 2924 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Sep 9 23:44:49.414403 kubelet[2924]: I0909 23:44:49.414347 2924 cpu_manager.go:221] "Starting CPU manager" policy="none"
Sep 9 23:44:49.414515 kubelet[2924]: I0909 23:44:49.414415 2924 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Sep 9 23:44:49.414515 kubelet[2924]: I0909 23:44:49.414448 2924 state_mem.go:36] "Initialized new in-memory state store"
Sep 9 23:44:49.417460 kubelet[2924]: I0909 23:44:49.417344 2924 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Sep 9 23:44:49.419372 kubelet[2924]: I0909 23:44:49.419318 2924 policy_none.go:49] "None policy: Start"
Sep 9 23:44:49.419988 kubelet[2924]: I0909 23:44:49.419524 2924 memory_manager.go:186] "Starting memorymanager" policy="None"
Sep 9 23:44:49.419988 kubelet[2924]: I0909 23:44:49.419559 2924 state_mem.go:35] "Initializing new in-memory state store"
Sep 9 23:44:49.420236 kubelet[2924]: I0909 23:44:49.420205 2924 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Sep 9 23:44:49.420344 kubelet[2924]: I0909 23:44:49.420325 2924 status_manager.go:227] "Starting to sync pod status with apiserver"
Sep 9 23:44:49.420454 kubelet[2924]: I0909 23:44:49.420434 2924 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Sep 9 23:44:49.420548 kubelet[2924]: I0909 23:44:49.420531 2924 kubelet.go:2382] "Starting kubelet main sync loop"
Sep 9 23:44:49.420785 kubelet[2924]: E0909 23:44:49.420691 2924 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Sep 9 23:44:49.425143 kubelet[2924]: W0909 23:44:49.425100 2924 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.27.236:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.27.236:6443: connect: connection refused
Sep 9 23:44:49.425372 kubelet[2924]: E0909 23:44:49.425340 2924 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.27.236:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.27.236:6443: connect: connection refused" logger="UnhandledError"
Sep 9 23:44:49.437211 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Sep 9 23:44:49.456353 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Sep 9 23:44:49.463494 kubelet[2924]: E0909 23:44:49.463443 2924 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-27-236\" not found"
Sep 9 23:44:49.464117 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Sep 9 23:44:49.485240 kubelet[2924]: I0909 23:44:49.485180 2924 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Sep 9 23:44:49.485524 kubelet[2924]: I0909 23:44:49.485480 2924 eviction_manager.go:189] "Eviction manager: starting control loop"
Sep 9 23:44:49.485586 kubelet[2924]: I0909 23:44:49.485513 2924 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Sep 9 23:44:49.486487 kubelet[2924]: I0909 23:44:49.486316 2924 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Sep 9 23:44:49.491459 kubelet[2924]: E0909 23:44:49.489534 2924 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Sep 9 23:44:49.491459 kubelet[2924]: E0909 23:44:49.489759 2924 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-27-236\" not found"
Sep 9 23:44:49.540189 systemd[1]: Created slice kubepods-burstable-pod2e608fce363683622ff37cc6ab63b494.slice - libcontainer container kubepods-burstable-pod2e608fce363683622ff37cc6ab63b494.slice.
Sep 9 23:44:49.556296 kubelet[2924]: E0909 23:44:49.556193 2924 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-27-236\" not found" node="ip-172-31-27-236"
Sep 9 23:44:49.562704 systemd[1]: Created slice kubepods-burstable-poddf9faacfa8612280f8e0066f51abcefe.slice - libcontainer container kubepods-burstable-poddf9faacfa8612280f8e0066f51abcefe.slice.
Sep 9 23:44:49.567969 kubelet[2924]: I0909 23:44:49.567879 2924 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/df9faacfa8612280f8e0066f51abcefe-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-27-236\" (UID: \"df9faacfa8612280f8e0066f51abcefe\") " pod="kube-system/kube-controller-manager-ip-172-31-27-236"
Sep 9 23:44:49.568439 kubelet[2924]: I0909 23:44:49.568393 2924 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2e608fce363683622ff37cc6ab63b494-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-27-236\" (UID: \"2e608fce363683622ff37cc6ab63b494\") " pod="kube-system/kube-apiserver-ip-172-31-27-236"
Sep 9 23:44:49.568504 kubelet[2924]: I0909 23:44:49.568456 2924 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/df9faacfa8612280f8e0066f51abcefe-ca-certs\") pod \"kube-controller-manager-ip-172-31-27-236\" (UID: \"df9faacfa8612280f8e0066f51abcefe\") " pod="kube-system/kube-controller-manager-ip-172-31-27-236"
Sep 9 23:44:49.568504 kubelet[2924]: I0909 23:44:49.568492 2924 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/df9faacfa8612280f8e0066f51abcefe-k8s-certs\") pod \"kube-controller-manager-ip-172-31-27-236\" (UID: \"df9faacfa8612280f8e0066f51abcefe\") " pod="kube-system/kube-controller-manager-ip-172-31-27-236"
Sep 9 23:44:49.568602 kubelet[2924]: I0909 23:44:49.568529 2924 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/df9faacfa8612280f8e0066f51abcefe-kubeconfig\") pod \"kube-controller-manager-ip-172-31-27-236\" (UID: \"df9faacfa8612280f8e0066f51abcefe\") " pod="kube-system/kube-controller-manager-ip-172-31-27-236"
Sep 9 23:44:49.568602 kubelet[2924]: I0909 23:44:49.568565 2924 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/df9faacfa8612280f8e0066f51abcefe-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-27-236\" (UID: \"df9faacfa8612280f8e0066f51abcefe\") " pod="kube-system/kube-controller-manager-ip-172-31-27-236"
Sep 9 23:44:49.568703 kubelet[2924]: I0909 23:44:49.568600 2924 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0a41421f31629ace7a1d17d58bca76db-kubeconfig\") pod \"kube-scheduler-ip-172-31-27-236\" (UID: \"0a41421f31629ace7a1d17d58bca76db\") " pod="kube-system/kube-scheduler-ip-172-31-27-236"
Sep 9 23:44:49.568703 kubelet[2924]: I0909 23:44:49.568643 2924 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2e608fce363683622ff37cc6ab63b494-ca-certs\") pod \"kube-apiserver-ip-172-31-27-236\" (UID: \"2e608fce363683622ff37cc6ab63b494\") " pod="kube-system/kube-apiserver-ip-172-31-27-236"
Sep 9 23:44:49.568703 kubelet[2924]: I0909 23:44:49.568675 2924 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2e608fce363683622ff37cc6ab63b494-k8s-certs\") pod \"kube-apiserver-ip-172-31-27-236\" (UID: \"2e608fce363683622ff37cc6ab63b494\") " pod="kube-system/kube-apiserver-ip-172-31-27-236"
Sep 9 23:44:49.569660 kubelet[2924]: E0909 23:44:49.569179 2924 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-27-236\" not found" node="ip-172-31-27-236"
Sep 9 23:44:49.573556 kubelet[2924]: E0909 23:44:49.573491 2924 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.27.236:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-27-236?timeout=10s\": dial tcp 172.31.27.236:6443: connect: connection refused" interval="400ms"
Sep 9 23:44:49.575163 systemd[1]: Created slice kubepods-burstable-pod0a41421f31629ace7a1d17d58bca76db.slice - libcontainer container kubepods-burstable-pod0a41421f31629ace7a1d17d58bca76db.slice.
Sep 9 23:44:49.578951 kubelet[2924]: E0909 23:44:49.578872 2924 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-27-236\" not found" node="ip-172-31-27-236"
Sep 9 23:44:49.588574 kubelet[2924]: I0909 23:44:49.588540 2924 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-27-236"
Sep 9 23:44:49.589650 kubelet[2924]: E0909 23:44:49.589601 2924 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.27.236:6443/api/v1/nodes\": dial tcp 172.31.27.236:6443: connect: connection refused" node="ip-172-31-27-236"
Sep 9 23:44:49.792384 kubelet[2924]: I0909 23:44:49.792263 2924 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-27-236"
Sep 9 23:44:49.792761 kubelet[2924]: E0909 23:44:49.792713 2924 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.27.236:6443/api/v1/nodes\": dial tcp 172.31.27.236:6443: connect: connection refused" node="ip-172-31-27-236"
Sep 9 23:44:49.859751 containerd[1995]: time="2025-09-09T23:44:49.859605645Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-27-236,Uid:2e608fce363683622ff37cc6ab63b494,Namespace:kube-system,Attempt:0,}"
Sep 9 23:44:49.871090 containerd[1995]: time="2025-09-09T23:44:49.870687933Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-27-236,Uid:df9faacfa8612280f8e0066f51abcefe,Namespace:kube-system,Attempt:0,}"
Sep 9 23:44:49.880389 containerd[1995]: time="2025-09-09T23:44:49.880301169Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-27-236,Uid:0a41421f31629ace7a1d17d58bca76db,Namespace:kube-system,Attempt:0,}"
Sep 9 23:44:49.921601 containerd[1995]: time="2025-09-09T23:44:49.921402441Z" level=info msg="connecting to shim 6fbac34bc7f0dd04fcb79e30f2aa862d409360c8dbe39adbdd627c915dcde46f" address="unix:///run/containerd/s/1ae668586f6cbd75f6b12b538ef2696280491a0ff52083bc3b3a391535e42c91" namespace=k8s.io protocol=ttrpc version=3
Sep 9 23:44:49.970676 containerd[1995]: time="2025-09-09T23:44:49.970417954Z" level=info msg="connecting to shim 08942ff9b50f8f64fa9d8552829148abd6ce407b593ca3aab28513f805194506" address="unix:///run/containerd/s/192ae742818d0fdc991d9b384f371a93766cca7ff84ca056a9ead5cf81628124" namespace=k8s.io protocol=ttrpc version=3
Sep 9 23:44:49.974907 kubelet[2924]: E0909 23:44:49.974832 2924 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.27.236:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-27-236?timeout=10s\": dial tcp 172.31.27.236:6443: connect: connection refused" interval="800ms"
Sep 9 23:44:50.015700 containerd[1995]: time="2025-09-09T23:44:50.015567918Z" level=info msg="connecting to shim a23e878eb23632b65d1077780d2778d89c8588d44f700bae4fc35b41e4189148" address="unix:///run/containerd/s/a75be31f2d12d7d7410d3a522fbdda4a533959673b80a74894fa2172997053ad" namespace=k8s.io protocol=ttrpc version=3
Sep 9 23:44:50.016298 systemd[1]: Started cri-containerd-6fbac34bc7f0dd04fcb79e30f2aa862d409360c8dbe39adbdd627c915dcde46f.scope - libcontainer container 6fbac34bc7f0dd04fcb79e30f2aa862d409360c8dbe39adbdd627c915dcde46f.
Sep 9 23:44:50.059215 systemd[1]: Started cri-containerd-08942ff9b50f8f64fa9d8552829148abd6ce407b593ca3aab28513f805194506.scope - libcontainer container 08942ff9b50f8f64fa9d8552829148abd6ce407b593ca3aab28513f805194506.
Sep 9 23:44:50.096207 systemd[1]: Started cri-containerd-a23e878eb23632b65d1077780d2778d89c8588d44f700bae4fc35b41e4189148.scope - libcontainer container a23e878eb23632b65d1077780d2778d89c8588d44f700bae4fc35b41e4189148.
Sep 9 23:44:50.162176 containerd[1995]: time="2025-09-09T23:44:50.162079578Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-27-236,Uid:2e608fce363683622ff37cc6ab63b494,Namespace:kube-system,Attempt:0,} returns sandbox id \"6fbac34bc7f0dd04fcb79e30f2aa862d409360c8dbe39adbdd627c915dcde46f\""
Sep 9 23:44:50.171345 containerd[1995]: time="2025-09-09T23:44:50.171278647Z" level=info msg="CreateContainer within sandbox \"6fbac34bc7f0dd04fcb79e30f2aa862d409360c8dbe39adbdd627c915dcde46f\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Sep 9 23:44:50.189286 containerd[1995]: time="2025-09-09T23:44:50.189220939Z" level=info msg="Container 5a9a844535a7491c3798d0dddd6d858b60b1ccd71df71307502186c64b626195: CDI devices from CRI Config.CDIDevices: []"
Sep 9 23:44:50.200587 kubelet[2924]: I0909 23:44:50.200330 2924 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-27-236"
Sep 9 23:44:50.201816 kubelet[2924]: E0909 23:44:50.201343 2924 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.27.236:6443/api/v1/nodes\": dial tcp 172.31.27.236:6443: connect: connection refused" node="ip-172-31-27-236"
Sep 9 23:44:50.214508 containerd[1995]: time="2025-09-09T23:44:50.214426459Z" level=info msg="CreateContainer within sandbox \"6fbac34bc7f0dd04fcb79e30f2aa862d409360c8dbe39adbdd627c915dcde46f\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"5a9a844535a7491c3798d0dddd6d858b60b1ccd71df71307502186c64b626195\""
Sep 9 23:44:50.216941 containerd[1995]: time="2025-09-09T23:44:50.216778003Z" level=info msg="StartContainer for \"5a9a844535a7491c3798d0dddd6d858b60b1ccd71df71307502186c64b626195\""
Sep 9 23:44:50.221762 kubelet[2924]: W0909 23:44:50.221649 2924 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.27.236:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.27.236:6443: connect: connection refused
Sep 9 23:44:50.221952 kubelet[2924]: E0909 23:44:50.221798 2924 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.27.236:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.27.236:6443: connect: connection refused" logger="UnhandledError"
Sep 9 23:44:50.228301 containerd[1995]: time="2025-09-09T23:44:50.228191035Z" level=info msg="connecting to shim 5a9a844535a7491c3798d0dddd6d858b60b1ccd71df71307502186c64b626195" address="unix:///run/containerd/s/1ae668586f6cbd75f6b12b538ef2696280491a0ff52083bc3b3a391535e42c91" protocol=ttrpc version=3
Sep 9 23:44:50.239524 containerd[1995]: time="2025-09-09T23:44:50.239366431Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-27-236,Uid:df9faacfa8612280f8e0066f51abcefe,Namespace:kube-system,Attempt:0,} returns sandbox id \"08942ff9b50f8f64fa9d8552829148abd6ce407b593ca3aab28513f805194506\""
Sep 9 23:44:50.248465 containerd[1995]: time="2025-09-09T23:44:50.248377075Z"
level=info msg="CreateContainer within sandbox \"08942ff9b50f8f64fa9d8552829148abd6ce407b593ca3aab28513f805194506\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 9 23:44:50.265602 kubelet[2924]: W0909 23:44:50.265519 2924 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.27.236:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.27.236:6443: connect: connection refused Sep 9 23:44:50.266426 kubelet[2924]: E0909 23:44:50.265621 2924 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.27.236:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.27.236:6443: connect: connection refused" logger="UnhandledError" Sep 9 23:44:50.273797 containerd[1995]: time="2025-09-09T23:44:50.273670279Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-27-236,Uid:0a41421f31629ace7a1d17d58bca76db,Namespace:kube-system,Attempt:0,} returns sandbox id \"a23e878eb23632b65d1077780d2778d89c8588d44f700bae4fc35b41e4189148\"" Sep 9 23:44:50.274643 containerd[1995]: time="2025-09-09T23:44:50.274491211Z" level=info msg="Container 43b59eb879846be6e03cf10632f7a50060e436657b8e6769da2bf8052810c317: CDI devices from CRI Config.CDIDevices: []" Sep 9 23:44:50.286253 containerd[1995]: time="2025-09-09T23:44:50.286190095Z" level=info msg="CreateContainer within sandbox \"a23e878eb23632b65d1077780d2778d89c8588d44f700bae4fc35b41e4189148\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 9 23:44:50.290214 systemd[1]: Started cri-containerd-5a9a844535a7491c3798d0dddd6d858b60b1ccd71df71307502186c64b626195.scope - libcontainer container 5a9a844535a7491c3798d0dddd6d858b60b1ccd71df71307502186c64b626195. 
Sep 9 23:44:50.298729 containerd[1995]: time="2025-09-09T23:44:50.298509751Z" level=info msg="CreateContainer within sandbox \"08942ff9b50f8f64fa9d8552829148abd6ce407b593ca3aab28513f805194506\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"43b59eb879846be6e03cf10632f7a50060e436657b8e6769da2bf8052810c317\"" Sep 9 23:44:50.300278 containerd[1995]: time="2025-09-09T23:44:50.300187015Z" level=info msg="StartContainer for \"43b59eb879846be6e03cf10632f7a50060e436657b8e6769da2bf8052810c317\"" Sep 9 23:44:50.305821 containerd[1995]: time="2025-09-09T23:44:50.305533159Z" level=info msg="connecting to shim 43b59eb879846be6e03cf10632f7a50060e436657b8e6769da2bf8052810c317" address="unix:///run/containerd/s/192ae742818d0fdc991d9b384f371a93766cca7ff84ca056a9ead5cf81628124" protocol=ttrpc version=3 Sep 9 23:44:50.308838 containerd[1995]: time="2025-09-09T23:44:50.308768851Z" level=info msg="Container 3a4a41d41d21464483a938da4fd35a4230f9fdd426203589c901d7e94410f996: CDI devices from CRI Config.CDIDevices: []" Sep 9 23:44:50.328360 kubelet[2924]: W0909 23:44:50.328302 2924 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.27.236:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.27.236:6443: connect: connection refused Sep 9 23:44:50.328606 kubelet[2924]: E0909 23:44:50.328375 2924 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.27.236:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.27.236:6443: connect: connection refused" logger="UnhandledError" Sep 9 23:44:50.344457 containerd[1995]: time="2025-09-09T23:44:50.344394163Z" level=info msg="CreateContainer within sandbox \"a23e878eb23632b65d1077780d2778d89c8588d44f700bae4fc35b41e4189148\" for 
&ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"3a4a41d41d21464483a938da4fd35a4230f9fdd426203589c901d7e94410f996\"" Sep 9 23:44:50.355473 containerd[1995]: time="2025-09-09T23:44:50.355389223Z" level=info msg="StartContainer for \"3a4a41d41d21464483a938da4fd35a4230f9fdd426203589c901d7e94410f996\"" Sep 9 23:44:50.358820 containerd[1995]: time="2025-09-09T23:44:50.358695727Z" level=info msg="connecting to shim 3a4a41d41d21464483a938da4fd35a4230f9fdd426203589c901d7e94410f996" address="unix:///run/containerd/s/a75be31f2d12d7d7410d3a522fbdda4a533959673b80a74894fa2172997053ad" protocol=ttrpc version=3 Sep 9 23:44:50.362196 systemd[1]: Started cri-containerd-43b59eb879846be6e03cf10632f7a50060e436657b8e6769da2bf8052810c317.scope - libcontainer container 43b59eb879846be6e03cf10632f7a50060e436657b8e6769da2bf8052810c317. Sep 9 23:44:50.404112 kubelet[2924]: W0909 23:44:50.404017 2924 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.27.236:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-27-236&limit=500&resourceVersion=0": dial tcp 172.31.27.236:6443: connect: connection refused Sep 9 23:44:50.404654 kubelet[2924]: E0909 23:44:50.404320 2924 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.27.236:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-27-236&limit=500&resourceVersion=0\": dial tcp 172.31.27.236:6443: connect: connection refused" logger="UnhandledError" Sep 9 23:44:50.414346 systemd[1]: Started cri-containerd-3a4a41d41d21464483a938da4fd35a4230f9fdd426203589c901d7e94410f996.scope - libcontainer container 3a4a41d41d21464483a938da4fd35a4230f9fdd426203589c901d7e94410f996. 
Sep 9 23:44:50.494794 containerd[1995]: time="2025-09-09T23:44:50.494589848Z" level=info msg="StartContainer for \"5a9a844535a7491c3798d0dddd6d858b60b1ccd71df71307502186c64b626195\" returns successfully" Sep 9 23:44:50.527996 containerd[1995]: time="2025-09-09T23:44:50.527936360Z" level=info msg="StartContainer for \"43b59eb879846be6e03cf10632f7a50060e436657b8e6769da2bf8052810c317\" returns successfully" Sep 9 23:44:50.642957 containerd[1995]: time="2025-09-09T23:44:50.642675969Z" level=info msg="StartContainer for \"3a4a41d41d21464483a938da4fd35a4230f9fdd426203589c901d7e94410f996\" returns successfully" Sep 9 23:44:51.005997 kubelet[2924]: I0909 23:44:51.003859 2924 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-27-236" Sep 9 23:44:51.477061 kubelet[2924]: E0909 23:44:51.477010 2924 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-27-236\" not found" node="ip-172-31-27-236" Sep 9 23:44:51.487968 kubelet[2924]: E0909 23:44:51.487908 2924 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-27-236\" not found" node="ip-172-31-27-236" Sep 9 23:44:51.494606 kubelet[2924]: E0909 23:44:51.494560 2924 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-27-236\" not found" node="ip-172-31-27-236" Sep 9 23:44:52.252776 systemd[1]: systemd-hostnamed.service: Deactivated successfully. 
Sep 9 23:44:52.496921 kubelet[2924]: E0909 23:44:52.496534 2924 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-27-236\" not found" node="ip-172-31-27-236" Sep 9 23:44:52.498582 kubelet[2924]: E0909 23:44:52.496851 2924 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-27-236\" not found" node="ip-172-31-27-236" Sep 9 23:44:52.498582 kubelet[2924]: E0909 23:44:52.497944 2924 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-27-236\" not found" node="ip-172-31-27-236" Sep 9 23:44:53.497386 kubelet[2924]: E0909 23:44:53.497343 2924 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-27-236\" not found" node="ip-172-31-27-236" Sep 9 23:44:53.500574 kubelet[2924]: E0909 23:44:53.500525 2924 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-27-236\" not found" node="ip-172-31-27-236" Sep 9 23:44:53.738195 kubelet[2924]: I0909 23:44:53.738152 2924 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-27-236" Sep 9 23:44:53.767774 kubelet[2924]: I0909 23:44:53.767157 2924 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-27-236" Sep 9 23:44:53.809180 kubelet[2924]: E0909 23:44:53.808797 2924 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ip-172-31-27-236.1863c1e9c305111e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-27-236,UID:ip-172-31-27-236,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting 
kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-27-236,},FirstTimestamp:2025-09-09 23:44:49.348055326 +0000 UTC m=+1.732309737,LastTimestamp:2025-09-09 23:44:49.348055326 +0000 UTC m=+1.732309737,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-27-236,}" Sep 9 23:44:53.865154 kubelet[2924]: E0909 23:44:53.865091 2924 controller.go:145] "Failed to ensure lease exists, will retry" err="namespaces \"kube-node-lease\" not found" interval="1.6s" Sep 9 23:44:53.865503 kubelet[2924]: E0909 23:44:53.865341 2924 kubelet.go:3196] "Failed creating a mirror pod" err="namespaces \"kube-system\" not found" pod="kube-system/kube-apiserver-ip-172-31-27-236" Sep 9 23:44:53.865503 kubelet[2924]: I0909 23:44:53.865375 2924 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-27-236" Sep 9 23:44:53.893177 kubelet[2924]: E0909 23:44:53.892954 2924 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ip-172-31-27-236.1863c1e9c6dec51b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-27-236,UID:ip-172-31-27-236,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node ip-172-31-27-236 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:ip-172-31-27-236,},FirstTimestamp:2025-09-09 23:44:49.412654363 +0000 UTC m=+1.796908750,LastTimestamp:2025-09-09 23:44:49.412654363 +0000 UTC m=+1.796908750,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-27-236,}" Sep 9 23:44:53.979551 kubelet[2924]: E0909 23:44:53.979183 2924 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ip-172-31-27-236\" is forbidden: no 
PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ip-172-31-27-236" Sep 9 23:44:53.979551 kubelet[2924]: I0909 23:44:53.979232 2924 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-27-236" Sep 9 23:44:53.980007 kubelet[2924]: E0909 23:44:53.979847 2924 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ip-172-31-27-236.1863c1e9c6deed5f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-27-236,UID:ip-172-31-27-236,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node ip-172-31-27-236 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:ip-172-31-27-236,},FirstTimestamp:2025-09-09 23:44:49.412664671 +0000 UTC m=+1.796919058,LastTimestamp:2025-09-09 23:44:49.412664671 +0000 UTC m=+1.796919058,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-27-236,}" Sep 9 23:44:53.986375 kubelet[2924]: E0909 23:44:53.986294 2924 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-27-236\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ip-172-31-27-236" Sep 9 23:44:54.343057 kubelet[2924]: I0909 23:44:54.342973 2924 apiserver.go:52] "Watching apiserver" Sep 9 23:44:54.367414 kubelet[2924]: I0909 23:44:54.367364 2924 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 9 23:44:54.496932 kubelet[2924]: I0909 23:44:54.496805 2924 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-27-236" Sep 9 23:44:56.103498 systemd[1]: Reload requested from client PID 3204 ('systemctl') (unit session-7.scope)... 
Sep 9 23:44:56.103964 systemd[1]: Reloading... Sep 9 23:44:56.416950 zram_generator::config[3249]: No configuration found. Sep 9 23:44:56.917689 systemd[1]: Reloading finished in 813 ms. Sep 9 23:44:56.967111 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 9 23:44:56.969119 kubelet[2924]: I0909 23:44:56.967034 2924 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 9 23:44:56.988467 systemd[1]: kubelet.service: Deactivated successfully. Sep 9 23:44:56.988944 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 23:44:56.989026 systemd[1]: kubelet.service: Consumed 2.483s CPU time, 127.7M memory peak. Sep 9 23:44:56.995648 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 9 23:44:57.395605 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 23:44:57.414506 (kubelet)[3309]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 9 23:44:57.502427 kubelet[3309]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 9 23:44:57.503934 kubelet[3309]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 9 23:44:57.503934 kubelet[3309]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Sep 9 23:44:57.503934 kubelet[3309]: I0909 23:44:57.503181 3309 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 9 23:44:57.517503 kubelet[3309]: I0909 23:44:57.517460 3309 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Sep 9 23:44:57.517687 kubelet[3309]: I0909 23:44:57.517667 3309 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 9 23:44:57.518265 kubelet[3309]: I0909 23:44:57.518239 3309 server.go:954] "Client rotation is on, will bootstrap in background" Sep 9 23:44:57.520641 kubelet[3309]: I0909 23:44:57.520599 3309 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Sep 9 23:44:57.527607 kubelet[3309]: I0909 23:44:57.527560 3309 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 9 23:44:57.540776 kubelet[3309]: I0909 23:44:57.540714 3309 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Sep 9 23:44:57.549821 sudo[3323]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Sep 9 23:44:57.551155 sudo[3323]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Sep 9 23:44:57.554832 kubelet[3309]: I0909 23:44:57.554738 3309 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 9 23:44:57.556222 kubelet[3309]: I0909 23:44:57.555932 3309 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 9 23:44:57.556499 kubelet[3309]: I0909 23:44:57.556197 3309 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-27-236","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 9 23:44:57.556638 kubelet[3309]: I0909 23:44:57.556513 3309 topology_manager.go:138] "Creating topology manager with none 
policy" Sep 9 23:44:57.556638 kubelet[3309]: I0909 23:44:57.556536 3309 container_manager_linux.go:304] "Creating device plugin manager" Sep 9 23:44:57.556638 kubelet[3309]: I0909 23:44:57.556613 3309 state_mem.go:36] "Initialized new in-memory state store" Sep 9 23:44:57.559972 kubelet[3309]: I0909 23:44:57.558941 3309 kubelet.go:446] "Attempting to sync node with API server" Sep 9 23:44:57.559972 kubelet[3309]: I0909 23:44:57.558996 3309 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 9 23:44:57.559972 kubelet[3309]: I0909 23:44:57.559044 3309 kubelet.go:352] "Adding apiserver pod source" Sep 9 23:44:57.559972 kubelet[3309]: I0909 23:44:57.559065 3309 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 9 23:44:57.564275 kubelet[3309]: I0909 23:44:57.564221 3309 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Sep 9 23:44:57.566564 kubelet[3309]: I0909 23:44:57.566516 3309 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 9 23:44:57.569924 kubelet[3309]: I0909 23:44:57.568380 3309 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 9 23:44:57.569924 kubelet[3309]: I0909 23:44:57.568444 3309 server.go:1287] "Started kubelet" Sep 9 23:44:57.579914 kubelet[3309]: I0909 23:44:57.579510 3309 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 9 23:44:57.580183 kubelet[3309]: I0909 23:44:57.580103 3309 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 9 23:44:57.582342 kubelet[3309]: I0909 23:44:57.582291 3309 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 9 23:44:57.582580 kubelet[3309]: I0909 23:44:57.582544 3309 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Sep 9 23:44:57.586224 kubelet[3309]: I0909 23:44:57.586191 3309 
server.go:479] "Adding debug handlers to kubelet server" Sep 9 23:44:57.606913 kubelet[3309]: I0909 23:44:57.606782 3309 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 9 23:44:57.609454 kubelet[3309]: I0909 23:44:57.609215 3309 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 9 23:44:57.611436 kubelet[3309]: I0909 23:44:57.611398 3309 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Sep 9 23:44:57.611607 kubelet[3309]: I0909 23:44:57.611587 3309 status_manager.go:227] "Starting to sync pod status with apiserver" Sep 9 23:44:57.611714 kubelet[3309]: I0909 23:44:57.611695 3309 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Sep 9 23:44:57.612263 kubelet[3309]: I0909 23:44:57.611803 3309 kubelet.go:2382] "Starting kubelet main sync loop" Sep 9 23:44:57.612263 kubelet[3309]: E0909 23:44:57.611875 3309 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 9 23:44:57.620073 kubelet[3309]: I0909 23:44:57.619903 3309 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 9 23:44:57.620342 kubelet[3309]: E0909 23:44:57.620290 3309 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-27-236\" not found" Sep 9 23:44:57.623037 kubelet[3309]: I0909 23:44:57.622987 3309 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 9 23:44:57.623247 kubelet[3309]: I0909 23:44:57.623213 3309 reconciler.go:26] "Reconciler: start to sync state" Sep 9 23:44:57.647288 kubelet[3309]: E0909 23:44:57.646096 3309 kubelet.go:1555] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 9 23:44:57.647288 kubelet[3309]: I0909 23:44:57.646582 3309 factory.go:221] Registration of the systemd container factory successfully Sep 9 23:44:57.647288 kubelet[3309]: I0909 23:44:57.646739 3309 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 9 23:44:57.652754 kubelet[3309]: I0909 23:44:57.652177 3309 factory.go:221] Registration of the containerd container factory successfully Sep 9 23:44:57.731182 kubelet[3309]: E0909 23:44:57.730690 3309 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 9 23:44:57.884582 kubelet[3309]: I0909 23:44:57.884543 3309 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 9 23:44:57.885312 kubelet[3309]: I0909 23:44:57.884881 3309 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 9 23:44:57.885784 kubelet[3309]: I0909 23:44:57.885067 3309 state_mem.go:36] "Initialized new in-memory state store" Sep 9 23:44:57.887040 kubelet[3309]: I0909 23:44:57.886700 3309 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 9 23:44:57.887040 kubelet[3309]: I0909 23:44:57.886737 3309 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 9 23:44:57.887040 kubelet[3309]: I0909 23:44:57.886778 3309 policy_none.go:49] "None policy: Start" Sep 9 23:44:57.887040 kubelet[3309]: I0909 23:44:57.886797 3309 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 9 23:44:57.887040 kubelet[3309]: I0909 23:44:57.886821 3309 state_mem.go:35] "Initializing new in-memory state store" Sep 9 23:44:57.887827 kubelet[3309]: I0909 23:44:57.887701 3309 state_mem.go:75] "Updated machine memory state" Sep 9 23:44:57.906487 kubelet[3309]: I0909 23:44:57.905472 3309 manager.go:519] "Failed to read data from checkpoint" 
checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Sep 9 23:44:57.906487 kubelet[3309]: I0909 23:44:57.905738 3309 eviction_manager.go:189] "Eviction manager: starting control loop"
Sep 9 23:44:57.906487 kubelet[3309]: I0909 23:44:57.905757 3309 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Sep 9 23:44:57.909573 kubelet[3309]: I0909 23:44:57.909544 3309 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Sep 9 23:44:57.916209 kubelet[3309]: E0909 23:44:57.914981 3309 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Sep 9 23:44:57.933546 kubelet[3309]: I0909 23:44:57.932669 3309 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-27-236"
Sep 9 23:44:57.937169 kubelet[3309]: I0909 23:44:57.937134 3309 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-27-236"
Sep 9 23:44:57.941272 kubelet[3309]: I0909 23:44:57.940559 3309 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-27-236"
Sep 9 23:44:57.981384 kubelet[3309]: E0909 23:44:57.981342 3309 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-27-236\" already exists" pod="kube-system/kube-scheduler-ip-172-31-27-236"
Sep 9 23:44:58.039529 kubelet[3309]: I0909 23:44:58.039426 3309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2e608fce363683622ff37cc6ab63b494-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-27-236\" (UID: \"2e608fce363683622ff37cc6ab63b494\") " pod="kube-system/kube-apiserver-ip-172-31-27-236"
Sep 9 23:44:58.039907 kubelet[3309]: I0909 23:44:58.039815 3309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/df9faacfa8612280f8e0066f51abcefe-k8s-certs\") pod \"kube-controller-manager-ip-172-31-27-236\" (UID: \"df9faacfa8612280f8e0066f51abcefe\") " pod="kube-system/kube-controller-manager-ip-172-31-27-236"
Sep 9 23:44:58.040059 kubelet[3309]: I0909 23:44:58.040030 3309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/df9faacfa8612280f8e0066f51abcefe-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-27-236\" (UID: \"df9faacfa8612280f8e0066f51abcefe\") " pod="kube-system/kube-controller-manager-ip-172-31-27-236"
Sep 9 23:44:58.040210 kubelet[3309]: I0909 23:44:58.040188 3309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/df9faacfa8612280f8e0066f51abcefe-kubeconfig\") pod \"kube-controller-manager-ip-172-31-27-236\" (UID: \"df9faacfa8612280f8e0066f51abcefe\") " pod="kube-system/kube-controller-manager-ip-172-31-27-236"
Sep 9 23:44:58.040414 kubelet[3309]: I0909 23:44:58.040359 3309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0a41421f31629ace7a1d17d58bca76db-kubeconfig\") pod \"kube-scheduler-ip-172-31-27-236\" (UID: \"0a41421f31629ace7a1d17d58bca76db\") " pod="kube-system/kube-scheduler-ip-172-31-27-236"
Sep 9 23:44:58.040533 kubelet[3309]: I0909 23:44:58.040511 3309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2e608fce363683622ff37cc6ab63b494-ca-certs\") pod \"kube-apiserver-ip-172-31-27-236\" (UID: \"2e608fce363683622ff37cc6ab63b494\") " pod="kube-system/kube-apiserver-ip-172-31-27-236"
Sep 9 23:44:58.040715 kubelet[3309]: I0909 23:44:58.040692 3309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2e608fce363683622ff37cc6ab63b494-k8s-certs\") pod \"kube-apiserver-ip-172-31-27-236\" (UID: \"2e608fce363683622ff37cc6ab63b494\") " pod="kube-system/kube-apiserver-ip-172-31-27-236"
Sep 9 23:44:58.040926 kubelet[3309]: I0909 23:44:58.040866 3309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/df9faacfa8612280f8e0066f51abcefe-ca-certs\") pod \"kube-controller-manager-ip-172-31-27-236\" (UID: \"df9faacfa8612280f8e0066f51abcefe\") " pod="kube-system/kube-controller-manager-ip-172-31-27-236"
Sep 9 23:44:58.041324 kubelet[3309]: I0909 23:44:58.041187 3309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/df9faacfa8612280f8e0066f51abcefe-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-27-236\" (UID: \"df9faacfa8612280f8e0066f51abcefe\") " pod="kube-system/kube-controller-manager-ip-172-31-27-236"
Sep 9 23:44:58.048687 kubelet[3309]: I0909 23:44:58.048258 3309 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-27-236"
Sep 9 23:44:58.074493 kubelet[3309]: I0909 23:44:58.074457 3309 kubelet_node_status.go:124] "Node was previously registered" node="ip-172-31-27-236"
Sep 9 23:44:58.075006 kubelet[3309]: I0909 23:44:58.074869 3309 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-27-236"
Sep 9 23:44:58.410825 sudo[3323]: pam_unix(sudo:session): session closed for user root
Sep 9 23:44:58.561402 kubelet[3309]: I0909 23:44:58.561312 3309 apiserver.go:52] "Watching apiserver"
Sep 9 23:44:58.623747 kubelet[3309]: I0909 23:44:58.623682 3309 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Sep 9 23:44:58.783104 kubelet[3309]: I0909 23:44:58.782842 3309 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-27-236"
Sep 9 23:44:58.785280 kubelet[3309]: I0909 23:44:58.785232 3309 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-27-236"
Sep 9 23:44:58.795592 kubelet[3309]: E0909 23:44:58.795537 3309 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-27-236\" already exists" pod="kube-system/kube-apiserver-ip-172-31-27-236"
Sep 9 23:44:58.802551 kubelet[3309]: E0909 23:44:58.802175 3309 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-27-236\" already exists" pod="kube-system/kube-scheduler-ip-172-31-27-236"
Sep 9 23:44:58.824649 kubelet[3309]: I0909 23:44:58.824558 3309 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-27-236" podStartSLOduration=4.82453857 podStartE2EDuration="4.82453857s" podCreationTimestamp="2025-09-09 23:44:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 23:44:58.82430427 +0000 UTC m=+1.402066076" watchObservedRunningTime="2025-09-09 23:44:58.82453857 +0000 UTC m=+1.402300352"
Sep 9 23:44:58.841159 kubelet[3309]: I0909 23:44:58.841080 3309 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-27-236" podStartSLOduration=1.840868446 podStartE2EDuration="1.840868446s" podCreationTimestamp="2025-09-09 23:44:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 23:44:58.839654958 +0000 UTC m=+1.417416776" watchObservedRunningTime="2025-09-09 23:44:58.840868446 +0000 UTC m=+1.418630252"
Sep 9 23:44:58.882497 kubelet[3309]: I0909 23:44:58.882409 3309 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-27-236" podStartSLOduration=1.882386334 podStartE2EDuration="1.882386334s" podCreationTimestamp="2025-09-09 23:44:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 23:44:58.859097298 +0000 UTC m=+1.436859128" watchObservedRunningTime="2025-09-09 23:44:58.882386334 +0000 UTC m=+1.460148212"
Sep 9 23:45:00.997855 sudo[2347]: pam_unix(sudo:session): session closed for user root
Sep 9 23:45:01.021320 sshd[2346]: Connection closed by 139.178.89.65 port 55078
Sep 9 23:45:01.022388 sshd-session[2343]: pam_unix(sshd:session): session closed for user core
Sep 9 23:45:01.030754 systemd[1]: sshd@6-172.31.27.236:22-139.178.89.65:55078.service: Deactivated successfully.
Sep 9 23:45:01.036748 systemd[1]: session-7.scope: Deactivated successfully.
Sep 9 23:45:01.037440 systemd[1]: session-7.scope: Consumed 8.996s CPU time, 262.5M memory peak.
Sep 9 23:45:01.040008 systemd-logind[1977]: Session 7 logged out. Waiting for processes to exit.
Sep 9 23:45:01.043344 systemd-logind[1977]: Removed session 7.
Sep 9 23:45:02.344219 kubelet[3309]: I0909 23:45:02.343652 3309 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Sep 9 23:45:02.346463 kubelet[3309]: I0909 23:45:02.345054 3309 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Sep 9 23:45:02.346618 containerd[1995]: time="2025-09-09T23:45:02.344708155Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Sep 9 23:45:03.387345 systemd[1]: Created slice kubepods-besteffort-podcb7f1203_799b_455d_aa7a_7b47278eaa9f.slice - libcontainer container kubepods-besteffort-podcb7f1203_799b_455d_aa7a_7b47278eaa9f.slice.
Sep 9 23:45:03.419511 systemd[1]: Created slice kubepods-burstable-pod8ca5a529_4b3c_4c0f_a232_fe5bcc8e4fb8.slice - libcontainer container kubepods-burstable-pod8ca5a529_4b3c_4c0f_a232_fe5bcc8e4fb8.slice.
Sep 9 23:45:03.478971 kubelet[3309]: I0909 23:45:03.478282 3309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/8ca5a529-4b3c-4c0f-a232-fe5bcc8e4fb8-hubble-tls\") pod \"cilium-zc5pc\" (UID: \"8ca5a529-4b3c-4c0f-a232-fe5bcc8e4fb8\") " pod="kube-system/cilium-zc5pc"
Sep 9 23:45:03.478971 kubelet[3309]: I0909 23:45:03.478358 3309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/cb7f1203-799b-455d-aa7a-7b47278eaa9f-xtables-lock\") pod \"kube-proxy-29xx8\" (UID: \"cb7f1203-799b-455d-aa7a-7b47278eaa9f\") " pod="kube-system/kube-proxy-29xx8"
Sep 9 23:45:03.478971 kubelet[3309]: I0909 23:45:03.478407 3309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/cb7f1203-799b-455d-aa7a-7b47278eaa9f-kube-proxy\") pod \"kube-proxy-29xx8\" (UID: \"cb7f1203-799b-455d-aa7a-7b47278eaa9f\") " pod="kube-system/kube-proxy-29xx8"
Sep 9 23:45:03.478971 kubelet[3309]: I0909 23:45:03.478447 3309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/8ca5a529-4b3c-4c0f-a232-fe5bcc8e4fb8-cni-path\") pod \"cilium-zc5pc\" (UID: \"8ca5a529-4b3c-4c0f-a232-fe5bcc8e4fb8\") " pod="kube-system/cilium-zc5pc"
Sep 9 23:45:03.478971 kubelet[3309]: I0909 23:45:03.478485 3309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sv5f7\" (UniqueName: \"kubernetes.io/projected/cb7f1203-799b-455d-aa7a-7b47278eaa9f-kube-api-access-sv5f7\") pod \"kube-proxy-29xx8\" (UID: \"cb7f1203-799b-455d-aa7a-7b47278eaa9f\") " pod="kube-system/kube-proxy-29xx8"
Sep 9 23:45:03.478971 kubelet[3309]: I0909 23:45:03.478580 3309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/8ca5a529-4b3c-4c0f-a232-fe5bcc8e4fb8-bpf-maps\") pod \"cilium-zc5pc\" (UID: \"8ca5a529-4b3c-4c0f-a232-fe5bcc8e4fb8\") " pod="kube-system/cilium-zc5pc"
Sep 9 23:45:03.479735 kubelet[3309]: I0909 23:45:03.478625 3309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/8ca5a529-4b3c-4c0f-a232-fe5bcc8e4fb8-clustermesh-secrets\") pod \"cilium-zc5pc\" (UID: \"8ca5a529-4b3c-4c0f-a232-fe5bcc8e4fb8\") " pod="kube-system/cilium-zc5pc"
Sep 9 23:45:03.479735 kubelet[3309]: I0909 23:45:03.478663 3309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r828p\" (UniqueName: \"kubernetes.io/projected/8ca5a529-4b3c-4c0f-a232-fe5bcc8e4fb8-kube-api-access-r828p\") pod \"cilium-zc5pc\" (UID: \"8ca5a529-4b3c-4c0f-a232-fe5bcc8e4fb8\") " pod="kube-system/cilium-zc5pc"
Sep 9 23:45:03.479735 kubelet[3309]: I0909 23:45:03.478702 3309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/8ca5a529-4b3c-4c0f-a232-fe5bcc8e4fb8-cilium-run\") pod \"cilium-zc5pc\" (UID: \"8ca5a529-4b3c-4c0f-a232-fe5bcc8e4fb8\") " pod="kube-system/cilium-zc5pc"
Sep 9 23:45:03.479735 kubelet[3309]: I0909 23:45:03.478736 3309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/8ca5a529-4b3c-4c0f-a232-fe5bcc8e4fb8-host-proc-sys-net\") pod \"cilium-zc5pc\" (UID: \"8ca5a529-4b3c-4c0f-a232-fe5bcc8e4fb8\") " pod="kube-system/cilium-zc5pc"
Sep 9 23:45:03.479735 kubelet[3309]: I0909 23:45:03.478773 3309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8ca5a529-4b3c-4c0f-a232-fe5bcc8e4fb8-xtables-lock\") pod \"cilium-zc5pc\" (UID: \"8ca5a529-4b3c-4c0f-a232-fe5bcc8e4fb8\") " pod="kube-system/cilium-zc5pc"
Sep 9 23:45:03.479735 kubelet[3309]: I0909 23:45:03.478827 3309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/8ca5a529-4b3c-4c0f-a232-fe5bcc8e4fb8-hostproc\") pod \"cilium-zc5pc\" (UID: \"8ca5a529-4b3c-4c0f-a232-fe5bcc8e4fb8\") " pod="kube-system/cilium-zc5pc"
Sep 9 23:45:03.481907 kubelet[3309]: I0909 23:45:03.481747 3309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/8ca5a529-4b3c-4c0f-a232-fe5bcc8e4fb8-host-proc-sys-kernel\") pod \"cilium-zc5pc\" (UID: \"8ca5a529-4b3c-4c0f-a232-fe5bcc8e4fb8\") " pod="kube-system/cilium-zc5pc"
Sep 9 23:45:03.482185 kubelet[3309]: I0909 23:45:03.481858 3309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cb7f1203-799b-455d-aa7a-7b47278eaa9f-lib-modules\") pod \"kube-proxy-29xx8\" (UID: \"cb7f1203-799b-455d-aa7a-7b47278eaa9f\") " pod="kube-system/kube-proxy-29xx8"
Sep 9 23:45:03.482185 kubelet[3309]: I0909 23:45:03.482068 3309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/8ca5a529-4b3c-4c0f-a232-fe5bcc8e4fb8-cilium-cgroup\") pod \"cilium-zc5pc\" (UID: \"8ca5a529-4b3c-4c0f-a232-fe5bcc8e4fb8\") " pod="kube-system/cilium-zc5pc"
Sep 9 23:45:03.482185 kubelet[3309]: I0909 23:45:03.482134 3309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8ca5a529-4b3c-4c0f-a232-fe5bcc8e4fb8-etc-cni-netd\") pod \"cilium-zc5pc\" (UID: \"8ca5a529-4b3c-4c0f-a232-fe5bcc8e4fb8\") " pod="kube-system/cilium-zc5pc"
Sep 9 23:45:03.482567 kubelet[3309]: I0909 23:45:03.482422 3309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8ca5a529-4b3c-4c0f-a232-fe5bcc8e4fb8-lib-modules\") pod \"cilium-zc5pc\" (UID: \"8ca5a529-4b3c-4c0f-a232-fe5bcc8e4fb8\") " pod="kube-system/cilium-zc5pc"
Sep 9 23:45:03.482567 kubelet[3309]: I0909 23:45:03.482498 3309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8ca5a529-4b3c-4c0f-a232-fe5bcc8e4fb8-cilium-config-path\") pod \"cilium-zc5pc\" (UID: \"8ca5a529-4b3c-4c0f-a232-fe5bcc8e4fb8\") " pod="kube-system/cilium-zc5pc"
Sep 9 23:45:03.572317 systemd[1]: Created slice kubepods-besteffort-pod3904429f_a1a9_421b_ab9d_bec24c605698.slice - libcontainer container kubepods-besteffort-pod3904429f_a1a9_421b_ab9d_bec24c605698.slice.
Sep 9 23:45:03.587924 kubelet[3309]: I0909 23:45:03.587415 3309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3904429f-a1a9-421b-ab9d-bec24c605698-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-fg8dl\" (UID: \"3904429f-a1a9-421b-ab9d-bec24c605698\") " pod="kube-system/cilium-operator-6c4d7847fc-fg8dl"
Sep 9 23:45:03.587924 kubelet[3309]: I0909 23:45:03.587563 3309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-998jr\" (UniqueName: \"kubernetes.io/projected/3904429f-a1a9-421b-ab9d-bec24c605698-kube-api-access-998jr\") pod \"cilium-operator-6c4d7847fc-fg8dl\" (UID: \"3904429f-a1a9-421b-ab9d-bec24c605698\") " pod="kube-system/cilium-operator-6c4d7847fc-fg8dl"
Sep 9 23:45:03.709926 containerd[1995]: time="2025-09-09T23:45:03.707773462Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-29xx8,Uid:cb7f1203-799b-455d-aa7a-7b47278eaa9f,Namespace:kube-system,Attempt:0,}"
Sep 9 23:45:03.729997 containerd[1995]: time="2025-09-09T23:45:03.729622138Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-zc5pc,Uid:8ca5a529-4b3c-4c0f-a232-fe5bcc8e4fb8,Namespace:kube-system,Attempt:0,}"
Sep 9 23:45:03.768372 containerd[1995]: time="2025-09-09T23:45:03.768297274Z" level=info msg="connecting to shim 016c70788efe656d6daf16c870facff50126219a5ad4280c51e275184d318987" address="unix:///run/containerd/s/3944fff83902a02e6ef209518f89726bb20be837482e151a74c99f4f7709d7c7" namespace=k8s.io protocol=ttrpc version=3
Sep 9 23:45:03.785674 containerd[1995]: time="2025-09-09T23:45:03.785604958Z" level=info msg="connecting to shim 9c8bb8198f01f72749621be539f46270866347326716def90ad19c067c0eeab1" address="unix:///run/containerd/s/8496337511f20b436a011672b46129ccf6a511e481893683586c780b684981ba" namespace=k8s.io protocol=ttrpc version=3
Sep 9 23:45:03.817259 systemd[1]: Started cri-containerd-016c70788efe656d6daf16c870facff50126219a5ad4280c51e275184d318987.scope - libcontainer container 016c70788efe656d6daf16c870facff50126219a5ad4280c51e275184d318987.
Sep 9 23:45:03.855226 systemd[1]: Started cri-containerd-9c8bb8198f01f72749621be539f46270866347326716def90ad19c067c0eeab1.scope - libcontainer container 9c8bb8198f01f72749621be539f46270866347326716def90ad19c067c0eeab1.
Sep 9 23:45:03.882336 containerd[1995]: time="2025-09-09T23:45:03.882200579Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-fg8dl,Uid:3904429f-a1a9-421b-ab9d-bec24c605698,Namespace:kube-system,Attempt:0,}"
Sep 9 23:45:03.930854 containerd[1995]: time="2025-09-09T23:45:03.930358079Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-zc5pc,Uid:8ca5a529-4b3c-4c0f-a232-fe5bcc8e4fb8,Namespace:kube-system,Attempt:0,} returns sandbox id \"9c8bb8198f01f72749621be539f46270866347326716def90ad19c067c0eeab1\""
Sep 9 23:45:03.938478 containerd[1995]: time="2025-09-09T23:45:03.938216351Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Sep 9 23:45:03.945855 containerd[1995]: time="2025-09-09T23:45:03.945806639Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-29xx8,Uid:cb7f1203-799b-455d-aa7a-7b47278eaa9f,Namespace:kube-system,Attempt:0,} returns sandbox id \"016c70788efe656d6daf16c870facff50126219a5ad4280c51e275184d318987\""
Sep 9 23:45:03.946755 containerd[1995]: time="2025-09-09T23:45:03.946256459Z" level=info msg="connecting to shim 89807440290c741af32af60cf3690546b64b8bd70e62d661f9d84b271c2919b4" address="unix:///run/containerd/s/c29e780e506ab8b244e969f4f0c7ba99b7af681d3d42506f32ee1463f929630c" namespace=k8s.io protocol=ttrpc version=3
Sep 9 23:45:03.955275 containerd[1995]: time="2025-09-09T23:45:03.954569675Z" level=info msg="CreateContainer within sandbox \"016c70788efe656d6daf16c870facff50126219a5ad4280c51e275184d318987\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Sep 9 23:45:03.984792 containerd[1995]: time="2025-09-09T23:45:03.984234839Z" level=info msg="Container 490c063c286de9020224db1657da7800abfc17c81414b8a384316bcb7c1036f0: CDI devices from CRI Config.CDIDevices: []"
Sep 9 23:45:03.998218 systemd[1]: Started cri-containerd-89807440290c741af32af60cf3690546b64b8bd70e62d661f9d84b271c2919b4.scope - libcontainer container 89807440290c741af32af60cf3690546b64b8bd70e62d661f9d84b271c2919b4.
Sep 9 23:45:04.006766 containerd[1995]: time="2025-09-09T23:45:04.006319183Z" level=info msg="CreateContainer within sandbox \"016c70788efe656d6daf16c870facff50126219a5ad4280c51e275184d318987\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"490c063c286de9020224db1657da7800abfc17c81414b8a384316bcb7c1036f0\""
Sep 9 23:45:04.009208 containerd[1995]: time="2025-09-09T23:45:04.009136495Z" level=info msg="StartContainer for \"490c063c286de9020224db1657da7800abfc17c81414b8a384316bcb7c1036f0\""
Sep 9 23:45:04.013370 containerd[1995]: time="2025-09-09T23:45:04.013239367Z" level=info msg="connecting to shim 490c063c286de9020224db1657da7800abfc17c81414b8a384316bcb7c1036f0" address="unix:///run/containerd/s/3944fff83902a02e6ef209518f89726bb20be837482e151a74c99f4f7709d7c7" protocol=ttrpc version=3
Sep 9 23:45:04.056318 systemd[1]: Started cri-containerd-490c063c286de9020224db1657da7800abfc17c81414b8a384316bcb7c1036f0.scope - libcontainer container 490c063c286de9020224db1657da7800abfc17c81414b8a384316bcb7c1036f0.
Sep 9 23:45:04.121185 containerd[1995]: time="2025-09-09T23:45:04.121097744Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-fg8dl,Uid:3904429f-a1a9-421b-ab9d-bec24c605698,Namespace:kube-system,Attempt:0,} returns sandbox id \"89807440290c741af32af60cf3690546b64b8bd70e62d661f9d84b271c2919b4\""
Sep 9 23:45:04.170931 containerd[1995]: time="2025-09-09T23:45:04.170836664Z" level=info msg="StartContainer for \"490c063c286de9020224db1657da7800abfc17c81414b8a384316bcb7c1036f0\" returns successfully"
Sep 9 23:45:06.420029 update_engine[1982]: I20250909 23:45:06.419486 1982 update_attempter.cc:509] Updating boot flags...
Sep 9 23:45:06.718618 kubelet[3309]: I0909 23:45:06.716932 3309 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-29xx8" podStartSLOduration=3.7169087530000002 podStartE2EDuration="3.716908753s" podCreationTimestamp="2025-09-09 23:45:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 23:45:04.833173331 +0000 UTC m=+7.410935137" watchObservedRunningTime="2025-09-09 23:45:06.716908753 +0000 UTC m=+9.294670571"
Sep 9 23:45:10.399212 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2366665543.mount: Deactivated successfully.
Sep 9 23:45:13.049003 containerd[1995]: time="2025-09-09T23:45:13.048930988Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 23:45:13.050918 containerd[1995]: time="2025-09-09T23:45:13.050840896Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710"
Sep 9 23:45:13.053372 containerd[1995]: time="2025-09-09T23:45:13.053304232Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 23:45:13.057793 containerd[1995]: time="2025-09-09T23:45:13.057655060Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 9.119361801s"
Sep 9 23:45:13.057793 containerd[1995]: time="2025-09-09T23:45:13.057736612Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\""
Sep 9 23:45:13.060847 containerd[1995]: time="2025-09-09T23:45:13.059749804Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Sep 9 23:45:13.066324 containerd[1995]: time="2025-09-09T23:45:13.066243568Z" level=info msg="CreateContainer within sandbox \"9c8bb8198f01f72749621be539f46270866347326716def90ad19c067c0eeab1\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Sep 9 23:45:13.085280 containerd[1995]: time="2025-09-09T23:45:13.085072816Z" level=info msg="Container 750244eb0f93703c07a2ae768b116a50ab0c54eaadbc6aef1be67c57c37e341a: CDI devices from CRI Config.CDIDevices: []"
Sep 9 23:45:13.099934 containerd[1995]: time="2025-09-09T23:45:13.099851140Z" level=info msg="CreateContainer within sandbox \"9c8bb8198f01f72749621be539f46270866347326716def90ad19c067c0eeab1\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"750244eb0f93703c07a2ae768b116a50ab0c54eaadbc6aef1be67c57c37e341a\""
Sep 9 23:45:13.101538 containerd[1995]: time="2025-09-09T23:45:13.100790332Z" level=info msg="StartContainer for \"750244eb0f93703c07a2ae768b116a50ab0c54eaadbc6aef1be67c57c37e341a\""
Sep 9 23:45:13.103122 containerd[1995]: time="2025-09-09T23:45:13.103053196Z" level=info msg="connecting to shim 750244eb0f93703c07a2ae768b116a50ab0c54eaadbc6aef1be67c57c37e341a" address="unix:///run/containerd/s/8496337511f20b436a011672b46129ccf6a511e481893683586c780b684981ba" protocol=ttrpc version=3
Sep 9 23:45:13.143746 systemd[1]: Started cri-containerd-750244eb0f93703c07a2ae768b116a50ab0c54eaadbc6aef1be67c57c37e341a.scope - libcontainer container 750244eb0f93703c07a2ae768b116a50ab0c54eaadbc6aef1be67c57c37e341a.
Sep 9 23:45:13.215545 containerd[1995]: time="2025-09-09T23:45:13.215302073Z" level=info msg="StartContainer for \"750244eb0f93703c07a2ae768b116a50ab0c54eaadbc6aef1be67c57c37e341a\" returns successfully"
Sep 9 23:45:13.237566 systemd[1]: cri-containerd-750244eb0f93703c07a2ae768b116a50ab0c54eaadbc6aef1be67c57c37e341a.scope: Deactivated successfully.
Sep 9 23:45:13.244933 containerd[1995]: time="2025-09-09T23:45:13.244682393Z" level=info msg="received exit event container_id:\"750244eb0f93703c07a2ae768b116a50ab0c54eaadbc6aef1be67c57c37e341a\" id:\"750244eb0f93703c07a2ae768b116a50ab0c54eaadbc6aef1be67c57c37e341a\" pid:3904 exited_at:{seconds:1757461513 nanos:243567893}"
Sep 9 23:45:13.245081 containerd[1995]: time="2025-09-09T23:45:13.244992485Z" level=info msg="TaskExit event in podsandbox handler container_id:\"750244eb0f93703c07a2ae768b116a50ab0c54eaadbc6aef1be67c57c37e341a\" id:\"750244eb0f93703c07a2ae768b116a50ab0c54eaadbc6aef1be67c57c37e341a\" pid:3904 exited_at:{seconds:1757461513 nanos:243567893}"
Sep 9 23:45:13.285836 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-750244eb0f93703c07a2ae768b116a50ab0c54eaadbc6aef1be67c57c37e341a-rootfs.mount: Deactivated successfully.
Sep 9 23:45:14.900938 containerd[1995]: time="2025-09-09T23:45:14.900212625Z" level=info msg="CreateContainer within sandbox \"9c8bb8198f01f72749621be539f46270866347326716def90ad19c067c0eeab1\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Sep 9 23:45:14.913179 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount601811116.mount: Deactivated successfully.
Sep 9 23:45:14.963867 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount428936876.mount: Deactivated successfully.
Sep 9 23:45:14.967598 containerd[1995]: time="2025-09-09T23:45:14.965991730Z" level=info msg="Container 6514ac74c7d933630360ebed1758cdc29282131682e25d34f357da14caa64c80: CDI devices from CRI Config.CDIDevices: []"
Sep 9 23:45:14.986821 containerd[1995]: time="2025-09-09T23:45:14.986765722Z" level=info msg="CreateContainer within sandbox \"9c8bb8198f01f72749621be539f46270866347326716def90ad19c067c0eeab1\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"6514ac74c7d933630360ebed1758cdc29282131682e25d34f357da14caa64c80\""
Sep 9 23:45:14.989248 containerd[1995]: time="2025-09-09T23:45:14.989105830Z" level=info msg="StartContainer for \"6514ac74c7d933630360ebed1758cdc29282131682e25d34f357da14caa64c80\""
Sep 9 23:45:14.996213 containerd[1995]: time="2025-09-09T23:45:14.996121858Z" level=info msg="connecting to shim 6514ac74c7d933630360ebed1758cdc29282131682e25d34f357da14caa64c80" address="unix:///run/containerd/s/8496337511f20b436a011672b46129ccf6a511e481893683586c780b684981ba" protocol=ttrpc version=3
Sep 9 23:45:15.064597 systemd[1]: Started cri-containerd-6514ac74c7d933630360ebed1758cdc29282131682e25d34f357da14caa64c80.scope - libcontainer container 6514ac74c7d933630360ebed1758cdc29282131682e25d34f357da14caa64c80.
Sep 9 23:45:15.171994 containerd[1995]: time="2025-09-09T23:45:15.171747343Z" level=info msg="StartContainer for \"6514ac74c7d933630360ebed1758cdc29282131682e25d34f357da14caa64c80\" returns successfully"
Sep 9 23:45:15.197698 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Sep 9 23:45:15.198853 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Sep 9 23:45:15.199412 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables...
Sep 9 23:45:15.204579 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 9 23:45:15.206084 systemd[1]: cri-containerd-6514ac74c7d933630360ebed1758cdc29282131682e25d34f357da14caa64c80.scope: Deactivated successfully.
Sep 9 23:45:15.212581 containerd[1995]: time="2025-09-09T23:45:15.212512087Z" level=info msg="received exit event container_id:\"6514ac74c7d933630360ebed1758cdc29282131682e25d34f357da14caa64c80\" id:\"6514ac74c7d933630360ebed1758cdc29282131682e25d34f357da14caa64c80\" pid:3959 exited_at:{seconds:1757461515 nanos:208455931}"
Sep 9 23:45:15.217783 containerd[1995]: time="2025-09-09T23:45:15.213507955Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6514ac74c7d933630360ebed1758cdc29282131682e25d34f357da14caa64c80\" id:\"6514ac74c7d933630360ebed1758cdc29282131682e25d34f357da14caa64c80\" pid:3959 exited_at:{seconds:1757461515 nanos:208455931}"
Sep 9 23:45:15.254653 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 9 23:45:15.807679 containerd[1995]: time="2025-09-09T23:45:15.807219898Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 23:45:15.811872 containerd[1995]: time="2025-09-09T23:45:15.811824310Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306"
Sep 9 23:45:15.814809 containerd[1995]: time="2025-09-09T23:45:15.814753954Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 23:45:15.819217 containerd[1995]: time="2025-09-09T23:45:15.819079762Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 2.759270246s"
Sep 9 23:45:15.819217 containerd[1995]: time="2025-09-09T23:45:15.819158278Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\""
Sep 9 23:45:15.823940 containerd[1995]: time="2025-09-09T23:45:15.823783558Z" level=info msg="CreateContainer within sandbox \"89807440290c741af32af60cf3690546b64b8bd70e62d661f9d84b271c2919b4\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Sep 9 23:45:15.839752 containerd[1995]: time="2025-09-09T23:45:15.839702014Z" level=info msg="Container 94d208297ea8711edd3df0d2b17abee2df8aaa06b9360aa5f8306539d158c928: CDI devices from CRI Config.CDIDevices: []"
Sep 9 23:45:15.857634 containerd[1995]: time="2025-09-09T23:45:15.857559214Z" level=info msg="CreateContainer within sandbox \"89807440290c741af32af60cf3690546b64b8bd70e62d661f9d84b271c2919b4\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"94d208297ea8711edd3df0d2b17abee2df8aaa06b9360aa5f8306539d158c928\""
Sep 9 23:45:15.860191 containerd[1995]: time="2025-09-09T23:45:15.859296754Z" level=info msg="StartContainer for \"94d208297ea8711edd3df0d2b17abee2df8aaa06b9360aa5f8306539d158c928\""
Sep 9 23:45:15.861431 containerd[1995]: time="2025-09-09T23:45:15.861278662Z" level=info msg="connecting to shim 94d208297ea8711edd3df0d2b17abee2df8aaa06b9360aa5f8306539d158c928" address="unix:///run/containerd/s/c29e780e506ab8b244e969f4f0c7ba99b7af681d3d42506f32ee1463f929630c" protocol=ttrpc version=3
Sep 9 23:45:15.886384 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6514ac74c7d933630360ebed1758cdc29282131682e25d34f357da14caa64c80-rootfs.mount: Deactivated successfully.
Sep 9 23:45:15.918700 containerd[1995]: time="2025-09-09T23:45:15.918628714Z" level=info msg="CreateContainer within sandbox \"9c8bb8198f01f72749621be539f46270866347326716def90ad19c067c0eeab1\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Sep 9 23:45:15.921204 systemd[1]: Started cri-containerd-94d208297ea8711edd3df0d2b17abee2df8aaa06b9360aa5f8306539d158c928.scope - libcontainer container 94d208297ea8711edd3df0d2b17abee2df8aaa06b9360aa5f8306539d158c928.
Sep 9 23:45:15.979254 containerd[1995]: time="2025-09-09T23:45:15.979173167Z" level=info msg="Container 80c2784693665b2a0677011714489cd9cd54c71d5996fe25d06ff8021d7df3e4: CDI devices from CRI Config.CDIDevices: []"
Sep 9 23:45:16.014346 containerd[1995]: time="2025-09-09T23:45:16.014221051Z" level=info msg="CreateContainer within sandbox \"9c8bb8198f01f72749621be539f46270866347326716def90ad19c067c0eeab1\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"80c2784693665b2a0677011714489cd9cd54c71d5996fe25d06ff8021d7df3e4\""
Sep 9 23:45:16.015363 containerd[1995]: time="2025-09-09T23:45:16.015235807Z" level=info msg="StartContainer for \"80c2784693665b2a0677011714489cd9cd54c71d5996fe25d06ff8021d7df3e4\""
Sep 9 23:45:16.024488 containerd[1995]: time="2025-09-09T23:45:16.024229987Z" level=info msg="connecting to shim 80c2784693665b2a0677011714489cd9cd54c71d5996fe25d06ff8021d7df3e4" address="unix:///run/containerd/s/8496337511f20b436a011672b46129ccf6a511e481893683586c780b684981ba" protocol=ttrpc version=3
Sep 9 23:45:16.047879 containerd[1995]: time="2025-09-09T23:45:16.047814907Z" level=info msg="StartContainer for \"94d208297ea8711edd3df0d2b17abee2df8aaa06b9360aa5f8306539d158c928\" returns successfully"
Sep 9 23:45:16.097226 systemd[1]: Started cri-containerd-80c2784693665b2a0677011714489cd9cd54c71d5996fe25d06ff8021d7df3e4.scope - libcontainer container 80c2784693665b2a0677011714489cd9cd54c71d5996fe25d06ff8021d7df3e4.
Sep 9 23:45:16.196823 containerd[1995]: time="2025-09-09T23:45:16.196752776Z" level=info msg="StartContainer for \"80c2784693665b2a0677011714489cd9cd54c71d5996fe25d06ff8021d7df3e4\" returns successfully" Sep 9 23:45:16.202291 systemd[1]: cri-containerd-80c2784693665b2a0677011714489cd9cd54c71d5996fe25d06ff8021d7df3e4.scope: Deactivated successfully. Sep 9 23:45:16.219986 containerd[1995]: time="2025-09-09T23:45:16.219764648Z" level=info msg="received exit event container_id:\"80c2784693665b2a0677011714489cd9cd54c71d5996fe25d06ff8021d7df3e4\" id:\"80c2784693665b2a0677011714489cd9cd54c71d5996fe25d06ff8021d7df3e4\" pid:4043 exited_at:{seconds:1757461516 nanos:218323856}" Sep 9 23:45:16.219986 containerd[1995]: time="2025-09-09T23:45:16.219840896Z" level=info msg="TaskExit event in podsandbox handler container_id:\"80c2784693665b2a0677011714489cd9cd54c71d5996fe25d06ff8021d7df3e4\" id:\"80c2784693665b2a0677011714489cd9cd54c71d5996fe25d06ff8021d7df3e4\" pid:4043 exited_at:{seconds:1757461516 nanos:218323856}" Sep 9 23:45:16.886381 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-80c2784693665b2a0677011714489cd9cd54c71d5996fe25d06ff8021d7df3e4-rootfs.mount: Deactivated successfully. 
Sep 9 23:45:16.931734 containerd[1995]: time="2025-09-09T23:45:16.929458439Z" level=info msg="CreateContainer within sandbox \"9c8bb8198f01f72749621be539f46270866347326716def90ad19c067c0eeab1\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Sep 9 23:45:16.952936 containerd[1995]: time="2025-09-09T23:45:16.950050152Z" level=info msg="Container 0f2a6826b781a79ec82e122c2fcbc7277b0d0f1af6f759ecab5f799745cb5be0: CDI devices from CRI Config.CDIDevices: []" Sep 9 23:45:16.969643 containerd[1995]: time="2025-09-09T23:45:16.969549312Z" level=info msg="CreateContainer within sandbox \"9c8bb8198f01f72749621be539f46270866347326716def90ad19c067c0eeab1\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"0f2a6826b781a79ec82e122c2fcbc7277b0d0f1af6f759ecab5f799745cb5be0\"" Sep 9 23:45:16.972028 containerd[1995]: time="2025-09-09T23:45:16.971957472Z" level=info msg="StartContainer for \"0f2a6826b781a79ec82e122c2fcbc7277b0d0f1af6f759ecab5f799745cb5be0\"" Sep 9 23:45:16.978079 containerd[1995]: time="2025-09-09T23:45:16.977965464Z" level=info msg="connecting to shim 0f2a6826b781a79ec82e122c2fcbc7277b0d0f1af6f759ecab5f799745cb5be0" address="unix:///run/containerd/s/8496337511f20b436a011672b46129ccf6a511e481893683586c780b684981ba" protocol=ttrpc version=3 Sep 9 23:45:17.044192 systemd[1]: Started cri-containerd-0f2a6826b781a79ec82e122c2fcbc7277b0d0f1af6f759ecab5f799745cb5be0.scope - libcontainer container 0f2a6826b781a79ec82e122c2fcbc7277b0d0f1af6f759ecab5f799745cb5be0. Sep 9 23:45:17.184918 systemd[1]: cri-containerd-0f2a6826b781a79ec82e122c2fcbc7277b0d0f1af6f759ecab5f799745cb5be0.scope: Deactivated successfully. 
Sep 9 23:45:17.189991 containerd[1995]: time="2025-09-09T23:45:17.189834729Z" level=info msg="StartContainer for \"0f2a6826b781a79ec82e122c2fcbc7277b0d0f1af6f759ecab5f799745cb5be0\" returns successfully" Sep 9 23:45:17.193378 containerd[1995]: time="2025-09-09T23:45:17.193308417Z" level=info msg="received exit event container_id:\"0f2a6826b781a79ec82e122c2fcbc7277b0d0f1af6f759ecab5f799745cb5be0\" id:\"0f2a6826b781a79ec82e122c2fcbc7277b0d0f1af6f759ecab5f799745cb5be0\" pid:4087 exited_at:{seconds:1757461517 nanos:191767533}" Sep 9 23:45:17.194300 containerd[1995]: time="2025-09-09T23:45:17.194238297Z" level=info msg="TaskExit event in podsandbox handler container_id:\"0f2a6826b781a79ec82e122c2fcbc7277b0d0f1af6f759ecab5f799745cb5be0\" id:\"0f2a6826b781a79ec82e122c2fcbc7277b0d0f1af6f759ecab5f799745cb5be0\" pid:4087 exited_at:{seconds:1757461517 nanos:191767533}" Sep 9 23:45:17.252124 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0f2a6826b781a79ec82e122c2fcbc7277b0d0f1af6f759ecab5f799745cb5be0-rootfs.mount: Deactivated successfully. 
Sep 9 23:45:17.259747 kubelet[3309]: I0909 23:45:17.259636 3309 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-fg8dl" podStartSLOduration=2.563022767 podStartE2EDuration="14.259612653s" podCreationTimestamp="2025-09-09 23:45:03 +0000 UTC" firstStartedPulling="2025-09-09 23:45:04.123938876 +0000 UTC m=+6.701700670" lastFinishedPulling="2025-09-09 23:45:15.820528762 +0000 UTC m=+18.398290556" observedRunningTime="2025-09-09 23:45:17.11359772 +0000 UTC m=+19.691359514" watchObservedRunningTime="2025-09-09 23:45:17.259612653 +0000 UTC m=+19.837374435" Sep 9 23:45:17.941471 containerd[1995]: time="2025-09-09T23:45:17.940921524Z" level=info msg="CreateContainer within sandbox \"9c8bb8198f01f72749621be539f46270866347326716def90ad19c067c0eeab1\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Sep 9 23:45:17.975853 containerd[1995]: time="2025-09-09T23:45:17.975217729Z" level=info msg="Container 872d490f7957872ebca477199d85bde9dc631effbc5470c5bf44c54f979eeb13: CDI devices from CRI Config.CDIDevices: []" Sep 9 23:45:17.990245 containerd[1995]: time="2025-09-09T23:45:17.990196645Z" level=info msg="CreateContainer within sandbox \"9c8bb8198f01f72749621be539f46270866347326716def90ad19c067c0eeab1\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"872d490f7957872ebca477199d85bde9dc631effbc5470c5bf44c54f979eeb13\"" Sep 9 23:45:17.991612 containerd[1995]: time="2025-09-09T23:45:17.991549321Z" level=info msg="StartContainer for \"872d490f7957872ebca477199d85bde9dc631effbc5470c5bf44c54f979eeb13\"" Sep 9 23:45:17.997221 containerd[1995]: time="2025-09-09T23:45:17.997161985Z" level=info msg="connecting to shim 872d490f7957872ebca477199d85bde9dc631effbc5470c5bf44c54f979eeb13" address="unix:///run/containerd/s/8496337511f20b436a011672b46129ccf6a511e481893683586c780b684981ba" protocol=ttrpc version=3 Sep 9 23:45:18.050242 systemd[1]: Started 
cri-containerd-872d490f7957872ebca477199d85bde9dc631effbc5470c5bf44c54f979eeb13.scope - libcontainer container 872d490f7957872ebca477199d85bde9dc631effbc5470c5bf44c54f979eeb13. Sep 9 23:45:18.146972 containerd[1995]: time="2025-09-09T23:45:18.146316789Z" level=info msg="StartContainer for \"872d490f7957872ebca477199d85bde9dc631effbc5470c5bf44c54f979eeb13\" returns successfully" Sep 9 23:45:18.284587 containerd[1995]: time="2025-09-09T23:45:18.284414722Z" level=info msg="TaskExit event in podsandbox handler container_id:\"872d490f7957872ebca477199d85bde9dc631effbc5470c5bf44c54f979eeb13\" id:\"a429ad5e8b358abd1e16c2e2e88ef4d293e7d3dcee2e8f6b2a86880c630feb3f\" pid:4157 exited_at:{seconds:1757461518 nanos:283355638}" Sep 9 23:45:18.352925 kubelet[3309]: I0909 23:45:18.352587 3309 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Sep 9 23:45:18.434642 systemd[1]: Created slice kubepods-burstable-pod8709447c_1b15_4a1d_a21e_adca2f92cdfb.slice - libcontainer container kubepods-burstable-pod8709447c_1b15_4a1d_a21e_adca2f92cdfb.slice. Sep 9 23:45:18.447443 systemd[1]: Created slice kubepods-burstable-pod99eb9b9e_58c0_4a39_b2c8_1c607c98c649.slice - libcontainer container kubepods-burstable-pod99eb9b9e_58c0_4a39_b2c8_1c607c98c649.slice. 
Sep 9 23:45:18.515602 kubelet[3309]: I0909 23:45:18.515510 3309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/99eb9b9e-58c0-4a39-b2c8-1c607c98c649-config-volume\") pod \"coredns-668d6bf9bc-8zkjk\" (UID: \"99eb9b9e-58c0-4a39-b2c8-1c607c98c649\") " pod="kube-system/coredns-668d6bf9bc-8zkjk" Sep 9 23:45:18.515602 kubelet[3309]: I0909 23:45:18.515595 3309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vqnxl\" (UniqueName: \"kubernetes.io/projected/8709447c-1b15-4a1d-a21e-adca2f92cdfb-kube-api-access-vqnxl\") pod \"coredns-668d6bf9bc-t9ksv\" (UID: \"8709447c-1b15-4a1d-a21e-adca2f92cdfb\") " pod="kube-system/coredns-668d6bf9bc-t9ksv" Sep 9 23:45:18.515602 kubelet[3309]: I0909 23:45:18.515647 3309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pklm9\" (UniqueName: \"kubernetes.io/projected/99eb9b9e-58c0-4a39-b2c8-1c607c98c649-kube-api-access-pklm9\") pod \"coredns-668d6bf9bc-8zkjk\" (UID: \"99eb9b9e-58c0-4a39-b2c8-1c607c98c649\") " pod="kube-system/coredns-668d6bf9bc-8zkjk" Sep 9 23:45:18.516225 kubelet[3309]: I0909 23:45:18.516109 3309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8709447c-1b15-4a1d-a21e-adca2f92cdfb-config-volume\") pod \"coredns-668d6bf9bc-t9ksv\" (UID: \"8709447c-1b15-4a1d-a21e-adca2f92cdfb\") " pod="kube-system/coredns-668d6bf9bc-t9ksv" Sep 9 23:45:18.743491 containerd[1995]: time="2025-09-09T23:45:18.743428164Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-t9ksv,Uid:8709447c-1b15-4a1d-a21e-adca2f92cdfb,Namespace:kube-system,Attempt:0,}" Sep 9 23:45:18.759793 containerd[1995]: time="2025-09-09T23:45:18.759728617Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-668d6bf9bc-8zkjk,Uid:99eb9b9e-58c0-4a39-b2c8-1c607c98c649,Namespace:kube-system,Attempt:0,}" Sep 9 23:45:18.986742 kubelet[3309]: I0909 23:45:18.986623 3309 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-zc5pc" podStartSLOduration=6.862666493 podStartE2EDuration="15.986601158s" podCreationTimestamp="2025-09-09 23:45:03 +0000 UTC" firstStartedPulling="2025-09-09 23:45:03.935397515 +0000 UTC m=+6.513159309" lastFinishedPulling="2025-09-09 23:45:13.059332096 +0000 UTC m=+15.637093974" observedRunningTime="2025-09-09 23:45:18.985806818 +0000 UTC m=+21.563568636" watchObservedRunningTime="2025-09-09 23:45:18.986601158 +0000 UTC m=+21.564362952" Sep 9 23:45:21.325045 systemd-networkd[1894]: cilium_host: Link UP Sep 9 23:45:21.327324 systemd-networkd[1894]: cilium_net: Link UP Sep 9 23:45:21.328426 systemd-networkd[1894]: cilium_net: Gained carrier Sep 9 23:45:21.328763 (udev-worker)[4219]: Network interface NamePolicy= disabled on kernel command line. Sep 9 23:45:21.330551 systemd-networkd[1894]: cilium_host: Gained carrier Sep 9 23:45:21.333549 (udev-worker)[4252]: Network interface NamePolicy= disabled on kernel command line. Sep 9 23:45:21.416162 systemd-networkd[1894]: cilium_net: Gained IPv6LL Sep 9 23:45:21.509182 (udev-worker)[4263]: Network interface NamePolicy= disabled on kernel command line. Sep 9 23:45:21.533797 systemd-networkd[1894]: cilium_vxlan: Link UP Sep 9 23:45:21.533812 systemd-networkd[1894]: cilium_vxlan: Gained carrier Sep 9 23:45:22.106138 kernel: NET: Registered PF_ALG protocol family Sep 9 23:45:22.216590 systemd-networkd[1894]: cilium_host: Gained IPv6LL Sep 9 23:45:22.856076 systemd-networkd[1894]: cilium_vxlan: Gained IPv6LL Sep 9 23:45:23.448842 systemd-networkd[1894]: lxc_health: Link UP Sep 9 23:45:23.455464 (udev-worker)[4264]: Network interface NamePolicy= disabled on kernel command line. 
Sep 9 23:45:23.458951 systemd-networkd[1894]: lxc_health: Gained carrier Sep 9 23:45:23.862350 systemd-networkd[1894]: lxce6cd7e866981: Link UP Sep 9 23:45:23.871643 kernel: eth0: renamed from tmpe8c72 Sep 9 23:45:23.886263 systemd-networkd[1894]: lxce6cd7e866981: Gained carrier Sep 9 23:45:23.886604 systemd-networkd[1894]: lxc32aec4913311: Link UP Sep 9 23:45:23.888957 kernel: eth0: renamed from tmp7c2a8 Sep 9 23:45:23.895027 systemd-networkd[1894]: lxc32aec4913311: Gained carrier Sep 9 23:45:25.288335 systemd-networkd[1894]: lxce6cd7e866981: Gained IPv6LL Sep 9 23:45:25.480465 systemd-networkd[1894]: lxc_health: Gained IPv6LL Sep 9 23:45:25.800318 systemd-networkd[1894]: lxc32aec4913311: Gained IPv6LL Sep 9 23:45:27.816251 ntpd[1971]: Listen normally on 8 cilium_host 192.168.0.92:123 Sep 9 23:45:27.816382 ntpd[1971]: Listen normally on 9 cilium_net [fe80::1813:feff:fe6c:9eb6%4]:123 Sep 9 23:45:27.816802 ntpd[1971]: 9 Sep 23:45:27 ntpd[1971]: Listen normally on 8 cilium_host 192.168.0.92:123 Sep 9 23:45:27.816802 ntpd[1971]: 9 Sep 23:45:27 ntpd[1971]: Listen normally on 9 cilium_net [fe80::1813:feff:fe6c:9eb6%4]:123 Sep 9 23:45:27.816802 ntpd[1971]: 9 Sep 23:45:27 ntpd[1971]: Listen normally on 10 cilium_host [fe80::70cd:56ff:feba:c946%5]:123 Sep 9 23:45:27.816802 ntpd[1971]: 9 Sep 23:45:27 ntpd[1971]: Listen normally on 11 cilium_vxlan [fe80::e:6bff:fe85:6712%6]:123 Sep 9 23:45:27.816802 ntpd[1971]: 9 Sep 23:45:27 ntpd[1971]: Listen normally on 12 lxc_health [fe80::90d6:4bff:fe55:51d6%8]:123 Sep 9 23:45:27.816802 ntpd[1971]: 9 Sep 23:45:27 ntpd[1971]: Listen normally on 13 lxce6cd7e866981 [fe80::44a8:17ff:fe2b:771d%10]:123 Sep 9 23:45:27.816802 ntpd[1971]: 9 Sep 23:45:27 ntpd[1971]: Listen normally on 14 lxc32aec4913311 [fe80::a49e:daff:fea2:1e83%12]:123 Sep 9 23:45:27.816460 ntpd[1971]: Listen normally on 10 cilium_host [fe80::70cd:56ff:feba:c946%5]:123 Sep 9 23:45:27.816524 ntpd[1971]: Listen normally on 11 cilium_vxlan [fe80::e:6bff:fe85:6712%6]:123 Sep 9 
23:45:27.816587 ntpd[1971]: Listen normally on 12 lxc_health [fe80::90d6:4bff:fe55:51d6%8]:123 Sep 9 23:45:27.816657 ntpd[1971]: Listen normally on 13 lxce6cd7e866981 [fe80::44a8:17ff:fe2b:771d%10]:123 Sep 9 23:45:27.816721 ntpd[1971]: Listen normally on 14 lxc32aec4913311 [fe80::a49e:daff:fea2:1e83%12]:123 Sep 9 23:45:32.038831 containerd[1995]: time="2025-09-09T23:45:32.038735170Z" level=info msg="connecting to shim 7c2a8e4af119fc43d40575de316380c3b83a23b3444cd5a7ad3f3101f1be2131" address="unix:///run/containerd/s/05594a51f3cdc3e407501f4f56abf261c812768e0155cd05232fde1fbfa8c7df" namespace=k8s.io protocol=ttrpc version=3 Sep 9 23:45:32.093327 containerd[1995]: time="2025-09-09T23:45:32.093207479Z" level=info msg="connecting to shim e8c724c3bb50b8b606b52512bac1c71278fbaa21c37a98880853968fc382e81d" address="unix:///run/containerd/s/6ebabdb5d8ff02ff9f8f6ebbf18e47d88e1906f4051fac95659116ca951ccb6c" namespace=k8s.io protocol=ttrpc version=3 Sep 9 23:45:32.145223 systemd[1]: Started cri-containerd-7c2a8e4af119fc43d40575de316380c3b83a23b3444cd5a7ad3f3101f1be2131.scope - libcontainer container 7c2a8e4af119fc43d40575de316380c3b83a23b3444cd5a7ad3f3101f1be2131. Sep 9 23:45:32.176207 systemd[1]: Started cri-containerd-e8c724c3bb50b8b606b52512bac1c71278fbaa21c37a98880853968fc382e81d.scope - libcontainer container e8c724c3bb50b8b606b52512bac1c71278fbaa21c37a98880853968fc382e81d. 
Sep 9 23:45:32.326363 containerd[1995]: time="2025-09-09T23:45:32.326314080Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-t9ksv,Uid:8709447c-1b15-4a1d-a21e-adca2f92cdfb,Namespace:kube-system,Attempt:0,} returns sandbox id \"7c2a8e4af119fc43d40575de316380c3b83a23b3444cd5a7ad3f3101f1be2131\"" Sep 9 23:45:32.333746 containerd[1995]: time="2025-09-09T23:45:32.333507096Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-8zkjk,Uid:99eb9b9e-58c0-4a39-b2c8-1c607c98c649,Namespace:kube-system,Attempt:0,} returns sandbox id \"e8c724c3bb50b8b606b52512bac1c71278fbaa21c37a98880853968fc382e81d\"" Sep 9 23:45:32.343301 containerd[1995]: time="2025-09-09T23:45:32.343186740Z" level=info msg="CreateContainer within sandbox \"7c2a8e4af119fc43d40575de316380c3b83a23b3444cd5a7ad3f3101f1be2131\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 9 23:45:32.371649 containerd[1995]: time="2025-09-09T23:45:32.371422836Z" level=info msg="CreateContainer within sandbox \"e8c724c3bb50b8b606b52512bac1c71278fbaa21c37a98880853968fc382e81d\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 9 23:45:32.399072 containerd[1995]: time="2025-09-09T23:45:32.398923128Z" level=info msg="Container 9d7409b8f723344272cac80bcf9221b369afc2d78150fbad46ceb3e68d8ce1d4: CDI devices from CRI Config.CDIDevices: []" Sep 9 23:45:32.402375 containerd[1995]: time="2025-09-09T23:45:32.402196908Z" level=info msg="Container d79d311ec91d97183eaa350229f59111c9fa5f8b8f5005db6d7389f5a3d4596f: CDI devices from CRI Config.CDIDevices: []" Sep 9 23:45:32.415128 containerd[1995]: time="2025-09-09T23:45:32.414315816Z" level=info msg="CreateContainer within sandbox \"7c2a8e4af119fc43d40575de316380c3b83a23b3444cd5a7ad3f3101f1be2131\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"9d7409b8f723344272cac80bcf9221b369afc2d78150fbad46ceb3e68d8ce1d4\"" Sep 9 23:45:32.416664 containerd[1995]: time="2025-09-09T23:45:32.416506716Z" 
level=info msg="StartContainer for \"9d7409b8f723344272cac80bcf9221b369afc2d78150fbad46ceb3e68d8ce1d4\"" Sep 9 23:45:32.421340 containerd[1995]: time="2025-09-09T23:45:32.421050816Z" level=info msg="connecting to shim 9d7409b8f723344272cac80bcf9221b369afc2d78150fbad46ceb3e68d8ce1d4" address="unix:///run/containerd/s/05594a51f3cdc3e407501f4f56abf261c812768e0155cd05232fde1fbfa8c7df" protocol=ttrpc version=3 Sep 9 23:45:32.426797 containerd[1995]: time="2025-09-09T23:45:32.426698424Z" level=info msg="CreateContainer within sandbox \"e8c724c3bb50b8b606b52512bac1c71278fbaa21c37a98880853968fc382e81d\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"d79d311ec91d97183eaa350229f59111c9fa5f8b8f5005db6d7389f5a3d4596f\"" Sep 9 23:45:32.430514 containerd[1995]: time="2025-09-09T23:45:32.429870996Z" level=info msg="StartContainer for \"d79d311ec91d97183eaa350229f59111c9fa5f8b8f5005db6d7389f5a3d4596f\"" Sep 9 23:45:32.433640 containerd[1995]: time="2025-09-09T23:45:32.433556928Z" level=info msg="connecting to shim d79d311ec91d97183eaa350229f59111c9fa5f8b8f5005db6d7389f5a3d4596f" address="unix:///run/containerd/s/6ebabdb5d8ff02ff9f8f6ebbf18e47d88e1906f4051fac95659116ca951ccb6c" protocol=ttrpc version=3 Sep 9 23:45:32.482494 systemd[1]: Started cri-containerd-d79d311ec91d97183eaa350229f59111c9fa5f8b8f5005db6d7389f5a3d4596f.scope - libcontainer container d79d311ec91d97183eaa350229f59111c9fa5f8b8f5005db6d7389f5a3d4596f. Sep 9 23:45:32.493335 systemd[1]: Started cri-containerd-9d7409b8f723344272cac80bcf9221b369afc2d78150fbad46ceb3e68d8ce1d4.scope - libcontainer container 9d7409b8f723344272cac80bcf9221b369afc2d78150fbad46ceb3e68d8ce1d4. 
Sep 9 23:45:32.582923 containerd[1995]: time="2025-09-09T23:45:32.581104525Z" level=info msg="StartContainer for \"9d7409b8f723344272cac80bcf9221b369afc2d78150fbad46ceb3e68d8ce1d4\" returns successfully" Sep 9 23:45:32.596789 containerd[1995]: time="2025-09-09T23:45:32.596733721Z" level=info msg="StartContainer for \"d79d311ec91d97183eaa350229f59111c9fa5f8b8f5005db6d7389f5a3d4596f\" returns successfully" Sep 9 23:45:33.020828 kubelet[3309]: I0909 23:45:33.019818 3309 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-8zkjk" podStartSLOduration=30.019797347 podStartE2EDuration="30.019797347s" podCreationTimestamp="2025-09-09 23:45:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 23:45:33.018175487 +0000 UTC m=+35.595937317" watchObservedRunningTime="2025-09-09 23:45:33.019797347 +0000 UTC m=+35.597559129" Sep 9 23:45:33.032512 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3785326046.mount: Deactivated successfully. Sep 9 23:45:33.055430 kubelet[3309]: I0909 23:45:33.055333 3309 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-t9ksv" podStartSLOduration=30.055313976 podStartE2EDuration="30.055313976s" podCreationTimestamp="2025-09-09 23:45:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 23:45:33.0550485 +0000 UTC m=+35.632810318" watchObservedRunningTime="2025-09-09 23:45:33.055313976 +0000 UTC m=+35.633075770" Sep 9 23:45:48.463015 systemd[1]: Started sshd@7-172.31.27.236:22-139.178.89.65:55604.service - OpenSSH per-connection server daemon (139.178.89.65:55604). 
Sep 9 23:45:48.672105 sshd[4792]: Accepted publickey for core from 139.178.89.65 port 55604 ssh2: RSA SHA256:qHlHyIWOCFGyLN0DNo6M0sQy+OrgAlHw4s82lYsZXi8 Sep 9 23:45:48.675324 sshd-session[4792]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 23:45:48.683113 systemd-logind[1977]: New session 8 of user core. Sep 9 23:45:48.692172 systemd[1]: Started session-8.scope - Session 8 of User core. Sep 9 23:45:48.965033 sshd[4795]: Connection closed by 139.178.89.65 port 55604 Sep 9 23:45:48.965855 sshd-session[4792]: pam_unix(sshd:session): session closed for user core Sep 9 23:45:48.972316 systemd[1]: session-8.scope: Deactivated successfully. Sep 9 23:45:48.974570 systemd[1]: sshd@7-172.31.27.236:22-139.178.89.65:55604.service: Deactivated successfully. Sep 9 23:45:48.979776 systemd-logind[1977]: Session 8 logged out. Waiting for processes to exit. Sep 9 23:45:48.982605 systemd-logind[1977]: Removed session 8. Sep 9 23:45:54.009390 systemd[1]: Started sshd@8-172.31.27.236:22-139.178.89.65:50702.service - OpenSSH per-connection server daemon (139.178.89.65:50702). Sep 9 23:45:54.200951 sshd[4812]: Accepted publickey for core from 139.178.89.65 port 50702 ssh2: RSA SHA256:qHlHyIWOCFGyLN0DNo6M0sQy+OrgAlHw4s82lYsZXi8 Sep 9 23:45:54.203740 sshd-session[4812]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 23:45:54.211532 systemd-logind[1977]: New session 9 of user core. Sep 9 23:45:54.225145 systemd[1]: Started session-9.scope - Session 9 of User core. Sep 9 23:45:54.478051 sshd[4815]: Connection closed by 139.178.89.65 port 50702 Sep 9 23:45:54.478856 sshd-session[4812]: pam_unix(sshd:session): session closed for user core Sep 9 23:45:54.485580 systemd-logind[1977]: Session 9 logged out. Waiting for processes to exit. Sep 9 23:45:54.488209 systemd[1]: sshd@8-172.31.27.236:22-139.178.89.65:50702.service: Deactivated successfully. Sep 9 23:45:54.494761 systemd[1]: session-9.scope: Deactivated successfully. 
Sep 9 23:45:54.500397 systemd-logind[1977]: Removed session 9. Sep 9 23:45:59.523024 systemd[1]: Started sshd@9-172.31.27.236:22-139.178.89.65:50718.service - OpenSSH per-connection server daemon (139.178.89.65:50718). Sep 9 23:45:59.719683 sshd[4830]: Accepted publickey for core from 139.178.89.65 port 50718 ssh2: RSA SHA256:qHlHyIWOCFGyLN0DNo6M0sQy+OrgAlHw4s82lYsZXi8 Sep 9 23:45:59.722112 sshd-session[4830]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 23:45:59.730338 systemd-logind[1977]: New session 10 of user core. Sep 9 23:45:59.741185 systemd[1]: Started session-10.scope - Session 10 of User core. Sep 9 23:45:59.980404 sshd[4833]: Connection closed by 139.178.89.65 port 50718 Sep 9 23:45:59.981409 sshd-session[4830]: pam_unix(sshd:session): session closed for user core Sep 9 23:45:59.987304 systemd[1]: sshd@9-172.31.27.236:22-139.178.89.65:50718.service: Deactivated successfully. Sep 9 23:45:59.991727 systemd[1]: session-10.scope: Deactivated successfully. Sep 9 23:46:00.000007 systemd-logind[1977]: Session 10 logged out. Waiting for processes to exit. Sep 9 23:46:00.002700 systemd-logind[1977]: Removed session 10. Sep 9 23:46:05.018740 systemd[1]: Started sshd@10-172.31.27.236:22-139.178.89.65:43962.service - OpenSSH per-connection server daemon (139.178.89.65:43962). Sep 9 23:46:05.217505 sshd[4848]: Accepted publickey for core from 139.178.89.65 port 43962 ssh2: RSA SHA256:qHlHyIWOCFGyLN0DNo6M0sQy+OrgAlHw4s82lYsZXi8 Sep 9 23:46:05.220036 sshd-session[4848]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 23:46:05.227861 systemd-logind[1977]: New session 11 of user core. Sep 9 23:46:05.245411 systemd[1]: Started session-11.scope - Session 11 of User core. 
Sep 9 23:46:05.495223 sshd[4851]: Connection closed by 139.178.89.65 port 43962 Sep 9 23:46:05.496060 sshd-session[4848]: pam_unix(sshd:session): session closed for user core Sep 9 23:46:05.504107 systemd[1]: sshd@10-172.31.27.236:22-139.178.89.65:43962.service: Deactivated successfully. Sep 9 23:46:05.510155 systemd[1]: session-11.scope: Deactivated successfully. Sep 9 23:46:05.517361 systemd-logind[1977]: Session 11 logged out. Waiting for processes to exit. Sep 9 23:46:05.538224 systemd[1]: Started sshd@11-172.31.27.236:22-139.178.89.65:43976.service - OpenSSH per-connection server daemon (139.178.89.65:43976). Sep 9 23:46:05.540373 systemd-logind[1977]: Removed session 11. Sep 9 23:46:05.749079 sshd[4864]: Accepted publickey for core from 139.178.89.65 port 43976 ssh2: RSA SHA256:qHlHyIWOCFGyLN0DNo6M0sQy+OrgAlHw4s82lYsZXi8 Sep 9 23:46:05.752325 sshd-session[4864]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 23:46:05.763302 systemd-logind[1977]: New session 12 of user core. Sep 9 23:46:05.772159 systemd[1]: Started session-12.scope - Session 12 of User core. Sep 9 23:46:06.097098 sshd[4867]: Connection closed by 139.178.89.65 port 43976 Sep 9 23:46:06.101177 sshd-session[4864]: pam_unix(sshd:session): session closed for user core Sep 9 23:46:06.113803 systemd-logind[1977]: Session 12 logged out. Waiting for processes to exit. Sep 9 23:46:06.116558 systemd[1]: sshd@11-172.31.27.236:22-139.178.89.65:43976.service: Deactivated successfully. Sep 9 23:46:06.124669 systemd[1]: session-12.scope: Deactivated successfully. Sep 9 23:46:06.153492 systemd-logind[1977]: Removed session 12. Sep 9 23:46:06.156522 systemd[1]: Started sshd@12-172.31.27.236:22-139.178.89.65:43984.service - OpenSSH per-connection server daemon (139.178.89.65:43984). 
Sep 9 23:46:06.360883 sshd[4876]: Accepted publickey for core from 139.178.89.65 port 43984 ssh2: RSA SHA256:qHlHyIWOCFGyLN0DNo6M0sQy+OrgAlHw4s82lYsZXi8 Sep 9 23:46:06.363579 sshd-session[4876]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 23:46:06.376004 systemd-logind[1977]: New session 13 of user core. Sep 9 23:46:06.382185 systemd[1]: Started session-13.scope - Session 13 of User core. Sep 9 23:46:06.630367 sshd[4879]: Connection closed by 139.178.89.65 port 43984 Sep 9 23:46:06.630182 sshd-session[4876]: pam_unix(sshd:session): session closed for user core Sep 9 23:46:06.638486 systemd[1]: sshd@12-172.31.27.236:22-139.178.89.65:43984.service: Deactivated successfully. Sep 9 23:46:06.643701 systemd[1]: session-13.scope: Deactivated successfully. Sep 9 23:46:06.645743 systemd-logind[1977]: Session 13 logged out. Waiting for processes to exit. Sep 9 23:46:06.649301 systemd-logind[1977]: Removed session 13. Sep 9 23:46:11.673817 systemd[1]: Started sshd@13-172.31.27.236:22-139.178.89.65:42906.service - OpenSSH per-connection server daemon (139.178.89.65:42906). Sep 9 23:46:11.875575 sshd[4891]: Accepted publickey for core from 139.178.89.65 port 42906 ssh2: RSA SHA256:qHlHyIWOCFGyLN0DNo6M0sQy+OrgAlHw4s82lYsZXi8 Sep 9 23:46:11.878003 sshd-session[4891]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 23:46:11.887637 systemd-logind[1977]: New session 14 of user core. Sep 9 23:46:11.895177 systemd[1]: Started session-14.scope - Session 14 of User core. Sep 9 23:46:12.147923 sshd[4894]: Connection closed by 139.178.89.65 port 42906 Sep 9 23:46:12.148159 sshd-session[4891]: pam_unix(sshd:session): session closed for user core Sep 9 23:46:12.156030 systemd[1]: sshd@13-172.31.27.236:22-139.178.89.65:42906.service: Deactivated successfully. Sep 9 23:46:12.160301 systemd[1]: session-14.scope: Deactivated successfully. Sep 9 23:46:12.163231 systemd-logind[1977]: Session 14 logged out. 
Waiting for processes to exit. Sep 9 23:46:12.166397 systemd-logind[1977]: Removed session 14. Sep 9 23:46:17.187055 systemd[1]: Started sshd@14-172.31.27.236:22-139.178.89.65:42922.service - OpenSSH per-connection server daemon (139.178.89.65:42922). Sep 9 23:46:17.398707 sshd[4906]: Accepted publickey for core from 139.178.89.65 port 42922 ssh2: RSA SHA256:qHlHyIWOCFGyLN0DNo6M0sQy+OrgAlHw4s82lYsZXi8 Sep 9 23:46:17.401092 sshd-session[4906]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 23:46:17.410066 systemd-logind[1977]: New session 15 of user core. Sep 9 23:46:17.418166 systemd[1]: Started session-15.scope - Session 15 of User core. Sep 9 23:46:17.671473 sshd[4909]: Connection closed by 139.178.89.65 port 42922 Sep 9 23:46:17.672302 sshd-session[4906]: pam_unix(sshd:session): session closed for user core Sep 9 23:46:17.680108 systemd[1]: sshd@14-172.31.27.236:22-139.178.89.65:42922.service: Deactivated successfully. Sep 9 23:46:17.686317 systemd[1]: session-15.scope: Deactivated successfully. Sep 9 23:46:17.690398 systemd-logind[1977]: Session 15 logged out. Waiting for processes to exit. Sep 9 23:46:17.693207 systemd-logind[1977]: Removed session 15. Sep 9 23:46:22.706257 systemd[1]: Started sshd@15-172.31.27.236:22-139.178.89.65:48606.service - OpenSSH per-connection server daemon (139.178.89.65:48606). Sep 9 23:46:22.904373 sshd[4922]: Accepted publickey for core from 139.178.89.65 port 48606 ssh2: RSA SHA256:qHlHyIWOCFGyLN0DNo6M0sQy+OrgAlHw4s82lYsZXi8 Sep 9 23:46:22.906853 sshd-session[4922]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 23:46:22.916743 systemd-logind[1977]: New session 16 of user core. Sep 9 23:46:22.920148 systemd[1]: Started session-16.scope - Session 16 of User core. 
Sep 9 23:46:23.167949 sshd[4925]: Connection closed by 139.178.89.65 port 48606 Sep 9 23:46:23.168777 sshd-session[4922]: pam_unix(sshd:session): session closed for user core Sep 9 23:46:23.176539 systemd[1]: sshd@15-172.31.27.236:22-139.178.89.65:48606.service: Deactivated successfully. Sep 9 23:46:23.180698 systemd[1]: session-16.scope: Deactivated successfully. Sep 9 23:46:23.184015 systemd-logind[1977]: Session 16 logged out. Waiting for processes to exit. Sep 9 23:46:23.186794 systemd-logind[1977]: Removed session 16. Sep 9 23:46:28.209371 systemd[1]: Started sshd@16-172.31.27.236:22-139.178.89.65:48620.service - OpenSSH per-connection server daemon (139.178.89.65:48620). Sep 9 23:46:28.410135 sshd[4938]: Accepted publickey for core from 139.178.89.65 port 48620 ssh2: RSA SHA256:qHlHyIWOCFGyLN0DNo6M0sQy+OrgAlHw4s82lYsZXi8 Sep 9 23:46:28.412531 sshd-session[4938]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 23:46:28.420779 systemd-logind[1977]: New session 17 of user core. Sep 9 23:46:28.430194 systemd[1]: Started session-17.scope - Session 17 of User core. Sep 9 23:46:28.677809 sshd[4941]: Connection closed by 139.178.89.65 port 48620 Sep 9 23:46:28.679182 sshd-session[4938]: pam_unix(sshd:session): session closed for user core Sep 9 23:46:28.686963 systemd-logind[1977]: Session 17 logged out. Waiting for processes to exit. Sep 9 23:46:28.687450 systemd[1]: sshd@16-172.31.27.236:22-139.178.89.65:48620.service: Deactivated successfully. Sep 9 23:46:28.691883 systemd[1]: session-17.scope: Deactivated successfully. Sep 9 23:46:28.698036 systemd-logind[1977]: Removed session 17. Sep 9 23:46:28.719071 systemd[1]: Started sshd@17-172.31.27.236:22-139.178.89.65:48624.service - OpenSSH per-connection server daemon (139.178.89.65:48624). 
Sep 9 23:46:28.914385 sshd[4953]: Accepted publickey for core from 139.178.89.65 port 48624 ssh2: RSA SHA256:qHlHyIWOCFGyLN0DNo6M0sQy+OrgAlHw4s82lYsZXi8
Sep 9 23:46:28.916728 sshd-session[4953]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 23:46:28.926850 systemd-logind[1977]: New session 18 of user core.
Sep 9 23:46:28.932144 systemd[1]: Started session-18.scope - Session 18 of User core.
Sep 9 23:46:29.251733 sshd[4956]: Connection closed by 139.178.89.65 port 48624
Sep 9 23:46:29.252557 sshd-session[4953]: pam_unix(sshd:session): session closed for user core
Sep 9 23:46:29.259852 systemd[1]: sshd@17-172.31.27.236:22-139.178.89.65:48624.service: Deactivated successfully.
Sep 9 23:46:29.264418 systemd[1]: session-18.scope: Deactivated successfully.
Sep 9 23:46:29.268026 systemd-logind[1977]: Session 18 logged out. Waiting for processes to exit.
Sep 9 23:46:29.270357 systemd-logind[1977]: Removed session 18.
Sep 9 23:46:29.287748 systemd[1]: Started sshd@18-172.31.27.236:22-139.178.89.65:48632.service - OpenSSH per-connection server daemon (139.178.89.65:48632).
Sep 9 23:46:29.485709 sshd[4966]: Accepted publickey for core from 139.178.89.65 port 48632 ssh2: RSA SHA256:qHlHyIWOCFGyLN0DNo6M0sQy+OrgAlHw4s82lYsZXi8
Sep 9 23:46:29.488093 sshd-session[4966]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 23:46:29.495866 systemd-logind[1977]: New session 19 of user core.
Sep 9 23:46:29.509589 systemd[1]: Started session-19.scope - Session 19 of User core.
Sep 9 23:46:30.418123 sshd[4969]: Connection closed by 139.178.89.65 port 48632
Sep 9 23:46:30.420717 sshd-session[4966]: pam_unix(sshd:session): session closed for user core
Sep 9 23:46:30.429873 systemd[1]: sshd@18-172.31.27.236:22-139.178.89.65:48632.service: Deactivated successfully.
Sep 9 23:46:30.439068 systemd[1]: session-19.scope: Deactivated successfully.
Sep 9 23:46:30.445148 systemd-logind[1977]: Session 19 logged out. Waiting for processes to exit.
Sep 9 23:46:30.467070 systemd[1]: Started sshd@19-172.31.27.236:22-139.178.89.65:47242.service - OpenSSH per-connection server daemon (139.178.89.65:47242).
Sep 9 23:46:30.470746 systemd-logind[1977]: Removed session 19.
Sep 9 23:46:30.672114 sshd[4985]: Accepted publickey for core from 139.178.89.65 port 47242 ssh2: RSA SHA256:qHlHyIWOCFGyLN0DNo6M0sQy+OrgAlHw4s82lYsZXi8
Sep 9 23:46:30.673978 sshd-session[4985]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 23:46:30.683982 systemd-logind[1977]: New session 20 of user core.
Sep 9 23:46:30.689167 systemd[1]: Started session-20.scope - Session 20 of User core.
Sep 9 23:46:31.186528 sshd[4988]: Connection closed by 139.178.89.65 port 47242
Sep 9 23:46:31.187336 sshd-session[4985]: pam_unix(sshd:session): session closed for user core
Sep 9 23:46:31.196316 systemd[1]: sshd@19-172.31.27.236:22-139.178.89.65:47242.service: Deactivated successfully.
Sep 9 23:46:31.204245 systemd[1]: session-20.scope: Deactivated successfully.
Sep 9 23:46:31.207326 systemd-logind[1977]: Session 20 logged out. Waiting for processes to exit.
Sep 9 23:46:31.224395 systemd[1]: Started sshd@20-172.31.27.236:22-139.178.89.65:47258.service - OpenSSH per-connection server daemon (139.178.89.65:47258).
Sep 9 23:46:31.226522 systemd-logind[1977]: Removed session 20.
Sep 9 23:46:31.419584 sshd[4998]: Accepted publickey for core from 139.178.89.65 port 47258 ssh2: RSA SHA256:qHlHyIWOCFGyLN0DNo6M0sQy+OrgAlHw4s82lYsZXi8
Sep 9 23:46:31.421941 sshd-session[4998]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 23:46:31.431975 systemd-logind[1977]: New session 21 of user core.
Sep 9 23:46:31.439149 systemd[1]: Started session-21.scope - Session 21 of User core.
Sep 9 23:46:31.679144 sshd[5001]: Connection closed by 139.178.89.65 port 47258
Sep 9 23:46:31.679956 sshd-session[4998]: pam_unix(sshd:session): session closed for user core
Sep 9 23:46:31.687651 systemd[1]: sshd@20-172.31.27.236:22-139.178.89.65:47258.service: Deactivated successfully.
Sep 9 23:46:31.691486 systemd[1]: session-21.scope: Deactivated successfully.
Sep 9 23:46:31.698945 systemd-logind[1977]: Session 21 logged out. Waiting for processes to exit.
Sep 9 23:46:31.703632 systemd-logind[1977]: Removed session 21.
Sep 9 23:46:36.722399 systemd[1]: Started sshd@21-172.31.27.236:22-139.178.89.65:47274.service - OpenSSH per-connection server daemon (139.178.89.65:47274).
Sep 9 23:46:36.916129 sshd[5015]: Accepted publickey for core from 139.178.89.65 port 47274 ssh2: RSA SHA256:qHlHyIWOCFGyLN0DNo6M0sQy+OrgAlHw4s82lYsZXi8
Sep 9 23:46:36.918714 sshd-session[5015]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 23:46:36.928005 systemd-logind[1977]: New session 22 of user core.
Sep 9 23:46:36.939170 systemd[1]: Started session-22.scope - Session 22 of User core.
Sep 9 23:46:37.190571 sshd[5018]: Connection closed by 139.178.89.65 port 47274
Sep 9 23:46:37.191422 sshd-session[5015]: pam_unix(sshd:session): session closed for user core
Sep 9 23:46:37.198168 systemd[1]: sshd@21-172.31.27.236:22-139.178.89.65:47274.service: Deactivated successfully.
Sep 9 23:46:37.204480 systemd[1]: session-22.scope: Deactivated successfully.
Sep 9 23:46:37.206930 systemd-logind[1977]: Session 22 logged out. Waiting for processes to exit.
Sep 9 23:46:37.210584 systemd-logind[1977]: Removed session 22.
Sep 9 23:46:42.233263 systemd[1]: Started sshd@22-172.31.27.236:22-139.178.89.65:32872.service - OpenSSH per-connection server daemon (139.178.89.65:32872).
Sep 9 23:46:42.433519 sshd[5032]: Accepted publickey for core from 139.178.89.65 port 32872 ssh2: RSA SHA256:qHlHyIWOCFGyLN0DNo6M0sQy+OrgAlHw4s82lYsZXi8
Sep 9 23:46:42.436107 sshd-session[5032]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 23:46:42.448992 systemd-logind[1977]: New session 23 of user core.
Sep 9 23:46:42.456186 systemd[1]: Started session-23.scope - Session 23 of User core.
Sep 9 23:46:42.697520 sshd[5035]: Connection closed by 139.178.89.65 port 32872
Sep 9 23:46:42.697398 sshd-session[5032]: pam_unix(sshd:session): session closed for user core
Sep 9 23:46:42.704453 systemd[1]: sshd@22-172.31.27.236:22-139.178.89.65:32872.service: Deactivated successfully.
Sep 9 23:46:42.709144 systemd[1]: session-23.scope: Deactivated successfully.
Sep 9 23:46:42.711322 systemd-logind[1977]: Session 23 logged out. Waiting for processes to exit.
Sep 9 23:46:42.714511 systemd-logind[1977]: Removed session 23.
Sep 9 23:46:47.734325 systemd[1]: Started sshd@23-172.31.27.236:22-139.178.89.65:32876.service - OpenSSH per-connection server daemon (139.178.89.65:32876).
Sep 9 23:46:47.921674 sshd[5046]: Accepted publickey for core from 139.178.89.65 port 32876 ssh2: RSA SHA256:qHlHyIWOCFGyLN0DNo6M0sQy+OrgAlHw4s82lYsZXi8
Sep 9 23:46:47.924180 sshd-session[5046]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 23:46:47.932436 systemd-logind[1977]: New session 24 of user core.
Sep 9 23:46:47.941633 systemd[1]: Started session-24.scope - Session 24 of User core.
Sep 9 23:46:48.186573 sshd[5049]: Connection closed by 139.178.89.65 port 32876
Sep 9 23:46:48.187409 sshd-session[5046]: pam_unix(sshd:session): session closed for user core
Sep 9 23:46:48.195050 systemd[1]: sshd@23-172.31.27.236:22-139.178.89.65:32876.service: Deactivated successfully.
Sep 9 23:46:48.200569 systemd[1]: session-24.scope: Deactivated successfully.
Sep 9 23:46:48.204521 systemd-logind[1977]: Session 24 logged out. Waiting for processes to exit.
Sep 9 23:46:48.209433 systemd-logind[1977]: Removed session 24.
Sep 9 23:46:53.233536 systemd[1]: Started sshd@24-172.31.27.236:22-139.178.89.65:41616.service - OpenSSH per-connection server daemon (139.178.89.65:41616).
Sep 9 23:46:53.423778 sshd[5062]: Accepted publickey for core from 139.178.89.65 port 41616 ssh2: RSA SHA256:qHlHyIWOCFGyLN0DNo6M0sQy+OrgAlHw4s82lYsZXi8
Sep 9 23:46:53.426184 sshd-session[5062]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 23:46:53.435529 systemd-logind[1977]: New session 25 of user core.
Sep 9 23:46:53.444150 systemd[1]: Started session-25.scope - Session 25 of User core.
Sep 9 23:46:53.682577 sshd[5065]: Connection closed by 139.178.89.65 port 41616
Sep 9 23:46:53.683470 sshd-session[5062]: pam_unix(sshd:session): session closed for user core
Sep 9 23:46:53.690503 systemd[1]: sshd@24-172.31.27.236:22-139.178.89.65:41616.service: Deactivated successfully.
Sep 9 23:46:53.694595 systemd[1]: session-25.scope: Deactivated successfully.
Sep 9 23:46:53.697318 systemd-logind[1977]: Session 25 logged out. Waiting for processes to exit.
Sep 9 23:46:53.700399 systemd-logind[1977]: Removed session 25.
Sep 9 23:46:53.722339 systemd[1]: Started sshd@25-172.31.27.236:22-139.178.89.65:41626.service - OpenSSH per-connection server daemon (139.178.89.65:41626).
Sep 9 23:46:53.914550 sshd[5076]: Accepted publickey for core from 139.178.89.65 port 41626 ssh2: RSA SHA256:qHlHyIWOCFGyLN0DNo6M0sQy+OrgAlHw4s82lYsZXi8
Sep 9 23:46:53.916908 sshd-session[5076]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 23:46:53.924694 systemd-logind[1977]: New session 26 of user core.
Sep 9 23:46:53.933212 systemd[1]: Started session-26.scope - Session 26 of User core.
Sep 9 23:46:56.488128 containerd[1995]: time="2025-09-09T23:46:56.487874986Z" level=info msg="StopContainer for \"94d208297ea8711edd3df0d2b17abee2df8aaa06b9360aa5f8306539d158c928\" with timeout 30 (s)"
Sep 9 23:46:56.492076 containerd[1995]: time="2025-09-09T23:46:56.491988898Z" level=info msg="Stop container \"94d208297ea8711edd3df0d2b17abee2df8aaa06b9360aa5f8306539d158c928\" with signal terminated"
Sep 9 23:46:56.530162 systemd[1]: cri-containerd-94d208297ea8711edd3df0d2b17abee2df8aaa06b9360aa5f8306539d158c928.scope: Deactivated successfully.
Sep 9 23:46:56.539521 containerd[1995]: time="2025-09-09T23:46:56.539444194Z" level=info msg="received exit event container_id:\"94d208297ea8711edd3df0d2b17abee2df8aaa06b9360aa5f8306539d158c928\" id:\"94d208297ea8711edd3df0d2b17abee2df8aaa06b9360aa5f8306539d158c928\" pid:4013 exited_at:{seconds:1757461616 nanos:538453834}"
Sep 9 23:46:56.541592 containerd[1995]: time="2025-09-09T23:46:56.541363522Z" level=info msg="TaskExit event in podsandbox handler container_id:\"94d208297ea8711edd3df0d2b17abee2df8aaa06b9360aa5f8306539d158c928\" id:\"94d208297ea8711edd3df0d2b17abee2df8aaa06b9360aa5f8306539d158c928\" pid:4013 exited_at:{seconds:1757461616 nanos:538453834}"
Sep 9 23:46:56.548050 containerd[1995]: time="2025-09-09T23:46:56.547980382Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Sep 9 23:46:56.567661 containerd[1995]: time="2025-09-09T23:46:56.567401650Z" level=info msg="TaskExit event in podsandbox handler container_id:\"872d490f7957872ebca477199d85bde9dc631effbc5470c5bf44c54f979eeb13\" id:\"2c72f638b78f561a804c7293d6efd0fa5b5c60a92d722828f300be195d146a42\" pid:5107 exited_at:{seconds:1757461616 nanos:565114450}"
Sep 9 23:46:56.570719 containerd[1995]: time="2025-09-09T23:46:56.570674062Z" level=info msg="StopContainer for \"872d490f7957872ebca477199d85bde9dc631effbc5470c5bf44c54f979eeb13\" with timeout 2 (s)"
Sep 9 23:46:56.572137 containerd[1995]: time="2025-09-09T23:46:56.572009998Z" level=info msg="Stop container \"872d490f7957872ebca477199d85bde9dc631effbc5470c5bf44c54f979eeb13\" with signal terminated"
Sep 9 23:46:56.604785 systemd-networkd[1894]: lxc_health: Link DOWN
Sep 9 23:46:56.604800 systemd-networkd[1894]: lxc_health: Lost carrier
Sep 9 23:46:56.635541 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-94d208297ea8711edd3df0d2b17abee2df8aaa06b9360aa5f8306539d158c928-rootfs.mount: Deactivated successfully.
Sep 9 23:46:56.636682 containerd[1995]: time="2025-09-09T23:46:56.636567887Z" level=info msg="received exit event container_id:\"872d490f7957872ebca477199d85bde9dc631effbc5470c5bf44c54f979eeb13\" id:\"872d490f7957872ebca477199d85bde9dc631effbc5470c5bf44c54f979eeb13\" pid:4125 exited_at:{seconds:1757461616 nanos:635753591}"
Sep 9 23:46:56.638856 systemd[1]: cri-containerd-872d490f7957872ebca477199d85bde9dc631effbc5470c5bf44c54f979eeb13.scope: Deactivated successfully.
Sep 9 23:46:56.642100 systemd[1]: cri-containerd-872d490f7957872ebca477199d85bde9dc631effbc5470c5bf44c54f979eeb13.scope: Consumed 14.195s CPU time, 125.3M memory peak, 136K read from disk, 12.9M written to disk.
Sep 9 23:46:56.644880 containerd[1995]: time="2025-09-09T23:46:56.639182759Z" level=info msg="TaskExit event in podsandbox handler container_id:\"872d490f7957872ebca477199d85bde9dc631effbc5470c5bf44c54f979eeb13\" id:\"872d490f7957872ebca477199d85bde9dc631effbc5470c5bf44c54f979eeb13\" pid:4125 exited_at:{seconds:1757461616 nanos:635753591}"
Sep 9 23:46:56.656387 containerd[1995]: time="2025-09-09T23:46:56.656303567Z" level=info msg="StopContainer for \"94d208297ea8711edd3df0d2b17abee2df8aaa06b9360aa5f8306539d158c928\" returns successfully"
Sep 9 23:46:56.658616 containerd[1995]: time="2025-09-09T23:46:56.658562375Z" level=info msg="StopPodSandbox for \"89807440290c741af32af60cf3690546b64b8bd70e62d661f9d84b271c2919b4\""
Sep 9 23:46:56.659090 containerd[1995]: time="2025-09-09T23:46:56.658967603Z" level=info msg="Container to stop \"94d208297ea8711edd3df0d2b17abee2df8aaa06b9360aa5f8306539d158c928\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 9 23:46:56.682507 systemd[1]: cri-containerd-89807440290c741af32af60cf3690546b64b8bd70e62d661f9d84b271c2919b4.scope: Deactivated successfully.
Sep 9 23:46:56.689495 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-872d490f7957872ebca477199d85bde9dc631effbc5470c5bf44c54f979eeb13-rootfs.mount: Deactivated successfully.
Sep 9 23:46:56.696283 containerd[1995]: time="2025-09-09T23:46:56.695605043Z" level=info msg="TaskExit event in podsandbox handler container_id:\"89807440290c741af32af60cf3690546b64b8bd70e62d661f9d84b271c2919b4\" id:\"89807440290c741af32af60cf3690546b64b8bd70e62d661f9d84b271c2919b4\" pid:3509 exit_status:137 exited_at:{seconds:1757461616 nanos:693400499}"
Sep 9 23:46:56.710096 containerd[1995]: time="2025-09-09T23:46:56.710039807Z" level=info msg="StopContainer for \"872d490f7957872ebca477199d85bde9dc631effbc5470c5bf44c54f979eeb13\" returns successfully"
Sep 9 23:46:56.710711 containerd[1995]: time="2025-09-09T23:46:56.710665487Z" level=info msg="StopPodSandbox for \"9c8bb8198f01f72749621be539f46270866347326716def90ad19c067c0eeab1\""
Sep 9 23:46:56.710810 containerd[1995]: time="2025-09-09T23:46:56.710774567Z" level=info msg="Container to stop \"80c2784693665b2a0677011714489cd9cd54c71d5996fe25d06ff8021d7df3e4\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 9 23:46:56.710810 containerd[1995]: time="2025-09-09T23:46:56.710801423Z" level=info msg="Container to stop \"0f2a6826b781a79ec82e122c2fcbc7277b0d0f1af6f759ecab5f799745cb5be0\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 9 23:46:56.710961 containerd[1995]: time="2025-09-09T23:46:56.710822351Z" level=info msg="Container to stop \"872d490f7957872ebca477199d85bde9dc631effbc5470c5bf44c54f979eeb13\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 9 23:46:56.710961 containerd[1995]: time="2025-09-09T23:46:56.710842655Z" level=info msg="Container to stop \"750244eb0f93703c07a2ae768b116a50ab0c54eaadbc6aef1be67c57c37e341a\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 9 23:46:56.710961 containerd[1995]: time="2025-09-09T23:46:56.710863079Z" level=info msg="Container to stop \"6514ac74c7d933630360ebed1758cdc29282131682e25d34f357da14caa64c80\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 9 23:46:56.725232 systemd[1]: cri-containerd-9c8bb8198f01f72749621be539f46270866347326716def90ad19c067c0eeab1.scope: Deactivated successfully.
Sep 9 23:46:56.774018 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-89807440290c741af32af60cf3690546b64b8bd70e62d661f9d84b271c2919b4-rootfs.mount: Deactivated successfully.
Sep 9 23:46:56.780516 containerd[1995]: time="2025-09-09T23:46:56.780177071Z" level=info msg="shim disconnected" id=89807440290c741af32af60cf3690546b64b8bd70e62d661f9d84b271c2919b4 namespace=k8s.io
Sep 9 23:46:56.780516 containerd[1995]: time="2025-09-09T23:46:56.780244715Z" level=warning msg="cleaning up after shim disconnected" id=89807440290c741af32af60cf3690546b64b8bd70e62d661f9d84b271c2919b4 namespace=k8s.io
Sep 9 23:46:56.780516 containerd[1995]: time="2025-09-09T23:46:56.780296075Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 9 23:46:56.803319 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9c8bb8198f01f72749621be539f46270866347326716def90ad19c067c0eeab1-rootfs.mount: Deactivated successfully.
Sep 9 23:46:56.810213 containerd[1995]: time="2025-09-09T23:46:56.810148392Z" level=info msg="shim disconnected" id=9c8bb8198f01f72749621be539f46270866347326716def90ad19c067c0eeab1 namespace=k8s.io
Sep 9 23:46:56.810397 containerd[1995]: time="2025-09-09T23:46:56.810207684Z" level=warning msg="cleaning up after shim disconnected" id=9c8bb8198f01f72749621be539f46270866347326716def90ad19c067c0eeab1 namespace=k8s.io
Sep 9 23:46:56.810397 containerd[1995]: time="2025-09-09T23:46:56.810261420Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 9 23:46:56.825572 containerd[1995]: time="2025-09-09T23:46:56.825508056Z" level=info msg="received exit event sandbox_id:\"89807440290c741af32af60cf3690546b64b8bd70e62d661f9d84b271c2919b4\" exit_status:137 exited_at:{seconds:1757461616 nanos:693400499}"
Sep 9 23:46:56.826445 containerd[1995]: time="2025-09-09T23:46:56.826319568Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9c8bb8198f01f72749621be539f46270866347326716def90ad19c067c0eeab1\" id:\"9c8bb8198f01f72749621be539f46270866347326716def90ad19c067c0eeab1\" pid:3454 exit_status:137 exited_at:{seconds:1757461616 nanos:734250323}"
Sep 9 23:46:56.826640 containerd[1995]: time="2025-09-09T23:46:56.826592784Z" level=info msg="received exit event sandbox_id:\"9c8bb8198f01f72749621be539f46270866347326716def90ad19c067c0eeab1\" exit_status:137 exited_at:{seconds:1757461616 nanos:734250323}"
Sep 9 23:46:56.829711 containerd[1995]: time="2025-09-09T23:46:56.829033812Z" level=info msg="TearDown network for sandbox \"89807440290c741af32af60cf3690546b64b8bd70e62d661f9d84b271c2919b4\" successfully"
Sep 9 23:46:56.832529 containerd[1995]: time="2025-09-09T23:46:56.829867548Z" level=info msg="StopPodSandbox for \"89807440290c741af32af60cf3690546b64b8bd70e62d661f9d84b271c2919b4\" returns successfully"
Sep 9 23:46:56.832655 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-9c8bb8198f01f72749621be539f46270866347326716def90ad19c067c0eeab1-shm.mount: Deactivated successfully.
Sep 9 23:46:56.833088 containerd[1995]: time="2025-09-09T23:46:56.833046792Z" level=info msg="TearDown network for sandbox \"9c8bb8198f01f72749621be539f46270866347326716def90ad19c067c0eeab1\" successfully"
Sep 9 23:46:56.834916 containerd[1995]: time="2025-09-09T23:46:56.834824436Z" level=info msg="StopPodSandbox for \"9c8bb8198f01f72749621be539f46270866347326716def90ad19c067c0eeab1\" returns successfully"
Sep 9 23:46:56.913918 kubelet[3309]: I0909 23:46:56.913038 3309 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/8ca5a529-4b3c-4c0f-a232-fe5bcc8e4fb8-host-proc-sys-net\") pod \"8ca5a529-4b3c-4c0f-a232-fe5bcc8e4fb8\" (UID: \"8ca5a529-4b3c-4c0f-a232-fe5bcc8e4fb8\") "
Sep 9 23:46:56.913918 kubelet[3309]: I0909 23:46:56.913104 3309 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/8ca5a529-4b3c-4c0f-a232-fe5bcc8e4fb8-bpf-maps\") pod \"8ca5a529-4b3c-4c0f-a232-fe5bcc8e4fb8\" (UID: \"8ca5a529-4b3c-4c0f-a232-fe5bcc8e4fb8\") "
Sep 9 23:46:56.913918 kubelet[3309]: I0909 23:46:56.913150 3309 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r828p\" (UniqueName: \"kubernetes.io/projected/8ca5a529-4b3c-4c0f-a232-fe5bcc8e4fb8-kube-api-access-r828p\") pod \"8ca5a529-4b3c-4c0f-a232-fe5bcc8e4fb8\" (UID: \"8ca5a529-4b3c-4c0f-a232-fe5bcc8e4fb8\") "
Sep 9 23:46:56.913918 kubelet[3309]: I0909 23:46:56.913212 3309 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/8ca5a529-4b3c-4c0f-a232-fe5bcc8e4fb8-cilium-cgroup\") pod \"8ca5a529-4b3c-4c0f-a232-fe5bcc8e4fb8\" (UID: \"8ca5a529-4b3c-4c0f-a232-fe5bcc8e4fb8\") "
Sep 9 23:46:56.913918 kubelet[3309]: I0909 23:46:56.913256 3309 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/8ca5a529-4b3c-4c0f-a232-fe5bcc8e4fb8-cni-path\") pod \"8ca5a529-4b3c-4c0f-a232-fe5bcc8e4fb8\" (UID: \"8ca5a529-4b3c-4c0f-a232-fe5bcc8e4fb8\") "
Sep 9 23:46:56.913918 kubelet[3309]: I0909 23:46:56.913292 3309 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8ca5a529-4b3c-4c0f-a232-fe5bcc8e4fb8-etc-cni-netd\") pod \"8ca5a529-4b3c-4c0f-a232-fe5bcc8e4fb8\" (UID: \"8ca5a529-4b3c-4c0f-a232-fe5bcc8e4fb8\") "
Sep 9 23:46:56.914756 kubelet[3309]: I0909 23:46:56.913323 3309 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8ca5a529-4b3c-4c0f-a232-fe5bcc8e4fb8-lib-modules\") pod \"8ca5a529-4b3c-4c0f-a232-fe5bcc8e4fb8\" (UID: \"8ca5a529-4b3c-4c0f-a232-fe5bcc8e4fb8\") "
Sep 9 23:46:56.914756 kubelet[3309]: I0909 23:46:56.913358 3309 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8ca5a529-4b3c-4c0f-a232-fe5bcc8e4fb8-xtables-lock\") pod \"8ca5a529-4b3c-4c0f-a232-fe5bcc8e4fb8\" (UID: \"8ca5a529-4b3c-4c0f-a232-fe5bcc8e4fb8\") "
Sep 9 23:46:56.914756 kubelet[3309]: I0909 23:46:56.913399 3309 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3904429f-a1a9-421b-ab9d-bec24c605698-cilium-config-path\") pod \"3904429f-a1a9-421b-ab9d-bec24c605698\" (UID: \"3904429f-a1a9-421b-ab9d-bec24c605698\") "
Sep 9 23:46:56.917376 kubelet[3309]: I0909 23:46:56.913433 3309 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/8ca5a529-4b3c-4c0f-a232-fe5bcc8e4fb8-hostproc\") pod \"8ca5a529-4b3c-4c0f-a232-fe5bcc8e4fb8\" (UID: \"8ca5a529-4b3c-4c0f-a232-fe5bcc8e4fb8\") "
Sep 9 23:46:56.917871 kubelet[3309]: I0909 23:46:56.917803 3309 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/8ca5a529-4b3c-4c0f-a232-fe5bcc8e4fb8-hubble-tls\") pod \"8ca5a529-4b3c-4c0f-a232-fe5bcc8e4fb8\" (UID: \"8ca5a529-4b3c-4c0f-a232-fe5bcc8e4fb8\") "
Sep 9 23:46:56.917979 kubelet[3309]: I0909 23:46:56.917872 3309 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/8ca5a529-4b3c-4c0f-a232-fe5bcc8e4fb8-cilium-run\") pod \"8ca5a529-4b3c-4c0f-a232-fe5bcc8e4fb8\" (UID: \"8ca5a529-4b3c-4c0f-a232-fe5bcc8e4fb8\") "
Sep 9 23:46:56.917979 kubelet[3309]: I0909 23:46:56.917934 3309 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/8ca5a529-4b3c-4c0f-a232-fe5bcc8e4fb8-host-proc-sys-kernel\") pod \"8ca5a529-4b3c-4c0f-a232-fe5bcc8e4fb8\" (UID: \"8ca5a529-4b3c-4c0f-a232-fe5bcc8e4fb8\") "
Sep 9 23:46:56.918108 kubelet[3309]: I0909 23:46:56.917985 3309 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/8ca5a529-4b3c-4c0f-a232-fe5bcc8e4fb8-clustermesh-secrets\") pod \"8ca5a529-4b3c-4c0f-a232-fe5bcc8e4fb8\" (UID: \"8ca5a529-4b3c-4c0f-a232-fe5bcc8e4fb8\") "
Sep 9 23:46:56.918108 kubelet[3309]: I0909 23:46:56.918024 3309 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8ca5a529-4b3c-4c0f-a232-fe5bcc8e4fb8-cilium-config-path\") pod \"8ca5a529-4b3c-4c0f-a232-fe5bcc8e4fb8\" (UID: \"8ca5a529-4b3c-4c0f-a232-fe5bcc8e4fb8\") "
Sep 9 23:46:56.918108 kubelet[3309]: I0909 23:46:56.918063 3309 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-998jr\" (UniqueName: \"kubernetes.io/projected/3904429f-a1a9-421b-ab9d-bec24c605698-kube-api-access-998jr\") pod \"3904429f-a1a9-421b-ab9d-bec24c605698\" (UID: \"3904429f-a1a9-421b-ab9d-bec24c605698\") "
Sep 9 23:46:56.920219 kubelet[3309]: I0909 23:46:56.915991 3309 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8ca5a529-4b3c-4c0f-a232-fe5bcc8e4fb8-hostproc" (OuterVolumeSpecName: "hostproc") pod "8ca5a529-4b3c-4c0f-a232-fe5bcc8e4fb8" (UID: "8ca5a529-4b3c-4c0f-a232-fe5bcc8e4fb8"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 9 23:46:56.920816 kubelet[3309]: I0909 23:46:56.916030 3309 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8ca5a529-4b3c-4c0f-a232-fe5bcc8e4fb8-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "8ca5a529-4b3c-4c0f-a232-fe5bcc8e4fb8" (UID: "8ca5a529-4b3c-4c0f-a232-fe5bcc8e4fb8"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 9 23:46:56.920816 kubelet[3309]: I0909 23:46:56.916053 3309 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8ca5a529-4b3c-4c0f-a232-fe5bcc8e4fb8-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "8ca5a529-4b3c-4c0f-a232-fe5bcc8e4fb8" (UID: "8ca5a529-4b3c-4c0f-a232-fe5bcc8e4fb8"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 9 23:46:56.920816 kubelet[3309]: I0909 23:46:56.919224 3309 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8ca5a529-4b3c-4c0f-a232-fe5bcc8e4fb8-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "8ca5a529-4b3c-4c0f-a232-fe5bcc8e4fb8" (UID: "8ca5a529-4b3c-4c0f-a232-fe5bcc8e4fb8"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 9 23:46:56.920816 kubelet[3309]: I0909 23:46:56.919285 3309 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8ca5a529-4b3c-4c0f-a232-fe5bcc8e4fb8-cni-path" (OuterVolumeSpecName: "cni-path") pod "8ca5a529-4b3c-4c0f-a232-fe5bcc8e4fb8" (UID: "8ca5a529-4b3c-4c0f-a232-fe5bcc8e4fb8"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 9 23:46:56.920816 kubelet[3309]: I0909 23:46:56.919308 3309 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8ca5a529-4b3c-4c0f-a232-fe5bcc8e4fb8-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "8ca5a529-4b3c-4c0f-a232-fe5bcc8e4fb8" (UID: "8ca5a529-4b3c-4c0f-a232-fe5bcc8e4fb8"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 9 23:46:56.921822 kubelet[3309]: I0909 23:46:56.919442 3309 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8ca5a529-4b3c-4c0f-a232-fe5bcc8e4fb8-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "8ca5a529-4b3c-4c0f-a232-fe5bcc8e4fb8" (UID: "8ca5a529-4b3c-4c0f-a232-fe5bcc8e4fb8"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 9 23:46:56.921822 kubelet[3309]: I0909 23:46:56.919472 3309 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8ca5a529-4b3c-4c0f-a232-fe5bcc8e4fb8-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "8ca5a529-4b3c-4c0f-a232-fe5bcc8e4fb8" (UID: "8ca5a529-4b3c-4c0f-a232-fe5bcc8e4fb8"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 9 23:46:56.923867 kubelet[3309]: I0909 23:46:56.923706 3309 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8ca5a529-4b3c-4c0f-a232-fe5bcc8e4fb8-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "8ca5a529-4b3c-4c0f-a232-fe5bcc8e4fb8" (UID: "8ca5a529-4b3c-4c0f-a232-fe5bcc8e4fb8"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 9 23:46:56.925055 kubelet[3309]: I0909 23:46:56.923808 3309 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8ca5a529-4b3c-4c0f-a232-fe5bcc8e4fb8-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "8ca5a529-4b3c-4c0f-a232-fe5bcc8e4fb8" (UID: "8ca5a529-4b3c-4c0f-a232-fe5bcc8e4fb8"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 9 23:46:56.929068 kubelet[3309]: I0909 23:46:56.928989 3309 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8ca5a529-4b3c-4c0f-a232-fe5bcc8e4fb8-kube-api-access-r828p" (OuterVolumeSpecName: "kube-api-access-r828p") pod "8ca5a529-4b3c-4c0f-a232-fe5bcc8e4fb8" (UID: "8ca5a529-4b3c-4c0f-a232-fe5bcc8e4fb8"). InnerVolumeSpecName "kube-api-access-r828p". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Sep 9 23:46:56.933563 kubelet[3309]: I0909 23:46:56.933436 3309 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3904429f-a1a9-421b-ab9d-bec24c605698-kube-api-access-998jr" (OuterVolumeSpecName: "kube-api-access-998jr") pod "3904429f-a1a9-421b-ab9d-bec24c605698" (UID: "3904429f-a1a9-421b-ab9d-bec24c605698"). InnerVolumeSpecName "kube-api-access-998jr". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Sep 9 23:46:56.933991 kubelet[3309]: I0909 23:46:56.933811 3309 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8ca5a529-4b3c-4c0f-a232-fe5bcc8e4fb8-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "8ca5a529-4b3c-4c0f-a232-fe5bcc8e4fb8" (UID: "8ca5a529-4b3c-4c0f-a232-fe5bcc8e4fb8"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Sep 9 23:46:56.934600 kubelet[3309]: I0909 23:46:56.934522 3309 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8ca5a529-4b3c-4c0f-a232-fe5bcc8e4fb8-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "8ca5a529-4b3c-4c0f-a232-fe5bcc8e4fb8" (UID: "8ca5a529-4b3c-4c0f-a232-fe5bcc8e4fb8"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Sep 9 23:46:56.936254 kubelet[3309]: I0909 23:46:56.936215 3309 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8ca5a529-4b3c-4c0f-a232-fe5bcc8e4fb8-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "8ca5a529-4b3c-4c0f-a232-fe5bcc8e4fb8" (UID: "8ca5a529-4b3c-4c0f-a232-fe5bcc8e4fb8"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Sep 9 23:46:56.936586 kubelet[3309]: I0909 23:46:56.936526 3309 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3904429f-a1a9-421b-ab9d-bec24c605698-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "3904429f-a1a9-421b-ab9d-bec24c605698" (UID: "3904429f-a1a9-421b-ab9d-bec24c605698"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Sep 9 23:46:57.019229 kubelet[3309]: I0909 23:46:57.019161 3309 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/8ca5a529-4b3c-4c0f-a232-fe5bcc8e4fb8-cni-path\") on node \"ip-172-31-27-236\" DevicePath \"\""
Sep 9 23:46:57.019229 kubelet[3309]: I0909 23:46:57.019220 3309 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8ca5a529-4b3c-4c0f-a232-fe5bcc8e4fb8-etc-cni-netd\") on node \"ip-172-31-27-236\" DevicePath \"\""
Sep 9 23:46:57.019433 kubelet[3309]: I0909 23:46:57.019246 3309 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8ca5a529-4b3c-4c0f-a232-fe5bcc8e4fb8-lib-modules\") on node \"ip-172-31-27-236\" DevicePath \"\""
Sep 9 23:46:57.019433 kubelet[3309]: I0909 23:46:57.019271 3309 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8ca5a529-4b3c-4c0f-a232-fe5bcc8e4fb8-xtables-lock\") on node \"ip-172-31-27-236\" DevicePath \"\""
Sep 9 23:46:57.019433 kubelet[3309]: I0909 23:46:57.019292 3309 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/8ca5a529-4b3c-4c0f-a232-fe5bcc8e4fb8-hostproc\") on node \"ip-172-31-27-236\" DevicePath \"\""
Sep 9 23:46:57.019433 kubelet[3309]: I0909 23:46:57.019312 3309 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/8ca5a529-4b3c-4c0f-a232-fe5bcc8e4fb8-hubble-tls\") on node \"ip-172-31-27-236\" DevicePath \"\""
Sep 9 23:46:57.019433 kubelet[3309]: I0909 23:46:57.019332 3309 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/8ca5a529-4b3c-4c0f-a232-fe5bcc8e4fb8-cilium-run\") on node \"ip-172-31-27-236\" DevicePath \"\""
Sep 9 23:46:57.019433 kubelet[3309]: I0909 23:46:57.019352 3309 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/8ca5a529-4b3c-4c0f-a232-fe5bcc8e4fb8-host-proc-sys-kernel\") on node \"ip-172-31-27-236\" DevicePath \"\""
Sep 9 23:46:57.019433 kubelet[3309]: I0909 23:46:57.019380 3309 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3904429f-a1a9-421b-ab9d-bec24c605698-cilium-config-path\") on node \"ip-172-31-27-236\" DevicePath \"\""
Sep 9 23:46:57.019433 kubelet[3309]: I0909 23:46:57.019401 3309 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/8ca5a529-4b3c-4c0f-a232-fe5bcc8e4fb8-clustermesh-secrets\") on node \"ip-172-31-27-236\" DevicePath \"\""
Sep 9 23:46:57.020003 kubelet[3309]: I0909 23:46:57.019421 3309 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8ca5a529-4b3c-4c0f-a232-fe5bcc8e4fb8-cilium-config-path\") on node \"ip-172-31-27-236\" DevicePath \"\""
Sep 9 23:46:57.020003 kubelet[3309]: I0909 23:46:57.019442 3309 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-998jr\" (UniqueName: \"kubernetes.io/projected/3904429f-a1a9-421b-ab9d-bec24c605698-kube-api-access-998jr\") on node \"ip-172-31-27-236\" DevicePath \"\""
Sep 9 23:46:57.020003 kubelet[3309]: I0909 23:46:57.019462 3309 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/8ca5a529-4b3c-4c0f-a232-fe5bcc8e4fb8-host-proc-sys-net\") on node \"ip-172-31-27-236\" DevicePath \"\""
Sep 9 23:46:57.020003 kubelet[3309]: I0909 23:46:57.019483 3309 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/8ca5a529-4b3c-4c0f-a232-fe5bcc8e4fb8-bpf-maps\") on node \"ip-172-31-27-236\" DevicePath \"\""
Sep 9 23:46:57.020003 kubelet[3309]: I0909 23:46:57.019503 3309 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-r828p\" (UniqueName: \"kubernetes.io/projected/8ca5a529-4b3c-4c0f-a232-fe5bcc8e4fb8-kube-api-access-r828p\") on node \"ip-172-31-27-236\" DevicePath \"\""
Sep 9 23:46:57.020003 kubelet[3309]: I0909 23:46:57.019524 3309 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/8ca5a529-4b3c-4c0f-a232-fe5bcc8e4fb8-cilium-cgroup\") on node \"ip-172-31-27-236\" DevicePath \"\""
Sep 9 23:46:57.236939 kubelet[3309]: I0909 23:46:57.236151 3309 scope.go:117] "RemoveContainer" containerID="94d208297ea8711edd3df0d2b17abee2df8aaa06b9360aa5f8306539d158c928"
Sep 9 23:46:57.244152 containerd[1995]: time="2025-09-09T23:46:57.244059610Z" level=info msg="RemoveContainer for \"94d208297ea8711edd3df0d2b17abee2df8aaa06b9360aa5f8306539d158c928\""
Sep 9 23:46:57.256583 containerd[1995]: time="2025-09-09T23:46:57.256455094Z" level=info msg="RemoveContainer for \"94d208297ea8711edd3df0d2b17abee2df8aaa06b9360aa5f8306539d158c928\" returns successfully"
Sep 9 23:46:57.259910 kubelet[3309]: I0909 23:46:57.259842 3309 scope.go:117] "RemoveContainer" containerID="94d208297ea8711edd3df0d2b17abee2df8aaa06b9360aa5f8306539d158c928"
Sep 9 23:46:57.260452 containerd[1995]: time="2025-09-09T23:46:57.260367526Z" level=error msg="ContainerStatus for \"94d208297ea8711edd3df0d2b17abee2df8aaa06b9360aa5f8306539d158c928\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"94d208297ea8711edd3df0d2b17abee2df8aaa06b9360aa5f8306539d158c928\": not found"
Sep 9 23:46:57.262794 systemd[1]: Removed slice kubepods-besteffort-pod3904429f_a1a9_421b_ab9d_bec24c605698.slice - libcontainer container kubepods-besteffort-pod3904429f_a1a9_421b_ab9d_bec24c605698.slice.
Sep 9 23:46:57.264588 kubelet[3309]: E0909 23:46:57.263239 3309 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"94d208297ea8711edd3df0d2b17abee2df8aaa06b9360aa5f8306539d158c928\": not found" containerID="94d208297ea8711edd3df0d2b17abee2df8aaa06b9360aa5f8306539d158c928"
Sep 9 23:46:57.266577 kubelet[3309]: I0909 23:46:57.265042 3309 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"94d208297ea8711edd3df0d2b17abee2df8aaa06b9360aa5f8306539d158c928"} err="failed to get container status \"94d208297ea8711edd3df0d2b17abee2df8aaa06b9360aa5f8306539d158c928\": rpc error: code = NotFound desc = an error occurred when try to find container \"94d208297ea8711edd3df0d2b17abee2df8aaa06b9360aa5f8306539d158c928\": not found"
Sep 9 23:46:57.266577 kubelet[3309]: I0909 23:46:57.265209 3309 scope.go:117] "RemoveContainer" containerID="872d490f7957872ebca477199d85bde9dc631effbc5470c5bf44c54f979eeb13"
Sep 9 23:46:57.280035 systemd[1]: Removed slice kubepods-burstable-pod8ca5a529_4b3c_4c0f_a232_fe5bcc8e4fb8.slice - libcontainer container kubepods-burstable-pod8ca5a529_4b3c_4c0f_a232_fe5bcc8e4fb8.slice.
Sep 9 23:46:57.280267 systemd[1]: kubepods-burstable-pod8ca5a529_4b3c_4c0f_a232_fe5bcc8e4fb8.slice: Consumed 14.386s CPU time, 125.7M memory peak, 136K read from disk, 12.9M written to disk.
Sep 9 23:46:57.282527 containerd[1995]: time="2025-09-09T23:46:57.282404770Z" level=info msg="RemoveContainer for \"872d490f7957872ebca477199d85bde9dc631effbc5470c5bf44c54f979eeb13\""
Sep 9 23:46:57.295246 containerd[1995]: time="2025-09-09T23:46:57.295133890Z" level=info msg="RemoveContainer for \"872d490f7957872ebca477199d85bde9dc631effbc5470c5bf44c54f979eeb13\" returns successfully"
Sep 9 23:46:57.295777 kubelet[3309]: I0909 23:46:57.295728 3309 scope.go:117] "RemoveContainer" containerID="0f2a6826b781a79ec82e122c2fcbc7277b0d0f1af6f759ecab5f799745cb5be0"
Sep 9 23:46:57.301939 containerd[1995]: time="2025-09-09T23:46:57.301756558Z" level=info msg="RemoveContainer for \"0f2a6826b781a79ec82e122c2fcbc7277b0d0f1af6f759ecab5f799745cb5be0\""
Sep 9 23:46:57.316924 containerd[1995]: time="2025-09-09T23:46:57.316767934Z" level=info msg="RemoveContainer for \"0f2a6826b781a79ec82e122c2fcbc7277b0d0f1af6f759ecab5f799745cb5be0\" returns successfully"
Sep 9 23:46:57.317407 kubelet[3309]: I0909 23:46:57.317351 3309 scope.go:117] "RemoveContainer" containerID="80c2784693665b2a0677011714489cd9cd54c71d5996fe25d06ff8021d7df3e4"
Sep 9 23:46:57.324465 containerd[1995]: time="2025-09-09T23:46:57.324422398Z" level=info msg="RemoveContainer for \"80c2784693665b2a0677011714489cd9cd54c71d5996fe25d06ff8021d7df3e4\""
Sep 9 23:46:57.340789 containerd[1995]: time="2025-09-09T23:46:57.340710358Z" level=info msg="RemoveContainer for \"80c2784693665b2a0677011714489cd9cd54c71d5996fe25d06ff8021d7df3e4\" returns successfully"
Sep 9 23:46:57.341293 kubelet[3309]: I0909 23:46:57.341264 3309 scope.go:117] "RemoveContainer" containerID="6514ac74c7d933630360ebed1758cdc29282131682e25d34f357da14caa64c80"
Sep 9 23:46:57.345924 containerd[1995]: time="2025-09-09T23:46:57.345525706Z" level=info msg="RemoveContainer for \"6514ac74c7d933630360ebed1758cdc29282131682e25d34f357da14caa64c80\""
Sep 9 23:46:57.353952 containerd[1995]: time="2025-09-09T23:46:57.353839342Z" level=info msg="RemoveContainer for \"6514ac74c7d933630360ebed1758cdc29282131682e25d34f357da14caa64c80\" returns successfully"
Sep 9 23:46:57.354673 kubelet[3309]: I0909 23:46:57.354562 3309 scope.go:117] "RemoveContainer" containerID="750244eb0f93703c07a2ae768b116a50ab0c54eaadbc6aef1be67c57c37e341a"
Sep 9 23:46:57.358770 containerd[1995]: time="2025-09-09T23:46:57.358584694Z" level=info msg="RemoveContainer for \"750244eb0f93703c07a2ae768b116a50ab0c54eaadbc6aef1be67c57c37e341a\""
Sep 9 23:46:57.365470 containerd[1995]: time="2025-09-09T23:46:57.365312506Z" level=info msg="RemoveContainer for \"750244eb0f93703c07a2ae768b116a50ab0c54eaadbc6aef1be67c57c37e341a\" returns successfully"
Sep 9 23:46:57.365867 kubelet[3309]: I0909 23:46:57.365635 3309 scope.go:117] "RemoveContainer" containerID="872d490f7957872ebca477199d85bde9dc631effbc5470c5bf44c54f979eeb13"
Sep 9 23:46:57.366235 containerd[1995]: time="2025-09-09T23:46:57.366187030Z" level=error msg="ContainerStatus for \"872d490f7957872ebca477199d85bde9dc631effbc5470c5bf44c54f979eeb13\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"872d490f7957872ebca477199d85bde9dc631effbc5470c5bf44c54f979eeb13\": not found"
Sep 9 23:46:57.366782 kubelet[3309]: E0909 23:46:57.366726 3309 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"872d490f7957872ebca477199d85bde9dc631effbc5470c5bf44c54f979eeb13\": not found" containerID="872d490f7957872ebca477199d85bde9dc631effbc5470c5bf44c54f979eeb13"
Sep 9 23:46:57.366873 kubelet[3309]: I0909 23:46:57.366802 3309 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"872d490f7957872ebca477199d85bde9dc631effbc5470c5bf44c54f979eeb13"} err="failed to get container status \"872d490f7957872ebca477199d85bde9dc631effbc5470c5bf44c54f979eeb13\": rpc error: code = NotFound desc = an error occurred when try to find container \"872d490f7957872ebca477199d85bde9dc631effbc5470c5bf44c54f979eeb13\": not found"
Sep 9 23:46:57.366873 kubelet[3309]: I0909 23:46:57.366842 3309 scope.go:117] "RemoveContainer" containerID="0f2a6826b781a79ec82e122c2fcbc7277b0d0f1af6f759ecab5f799745cb5be0"
Sep 9 23:46:57.367395 containerd[1995]: time="2025-09-09T23:46:57.367298626Z" level=error msg="ContainerStatus for \"0f2a6826b781a79ec82e122c2fcbc7277b0d0f1af6f759ecab5f799745cb5be0\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"0f2a6826b781a79ec82e122c2fcbc7277b0d0f1af6f759ecab5f799745cb5be0\": not found"
Sep 9 23:46:57.368094 kubelet[3309]: E0909 23:46:57.367643 3309 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"0f2a6826b781a79ec82e122c2fcbc7277b0d0f1af6f759ecab5f799745cb5be0\": not found" containerID="0f2a6826b781a79ec82e122c2fcbc7277b0d0f1af6f759ecab5f799745cb5be0"
Sep 9 23:46:57.368094 kubelet[3309]: I0909 23:46:57.367685 3309 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"0f2a6826b781a79ec82e122c2fcbc7277b0d0f1af6f759ecab5f799745cb5be0"} err="failed to get container status \"0f2a6826b781a79ec82e122c2fcbc7277b0d0f1af6f759ecab5f799745cb5be0\": rpc error: code = NotFound desc = an error occurred when try to find container \"0f2a6826b781a79ec82e122c2fcbc7277b0d0f1af6f759ecab5f799745cb5be0\": not found"
Sep 9 23:46:57.368094 kubelet[3309]: I0909 23:46:57.367718 3309 scope.go:117] "RemoveContainer" containerID="80c2784693665b2a0677011714489cd9cd54c71d5996fe25d06ff8021d7df3e4"
Sep 9 23:46:57.368303 containerd[1995]: time="2025-09-09T23:46:57.368026894Z" level=error msg="ContainerStatus for \"80c2784693665b2a0677011714489cd9cd54c71d5996fe25d06ff8021d7df3e4\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"80c2784693665b2a0677011714489cd9cd54c71d5996fe25d06ff8021d7df3e4\": not found"
Sep 9 23:46:57.368715 kubelet[3309]: E0909 23:46:57.368661 3309 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"80c2784693665b2a0677011714489cd9cd54c71d5996fe25d06ff8021d7df3e4\": not found" containerID="80c2784693665b2a0677011714489cd9cd54c71d5996fe25d06ff8021d7df3e4"
Sep 9 23:46:57.368793 kubelet[3309]: I0909 23:46:57.368711 3309 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"80c2784693665b2a0677011714489cd9cd54c71d5996fe25d06ff8021d7df3e4"} err="failed to get container status \"80c2784693665b2a0677011714489cd9cd54c71d5996fe25d06ff8021d7df3e4\": rpc error: code = NotFound desc = an error occurred when try to find container \"80c2784693665b2a0677011714489cd9cd54c71d5996fe25d06ff8021d7df3e4\": not found"
Sep 9 23:46:57.368793 kubelet[3309]: I0909 23:46:57.368745 3309 scope.go:117] "RemoveContainer" containerID="6514ac74c7d933630360ebed1758cdc29282131682e25d34f357da14caa64c80"
Sep 9 23:46:57.369227 containerd[1995]: time="2025-09-09T23:46:57.369155014Z" level=error msg="ContainerStatus for \"6514ac74c7d933630360ebed1758cdc29282131682e25d34f357da14caa64c80\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"6514ac74c7d933630360ebed1758cdc29282131682e25d34f357da14caa64c80\": not found"
Sep 9 23:46:57.369577 kubelet[3309]: E0909 23:46:57.369544 3309 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"6514ac74c7d933630360ebed1758cdc29282131682e25d34f357da14caa64c80\": not found" containerID="6514ac74c7d933630360ebed1758cdc29282131682e25d34f357da14caa64c80"
Sep 9 23:46:57.369709 kubelet[3309]: I0909 23:46:57.369673 3309 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"6514ac74c7d933630360ebed1758cdc29282131682e25d34f357da14caa64c80"} err="failed to get container status \"6514ac74c7d933630360ebed1758cdc29282131682e25d34f357da14caa64c80\": rpc error: code = NotFound desc = an error occurred when try to find container \"6514ac74c7d933630360ebed1758cdc29282131682e25d34f357da14caa64c80\": not found"
Sep 9 23:46:57.369815 kubelet[3309]: I0909 23:46:57.369795 3309 scope.go:117] "RemoveContainer" containerID="750244eb0f93703c07a2ae768b116a50ab0c54eaadbc6aef1be67c57c37e341a"
Sep 9 23:46:57.370474 containerd[1995]: time="2025-09-09T23:46:57.370428718Z" level=error msg="ContainerStatus for \"750244eb0f93703c07a2ae768b116a50ab0c54eaadbc6aef1be67c57c37e341a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"750244eb0f93703c07a2ae768b116a50ab0c54eaadbc6aef1be67c57c37e341a\": not found"
Sep 9 23:46:57.371467 kubelet[3309]: E0909 23:46:57.371404 3309 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"750244eb0f93703c07a2ae768b116a50ab0c54eaadbc6aef1be67c57c37e341a\": not found" containerID="750244eb0f93703c07a2ae768b116a50ab0c54eaadbc6aef1be67c57c37e341a"
Sep 9 23:46:57.373022 kubelet[3309]: I0909 23:46:57.371461 3309 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"750244eb0f93703c07a2ae768b116a50ab0c54eaadbc6aef1be67c57c37e341a"} err="failed to get container status \"750244eb0f93703c07a2ae768b116a50ab0c54eaadbc6aef1be67c57c37e341a\": rpc error: code = NotFound desc = an error occurred when try to find container \"750244eb0f93703c07a2ae768b116a50ab0c54eaadbc6aef1be67c57c37e341a\": not found"
Sep 9 23:46:57.631048 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-89807440290c741af32af60cf3690546b64b8bd70e62d661f9d84b271c2919b4-shm.mount: Deactivated successfully.
Sep 9 23:46:57.631230 systemd[1]: var-lib-kubelet-pods-3904429f\x2da1a9\x2d421b\x2dab9d\x2dbec24c605698-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d998jr.mount: Deactivated successfully.
Sep 9 23:46:57.631363 systemd[1]: var-lib-kubelet-pods-8ca5a529\x2d4b3c\x2d4c0f\x2da232\x2dfe5bcc8e4fb8-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dr828p.mount: Deactivated successfully.
Sep 9 23:46:57.631493 systemd[1]: var-lib-kubelet-pods-8ca5a529\x2d4b3c\x2d4c0f\x2da232\x2dfe5bcc8e4fb8-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Sep 9 23:46:57.631624 systemd[1]: var-lib-kubelet-pods-8ca5a529\x2d4b3c\x2d4c0f\x2da232\x2dfe5bcc8e4fb8-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Sep 9 23:46:57.633118 kubelet[3309]: I0909 23:46:57.632836 3309 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3904429f-a1a9-421b-ab9d-bec24c605698" path="/var/lib/kubelet/pods/3904429f-a1a9-421b-ab9d-bec24c605698/volumes"
Sep 9 23:46:57.634797 kubelet[3309]: I0909 23:46:57.634757 3309 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8ca5a529-4b3c-4c0f-a232-fe5bcc8e4fb8" path="/var/lib/kubelet/pods/8ca5a529-4b3c-4c0f-a232-fe5bcc8e4fb8/volumes"
Sep 9 23:46:57.649991 containerd[1995]: time="2025-09-09T23:46:57.649864944Z" level=info msg="StopPodSandbox for \"9c8bb8198f01f72749621be539f46270866347326716def90ad19c067c0eeab1\""
Sep 9 23:46:57.651306 containerd[1995]: time="2025-09-09T23:46:57.650728908Z" level=info msg="TearDown network for sandbox \"9c8bb8198f01f72749621be539f46270866347326716def90ad19c067c0eeab1\" successfully"
Sep 9 23:46:57.651306 containerd[1995]: time="2025-09-09T23:46:57.650763768Z" level=info msg="StopPodSandbox for \"9c8bb8198f01f72749621be539f46270866347326716def90ad19c067c0eeab1\" returns successfully"
Sep 9 23:46:57.652055 containerd[1995]: time="2025-09-09T23:46:57.651994068Z" level=info msg="RemovePodSandbox for \"9c8bb8198f01f72749621be539f46270866347326716def90ad19c067c0eeab1\""
Sep 9 23:46:57.652139 containerd[1995]: time="2025-09-09T23:46:57.652051824Z" level=info msg="Forcibly stopping sandbox \"9c8bb8198f01f72749621be539f46270866347326716def90ad19c067c0eeab1\""
Sep 9 23:46:57.652218 containerd[1995]: time="2025-09-09T23:46:57.652184988Z" level=info msg="TearDown network for sandbox \"9c8bb8198f01f72749621be539f46270866347326716def90ad19c067c0eeab1\" successfully"
Sep 9 23:46:57.654506 containerd[1995]: time="2025-09-09T23:46:57.654440400Z" level=info msg="Ensure that sandbox 9c8bb8198f01f72749621be539f46270866347326716def90ad19c067c0eeab1 in task-service has been cleanup successfully"
Sep 9 23:46:57.660933 containerd[1995]: time="2025-09-09T23:46:57.660857076Z" level=info msg="RemovePodSandbox \"9c8bb8198f01f72749621be539f46270866347326716def90ad19c067c0eeab1\" returns successfully"
Sep 9 23:46:57.662033 containerd[1995]: time="2025-09-09T23:46:57.661658820Z" level=info msg="StopPodSandbox for \"89807440290c741af32af60cf3690546b64b8bd70e62d661f9d84b271c2919b4\""
Sep 9 23:46:57.662033 containerd[1995]: time="2025-09-09T23:46:57.661835328Z" level=info msg="TearDown network for sandbox \"89807440290c741af32af60cf3690546b64b8bd70e62d661f9d84b271c2919b4\" successfully"
Sep 9 23:46:57.662033 containerd[1995]: time="2025-09-09T23:46:57.661860696Z" level=info msg="StopPodSandbox for \"89807440290c741af32af60cf3690546b64b8bd70e62d661f9d84b271c2919b4\" returns successfully"
Sep 9 23:46:57.662659 containerd[1995]: time="2025-09-09T23:46:57.662602908Z" level=info msg="RemovePodSandbox for \"89807440290c741af32af60cf3690546b64b8bd70e62d661f9d84b271c2919b4\""
Sep 9 23:46:57.662745 containerd[1995]: time="2025-09-09T23:46:57.662655492Z" level=info msg="Forcibly stopping sandbox \"89807440290c741af32af60cf3690546b64b8bd70e62d661f9d84b271c2919b4\""
Sep 9 23:46:57.662823 containerd[1995]: time="2025-09-09T23:46:57.662788512Z" level=info msg="TearDown network for sandbox \"89807440290c741af32af60cf3690546b64b8bd70e62d661f9d84b271c2919b4\" successfully"
Sep 9 23:46:57.664762 containerd[1995]: time="2025-09-09T23:46:57.664697712Z" level=info msg="Ensure that sandbox 89807440290c741af32af60cf3690546b64b8bd70e62d661f9d84b271c2919b4 in task-service has been cleanup successfully"
Sep 9 23:46:57.671666 containerd[1995]: time="2025-09-09T23:46:57.671522100Z" level=info msg="RemovePodSandbox \"89807440290c741af32af60cf3690546b64b8bd70e62d661f9d84b271c2919b4\" returns successfully"
Sep 9 23:46:57.955452 kubelet[3309]: E0909 23:46:57.955316 3309 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Sep 9 23:46:58.411013 sshd[5079]: Connection closed by 139.178.89.65 port 41626
Sep 9 23:46:58.411821 sshd-session[5076]: pam_unix(sshd:session): session closed for user core
Sep 9 23:46:58.419743 systemd[1]: sshd@25-172.31.27.236:22-139.178.89.65:41626.service: Deactivated successfully.
Sep 9 23:46:58.424961 systemd[1]: session-26.scope: Deactivated successfully.
Sep 9 23:46:58.425587 systemd[1]: session-26.scope: Consumed 1.798s CPU time, 23.6M memory peak.
Sep 9 23:46:58.427065 systemd-logind[1977]: Session 26 logged out. Waiting for processes to exit.
Sep 9 23:46:58.430833 systemd-logind[1977]: Removed session 26.
Sep 9 23:46:58.445217 systemd[1]: Started sshd@26-172.31.27.236:22-139.178.89.65:41634.service - OpenSSH per-connection server daemon (139.178.89.65:41634).
Sep 9 23:46:58.638351 sshd[5235]: Accepted publickey for core from 139.178.89.65 port 41634 ssh2: RSA SHA256:qHlHyIWOCFGyLN0DNo6M0sQy+OrgAlHw4s82lYsZXi8
Sep 9 23:46:58.640668 sshd-session[5235]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 23:46:58.649984 systemd-logind[1977]: New session 27 of user core.
Sep 9 23:46:58.658213 systemd[1]: Started session-27.scope - Session 27 of User core.
Sep 9 23:46:58.816253 ntpd[1971]: Deleting interface #12 lxc_health, fe80::90d6:4bff:fe55:51d6%8#123, interface stats: received=0, sent=0, dropped=0, active_time=91 secs
Sep 9 23:46:58.816718 ntpd[1971]: 9 Sep 23:46:58 ntpd[1971]: Deleting interface #12 lxc_health, fe80::90d6:4bff:fe55:51d6%8#123, interface stats: received=0, sent=0, dropped=0, active_time=91 secs
Sep 9 23:46:59.757966 kubelet[3309]: I0909 23:46:59.757681 3309 setters.go:602] "Node became not ready" node="ip-172-31-27-236" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-09T23:46:59Z","lastTransitionTime":"2025-09-09T23:46:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Sep 9 23:47:00.356106 sshd[5238]: Connection closed by 139.178.89.65 port 41634
Sep 9 23:47:00.356556 sshd-session[5235]: pam_unix(sshd:session): session closed for user core
Sep 9 23:47:00.367835 systemd[1]: sshd@26-172.31.27.236:22-139.178.89.65:41634.service: Deactivated successfully.
Sep 9 23:47:00.376882 systemd[1]: session-27.scope: Deactivated successfully.
Sep 9 23:47:00.379610 systemd[1]: session-27.scope: Consumed 1.438s CPU time, 23.5M memory peak.
Sep 9 23:47:00.382104 systemd-logind[1977]: Session 27 logged out. Waiting for processes to exit.
Sep 9 23:47:00.411345 systemd[1]: Started sshd@27-172.31.27.236:22-139.178.89.65:37108.service - OpenSSH per-connection server daemon (139.178.89.65:37108).
Sep 9 23:47:00.417022 systemd-logind[1977]: Removed session 27.
Sep 9 23:47:00.420501 kubelet[3309]: I0909 23:47:00.420418 3309 memory_manager.go:355] "RemoveStaleState removing state" podUID="8ca5a529-4b3c-4c0f-a232-fe5bcc8e4fb8" containerName="cilium-agent"
Sep 9 23:47:00.420501 kubelet[3309]: I0909 23:47:00.420461 3309 memory_manager.go:355] "RemoveStaleState removing state" podUID="3904429f-a1a9-421b-ab9d-bec24c605698" containerName="cilium-operator"
Sep 9 23:47:00.451601 systemd[1]: Created slice kubepods-burstable-pod9970ac76_1d82_4d14_8f54_42d341a23ba7.slice - libcontainer container kubepods-burstable-pod9970ac76_1d82_4d14_8f54_42d341a23ba7.slice.
Sep 9 23:47:00.543667 kubelet[3309]: I0909 23:47:00.543175 3309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/9970ac76-1d82-4d14-8f54-42d341a23ba7-hubble-tls\") pod \"cilium-w6npr\" (UID: \"9970ac76-1d82-4d14-8f54-42d341a23ba7\") " pod="kube-system/cilium-w6npr"
Sep 9 23:47:00.543882 kubelet[3309]: I0909 23:47:00.543855 3309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/9970ac76-1d82-4d14-8f54-42d341a23ba7-clustermesh-secrets\") pod \"cilium-w6npr\" (UID: \"9970ac76-1d82-4d14-8f54-42d341a23ba7\") " pod="kube-system/cilium-w6npr"
Sep 9 23:47:00.544063 kubelet[3309]: I0909 23:47:00.544037 3309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/9970ac76-1d82-4d14-8f54-42d341a23ba7-hostproc\") pod \"cilium-w6npr\" (UID: \"9970ac76-1d82-4d14-8f54-42d341a23ba7\") " pod="kube-system/cilium-w6npr"
Sep 9 23:47:00.545152 kubelet[3309]: I0909 23:47:00.544172 3309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/9970ac76-1d82-4d14-8f54-42d341a23ba7-cilium-cgroup\") pod \"cilium-w6npr\" (UID: \"9970ac76-1d82-4d14-8f54-42d341a23ba7\") " pod="kube-system/cilium-w6npr"
Sep 9 23:47:00.545937 kubelet[3309]: I0909 23:47:00.545470 3309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/9970ac76-1d82-4d14-8f54-42d341a23ba7-cilium-ipsec-secrets\") pod \"cilium-w6npr\" (UID: \"9970ac76-1d82-4d14-8f54-42d341a23ba7\") " pod="kube-system/cilium-w6npr"
Sep 9 23:47:00.545937 kubelet[3309]: I0909 23:47:00.545525 3309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/9970ac76-1d82-4d14-8f54-42d341a23ba7-host-proc-sys-net\") pod \"cilium-w6npr\" (UID: \"9970ac76-1d82-4d14-8f54-42d341a23ba7\") " pod="kube-system/cilium-w6npr"
Sep 9 23:47:00.545937 kubelet[3309]: I0909 23:47:00.545571 3309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hkbfq\" (UniqueName: \"kubernetes.io/projected/9970ac76-1d82-4d14-8f54-42d341a23ba7-kube-api-access-hkbfq\") pod \"cilium-w6npr\" (UID: \"9970ac76-1d82-4d14-8f54-42d341a23ba7\") " pod="kube-system/cilium-w6npr"
Sep 9 23:47:00.545937 kubelet[3309]: I0909 23:47:00.545611 3309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/9970ac76-1d82-4d14-8f54-42d341a23ba7-cni-path\") pod \"cilium-w6npr\" (UID: \"9970ac76-1d82-4d14-8f54-42d341a23ba7\") " pod="kube-system/cilium-w6npr"
Sep 9 23:47:00.545937 kubelet[3309]: I0909 23:47:00.545646 3309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9970ac76-1d82-4d14-8f54-42d341a23ba7-xtables-lock\") pod \"cilium-w6npr\" (UID: \"9970ac76-1d82-4d14-8f54-42d341a23ba7\") " pod="kube-system/cilium-w6npr"
Sep 9 23:47:00.546219 kubelet[3309]: I0909 23:47:00.545682 3309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9970ac76-1d82-4d14-8f54-42d341a23ba7-cilium-config-path\") pod \"cilium-w6npr\" (UID: \"9970ac76-1d82-4d14-8f54-42d341a23ba7\") " pod="kube-system/cilium-w6npr"
Sep 9 23:47:00.546219 kubelet[3309]: I0909 23:47:00.545717 3309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9970ac76-1d82-4d14-8f54-42d341a23ba7-lib-modules\") pod \"cilium-w6npr\" (UID: \"9970ac76-1d82-4d14-8f54-42d341a23ba7\") " pod="kube-system/cilium-w6npr"
Sep 9 23:47:00.546219 kubelet[3309]: I0909 23:47:00.545754 3309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/9970ac76-1d82-4d14-8f54-42d341a23ba7-cilium-run\") pod \"cilium-w6npr\" (UID: \"9970ac76-1d82-4d14-8f54-42d341a23ba7\") " pod="kube-system/cilium-w6npr"
Sep 9 23:47:00.546219 kubelet[3309]: I0909 23:47:00.545786 3309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/9970ac76-1d82-4d14-8f54-42d341a23ba7-bpf-maps\") pod \"cilium-w6npr\" (UID: \"9970ac76-1d82-4d14-8f54-42d341a23ba7\") " pod="kube-system/cilium-w6npr"
Sep 9 23:47:00.546219 kubelet[3309]: I0909 23:47:00.545821 3309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9970ac76-1d82-4d14-8f54-42d341a23ba7-etc-cni-netd\") pod \"cilium-w6npr\" (UID: \"9970ac76-1d82-4d14-8f54-42d341a23ba7\") " pod="kube-system/cilium-w6npr"
Sep 9 23:47:00.546219 kubelet[3309]: I0909 23:47:00.545855 3309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/9970ac76-1d82-4d14-8f54-42d341a23ba7-host-proc-sys-kernel\") pod \"cilium-w6npr\" (UID: \"9970ac76-1d82-4d14-8f54-42d341a23ba7\") " pod="kube-system/cilium-w6npr"
Sep 9 23:47:00.649714 sshd[5248]: Accepted publickey for core from 139.178.89.65 port 37108 ssh2: RSA SHA256:qHlHyIWOCFGyLN0DNo6M0sQy+OrgAlHw4s82lYsZXi8
Sep 9 23:47:00.652791 sshd-session[5248]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 23:47:00.671495 systemd-logind[1977]: New session 28 of user core.
Sep 9 23:47:00.710606 systemd[1]: Started session-28.scope - Session 28 of User core.
Sep 9 23:47:00.763377 containerd[1995]: time="2025-09-09T23:47:00.763301583Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-w6npr,Uid:9970ac76-1d82-4d14-8f54-42d341a23ba7,Namespace:kube-system,Attempt:0,}"
Sep 9 23:47:00.799795 containerd[1995]: time="2025-09-09T23:47:00.799733199Z" level=info msg="connecting to shim 2a9458e43adbcda7027fb0629a4a7069000a881bb9c57e68d4fa12d6bbb5b65d" address="unix:///run/containerd/s/1c73e76727b0af115913dd85bd877e42634d5ce1919a90cc721f1fbf4649a758" namespace=k8s.io protocol=ttrpc version=3
Sep 9 23:47:00.847352 sshd[5255]: Connection closed by 139.178.89.65 port 37108
Sep 9 23:47:00.846279 sshd-session[5248]: pam_unix(sshd:session): session closed for user core
Sep 9 23:47:00.846209 systemd[1]: Started cri-containerd-2a9458e43adbcda7027fb0629a4a7069000a881bb9c57e68d4fa12d6bbb5b65d.scope - libcontainer container 2a9458e43adbcda7027fb0629a4a7069000a881bb9c57e68d4fa12d6bbb5b65d.
Sep 9 23:47:00.863065 systemd[1]: sshd@27-172.31.27.236:22-139.178.89.65:37108.service: Deactivated successfully.
Sep 9 23:47:00.868735 systemd[1]: session-28.scope: Deactivated successfully.
Sep 9 23:47:00.874101 systemd-logind[1977]: Session 28 logged out. Waiting for processes to exit.
Sep 9 23:47:00.895330 systemd[1]: Started sshd@28-172.31.27.236:22-139.178.89.65:37112.service - OpenSSH per-connection server daemon (139.178.89.65:37112).
Sep 9 23:47:00.897496 systemd-logind[1977]: Removed session 28.
Sep 9 23:47:00.935672 containerd[1995]: time="2025-09-09T23:47:00.935520604Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-w6npr,Uid:9970ac76-1d82-4d14-8f54-42d341a23ba7,Namespace:kube-system,Attempt:0,} returns sandbox id \"2a9458e43adbcda7027fb0629a4a7069000a881bb9c57e68d4fa12d6bbb5b65d\""
Sep 9 23:47:00.941842 containerd[1995]: time="2025-09-09T23:47:00.941629504Z" level=info msg="CreateContainer within sandbox \"2a9458e43adbcda7027fb0629a4a7069000a881bb9c57e68d4fa12d6bbb5b65d\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Sep 9 23:47:00.959655 containerd[1995]: time="2025-09-09T23:47:00.959337388Z" level=info msg="Container a5316d83bf33f4c49d02cf3eb329751709772371e1c02b6f3b44cf47c3a427c3: CDI devices from CRI Config.CDIDevices: []"
Sep 9 23:47:00.972702 containerd[1995]: time="2025-09-09T23:47:00.972628420Z" level=info msg="CreateContainer within sandbox \"2a9458e43adbcda7027fb0629a4a7069000a881bb9c57e68d4fa12d6bbb5b65d\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"a5316d83bf33f4c49d02cf3eb329751709772371e1c02b6f3b44cf47c3a427c3\""
Sep 9 23:47:00.973464 containerd[1995]: time="2025-09-09T23:47:00.973423924Z" level=info msg="StartContainer for \"a5316d83bf33f4c49d02cf3eb329751709772371e1c02b6f3b44cf47c3a427c3\""
Sep 9 23:47:00.977526 containerd[1995]: time="2025-09-09T23:47:00.977404360Z" level=info msg="connecting to shim a5316d83bf33f4c49d02cf3eb329751709772371e1c02b6f3b44cf47c3a427c3" address="unix:///run/containerd/s/1c73e76727b0af115913dd85bd877e42634d5ce1919a90cc721f1fbf4649a758" protocol=ttrpc version=3
Sep 9 23:47:01.009235 systemd[1]: Started cri-containerd-a5316d83bf33f4c49d02cf3eb329751709772371e1c02b6f3b44cf47c3a427c3.scope - libcontainer container a5316d83bf33f4c49d02cf3eb329751709772371e1c02b6f3b44cf47c3a427c3.
Sep 9 23:47:01.069366 containerd[1995]: time="2025-09-09T23:47:01.069304105Z" level=info msg="StartContainer for \"a5316d83bf33f4c49d02cf3eb329751709772371e1c02b6f3b44cf47c3a427c3\" returns successfully"
Sep 9 23:47:01.089145 systemd[1]: cri-containerd-a5316d83bf33f4c49d02cf3eb329751709772371e1c02b6f3b44cf47c3a427c3.scope: Deactivated successfully.
Sep 9 23:47:01.099379 containerd[1995]: time="2025-09-09T23:47:01.099068305Z" level=info msg="received exit event container_id:\"a5316d83bf33f4c49d02cf3eb329751709772371e1c02b6f3b44cf47c3a427c3\" id:\"a5316d83bf33f4c49d02cf3eb329751709772371e1c02b6f3b44cf47c3a427c3\" pid:5323 exited_at:{seconds:1757461621 nanos:98295697}"
Sep 9 23:47:01.099529 containerd[1995]: time="2025-09-09T23:47:01.099284065Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a5316d83bf33f4c49d02cf3eb329751709772371e1c02b6f3b44cf47c3a427c3\" id:\"a5316d83bf33f4c49d02cf3eb329751709772371e1c02b6f3b44cf47c3a427c3\" pid:5323 exited_at:{seconds:1757461621 nanos:98295697}"
Sep 9 23:47:01.122646 sshd[5302]: Accepted publickey for core from 139.178.89.65 port 37112 ssh2: RSA SHA256:qHlHyIWOCFGyLN0DNo6M0sQy+OrgAlHw4s82lYsZXi8
Sep 9 23:47:01.127445 sshd-session[5302]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 23:47:01.141545 systemd-logind[1977]: New session 29 of user core.
Sep 9 23:47:01.150178 systemd[1]: Started session-29.scope - Session 29 of User core.
Sep 9 23:47:01.280925 containerd[1995]: time="2025-09-09T23:47:01.280178966Z" level=info msg="CreateContainer within sandbox \"2a9458e43adbcda7027fb0629a4a7069000a881bb9c57e68d4fa12d6bbb5b65d\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Sep 9 23:47:01.298167 containerd[1995]: time="2025-09-09T23:47:01.298097942Z" level=info msg="Container 3affc02968ef207d9df01aaff3c7fbc6a595f2d96552434bb3e4ff511e7ac3f9: CDI devices from CRI Config.CDIDevices: []"
Sep 9 23:47:01.318409 containerd[1995]: time="2025-09-09T23:47:01.317315834Z" level=info msg="CreateContainer within sandbox \"2a9458e43adbcda7027fb0629a4a7069000a881bb9c57e68d4fa12d6bbb5b65d\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"3affc02968ef207d9df01aaff3c7fbc6a595f2d96552434bb3e4ff511e7ac3f9\""
Sep 9 23:47:01.320235 containerd[1995]: time="2025-09-09T23:47:01.320173370Z" level=info msg="StartContainer for \"3affc02968ef207d9df01aaff3c7fbc6a595f2d96552434bb3e4ff511e7ac3f9\""
Sep 9 23:47:01.330545 containerd[1995]: time="2025-09-09T23:47:01.330324098Z" level=info msg="connecting to shim 3affc02968ef207d9df01aaff3c7fbc6a595f2d96552434bb3e4ff511e7ac3f9" address="unix:///run/containerd/s/1c73e76727b0af115913dd85bd877e42634d5ce1919a90cc721f1fbf4649a758" protocol=ttrpc version=3
Sep 9 23:47:01.397174 systemd[1]: Started cri-containerd-3affc02968ef207d9df01aaff3c7fbc6a595f2d96552434bb3e4ff511e7ac3f9.scope - libcontainer container 3affc02968ef207d9df01aaff3c7fbc6a595f2d96552434bb3e4ff511e7ac3f9.
Sep 9 23:47:01.494833 containerd[1995]: time="2025-09-09T23:47:01.494767755Z" level=info msg="StartContainer for \"3affc02968ef207d9df01aaff3c7fbc6a595f2d96552434bb3e4ff511e7ac3f9\" returns successfully"
Sep 9 23:47:01.506079 systemd[1]: cri-containerd-3affc02968ef207d9df01aaff3c7fbc6a595f2d96552434bb3e4ff511e7ac3f9.scope: Deactivated successfully.
Sep 9 23:47:01.510176 containerd[1995]: time="2025-09-09T23:47:01.510061863Z" level=info msg="received exit event container_id:\"3affc02968ef207d9df01aaff3c7fbc6a595f2d96552434bb3e4ff511e7ac3f9\" id:\"3affc02968ef207d9df01aaff3c7fbc6a595f2d96552434bb3e4ff511e7ac3f9\" pid:5377 exited_at:{seconds:1757461621 nanos:508668531}"
Sep 9 23:47:01.510985 containerd[1995]: time="2025-09-09T23:47:01.510445911Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3affc02968ef207d9df01aaff3c7fbc6a595f2d96552434bb3e4ff511e7ac3f9\" id:\"3affc02968ef207d9df01aaff3c7fbc6a595f2d96552434bb3e4ff511e7ac3f9\" pid:5377 exited_at:{seconds:1757461621 nanos:508668531}"
Sep 9 23:47:02.287258 containerd[1995]: time="2025-09-09T23:47:02.287120403Z" level=info msg="CreateContainer within sandbox \"2a9458e43adbcda7027fb0629a4a7069000a881bb9c57e68d4fa12d6bbb5b65d\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Sep 9 23:47:02.317345 containerd[1995]: time="2025-09-09T23:47:02.317272143Z" level=info msg="Container 11ca24566740b2e7ddfc7d3de70fa22ef17d2f7ba73bb8ed8846c09091638021: CDI devices from CRI Config.CDIDevices: []"
Sep 9 23:47:02.339198 containerd[1995]: time="2025-09-09T23:47:02.339028299Z" level=info msg="CreateContainer within sandbox \"2a9458e43adbcda7027fb0629a4a7069000a881bb9c57e68d4fa12d6bbb5b65d\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"11ca24566740b2e7ddfc7d3de70fa22ef17d2f7ba73bb8ed8846c09091638021\""
Sep 9 23:47:02.341410 containerd[1995]: time="2025-09-09T23:47:02.341014623Z" level=info msg="StartContainer for \"11ca24566740b2e7ddfc7d3de70fa22ef17d2f7ba73bb8ed8846c09091638021\""
Sep 9 23:47:02.343898 containerd[1995]: time="2025-09-09T23:47:02.343806615Z" level=info msg="connecting to shim 11ca24566740b2e7ddfc7d3de70fa22ef17d2f7ba73bb8ed8846c09091638021" address="unix:///run/containerd/s/1c73e76727b0af115913dd85bd877e42634d5ce1919a90cc721f1fbf4649a758" protocol=ttrpc version=3
Sep 9 23:47:02.396218 systemd[1]: Started cri-containerd-11ca24566740b2e7ddfc7d3de70fa22ef17d2f7ba73bb8ed8846c09091638021.scope - libcontainer container 11ca24566740b2e7ddfc7d3de70fa22ef17d2f7ba73bb8ed8846c09091638021.
Sep 9 23:47:02.470083 systemd[1]: cri-containerd-11ca24566740b2e7ddfc7d3de70fa22ef17d2f7ba73bb8ed8846c09091638021.scope: Deactivated successfully.
Sep 9 23:47:02.474628 containerd[1995]: time="2025-09-09T23:47:02.474560068Z" level=info msg="received exit event container_id:\"11ca24566740b2e7ddfc7d3de70fa22ef17d2f7ba73bb8ed8846c09091638021\" id:\"11ca24566740b2e7ddfc7d3de70fa22ef17d2f7ba73bb8ed8846c09091638021\" pid:5420 exited_at:{seconds:1757461622 nanos:474093112}"
Sep 9 23:47:02.475246 containerd[1995]: time="2025-09-09T23:47:02.475156036Z" level=info msg="TaskExit event in podsandbox handler container_id:\"11ca24566740b2e7ddfc7d3de70fa22ef17d2f7ba73bb8ed8846c09091638021\" id:\"11ca24566740b2e7ddfc7d3de70fa22ef17d2f7ba73bb8ed8846c09091638021\" pid:5420 exited_at:{seconds:1757461622 nanos:474093112}"
Sep 9 23:47:02.477083 containerd[1995]: time="2025-09-09T23:47:02.477000148Z" level=info msg="StartContainer for \"11ca24566740b2e7ddfc7d3de70fa22ef17d2f7ba73bb8ed8846c09091638021\" returns successfully"
Sep 9 23:47:02.543793 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-11ca24566740b2e7ddfc7d3de70fa22ef17d2f7ba73bb8ed8846c09091638021-rootfs.mount: Deactivated successfully.
Sep 9 23:47:02.957073 kubelet[3309]: E0909 23:47:02.957018 3309 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Sep 9 23:47:03.294969 containerd[1995]: time="2025-09-09T23:47:03.294571060Z" level=info msg="CreateContainer within sandbox \"2a9458e43adbcda7027fb0629a4a7069000a881bb9c57e68d4fa12d6bbb5b65d\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Sep 9 23:47:03.316651 containerd[1995]: time="2025-09-09T23:47:03.316484800Z" level=info msg="Container abbf6c38bdcf23aab1f95d2ffefa199bde485388d4d1e7d1a41b88eb252b1bae: CDI devices from CRI Config.CDIDevices: []"
Sep 9 23:47:03.321558 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1130073385.mount: Deactivated successfully.
Sep 9 23:47:03.341876 containerd[1995]: time="2025-09-09T23:47:03.341800324Z" level=info msg="CreateContainer within sandbox \"2a9458e43adbcda7027fb0629a4a7069000a881bb9c57e68d4fa12d6bbb5b65d\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"abbf6c38bdcf23aab1f95d2ffefa199bde485388d4d1e7d1a41b88eb252b1bae\""
Sep 9 23:47:03.343872 containerd[1995]: time="2025-09-09T23:47:03.343811392Z" level=info msg="StartContainer for \"abbf6c38bdcf23aab1f95d2ffefa199bde485388d4d1e7d1a41b88eb252b1bae\""
Sep 9 23:47:03.345783 containerd[1995]: time="2025-09-09T23:47:03.345700480Z" level=info msg="connecting to shim abbf6c38bdcf23aab1f95d2ffefa199bde485388d4d1e7d1a41b88eb252b1bae" address="unix:///run/containerd/s/1c73e76727b0af115913dd85bd877e42634d5ce1919a90cc721f1fbf4649a758" protocol=ttrpc version=3
Sep 9 23:47:03.392258 systemd[1]: Started cri-containerd-abbf6c38bdcf23aab1f95d2ffefa199bde485388d4d1e7d1a41b88eb252b1bae.scope - libcontainer container abbf6c38bdcf23aab1f95d2ffefa199bde485388d4d1e7d1a41b88eb252b1bae.
Sep 9 23:47:03.448034 systemd[1]: cri-containerd-abbf6c38bdcf23aab1f95d2ffefa199bde485388d4d1e7d1a41b88eb252b1bae.scope: Deactivated successfully.
Sep 9 23:47:03.450540 containerd[1995]: time="2025-09-09T23:47:03.450423473Z" level=info msg="TaskExit event in podsandbox handler container_id:\"abbf6c38bdcf23aab1f95d2ffefa199bde485388d4d1e7d1a41b88eb252b1bae\" id:\"abbf6c38bdcf23aab1f95d2ffefa199bde485388d4d1e7d1a41b88eb252b1bae\" pid:5460 exited_at:{seconds:1757461623 nanos:448423265}"
Sep 9 23:47:03.452334 containerd[1995]: time="2025-09-09T23:47:03.452258921Z" level=info msg="received exit event container_id:\"abbf6c38bdcf23aab1f95d2ffefa199bde485388d4d1e7d1a41b88eb252b1bae\" id:\"abbf6c38bdcf23aab1f95d2ffefa199bde485388d4d1e7d1a41b88eb252b1bae\" pid:5460 exited_at:{seconds:1757461623 nanos:448423265}"
Sep 9 23:47:03.472310 containerd[1995]: time="2025-09-09T23:47:03.472052945Z" level=info msg="StartContainer for \"abbf6c38bdcf23aab1f95d2ffefa199bde485388d4d1e7d1a41b88eb252b1bae\" returns successfully"
Sep 9 23:47:03.499678 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-abbf6c38bdcf23aab1f95d2ffefa199bde485388d4d1e7d1a41b88eb252b1bae-rootfs.mount: Deactivated successfully.
Sep 9 23:47:04.304420 containerd[1995]: time="2025-09-09T23:47:04.304356053Z" level=info msg="CreateContainer within sandbox \"2a9458e43adbcda7027fb0629a4a7069000a881bb9c57e68d4fa12d6bbb5b65d\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Sep 9 23:47:04.336054 containerd[1995]: time="2025-09-09T23:47:04.335985437Z" level=info msg="Container 9927471a52cf08c78d578742bcc3797a24af13712354e922770762d25a996c4e: CDI devices from CRI Config.CDIDevices: []"
Sep 9 23:47:04.343166 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2561081446.mount: Deactivated successfully.
Sep 9 23:47:04.362817 containerd[1995]: time="2025-09-09T23:47:04.362743613Z" level=info msg="CreateContainer within sandbox \"2a9458e43adbcda7027fb0629a4a7069000a881bb9c57e68d4fa12d6bbb5b65d\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"9927471a52cf08c78d578742bcc3797a24af13712354e922770762d25a996c4e\""
Sep 9 23:47:04.364849 containerd[1995]: time="2025-09-09T23:47:04.364665389Z" level=info msg="StartContainer for \"9927471a52cf08c78d578742bcc3797a24af13712354e922770762d25a996c4e\""
Sep 9 23:47:04.366697 containerd[1995]: time="2025-09-09T23:47:04.366633005Z" level=info msg="connecting to shim 9927471a52cf08c78d578742bcc3797a24af13712354e922770762d25a996c4e" address="unix:///run/containerd/s/1c73e76727b0af115913dd85bd877e42634d5ce1919a90cc721f1fbf4649a758" protocol=ttrpc version=3
Sep 9 23:47:04.415176 systemd[1]: Started cri-containerd-9927471a52cf08c78d578742bcc3797a24af13712354e922770762d25a996c4e.scope - libcontainer container 9927471a52cf08c78d578742bcc3797a24af13712354e922770762d25a996c4e.
Sep 9 23:47:04.490542 containerd[1995]: time="2025-09-09T23:47:04.490184394Z" level=info msg="StartContainer for \"9927471a52cf08c78d578742bcc3797a24af13712354e922770762d25a996c4e\" returns successfully"
Sep 9 23:47:04.660341 containerd[1995]: time="2025-09-09T23:47:04.660279895Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9927471a52cf08c78d578742bcc3797a24af13712354e922770762d25a996c4e\" id:\"3a86769766fa3466dee719237c2189f9dec095aeb3aa3e0a2701f228e8fff19a\" pid:5529 exited_at:{seconds:1757461624 nanos:659293471}"
Sep 9 23:47:05.387200 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce))
Sep 9 23:47:05.781812 containerd[1995]: time="2025-09-09T23:47:05.781384100Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9927471a52cf08c78d578742bcc3797a24af13712354e922770762d25a996c4e\" id:\"88ad977b835c93c5ea989eb4fcdb5b3f237cb58f9748aa8726731e87f2c5532c\" pid:5607 exit_status:1 exited_at:{seconds:1757461625 nanos:780882308}"
Sep 9 23:47:05.791488 kubelet[3309]: E0909 23:47:05.791268 3309 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:49554->127.0.0.1:41441: write tcp 127.0.0.1:49554->127.0.0.1:41441: write: broken pipe
Sep 9 23:47:08.141401 containerd[1995]: time="2025-09-09T23:47:08.141301172Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9927471a52cf08c78d578742bcc3797a24af13712354e922770762d25a996c4e\" id:\"1d303bc1a90ff528276772f63485b5580982e66484caea102fdd6020f811e7f3\" pid:5713 exit_status:1 exited_at:{seconds:1757461628 nanos:140639396}"
Sep 9 23:47:09.857785 systemd-networkd[1894]: lxc_health: Link UP
Sep 9 23:47:09.867670 (udev-worker)[6037]: Network interface NamePolicy= disabled on kernel command line.
Sep 9 23:47:09.876746 systemd-networkd[1894]: lxc_health: Gained carrier
Sep 9 23:47:10.470096 containerd[1995]: time="2025-09-09T23:47:10.469801619Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9927471a52cf08c78d578742bcc3797a24af13712354e922770762d25a996c4e\" id:\"f0d9ff6cad2c07711af5a2d5a8e09cc0831e1b6487ec175055263d0723c8cb08\" pid:6066 exited_at:{seconds:1757461630 nanos:469158503}"
Sep 9 23:47:10.480922 kubelet[3309]: E0909 23:47:10.480268 3309 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:49564->127.0.0.1:41441: write tcp 127.0.0.1:49564->127.0.0.1:41441: write: broken pipe
Sep 9 23:47:10.796992 kubelet[3309]: I0909 23:47:10.796765 3309 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-w6npr" podStartSLOduration=10.796744825 podStartE2EDuration="10.796744825s" podCreationTimestamp="2025-09-09 23:47:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 23:47:05.357841122 +0000 UTC m=+127.935602928" watchObservedRunningTime="2025-09-09 23:47:10.796744825 +0000 UTC m=+133.374506619"
Sep 9 23:47:11.080145 systemd-networkd[1894]: lxc_health: Gained IPv6LL
Sep 9 23:47:12.732192 containerd[1995]: time="2025-09-09T23:47:12.731793459Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9927471a52cf08c78d578742bcc3797a24af13712354e922770762d25a996c4e\" id:\"bc3a27b3be2472cb2b1f6b8cd566ed900e3460064b977c4a8fbf841baa074b0f\" pid:6092 exited_at:{seconds:1757461632 nanos:730833903}"
Sep 9 23:47:12.743796 kubelet[3309]: E0909 23:47:12.742981 3309 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:49574->127.0.0.1:41441: write tcp 127.0.0.1:49574->127.0.0.1:41441: write: broken pipe
Sep 9 23:47:13.816304 ntpd[1971]: Listen normally on 15 lxc_health [fe80::c001:22ff:fe6e:1b95%14]:123
Sep 9 23:47:13.817280 ntpd[1971]: 9 Sep 23:47:13 ntpd[1971]: Listen normally on 15 lxc_health [fe80::c001:22ff:fe6e:1b95%14]:123
Sep 9 23:47:14.998778 containerd[1995]: time="2025-09-09T23:47:14.998707974Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9927471a52cf08c78d578742bcc3797a24af13712354e922770762d25a996c4e\" id:\"36a4c76a1ab3683555c9fa773f3fd04d5b24af1ba3c5d70b8e8c8df0e6d0edf9\" pid:6117 exited_at:{seconds:1757461634 nanos:998228694}"
Sep 9 23:47:17.239416 containerd[1995]: time="2025-09-09T23:47:17.239040053Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9927471a52cf08c78d578742bcc3797a24af13712354e922770762d25a996c4e\" id:\"33fc26f20de746653e861274aaac13afc3dc5862843c61556b08467f230922d8\" pid:6144 exited_at:{seconds:1757461637 nanos:238252841}"
Sep 9 23:47:17.278031 sshd[5359]: Connection closed by 139.178.89.65 port 37112
Sep 9 23:47:17.279309 sshd-session[5302]: pam_unix(sshd:session): session closed for user core
Sep 9 23:47:17.290426 systemd[1]: sshd@28-172.31.27.236:22-139.178.89.65:37112.service: Deactivated successfully.
Sep 9 23:47:17.299409 systemd[1]: session-29.scope: Deactivated successfully.
Sep 9 23:47:17.302229 systemd-logind[1977]: Session 29 logged out. Waiting for processes to exit.
Sep 9 23:47:17.305731 systemd-logind[1977]: Removed session 29.
Sep 9 23:47:30.906074 kubelet[3309]: E0909 23:47:30.905981 3309 controller.go:195] "Failed to update lease" err="Put \"https://172.31.27.236:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-27-236?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Sep 9 23:47:32.277026 systemd[1]: cri-containerd-43b59eb879846be6e03cf10632f7a50060e436657b8e6769da2bf8052810c317.scope: Deactivated successfully.
Sep 9 23:47:32.277588 systemd[1]: cri-containerd-43b59eb879846be6e03cf10632f7a50060e436657b8e6769da2bf8052810c317.scope: Consumed 4.320s CPU time, 55.3M memory peak.
Sep 9 23:47:32.282789 containerd[1995]: time="2025-09-09T23:47:32.282401492Z" level=info msg="received exit event container_id:\"43b59eb879846be6e03cf10632f7a50060e436657b8e6769da2bf8052810c317\" id:\"43b59eb879846be6e03cf10632f7a50060e436657b8e6769da2bf8052810c317\" pid:3131 exit_status:1 exited_at:{seconds:1757461652 nanos:282022016}"
Sep 9 23:47:32.282789 containerd[1995]: time="2025-09-09T23:47:32.282743648Z" level=info msg="TaskExit event in podsandbox handler container_id:\"43b59eb879846be6e03cf10632f7a50060e436657b8e6769da2bf8052810c317\" id:\"43b59eb879846be6e03cf10632f7a50060e436657b8e6769da2bf8052810c317\" pid:3131 exit_status:1 exited_at:{seconds:1757461652 nanos:282022016}"
Sep 9 23:47:32.325077 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-43b59eb879846be6e03cf10632f7a50060e436657b8e6769da2bf8052810c317-rootfs.mount: Deactivated successfully.
Sep 9 23:47:32.395743 kubelet[3309]: I0909 23:47:32.394671 3309 scope.go:117] "RemoveContainer" containerID="43b59eb879846be6e03cf10632f7a50060e436657b8e6769da2bf8052810c317"
Sep 9 23:47:32.400433 containerd[1995]: time="2025-09-09T23:47:32.400355516Z" level=info msg="CreateContainer within sandbox \"08942ff9b50f8f64fa9d8552829148abd6ce407b593ca3aab28513f805194506\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Sep 9 23:47:32.417921 containerd[1995]: time="2025-09-09T23:47:32.416236760Z" level=info msg="Container 45ddd0018351d7f94237e2bae89cc27bf1c9e42d39f5e7941c55c037814426e6: CDI devices from CRI Config.CDIDevices: []"
Sep 9 23:47:32.433741 containerd[1995]: time="2025-09-09T23:47:32.433676061Z" level=info msg="CreateContainer within sandbox \"08942ff9b50f8f64fa9d8552829148abd6ce407b593ca3aab28513f805194506\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"45ddd0018351d7f94237e2bae89cc27bf1c9e42d39f5e7941c55c037814426e6\""
Sep 9 23:47:32.434725 containerd[1995]: time="2025-09-09T23:47:32.434687769Z" level=info msg="StartContainer for \"45ddd0018351d7f94237e2bae89cc27bf1c9e42d39f5e7941c55c037814426e6\""
Sep 9 23:47:32.437125 containerd[1995]: time="2025-09-09T23:47:32.437023089Z" level=info msg="connecting to shim 45ddd0018351d7f94237e2bae89cc27bf1c9e42d39f5e7941c55c037814426e6" address="unix:///run/containerd/s/192ae742818d0fdc991d9b384f371a93766cca7ff84ca056a9ead5cf81628124" protocol=ttrpc version=3
Sep 9 23:47:32.484186 systemd[1]: Started cri-containerd-45ddd0018351d7f94237e2bae89cc27bf1c9e42d39f5e7941c55c037814426e6.scope - libcontainer container 45ddd0018351d7f94237e2bae89cc27bf1c9e42d39f5e7941c55c037814426e6.
Sep 9 23:47:32.565750 containerd[1995]: time="2025-09-09T23:47:32.565687593Z" level=info msg="StartContainer for \"45ddd0018351d7f94237e2bae89cc27bf1c9e42d39f5e7941c55c037814426e6\" returns successfully"
Sep 9 23:47:36.949482 systemd[1]: cri-containerd-3a4a41d41d21464483a938da4fd35a4230f9fdd426203589c901d7e94410f996.scope: Deactivated successfully.
Sep 9 23:47:36.951069 systemd[1]: cri-containerd-3a4a41d41d21464483a938da4fd35a4230f9fdd426203589c901d7e94410f996.scope: Consumed 5.979s CPU time, 20.3M memory peak.
Sep 9 23:47:36.954337 containerd[1995]: time="2025-09-09T23:47:36.954270471Z" level=info msg="received exit event container_id:\"3a4a41d41d21464483a938da4fd35a4230f9fdd426203589c901d7e94410f996\" id:\"3a4a41d41d21464483a938da4fd35a4230f9fdd426203589c901d7e94410f996\" pid:3149 exit_status:1 exited_at:{seconds:1757461656 nanos:953565627}"
Sep 9 23:47:36.955405 containerd[1995]: time="2025-09-09T23:47:36.955200219Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3a4a41d41d21464483a938da4fd35a4230f9fdd426203589c901d7e94410f996\" id:\"3a4a41d41d21464483a938da4fd35a4230f9fdd426203589c901d7e94410f996\" pid:3149 exit_status:1 exited_at:{seconds:1757461656 nanos:953565627}"
Sep 9 23:47:36.994186 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3a4a41d41d21464483a938da4fd35a4230f9fdd426203589c901d7e94410f996-rootfs.mount: Deactivated successfully.
Sep 9 23:47:37.416601 kubelet[3309]: I0909 23:47:37.416269 3309 scope.go:117] "RemoveContainer" containerID="3a4a41d41d21464483a938da4fd35a4230f9fdd426203589c901d7e94410f996"
Sep 9 23:47:37.419831 containerd[1995]: time="2025-09-09T23:47:37.419775493Z" level=info msg="CreateContainer within sandbox \"a23e878eb23632b65d1077780d2778d89c8588d44f700bae4fc35b41e4189148\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Sep 9 23:47:37.437207 containerd[1995]: time="2025-09-09T23:47:37.437125849Z" level=info msg="Container 712fce8f8978efcfa48ba3bcf1c781ed975090e472392380f6243330e354059e: CDI devices from CRI Config.CDIDevices: []"
Sep 9 23:47:37.457941 containerd[1995]: time="2025-09-09T23:47:37.457760977Z" level=info msg="CreateContainer within sandbox \"a23e878eb23632b65d1077780d2778d89c8588d44f700bae4fc35b41e4189148\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"712fce8f8978efcfa48ba3bcf1c781ed975090e472392380f6243330e354059e\""
Sep 9 23:47:37.459434 containerd[1995]: time="2025-09-09T23:47:37.459370585Z" level=info msg="StartContainer for \"712fce8f8978efcfa48ba3bcf1c781ed975090e472392380f6243330e354059e\""
Sep 9 23:47:37.462860 containerd[1995]: time="2025-09-09T23:47:37.462778261Z" level=info msg="connecting to shim 712fce8f8978efcfa48ba3bcf1c781ed975090e472392380f6243330e354059e" address="unix:///run/containerd/s/a75be31f2d12d7d7410d3a522fbdda4a533959673b80a74894fa2172997053ad" protocol=ttrpc version=3
Sep 9 23:47:37.501218 systemd[1]: Started cri-containerd-712fce8f8978efcfa48ba3bcf1c781ed975090e472392380f6243330e354059e.scope - libcontainer container 712fce8f8978efcfa48ba3bcf1c781ed975090e472392380f6243330e354059e.
Sep 9 23:47:37.587060 containerd[1995]: time="2025-09-09T23:47:37.586996958Z" level=info msg="StartContainer for \"712fce8f8978efcfa48ba3bcf1c781ed975090e472392380f6243330e354059e\" returns successfully"
Sep 9 23:47:40.907143 kubelet[3309]: E0909 23:47:40.906456 3309 controller.go:195] "Failed to update lease" err="Put \"https://172.31.27.236:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-27-236?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Sep 9 23:47:50.906866 kubelet[3309]: E0909 23:47:50.906773 3309 controller.go:195] "Failed to update lease" err="Put \"https://172.31.27.236:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-27-236?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"