Sep 4 23:45:29.226121 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083]
Sep 4 23:45:29.226167 kernel: Linux version 6.6.103-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.43 p3) 2.43.1) #1 SMP PREEMPT Thu Sep 4 22:21:25 -00 2025
Sep 4 23:45:29.226192 kernel: KASLR disabled due to lack of seed
Sep 4 23:45:29.226207 kernel: efi: EFI v2.7 by EDK II
Sep 4 23:45:29.226223 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7a731a98 MEMRESERVE=0x78557598
Sep 4 23:45:29.226238 kernel: secureboot: Secure boot disabled
Sep 4 23:45:29.226255 kernel: ACPI: Early table checksum verification disabled
Sep 4 23:45:29.226270 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON)
Sep 4 23:45:29.226287 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001 01000013)
Sep 4 23:45:29.226302 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001)
Sep 4 23:45:29.226322 kernel: ACPI: DSDT 0x0000000078640000 00159D (v02 AMAZON AMZNDSDT 00000001 INTL 20160527)
Sep 4 23:45:29.226337 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Sep 4 23:45:29.226352 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001)
Sep 4 23:45:29.226368 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001)
Sep 4 23:45:29.226386 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001)
Sep 4 23:45:29.226406 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Sep 4 23:45:29.226423 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001)
Sep 4 23:45:29.226439 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001)
Sep 4 23:45:29.226455 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200
Sep 4 23:45:29.226471 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200')
Sep 4 23:45:29.226487 kernel: printk: bootconsole [uart0] enabled
Sep 4 23:45:29.226503 kernel: NUMA: Failed to initialise from firmware
Sep 4 23:45:29.226519 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff]
Sep 4 23:45:29.226535 kernel: NUMA: NODE_DATA [mem 0x4b583f800-0x4b5844fff]
Sep 4 23:45:29.226551 kernel: Zone ranges:
Sep 4 23:45:29.226567 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff]
Sep 4 23:45:29.226604 kernel: DMA32 empty
Sep 4 23:45:29.226627 kernel: Normal [mem 0x0000000100000000-0x00000004b5ffffff]
Sep 4 23:45:29.226644 kernel: Movable zone start for each node
Sep 4 23:45:29.226660 kernel: Early memory node ranges
Sep 4 23:45:29.226677 kernel: node 0: [mem 0x0000000040000000-0x000000007862ffff]
Sep 4 23:45:29.226693 kernel: node 0: [mem 0x0000000078630000-0x000000007863ffff]
Sep 4 23:45:29.226709 kernel: node 0: [mem 0x0000000078640000-0x00000000786effff]
Sep 4 23:45:29.226725 kernel: node 0: [mem 0x00000000786f0000-0x000000007872ffff]
Sep 4 23:45:29.226741 kernel: node 0: [mem 0x0000000078730000-0x000000007bbfffff]
Sep 4 23:45:29.226756 kernel: node 0: [mem 0x000000007bc00000-0x000000007bfdffff]
Sep 4 23:45:29.226772 kernel: node 0: [mem 0x000000007bfe0000-0x000000007fffffff]
Sep 4 23:45:29.226788 kernel: node 0: [mem 0x0000000400000000-0x00000004b5ffffff]
Sep 4 23:45:29.226810 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000004b5ffffff]
Sep 4 23:45:29.226827 kernel: On node 0, zone Normal: 8192 pages in unavailable ranges
Sep 4 23:45:29.226851 kernel: psci: probing for conduit method from ACPI.
Sep 4 23:45:29.226867 kernel: psci: PSCIv1.0 detected in firmware.
Sep 4 23:45:29.226885 kernel: psci: Using standard PSCI v0.2 function IDs
Sep 4 23:45:29.226906 kernel: psci: Trusted OS migration not required
Sep 4 23:45:29.226923 kernel: psci: SMC Calling Convention v1.1
Sep 4 23:45:29.226940 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000001)
Sep 4 23:45:29.226957 kernel: percpu: Embedded 31 pages/cpu s86632 r8192 d32152 u126976
Sep 4 23:45:29.226974 kernel: pcpu-alloc: s86632 r8192 d32152 u126976 alloc=31*4096
Sep 4 23:45:29.226991 kernel: pcpu-alloc: [0] 0 [0] 1
Sep 4 23:45:29.227008 kernel: Detected PIPT I-cache on CPU0
Sep 4 23:45:29.227025 kernel: CPU features: detected: GIC system register CPU interface
Sep 4 23:45:29.227042 kernel: CPU features: detected: Spectre-v2
Sep 4 23:45:29.227058 kernel: CPU features: detected: Spectre-v3a
Sep 4 23:45:29.227075 kernel: CPU features: detected: Spectre-BHB
Sep 4 23:45:29.227097 kernel: CPU features: detected: ARM erratum 1742098
Sep 4 23:45:29.227116 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923
Sep 4 23:45:29.227135 kernel: alternatives: applying boot alternatives
Sep 4 23:45:29.227156 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=0304960b24e314f6095f7d8ad705a9bc0a9a4a34f7817da10ea634466a73d86e
Sep 4 23:45:29.227175 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 4 23:45:29.227193 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Sep 4 23:45:29.227211 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Sep 4 23:45:29.227228 kernel: Fallback order for Node 0: 0
Sep 4 23:45:29.227245 kernel: Built 1 zonelists, mobility grouping on. Total pages: 991872
Sep 4 23:45:29.227263 kernel: Policy zone: Normal
Sep 4 23:45:29.227280 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 4 23:45:29.227303 kernel: software IO TLB: area num 2.
Sep 4 23:45:29.227321 kernel: software IO TLB: mapped [mem 0x000000007c000000-0x0000000080000000] (64MB)
Sep 4 23:45:29.227339 kernel: Memory: 3821112K/4030464K available (10368K kernel code, 2186K rwdata, 8104K rodata, 38400K init, 897K bss, 209352K reserved, 0K cma-reserved)
Sep 4 23:45:29.227356 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Sep 4 23:45:29.227374 kernel: rcu: Preemptible hierarchical RCU implementation.
Sep 4 23:45:29.227392 kernel: rcu: RCU event tracing is enabled.
Sep 4 23:45:29.227410 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Sep 4 23:45:29.227428 kernel: Trampoline variant of Tasks RCU enabled.
Sep 4 23:45:29.227446 kernel: Tracing variant of Tasks RCU enabled.
Sep 4 23:45:29.227464 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 4 23:45:29.227481 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Sep 4 23:45:29.227503 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Sep 4 23:45:29.227521 kernel: GICv3: 96 SPIs implemented
Sep 4 23:45:29.227538 kernel: GICv3: 0 Extended SPIs implemented
Sep 4 23:45:29.227555 kernel: Root IRQ handler: gic_handle_irq
Sep 4 23:45:29.227573 kernel: GICv3: GICv3 features: 16 PPIs
Sep 4 23:45:29.227678 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000
Sep 4 23:45:29.227705 kernel: ITS [mem 0x10080000-0x1009ffff]
Sep 4 23:45:29.227723 kernel: ITS@0x0000000010080000: allocated 8192 Devices @4000b0000 (indirect, esz 8, psz 64K, shr 1)
Sep 4 23:45:29.227740 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @4000c0000 (flat, esz 8, psz 64K, shr 1)
Sep 4 23:45:29.227757 kernel: GICv3: using LPI property table @0x00000004000d0000
Sep 4 23:45:29.227774 kernel: ITS: Using hypervisor restricted LPI range [128]
Sep 4 23:45:29.227791 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000004000e0000
Sep 4 23:45:29.227815 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Sep 4 23:45:29.227832 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt).
Sep 4 23:45:29.227849 kernel: clocksource: arch_sys_counter: mask: 0x1ffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns
Sep 4 23:45:29.227867 kernel: sched_clock: 57 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns
Sep 4 23:45:29.227884 kernel: Console: colour dummy device 80x25
Sep 4 23:45:29.227901 kernel: printk: console [tty1] enabled
Sep 4 23:45:29.227918 kernel: ACPI: Core revision 20230628
Sep 4 23:45:29.227936 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 166.66 BogoMIPS (lpj=83333)
Sep 4 23:45:29.227953 kernel: pid_max: default: 32768 minimum: 301
Sep 4 23:45:29.227970 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Sep 4 23:45:29.227992 kernel: landlock: Up and running.
Sep 4 23:45:29.228009 kernel: SELinux: Initializing.
Sep 4 23:45:29.228027 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 4 23:45:29.228044 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 4 23:45:29.228061 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Sep 4 23:45:29.228079 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Sep 4 23:45:29.228096 kernel: rcu: Hierarchical SRCU implementation.
Sep 4 23:45:29.228113 kernel: rcu: Max phase no-delay instances is 400.
Sep 4 23:45:29.228131 kernel: Platform MSI: ITS@0x10080000 domain created
Sep 4 23:45:29.228153 kernel: PCI/MSI: ITS@0x10080000 domain created
Sep 4 23:45:29.228170 kernel: Remapping and enabling EFI services.
Sep 4 23:45:29.228187 kernel: smp: Bringing up secondary CPUs ...
Sep 4 23:45:29.228204 kernel: Detected PIPT I-cache on CPU1
Sep 4 23:45:29.228221 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000
Sep 4 23:45:29.228238 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000004000f0000
Sep 4 23:45:29.228255 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083]
Sep 4 23:45:29.228272 kernel: smp: Brought up 1 node, 2 CPUs
Sep 4 23:45:29.228289 kernel: SMP: Total of 2 processors activated.
Sep 4 23:45:29.228311 kernel: CPU features: detected: 32-bit EL0 Support
Sep 4 23:45:29.228329 kernel: CPU features: detected: 32-bit EL1 Support
Sep 4 23:45:29.228357 kernel: CPU features: detected: CRC32 instructions
Sep 4 23:45:29.228379 kernel: CPU: All CPU(s) started at EL1
Sep 4 23:45:29.228397 kernel: alternatives: applying system-wide alternatives
Sep 4 23:45:29.228415 kernel: devtmpfs: initialized
Sep 4 23:45:29.228433 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 4 23:45:29.228451 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Sep 4 23:45:29.228470 kernel: pinctrl core: initialized pinctrl subsystem
Sep 4 23:45:29.228492 kernel: SMBIOS 3.0.0 present.
Sep 4 23:45:29.228510 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018
Sep 4 23:45:29.228528 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 4 23:45:29.228568 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Sep 4 23:45:29.229613 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Sep 4 23:45:29.229835 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Sep 4 23:45:29.230122 kernel: audit: initializing netlink subsys (disabled)
Sep 4 23:45:29.231622 kernel: audit: type=2000 audit(0.222:1): state=initialized audit_enabled=0 res=1
Sep 4 23:45:29.231655 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 4 23:45:29.231677 kernel: cpuidle: using governor menu
Sep 4 23:45:29.231696 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Sep 4 23:45:29.231715 kernel: ASID allocator initialised with 65536 entries
Sep 4 23:45:29.231735 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 4 23:45:29.231754 kernel: Serial: AMBA PL011 UART driver
Sep 4 23:45:29.231774 kernel: Modules: 17728 pages in range for non-PLT usage
Sep 4 23:45:29.231793 kernel: Modules: 509248 pages in range for PLT usage
Sep 4 23:45:29.231824 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Sep 4 23:45:29.231842 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Sep 4 23:45:29.231861 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Sep 4 23:45:29.231879 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Sep 4 23:45:29.231897 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Sep 4 23:45:29.231915 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Sep 4 23:45:29.231933 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Sep 4 23:45:29.231951 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Sep 4 23:45:29.231969 kernel: ACPI: Added _OSI(Module Device)
Sep 4 23:45:29.231992 kernel: ACPI: Added _OSI(Processor Device)
Sep 4 23:45:29.232011 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 4 23:45:29.232029 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Sep 4 23:45:29.232047 kernel: ACPI: Interpreter enabled
Sep 4 23:45:29.232066 kernel: ACPI: Using GIC for interrupt routing
Sep 4 23:45:29.232084 kernel: ACPI: MCFG table detected, 1 entries
Sep 4 23:45:29.232103 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-0f])
Sep 4 23:45:29.232562 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Sep 4 23:45:29.232839 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Sep 4 23:45:29.233047 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Sep 4 23:45:29.233241 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x20ffffff] reserved by PNP0C02:00
Sep 4 23:45:29.233433 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x20ffffff] for [bus 00-0f]
Sep 4 23:45:29.233458 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io 0x0000-0xffff window]
Sep 4 23:45:29.233476 kernel: acpiphp: Slot [1] registered
Sep 4 23:45:29.233495 kernel: acpiphp: Slot [2] registered
Sep 4 23:45:29.233513 kernel: acpiphp: Slot [3] registered
Sep 4 23:45:29.233531 kernel: acpiphp: Slot [4] registered
Sep 4 23:45:29.233557 kernel: acpiphp: Slot [5] registered
Sep 4 23:45:29.233575 kernel: acpiphp: Slot [6] registered
Sep 4 23:45:29.234686 kernel: acpiphp: Slot [7] registered
Sep 4 23:45:29.234712 kernel: acpiphp: Slot [8] registered
Sep 4 23:45:29.234730 kernel: acpiphp: Slot [9] registered
Sep 4 23:45:29.234748 kernel: acpiphp: Slot [10] registered
Sep 4 23:45:29.234766 kernel: acpiphp: Slot [11] registered
Sep 4 23:45:29.234784 kernel: acpiphp: Slot [12] registered
Sep 4 23:45:29.234802 kernel: acpiphp: Slot [13] registered
Sep 4 23:45:29.234828 kernel: acpiphp: Slot [14] registered
Sep 4 23:45:29.234847 kernel: acpiphp: Slot [15] registered
Sep 4 23:45:29.234864 kernel: acpiphp: Slot [16] registered
Sep 4 23:45:29.234882 kernel: acpiphp: Slot [17] registered
Sep 4 23:45:29.234900 kernel: acpiphp: Slot [18] registered
Sep 4 23:45:29.234918 kernel: acpiphp: Slot [19] registered
Sep 4 23:45:29.234936 kernel: acpiphp: Slot [20] registered
Sep 4 23:45:29.234954 kernel: acpiphp: Slot [21] registered
Sep 4 23:45:29.234972 kernel: acpiphp: Slot [22] registered
Sep 4 23:45:29.234990 kernel: acpiphp: Slot [23] registered
Sep 4 23:45:29.235013 kernel: acpiphp: Slot [24] registered
Sep 4 23:45:29.235031 kernel: acpiphp: Slot [25] registered
Sep 4 23:45:29.235049 kernel: acpiphp: Slot [26] registered
Sep 4 23:45:29.235067 kernel: acpiphp: Slot [27] registered
Sep 4 23:45:29.235084 kernel: acpiphp: Slot [28] registered
Sep 4 23:45:29.235102 kernel: acpiphp: Slot [29] registered
Sep 4 23:45:29.235120 kernel: acpiphp: Slot [30] registered
Sep 4 23:45:29.235138 kernel: acpiphp: Slot [31] registered
Sep 4 23:45:29.235156 kernel: PCI host bridge to bus 0000:00
Sep 4 23:45:29.235398 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window]
Sep 4 23:45:29.235706 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Sep 4 23:45:29.235902 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window]
Sep 4 23:45:29.236093 kernel: pci_bus 0000:00: root bus resource [bus 00-0f]
Sep 4 23:45:29.236339 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000
Sep 4 23:45:29.236693 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003
Sep 4 23:45:29.236930 kernel: pci 0000:00:01.0: reg 0x10: [mem 0x80118000-0x80118fff]
Sep 4 23:45:29.237148 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
Sep 4 23:45:29.237361 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80114000-0x80117fff]
Sep 4 23:45:29.237570 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold
Sep 4 23:45:29.237851 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
Sep 4 23:45:29.238051 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80110000-0x80113fff]
Sep 4 23:45:29.238248 kernel: pci 0000:00:05.0: reg 0x18: [mem 0x80000000-0x800fffff pref]
Sep 4 23:45:29.238453 kernel: pci 0000:00:05.0: reg 0x20: [mem 0x80100000-0x8010ffff]
Sep 4 23:45:29.238679 kernel: pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold
Sep 4 23:45:29.238890 kernel: pci 0000:00:05.0: BAR 2: assigned [mem 0x80000000-0x800fffff pref]
Sep 4 23:45:29.239095 kernel: pci 0000:00:05.0: BAR 4: assigned [mem 0x80100000-0x8010ffff]
Sep 4 23:45:29.239299 kernel: pci 0000:00:04.0: BAR 0: assigned [mem 0x80110000-0x80113fff]
Sep 4 23:45:29.239502 kernel: pci 0000:00:05.0: BAR 0: assigned [mem 0x80114000-0x80117fff]
Sep 4 23:45:29.239772 kernel: pci 0000:00:01.0: BAR 0: assigned [mem 0x80118000-0x80118fff]
Sep 4 23:45:29.240056 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window]
Sep 4 23:45:29.240248 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Sep 4 23:45:29.240431 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window]
Sep 4 23:45:29.240456 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Sep 4 23:45:29.240475 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Sep 4 23:45:29.240493 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Sep 4 23:45:29.240512 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Sep 4 23:45:29.240530 kernel: iommu: Default domain type: Translated
Sep 4 23:45:29.240575 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Sep 4 23:45:29.243263 kernel: efivars: Registered efivars operations
Sep 4 23:45:29.243296 kernel: vgaarb: loaded
Sep 4 23:45:29.243314 kernel: clocksource: Switched to clocksource arch_sys_counter
Sep 4 23:45:29.243332 kernel: VFS: Disk quotas dquot_6.6.0
Sep 4 23:45:29.243351 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 4 23:45:29.243369 kernel: pnp: PnP ACPI init
Sep 4 23:45:29.243680 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved
Sep 4 23:45:29.243722 kernel: pnp: PnP ACPI: found 1 devices
Sep 4 23:45:29.243742 kernel: NET: Registered PF_INET protocol family
Sep 4 23:45:29.243761 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Sep 4 23:45:29.243780 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Sep 4 23:45:29.243798 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 4 23:45:29.243817 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Sep 4 23:45:29.243835 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Sep 4 23:45:29.243854 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Sep 4 23:45:29.243872 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 4 23:45:29.243895 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 4 23:45:29.243915 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 4 23:45:29.243933 kernel: PCI: CLS 0 bytes, default 64
Sep 4 23:45:29.243951 kernel: kvm [1]: HYP mode not available
Sep 4 23:45:29.243969 kernel: Initialise system trusted keyrings
Sep 4 23:45:29.243988 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Sep 4 23:45:29.244006 kernel: Key type asymmetric registered
Sep 4 23:45:29.244024 kernel: Asymmetric key parser 'x509' registered
Sep 4 23:45:29.244042 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Sep 4 23:45:29.244064 kernel: io scheduler mq-deadline registered
Sep 4 23:45:29.244083 kernel: io scheduler kyber registered
Sep 4 23:45:29.244101 kernel: io scheduler bfq registered
Sep 4 23:45:29.244352 kernel: pl061_gpio ARMH0061:00: PL061 GPIO chip registered
Sep 4 23:45:29.244384 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Sep 4 23:45:29.244404 kernel: ACPI: button: Power Button [PWRB]
Sep 4 23:45:29.244424 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input1
Sep 4 23:45:29.244442 kernel: ACPI: button: Sleep Button [SLPB]
Sep 4 23:45:29.244461 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 4 23:45:29.244488 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37
Sep 4 23:45:29.244810 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012)
Sep 4 23:45:29.244842 kernel: printk: console [ttyS0] disabled
Sep 4 23:45:29.244862 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A
Sep 4 23:45:29.244880 kernel: printk: console [ttyS0] enabled
Sep 4 23:45:29.244899 kernel: printk: bootconsole [uart0] disabled
Sep 4 23:45:29.244917 kernel: thunder_xcv, ver 1.0
Sep 4 23:45:29.244936 kernel: thunder_bgx, ver 1.0
Sep 4 23:45:29.244955 kernel: nicpf, ver 1.0
Sep 4 23:45:29.244981 kernel: nicvf, ver 1.0
Sep 4 23:45:29.245190 kernel: rtc-efi rtc-efi.0: registered as rtc0
Sep 4 23:45:29.245381 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-09-04T23:45:28 UTC (1757029528)
Sep 4 23:45:29.245407 kernel: hid: raw HID events driver (C) Jiri Kosina
Sep 4 23:45:29.245426 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 counters available
Sep 4 23:45:29.245445 kernel: watchdog: Delayed init of the lockup detector failed: -19
Sep 4 23:45:29.245463 kernel: watchdog: Hard watchdog permanently disabled
Sep 4 23:45:29.245481 kernel: NET: Registered PF_INET6 protocol family
Sep 4 23:45:29.245506 kernel: Segment Routing with IPv6
Sep 4 23:45:29.245525 kernel: In-situ OAM (IOAM) with IPv6
Sep 4 23:45:29.245543 kernel: NET: Registered PF_PACKET protocol family
Sep 4 23:45:29.245561 kernel: Key type dns_resolver registered
Sep 4 23:45:29.245579 kernel: registered taskstats version 1
Sep 4 23:45:29.245649 kernel: Loading compiled-in X.509 certificates
Sep 4 23:45:29.245671 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.103-flatcar: 83306acb9da7bc81cc6aa49a1c622f78672939c0'
Sep 4 23:45:29.245689 kernel: Key type .fscrypt registered
Sep 4 23:45:29.245708 kernel: Key type fscrypt-provisioning registered
Sep 4 23:45:29.245733 kernel: ima: No TPM chip found, activating TPM-bypass!
Sep 4 23:45:29.245752 kernel: ima: Allocated hash algorithm: sha1
Sep 4 23:45:29.245770 kernel: ima: No architecture policies found
Sep 4 23:45:29.245788 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Sep 4 23:45:29.245807 kernel: clk: Disabling unused clocks
Sep 4 23:45:29.245825 kernel: Freeing unused kernel memory: 38400K
Sep 4 23:45:29.245843 kernel: Run /init as init process
Sep 4 23:45:29.245861 kernel: with arguments:
Sep 4 23:45:29.245879 kernel: /init
Sep 4 23:45:29.245902 kernel: with environment:
Sep 4 23:45:29.245920 kernel: HOME=/
Sep 4 23:45:29.245939 kernel: TERM=linux
Sep 4 23:45:29.245957 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Sep 4 23:45:29.245977 systemd[1]: Successfully made /usr/ read-only.
Sep 4 23:45:29.246003 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Sep 4 23:45:29.246024 systemd[1]: Detected virtualization amazon.
Sep 4 23:45:29.246048 systemd[1]: Detected architecture arm64.
Sep 4 23:45:29.246069 systemd[1]: Running in initrd.
Sep 4 23:45:29.246089 systemd[1]: No hostname configured, using default hostname.
Sep 4 23:45:29.246109 systemd[1]: Hostname set to .
Sep 4 23:45:29.246129 systemd[1]: Initializing machine ID from VM UUID.
Sep 4 23:45:29.246149 systemd[1]: Queued start job for default target initrd.target.
Sep 4 23:45:29.246170 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 4 23:45:29.246190 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 4 23:45:29.246211 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Sep 4 23:45:29.246237 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 4 23:45:29.246258 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Sep 4 23:45:29.246280 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Sep 4 23:45:29.246302 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Sep 4 23:45:29.246324 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Sep 4 23:45:29.246346 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 4 23:45:29.246372 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 4 23:45:29.246393 systemd[1]: Reached target paths.target - Path Units.
Sep 4 23:45:29.246413 systemd[1]: Reached target slices.target - Slice Units.
Sep 4 23:45:29.246433 systemd[1]: Reached target swap.target - Swaps.
Sep 4 23:45:29.246453 systemd[1]: Reached target timers.target - Timer Units.
Sep 4 23:45:29.246473 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Sep 4 23:45:29.246493 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 4 23:45:29.246513 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Sep 4 23:45:29.246533 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Sep 4 23:45:29.246558 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 4 23:45:29.246578 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 4 23:45:29.246661 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 4 23:45:29.246685 systemd[1]: Reached target sockets.target - Socket Units.
Sep 4 23:45:29.246705 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Sep 4 23:45:29.246726 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 4 23:45:29.246746 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Sep 4 23:45:29.246766 systemd[1]: Starting systemd-fsck-usr.service...
Sep 4 23:45:29.246786 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 4 23:45:29.246813 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 4 23:45:29.246834 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 4 23:45:29.246854 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Sep 4 23:45:29.246874 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 4 23:45:29.246896 systemd[1]: Finished systemd-fsck-usr.service.
Sep 4 23:45:29.246921 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 4 23:45:29.246988 systemd-journald[252]: Collecting audit messages is disabled.
Sep 4 23:45:29.247035 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 4 23:45:29.247062 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 4 23:45:29.247083 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Sep 4 23:45:29.247103 kernel: Bridge firewalling registered
Sep 4 23:45:29.247123 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 4 23:45:29.247143 systemd-journald[252]: Journal started
Sep 4 23:45:29.247181 systemd-journald[252]: Runtime Journal (/run/log/journal/ec2490f27f999de2ec76a6320929fd62) is 8M, max 75.3M, 67.3M free.
Sep 4 23:45:29.192654 systemd-modules-load[253]: Inserted module 'overlay'
Sep 4 23:45:29.240997 systemd-modules-load[253]: Inserted module 'br_netfilter'
Sep 4 23:45:29.262020 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 4 23:45:29.263102 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 4 23:45:29.273198 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 4 23:45:29.291330 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 4 23:45:29.306423 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 4 23:45:29.309351 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 4 23:45:29.319008 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 4 23:45:29.337195 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Sep 4 23:45:29.344822 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 4 23:45:29.357274 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 4 23:45:29.370881 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 4 23:45:29.397497 dracut-cmdline[288]: dracut-dracut-053 Sep 4 23:45:29.408723 dracut-cmdline[288]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=0304960b24e314f6095f7d8ad705a9bc0a9a4a34f7817da10ea634466a73d86e Sep 4 23:45:29.457511 systemd-resolved[291]: Positive Trust Anchors: Sep 4 23:45:29.457539 systemd-resolved[291]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 4 23:45:29.457630 systemd-resolved[291]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 4 23:45:29.566629 kernel: SCSI subsystem initialized Sep 4 23:45:29.574624 kernel: Loading iSCSI transport class v2.0-870. Sep 4 23:45:29.586632 kernel: iscsi: registered transport (tcp) Sep 4 23:45:29.608761 kernel: iscsi: registered transport (qla4xxx) Sep 4 23:45:29.608856 kernel: QLogic iSCSI HBA Driver Sep 4 23:45:29.689639 kernel: random: crng init done Sep 4 23:45:29.690158 systemd-resolved[291]: Defaulting to hostname 'linux'. Sep 4 23:45:29.695412 systemd[1]: Started systemd-resolved.service - Network Name Resolution. 
Sep 4 23:45:29.703398 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 4 23:45:29.713753 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Sep 4 23:45:29.724956 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Sep 4 23:45:29.758975 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Sep 4 23:45:29.759055 kernel: device-mapper: uevent: version 1.0.3
Sep 4 23:45:29.759082 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Sep 4 23:45:29.826638 kernel: raid6: neonx8 gen() 6585 MB/s
Sep 4 23:45:29.843626 kernel: raid6: neonx4 gen() 6534 MB/s
Sep 4 23:45:29.860628 kernel: raid6: neonx2 gen() 5422 MB/s
Sep 4 23:45:29.877632 kernel: raid6: neonx1 gen() 3943 MB/s
Sep 4 23:45:29.894625 kernel: raid6: int64x8 gen() 3635 MB/s
Sep 4 23:45:29.911624 kernel: raid6: int64x4 gen() 3725 MB/s
Sep 4 23:45:29.928623 kernel: raid6: int64x2 gen() 3610 MB/s
Sep 4 23:45:29.946607 kernel: raid6: int64x1 gen() 2758 MB/s
Sep 4 23:45:29.946665 kernel: raid6: using algorithm neonx8 gen() 6585 MB/s
Sep 4 23:45:29.964578 kernel: raid6: .... xor() 4719 MB/s, rmw enabled
Sep 4 23:45:29.964642 kernel: raid6: using neon recovery algorithm
Sep 4 23:45:29.973208 kernel: xor: measuring software checksum speed
Sep 4 23:45:29.973263 kernel: 8regs : 12935 MB/sec
Sep 4 23:45:29.974374 kernel: 32regs : 13033 MB/sec
Sep 4 23:45:29.976635 kernel: arm64_neon : 9000 MB/sec
Sep 4 23:45:29.976670 kernel: xor: using function: 32regs (13033 MB/sec)
Sep 4 23:45:30.059643 kernel: Btrfs loaded, zoned=no, fsverity=no
Sep 4 23:45:30.078517 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Sep 4 23:45:30.089932 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 4 23:45:30.137775 systemd-udevd[473]: Using default interface naming scheme 'v255'.
Sep 4 23:45:30.149112 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 4 23:45:30.164324 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Sep 4 23:45:30.197025 dracut-pre-trigger[477]: rd.md=0: removing MD RAID activation
Sep 4 23:45:30.253670 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 4 23:45:30.264931 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 4 23:45:30.392135 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 4 23:45:30.407160 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Sep 4 23:45:30.438011 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Sep 4 23:45:30.445931 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 4 23:45:30.448828 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 4 23:45:30.451984 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 4 23:45:30.464988 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Sep 4 23:45:30.504667 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Sep 4 23:45:30.581177 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Sep 4 23:45:30.581240 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012)
Sep 4 23:45:30.587207 kernel: ena 0000:00:05.0: ENA device version: 0.10
Sep 4 23:45:30.587530 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
Sep 4 23:45:30.596635 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80114000, mac addr 06:41:a2:8c:76:7d
Sep 4 23:45:30.613196 (udev-worker)[541]: Network interface NamePolicy= disabled on kernel command line.
Sep 4 23:45:30.634546 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 4 23:45:30.634842 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 4 23:45:30.638030 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 4 23:45:30.640893 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 4 23:45:30.641145 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 4 23:45:30.656761 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Sep 4 23:45:30.667673 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35
Sep 4 23:45:30.667751 kernel: nvme nvme0: pci function 0000:00:04.0
Sep 4 23:45:30.669023 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 4 23:45:30.674552 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Sep 4 23:45:30.685097 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Sep 4 23:45:30.694064 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Sep 4 23:45:30.694131 kernel: GPT:9289727 != 16777215
Sep 4 23:45:30.694166 kernel: GPT:Alternate GPT header not at the end of the disk.
Sep 4 23:45:30.696146 kernel: GPT:9289727 != 16777215
Sep 4 23:45:30.696185 kernel: GPT: Use GNU Parted to correct GPT errors.
Sep 4 23:45:30.697219 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Sep 4 23:45:30.699489 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 4 23:45:30.712973 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 4 23:45:30.746363 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 4 23:45:30.832663 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 scanned by (udev-worker) (526)
Sep 4 23:45:30.847629 kernel: BTRFS: device fsid 74a5374f-334b-4c07-8952-82f9f0ad22ae devid 1 transid 36 /dev/nvme0n1p3 scanned by (udev-worker) (548)
Sep 4 23:45:30.876620 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM.
Sep 4 23:45:30.964467 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT.
Sep 4 23:45:30.987516 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A.
Sep 4 23:45:30.993967 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A.
Sep 4 23:45:31.020402 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Sep 4 23:45:31.033926 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Sep 4 23:45:31.054765 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Sep 4 23:45:31.056825 disk-uuid[662]: Primary Header is updated.
Sep 4 23:45:31.056825 disk-uuid[662]: Secondary Entries is updated.
Sep 4 23:45:31.056825 disk-uuid[662]: Secondary Header is updated.
Sep 4 23:45:32.094645 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Sep 4 23:45:32.098718 disk-uuid[663]: The operation has completed successfully.
Sep 4 23:45:32.287353 systemd[1]: disk-uuid.service: Deactivated successfully.
Sep 4 23:45:32.288055 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Sep 4 23:45:32.373901 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Sep 4 23:45:32.396090 sh[838]: Success
Sep 4 23:45:32.424789 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Sep 4 23:45:32.535550 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Sep 4 23:45:32.554812 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Sep 4 23:45:32.569399 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Sep 4 23:45:32.590544 kernel: BTRFS info (device dm-0): first mount of filesystem 74a5374f-334b-4c07-8952-82f9f0ad22ae
Sep 4 23:45:32.590630 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Sep 4 23:45:32.590660 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Sep 4 23:45:32.593753 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Sep 4 23:45:32.593796 kernel: BTRFS info (device dm-0): using free space tree
Sep 4 23:45:32.709642 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Sep 4 23:45:32.733922 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Sep 4 23:45:32.738481 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Sep 4 23:45:32.748884 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Sep 4 23:45:32.757995 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Sep 4 23:45:32.808547 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 6280ecaa-ba8f-4e5e-8483-db3a07084cf9
Sep 4 23:45:32.808637 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Sep 4 23:45:32.808678 kernel: BTRFS info (device nvme0n1p6): using free space tree
Sep 4 23:45:32.831659 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Sep 4 23:45:32.840684 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 6280ecaa-ba8f-4e5e-8483-db3a07084cf9
Sep 4 23:45:32.844758 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Sep 4 23:45:32.860989 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Sep 4 23:45:32.924957 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 4 23:45:32.937926 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 4 23:45:32.997369 systemd-networkd[1027]: lo: Link UP
Sep 4 23:45:32.997391 systemd-networkd[1027]: lo: Gained carrier
Sep 4 23:45:33.000188 systemd-networkd[1027]: Enumeration completed
Sep 4 23:45:33.001162 systemd-networkd[1027]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 4 23:45:33.001170 systemd-networkd[1027]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 4 23:45:33.002880 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 4 23:45:33.003993 systemd-networkd[1027]: eth0: Link UP
Sep 4 23:45:33.004001 systemd-networkd[1027]: eth0: Gained carrier
Sep 4 23:45:33.004018 systemd-networkd[1027]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 4 23:45:33.014713 systemd[1]: Reached target network.target - Network.
Sep 4 23:45:33.038681 systemd-networkd[1027]: eth0: DHCPv4 address 172.31.23.55/20, gateway 172.31.16.1 acquired from 172.31.16.1
Sep 4 23:45:33.301210 ignition[974]: Ignition 2.20.0
Sep 4 23:45:33.301241 ignition[974]: Stage: fetch-offline
Sep 4 23:45:33.305131 ignition[974]: no configs at "/usr/lib/ignition/base.d"
Sep 4 23:45:33.305174 ignition[974]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Sep 4 23:45:33.307562 ignition[974]: Ignition finished successfully
Sep 4 23:45:33.309186 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 4 23:45:33.324007 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Sep 4 23:45:33.349643 ignition[1038]: Ignition 2.20.0
Sep 4 23:45:33.349671 ignition[1038]: Stage: fetch
Sep 4 23:45:33.351376 ignition[1038]: no configs at "/usr/lib/ignition/base.d"
Sep 4 23:45:33.351402 ignition[1038]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Sep 4 23:45:33.352759 ignition[1038]: PUT http://169.254.169.254/latest/api/token: attempt #1
Sep 4 23:45:33.373568 ignition[1038]: PUT result: OK
Sep 4 23:45:33.378155 ignition[1038]: parsed url from cmdline: ""
Sep 4 23:45:33.378285 ignition[1038]: no config URL provided
Sep 4 23:45:33.378465 ignition[1038]: reading system config file "/usr/lib/ignition/user.ign"
Sep 4 23:45:33.378493 ignition[1038]: no config at "/usr/lib/ignition/user.ign"
Sep 4 23:45:33.378528 ignition[1038]: PUT http://169.254.169.254/latest/api/token: attempt #1
Sep 4 23:45:33.382721 ignition[1038]: PUT result: OK
Sep 4 23:45:33.384682 ignition[1038]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1
Sep 4 23:45:33.390751 ignition[1038]: GET result: OK
Sep 4 23:45:33.392194 ignition[1038]: parsing config with SHA512: 7d7bb805a5e000e5ca87edd9ad233a33b1b822acc4add56053442f635b880c73c0e4cd74e48b5fa7bfe171ec98a8c94bf2de36fd742514b417a2b992dc1c479c
Sep 4 23:45:33.401062 unknown[1038]: fetched base config from "system"
Sep 4 23:45:33.401749 ignition[1038]: fetch: fetch complete
Sep 4 23:45:33.401084 unknown[1038]: fetched base config from "system"
Sep 4 23:45:33.401761 ignition[1038]: fetch: fetch passed
Sep 4 23:45:33.401098 unknown[1038]: fetched user config from "aws"
Sep 4 23:45:33.401846 ignition[1038]: Ignition finished successfully
Sep 4 23:45:33.407127 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Sep 4 23:45:33.420897 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Sep 4 23:45:33.451502 ignition[1044]: Ignition 2.20.0
Sep 4 23:45:33.451530 ignition[1044]: Stage: kargs
Sep 4 23:45:33.453314 ignition[1044]: no configs at "/usr/lib/ignition/base.d"
Sep 4 23:45:33.453341 ignition[1044]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Sep 4 23:45:33.453516 ignition[1044]: PUT http://169.254.169.254/latest/api/token: attempt #1
Sep 4 23:45:33.456965 ignition[1044]: PUT result: OK
Sep 4 23:45:33.462272 ignition[1044]: kargs: kargs passed
Sep 4 23:45:33.468499 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Sep 4 23:45:33.462365 ignition[1044]: Ignition finished successfully
Sep 4 23:45:33.487565 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Sep 4 23:45:33.510273 ignition[1051]: Ignition 2.20.0
Sep 4 23:45:33.510295 ignition[1051]: Stage: disks
Sep 4 23:45:33.510931 ignition[1051]: no configs at "/usr/lib/ignition/base.d"
Sep 4 23:45:33.510968 ignition[1051]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Sep 4 23:45:33.511121 ignition[1051]: PUT http://169.254.169.254/latest/api/token: attempt #1
Sep 4 23:45:33.514267 ignition[1051]: PUT result: OK
Sep 4 23:45:33.525129 ignition[1051]: disks: disks passed
Sep 4 23:45:33.525233 ignition[1051]: Ignition finished successfully
Sep 4 23:45:33.530277 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Sep 4 23:45:33.532945 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Sep 4 23:45:33.535433 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Sep 4 23:45:33.539768 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 4 23:45:33.542141 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 4 23:45:33.550261 systemd[1]: Reached target basic.target - Basic System.
Sep 4 23:45:33.565872 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Sep 4 23:45:33.617114 systemd-fsck[1059]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Sep 4 23:45:33.625249 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Sep 4 23:45:33.634775 systemd[1]: Mounting sysroot.mount - /sysroot...
Sep 4 23:45:33.730624 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 22b06923-f972-4753-b92e-d6b25ef15ca3 r/w with ordered data mode. Quota mode: none.
Sep 4 23:45:33.732137 systemd[1]: Mounted sysroot.mount - /sysroot.
Sep 4 23:45:33.735392 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Sep 4 23:45:33.751750 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 4 23:45:33.755770 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Sep 4 23:45:33.762434 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Sep 4 23:45:33.762532 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Sep 4 23:45:33.762584 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 4 23:45:33.785242 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Sep 4 23:45:33.795995 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Sep 4 23:45:33.812634 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 scanned by mount (1078)
Sep 4 23:45:33.817127 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 6280ecaa-ba8f-4e5e-8483-db3a07084cf9
Sep 4 23:45:33.817182 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Sep 4 23:45:33.817209 kernel: BTRFS info (device nvme0n1p6): using free space tree
Sep 4 23:45:33.829629 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Sep 4 23:45:33.832557 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 4 23:45:34.271707 initrd-setup-root[1102]: cut: /sysroot/etc/passwd: No such file or directory
Sep 4 23:45:34.314195 initrd-setup-root[1109]: cut: /sysroot/etc/group: No such file or directory
Sep 4 23:45:34.323617 initrd-setup-root[1116]: cut: /sysroot/etc/shadow: No such file or directory
Sep 4 23:45:34.328747 systemd-networkd[1027]: eth0: Gained IPv6LL
Sep 4 23:45:34.334538 initrd-setup-root[1123]: cut: /sysroot/etc/gshadow: No such file or directory
Sep 4 23:45:34.666060 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Sep 4 23:45:34.683240 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Sep 4 23:45:34.688968 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Sep 4 23:45:34.705076 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Sep 4 23:45:34.709210 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 6280ecaa-ba8f-4e5e-8483-db3a07084cf9
Sep 4 23:45:34.750644 ignition[1191]: INFO : Ignition 2.20.0
Sep 4 23:45:34.750644 ignition[1191]: INFO : Stage: mount
Sep 4 23:45:34.761127 ignition[1191]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 4 23:45:34.761127 ignition[1191]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Sep 4 23:45:34.761127 ignition[1191]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Sep 4 23:45:34.761127 ignition[1191]: INFO : PUT result: OK
Sep 4 23:45:34.755649 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Sep 4 23:45:34.784798 ignition[1191]: INFO : mount: mount passed
Sep 4 23:45:34.784798 ignition[1191]: INFO : Ignition finished successfully
Sep 4 23:45:34.770821 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Sep 4 23:45:34.795298 systemd[1]: Starting ignition-files.service - Ignition (files)...
Sep 4 23:45:34.819977 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 4 23:45:34.842877 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/nvme0n1p6 scanned by mount (1204)
Sep 4 23:45:34.842938 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 6280ecaa-ba8f-4e5e-8483-db3a07084cf9
Sep 4 23:45:34.844625 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Sep 4 23:45:34.847520 kernel: BTRFS info (device nvme0n1p6): using free space tree
Sep 4 23:45:34.852629 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Sep 4 23:45:34.855915 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 4 23:45:34.890580 ignition[1221]: INFO : Ignition 2.20.0
Sep 4 23:45:34.890580 ignition[1221]: INFO : Stage: files
Sep 4 23:45:34.895468 ignition[1221]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 4 23:45:34.895468 ignition[1221]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Sep 4 23:45:34.895468 ignition[1221]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Sep 4 23:45:34.895468 ignition[1221]: INFO : PUT result: OK
Sep 4 23:45:34.905199 ignition[1221]: DEBUG : files: compiled without relabeling support, skipping
Sep 4 23:45:34.907828 ignition[1221]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Sep 4 23:45:34.907828 ignition[1221]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Sep 4 23:45:34.915362 ignition[1221]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Sep 4 23:45:34.915362 ignition[1221]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Sep 4 23:45:34.921686 unknown[1221]: wrote ssh authorized keys file for user: core
Sep 4 23:45:34.924221 ignition[1221]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Sep 4 23:45:34.935856 ignition[1221]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Sep 4 23:45:34.935856 ignition[1221]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Sep 4 23:45:35.028146 ignition[1221]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Sep 4 23:45:35.456561 ignition[1221]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Sep 4 23:45:35.456561 ignition[1221]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Sep 4 23:45:35.456561 ignition[1221]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1
Sep 4 23:45:35.529268 ignition[1221]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Sep 4 23:45:35.659085 ignition[1221]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Sep 4 23:45:35.659085 ignition[1221]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Sep 4 23:45:35.666945 ignition[1221]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Sep 4 23:45:35.666945 ignition[1221]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Sep 4 23:45:35.666945 ignition[1221]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Sep 4 23:45:35.666945 ignition[1221]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 4 23:45:35.666945 ignition[1221]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 4 23:45:35.666945 ignition[1221]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 4 23:45:35.666945 ignition[1221]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 4 23:45:35.666945 ignition[1221]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Sep 4 23:45:35.666945 ignition[1221]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Sep 4 23:45:35.666945 ignition[1221]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Sep 4 23:45:35.666945 ignition[1221]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Sep 4 23:45:35.666945 ignition[1221]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Sep 4 23:45:35.666945 ignition[1221]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-arm64.raw: attempt #1
Sep 4 23:45:36.077249 ignition[1221]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Sep 4 23:45:36.426320 ignition[1221]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Sep 4 23:45:36.426320 ignition[1221]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Sep 4 23:45:36.434745 ignition[1221]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 4 23:45:36.434745 ignition[1221]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 4 23:45:36.434745 ignition[1221]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Sep 4 23:45:36.434745 ignition[1221]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service"
Sep 4 23:45:36.434745 ignition[1221]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service"
Sep 4 23:45:36.434745 ignition[1221]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json"
Sep 4 23:45:36.434745 ignition[1221]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json"
Sep 4 23:45:36.434745 ignition[1221]: INFO : files: files passed
Sep 4 23:45:36.434745 ignition[1221]: INFO : Ignition finished successfully
Sep 4 23:45:36.435079 systemd[1]: Finished ignition-files.service - Ignition (files).
Sep 4 23:45:36.456717 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Sep 4 23:45:36.478798 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Sep 4 23:45:36.497436 systemd[1]: ignition-quench.service: Deactivated successfully.
Sep 4 23:45:36.500338 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Sep 4 23:45:36.513060 initrd-setup-root-after-ignition[1250]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 4 23:45:36.516822 initrd-setup-root-after-ignition[1250]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Sep 4 23:45:36.520554 initrd-setup-root-after-ignition[1254]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 4 23:45:36.526771 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 4 23:45:36.531012 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Sep 4 23:45:36.544893 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Sep 4 23:45:36.591316 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Sep 4 23:45:36.591509 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Sep 4 23:45:36.595013 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Sep 4 23:45:36.598371 systemd[1]: Reached target initrd.target - Initrd Default Target.
Sep 4 23:45:36.605207 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Sep 4 23:45:36.616981 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Sep 4 23:45:36.651643 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 4 23:45:36.666963 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Sep 4 23:45:36.692401 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Sep 4 23:45:36.695186 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 4 23:45:36.699009 systemd[1]: Stopped target timers.target - Timer Units.
Sep 4 23:45:36.707374 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Sep 4 23:45:36.707841 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 4 23:45:36.715343 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Sep 4 23:45:36.718195 systemd[1]: Stopped target basic.target - Basic System.
Sep 4 23:45:36.724750 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Sep 4 23:45:36.727232 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 4 23:45:36.730898 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Sep 4 23:45:36.740140 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Sep 4 23:45:36.742654 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 4 23:45:36.750211 systemd[1]: Stopped target sysinit.target - System Initialization.
Sep 4 23:45:36.754719 systemd[1]: Stopped target local-fs.target - Local File Systems.
Sep 4 23:45:36.759290 systemd[1]: Stopped target swap.target - Swaps.
Sep 4 23:45:36.761272 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Sep 4 23:45:36.761510 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Sep 4 23:45:36.770614 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Sep 4 23:45:36.773527 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 4 23:45:36.776403 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Sep 4 23:45:36.776660 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 4 23:45:36.789227 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Sep 4 23:45:36.789464 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Sep 4 23:45:36.792293 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Sep 4 23:45:36.792659 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 4 23:45:36.804565 systemd[1]: ignition-files.service: Deactivated successfully.
Sep 4 23:45:36.804824 systemd[1]: Stopped ignition-files.service - Ignition (files).
Sep 4 23:45:36.817130 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Sep 4 23:45:36.821846 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Sep 4 23:45:36.822384 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 4 23:45:36.835114 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Sep 4 23:45:36.837568 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Sep 4 23:45:36.837976 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 4 23:45:36.850036 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Sep 4 23:45:36.850965 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 4 23:45:36.876809 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Sep 4 23:45:36.879831 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Sep 4 23:45:36.887724 ignition[1274]: INFO : Ignition 2.20.0
Sep 4 23:45:36.887724 ignition[1274]: INFO : Stage: umount
Sep 4 23:45:36.893523 ignition[1274]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 4 23:45:36.895890 ignition[1274]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Sep 4 23:45:36.900449 ignition[1274]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Sep 4 23:45:36.908681 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Sep 4 23:45:36.910851 ignition[1274]: INFO : PUT result: OK
Sep 4 23:45:36.916003 ignition[1274]: INFO : umount: umount passed
Sep 4 23:45:36.918731 ignition[1274]: INFO : Ignition finished successfully
Sep 4 23:45:36.919991 systemd[1]: ignition-mount.service: Deactivated successfully.
Sep 4 23:45:36.920236 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Sep 4 23:45:36.928342 systemd[1]: ignition-disks.service: Deactivated successfully.
Sep 4 23:45:36.928508 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Sep 4 23:45:36.930944 systemd[1]: ignition-kargs.service: Deactivated successfully.
Sep 4 23:45:36.931042 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Sep 4 23:45:36.933740 systemd[1]: ignition-fetch.service: Deactivated successfully.
Sep 4 23:45:36.933821 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Sep 4 23:45:36.936286 systemd[1]: Stopped target network.target - Network. Sep 4 23:45:36.938497 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Sep 4 23:45:36.938619 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Sep 4 23:45:36.961561 systemd[1]: Stopped target paths.target - Path Units. Sep 4 23:45:36.963861 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Sep 4 23:45:36.967879 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 4 23:45:36.971182 systemd[1]: Stopped target slices.target - Slice Units. Sep 4 23:45:36.974140 systemd[1]: Stopped target sockets.target - Socket Units. Sep 4 23:45:36.979723 systemd[1]: iscsid.socket: Deactivated successfully. Sep 4 23:45:36.979806 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Sep 4 23:45:36.990267 systemd[1]: iscsiuio.socket: Deactivated successfully. Sep 4 23:45:36.991003 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 4 23:45:36.994693 systemd[1]: ignition-setup.service: Deactivated successfully. Sep 4 23:45:36.994812 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Sep 4 23:45:36.997128 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Sep 4 23:45:36.997660 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Sep 4 23:45:37.002432 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Sep 4 23:45:37.002664 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Sep 4 23:45:37.029468 systemd[1]: sysroot-boot.service: Deactivated successfully. Sep 4 23:45:37.029948 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Sep 4 23:45:37.036912 systemd[1]: systemd-resolved.service: Deactivated successfully. Sep 4 23:45:37.038664 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. 
Sep 4 23:45:37.046747 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Sep 4 23:45:37.049630 systemd[1]: systemd-networkd.service: Deactivated successfully. Sep 4 23:45:37.051665 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Sep 4 23:45:37.053867 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Sep 4 23:45:37.056109 systemd[1]: systemd-networkd.socket: Deactivated successfully. Sep 4 23:45:37.056196 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Sep 4 23:45:37.060740 systemd[1]: initrd-setup-root.service: Deactivated successfully. Sep 4 23:45:37.060844 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Sep 4 23:45:37.079808 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Sep 4 23:45:37.082034 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Sep 4 23:45:37.082168 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 4 23:45:37.085291 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 4 23:45:37.085392 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 4 23:45:37.092715 systemd[1]: systemd-modules-load.service: Deactivated successfully. Sep 4 23:45:37.092805 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Sep 4 23:45:37.108299 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Sep 4 23:45:37.108401 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 4 23:45:37.113568 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 4 23:45:37.124329 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Sep 4 23:45:37.127464 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. 
Sep 4 23:45:37.143026 systemd[1]: systemd-udevd.service: Deactivated successfully. Sep 4 23:45:37.143556 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 4 23:45:37.153883 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Sep 4 23:45:37.153969 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Sep 4 23:45:37.156827 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Sep 4 23:45:37.156952 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Sep 4 23:45:37.160678 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Sep 4 23:45:37.160782 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Sep 4 23:45:37.163648 systemd[1]: dracut-cmdline.service: Deactivated successfully. Sep 4 23:45:37.163746 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Sep 4 23:45:37.168428 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 4 23:45:37.168544 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 4 23:45:37.198824 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Sep 4 23:45:37.201263 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Sep 4 23:45:37.201377 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 4 23:45:37.207110 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 4 23:45:37.207213 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 4 23:45:37.216899 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Sep 4 23:45:37.217027 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Sep 4 23:45:37.223626 systemd[1]: network-cleanup.service: Deactivated successfully. 
Sep 4 23:45:37.223883 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Sep 4 23:45:37.248932 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Sep 4 23:45:37.249337 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Sep 4 23:45:37.257359 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Sep 4 23:45:37.267935 systemd[1]: Starting initrd-switch-root.service - Switch Root... Sep 4 23:45:37.285276 systemd[1]: Switching root. Sep 4 23:45:37.348324 systemd-journald[252]: Journal stopped Sep 4 23:45:39.742324 systemd-journald[252]: Received SIGTERM from PID 1 (systemd). Sep 4 23:45:39.742468 kernel: SELinux: policy capability network_peer_controls=1 Sep 4 23:45:39.742513 kernel: SELinux: policy capability open_perms=1 Sep 4 23:45:39.742553 kernel: SELinux: policy capability extended_socket_class=1 Sep 4 23:45:39.742611 kernel: SELinux: policy capability always_check_network=0 Sep 4 23:45:39.742648 kernel: SELinux: policy capability cgroup_seclabel=1 Sep 4 23:45:39.742680 kernel: SELinux: policy capability nnp_nosuid_transition=1 Sep 4 23:45:39.742709 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Sep 4 23:45:39.742739 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Sep 4 23:45:39.742781 kernel: audit: type=1403 audit(1757029537.749:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Sep 4 23:45:39.742818 systemd[1]: Successfully loaded SELinux policy in 93.661ms. Sep 4 23:45:39.742872 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 24.111ms. 
Sep 4 23:45:39.742908 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Sep 4 23:45:39.742944 systemd[1]: Detected virtualization amazon. Sep 4 23:45:39.742977 systemd[1]: Detected architecture arm64. Sep 4 23:45:39.743010 systemd[1]: Detected first boot. Sep 4 23:45:39.743057 systemd[1]: Initializing machine ID from VM UUID. Sep 4 23:45:39.743091 zram_generator::config[1320]: No configuration found. Sep 4 23:45:39.743126 kernel: NET: Registered PF_VSOCK protocol family Sep 4 23:45:39.743156 systemd[1]: Populated /etc with preset unit settings. Sep 4 23:45:39.743189 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Sep 4 23:45:39.743236 systemd[1]: initrd-switch-root.service: Deactivated successfully. Sep 4 23:45:39.743271 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Sep 4 23:45:39.743302 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Sep 4 23:45:39.743337 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Sep 4 23:45:39.743369 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Sep 4 23:45:39.743400 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Sep 4 23:45:39.743432 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Sep 4 23:45:39.743462 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Sep 4 23:45:39.743493 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Sep 4 23:45:39.743528 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. 
Sep 4 23:45:39.743560 systemd[1]: Created slice user.slice - User and Session Slice. Sep 4 23:45:39.747665 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 4 23:45:39.747749 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 4 23:45:39.747782 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Sep 4 23:45:39.747828 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Sep 4 23:45:39.747882 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Sep 4 23:45:39.747923 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 4 23:45:39.747960 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Sep 4 23:45:39.748009 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 4 23:45:39.748045 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Sep 4 23:45:39.748092 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Sep 4 23:45:39.748130 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Sep 4 23:45:39.748163 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Sep 4 23:45:39.748197 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 4 23:45:39.748232 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 4 23:45:39.748270 systemd[1]: Reached target slices.target - Slice Units. Sep 4 23:45:39.748313 systemd[1]: Reached target swap.target - Swaps. Sep 4 23:45:39.748345 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Sep 4 23:45:39.748376 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. 
Sep 4 23:45:39.748406 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Sep 4 23:45:39.748439 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 4 23:45:39.748497 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 4 23:45:39.748533 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 4 23:45:39.748565 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Sep 4 23:45:39.748634 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Sep 4 23:45:39.748677 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Sep 4 23:45:39.748707 systemd[1]: Mounting media.mount - External Media Directory... Sep 4 23:45:39.748739 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Sep 4 23:45:39.748771 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Sep 4 23:45:39.748801 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Sep 4 23:45:39.748832 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Sep 4 23:45:39.748862 systemd[1]: Reached target machines.target - Containers. Sep 4 23:45:39.748892 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Sep 4 23:45:39.748927 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 4 23:45:39.748957 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 4 23:45:39.748987 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Sep 4 23:45:39.749018 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 4 23:45:39.749047 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... 
Sep 4 23:45:39.749076 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 4 23:45:39.749107 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Sep 4 23:45:39.749137 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 4 23:45:39.749182 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Sep 4 23:45:39.749219 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Sep 4 23:45:39.749249 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Sep 4 23:45:39.749280 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Sep 4 23:45:39.749313 systemd[1]: Stopped systemd-fsck-usr.service. Sep 4 23:45:39.749344 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 4 23:45:39.749376 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 4 23:45:39.749406 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 4 23:45:39.749435 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Sep 4 23:45:39.749470 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Sep 4 23:45:39.749502 kernel: loop: module loaded Sep 4 23:45:39.749535 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Sep 4 23:45:39.749566 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 4 23:45:39.755260 systemd[1]: verity-setup.service: Deactivated successfully. Sep 4 23:45:39.755325 systemd[1]: Stopped verity-setup.service. 
Sep 4 23:45:39.755359 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Sep 4 23:45:39.755393 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Sep 4 23:45:39.755423 systemd[1]: Mounted media.mount - External Media Directory. Sep 4 23:45:39.755453 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Sep 4 23:45:39.755484 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Sep 4 23:45:39.755516 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Sep 4 23:45:39.755546 kernel: fuse: init (API version 7.39) Sep 4 23:45:39.755581 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 4 23:45:39.755639 systemd[1]: modprobe@configfs.service: Deactivated successfully. Sep 4 23:45:39.755672 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Sep 4 23:45:39.755702 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 4 23:45:39.755732 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 4 23:45:39.755762 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 4 23:45:39.755796 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 4 23:45:39.755828 systemd[1]: modprobe@fuse.service: Deactivated successfully. Sep 4 23:45:39.755858 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Sep 4 23:45:39.755887 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 4 23:45:39.755917 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 4 23:45:39.755947 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 4 23:45:39.755979 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Sep 4 23:45:39.756053 systemd-journald[1406]: Collecting audit messages is disabled. Sep 4 23:45:39.756108 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... 
Sep 4 23:45:39.756139 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 4 23:45:39.756171 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 4 23:45:39.756201 systemd-journald[1406]: Journal started Sep 4 23:45:39.756249 systemd-journald[1406]: Runtime Journal (/run/log/journal/ec2490f27f999de2ec76a6320929fd62) is 8M, max 75.3M, 67.3M free. Sep 4 23:45:39.088856 systemd[1]: Queued start job for default target multi-user.target. Sep 4 23:45:39.101342 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. Sep 4 23:45:39.102199 systemd[1]: systemd-journald.service: Deactivated successfully. Sep 4 23:45:39.766976 systemd[1]: Started systemd-journald.service - Journal Service. Sep 4 23:45:39.767646 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Sep 4 23:45:39.771642 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Sep 4 23:45:39.775831 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Sep 4 23:45:39.781644 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Sep 4 23:45:39.795655 kernel: ACPI: bus type drm_connector registered Sep 4 23:45:39.800170 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 4 23:45:39.803438 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 4 23:45:39.830088 systemd[1]: Reached target network-pre.target - Preparation for Network. Sep 4 23:45:39.832883 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Sep 4 23:45:39.832932 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 4 23:45:39.838759 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. 
Sep 4 23:45:39.857106 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Sep 4 23:45:39.871900 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Sep 4 23:45:39.874989 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 4 23:45:39.878903 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Sep 4 23:45:39.886894 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Sep 4 23:45:39.889564 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 4 23:45:39.893965 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Sep 4 23:45:39.900843 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Sep 4 23:45:39.908698 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Sep 4 23:45:39.913442 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Sep 4 23:45:39.917721 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 4 23:45:39.929708 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Sep 4 23:45:39.950195 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Sep 4 23:45:39.955221 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Sep 4 23:45:39.969781 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Sep 4 23:45:39.986174 systemd[1]: Starting systemd-sysusers.service - Create System Users... 
Sep 4 23:45:40.003629 kernel: loop0: detected capacity change from 0 to 123192 Sep 4 23:45:40.009443 systemd-journald[1406]: Time spent on flushing to /var/log/journal/ec2490f27f999de2ec76a6320929fd62 is 50.867ms for 926 entries. Sep 4 23:45:40.009443 systemd-journald[1406]: System Journal (/var/log/journal/ec2490f27f999de2ec76a6320929fd62) is 8M, max 195.6M, 187.6M free. Sep 4 23:45:40.075531 systemd-journald[1406]: Received client request to flush runtime journal. Sep 4 23:45:40.049373 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Sep 4 23:45:40.079684 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Sep 4 23:45:40.104859 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Sep 4 23:45:40.125938 systemd[1]: Finished systemd-sysusers.service - Create System Users. Sep 4 23:45:40.133621 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Sep 4 23:45:40.143779 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 4 23:45:40.170118 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 4 23:45:40.187890 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Sep 4 23:45:40.194689 kernel: loop1: detected capacity change from 0 to 113512 Sep 4 23:45:40.234995 udevadm[1476]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Sep 4 23:45:40.236887 systemd-tmpfiles[1473]: ACLs are not supported, ignoring. Sep 4 23:45:40.236918 systemd-tmpfiles[1473]: ACLs are not supported, ignoring. Sep 4 23:45:40.257710 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
Sep 4 23:45:40.300631 kernel: loop2: detected capacity change from 0 to 203944 Sep 4 23:45:40.355666 kernel: loop3: detected capacity change from 0 to 53784 Sep 4 23:45:40.395648 kernel: loop4: detected capacity change from 0 to 123192 Sep 4 23:45:40.425151 kernel: loop5: detected capacity change from 0 to 113512 Sep 4 23:45:40.439696 kernel: loop6: detected capacity change from 0 to 203944 Sep 4 23:45:40.475093 kernel: loop7: detected capacity change from 0 to 53784 Sep 4 23:45:40.506815 (sd-merge)[1481]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'. Sep 4 23:45:40.508410 (sd-merge)[1481]: Merged extensions into '/usr'. Sep 4 23:45:40.517735 systemd[1]: Reload requested from client PID 1457 ('systemd-sysext') (unit systemd-sysext.service)... Sep 4 23:45:40.518326 systemd[1]: Reloading... Sep 4 23:45:40.714671 zram_generator::config[1508]: No configuration found. Sep 4 23:45:41.047028 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 4 23:45:41.230382 systemd[1]: Reloading finished in 710 ms. Sep 4 23:45:41.259042 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Sep 4 23:45:41.270026 systemd[1]: Starting ensure-sysext.service... Sep 4 23:45:41.277139 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 4 23:45:41.307669 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Sep 4 23:45:41.324104 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 4 23:45:41.335817 systemd[1]: Reload requested from client PID 1560 ('systemctl') (unit ensure-sysext.service)... Sep 4 23:45:41.335852 systemd[1]: Reloading... 
Sep 4 23:45:41.371587 systemd-tmpfiles[1561]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Sep 4 23:45:41.372176 systemd-tmpfiles[1561]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Sep 4 23:45:41.378379 systemd-tmpfiles[1561]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Sep 4 23:45:41.385126 systemd-tmpfiles[1561]: ACLs are not supported, ignoring. Sep 4 23:45:41.387111 systemd-tmpfiles[1561]: ACLs are not supported, ignoring. Sep 4 23:45:41.415090 systemd-tmpfiles[1561]: Detected autofs mount point /boot during canonicalization of boot. Sep 4 23:45:41.415118 systemd-tmpfiles[1561]: Skipping /boot Sep 4 23:45:41.424768 systemd-udevd[1564]: Using default interface naming scheme 'v255'. Sep 4 23:45:41.475533 systemd-tmpfiles[1561]: Detected autofs mount point /boot during canonicalization of boot. Sep 4 23:45:41.475568 systemd-tmpfiles[1561]: Skipping /boot Sep 4 23:45:41.589656 zram_generator::config[1605]: No configuration found. Sep 4 23:45:41.616665 ldconfig[1452]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Sep 4 23:45:41.785808 (udev-worker)[1610]: Network interface NamePolicy= disabled on kernel command line. Sep 4 23:45:41.951463 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 4 23:45:42.076655 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 36 scanned by (udev-worker) (1610) Sep 4 23:45:42.189793 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Sep 4 23:45:42.190574 systemd[1]: Reloading finished in 854 ms. Sep 4 23:45:42.207022 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. 
Sep 4 23:45:42.213688 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Sep 4 23:45:42.246063 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 4 23:45:42.305493 systemd[1]: Finished ensure-sysext.service. Sep 4 23:45:42.356697 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Sep 4 23:45:42.375272 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Sep 4 23:45:42.391873 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 4 23:45:42.401957 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Sep 4 23:45:42.407468 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 4 23:45:42.414914 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Sep 4 23:45:42.424767 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 4 23:45:42.438906 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 4 23:45:42.447689 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 4 23:45:42.452912 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 4 23:45:42.456902 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 4 23:45:42.464928 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Sep 4 23:45:42.467583 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). 
Sep 4 23:45:42.471961 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Sep 4 23:45:42.476622 lvm[1764]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 4 23:45:42.482751 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 4 23:45:42.495482 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 4 23:45:42.498812 systemd[1]: Reached target time-set.target - System Time Set. Sep 4 23:45:42.504783 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Sep 4 23:45:42.511787 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 4 23:45:42.517435 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 4 23:45:42.517903 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 4 23:45:42.529991 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 4 23:45:42.531689 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 4 23:45:42.566842 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 4 23:45:42.567556 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 4 23:45:42.571167 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 4 23:45:42.591089 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Sep 4 23:45:42.615941 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 4 23:45:42.616885 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 4 23:45:42.621027 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Sep 4 23:45:42.627710 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. 
Sep 4 23:45:42.645180 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Sep 4 23:45:42.647874 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 4 23:45:42.656059 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Sep 4 23:45:42.673265 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Sep 4 23:45:42.682062 systemd[1]: Starting systemd-update-done.service - Update is Completed... Sep 4 23:45:42.686707 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Sep 4 23:45:42.706842 lvm[1797]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 4 23:45:42.747362 augenrules[1807]: No rules Sep 4 23:45:42.750164 systemd[1]: audit-rules.service: Deactivated successfully. Sep 4 23:45:42.750913 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 4 23:45:42.759767 systemd[1]: Finished systemd-update-done.service - Update is Completed. Sep 4 23:45:42.777343 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Sep 4 23:45:42.784139 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Sep 4 23:45:42.788755 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 4 23:45:42.806670 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 4 23:45:42.826520 systemd[1]: Started systemd-userdbd.service - User Database Manager. 
Sep 4 23:45:42.947342 systemd-networkd[1776]: lo: Link UP Sep 4 23:45:42.947666 systemd-networkd[1776]: lo: Gained carrier Sep 4 23:45:42.951459 systemd-networkd[1776]: Enumeration completed Sep 4 23:45:42.951913 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 4 23:45:42.952989 systemd-networkd[1776]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 4 23:45:42.953178 systemd-networkd[1776]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 4 23:45:42.961896 systemd-networkd[1776]: eth0: Link UP Sep 4 23:45:42.962311 systemd-networkd[1776]: eth0: Gained carrier Sep 4 23:45:42.962431 systemd-networkd[1776]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 4 23:45:42.962449 systemd-resolved[1777]: Positive Trust Anchors: Sep 4 23:45:42.962470 systemd-resolved[1777]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 4 23:45:42.962532 systemd-resolved[1777]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 4 23:45:42.966129 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... 
Sep 4 23:45:42.978763 systemd-networkd[1776]: eth0: DHCPv4 address 172.31.23.55/20, gateway 172.31.16.1 acquired from 172.31.16.1 Sep 4 23:45:42.979069 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Sep 4 23:45:42.995077 systemd-resolved[1777]: Defaulting to hostname 'linux'. Sep 4 23:45:43.001791 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 4 23:45:43.004837 systemd[1]: Reached target network.target - Network. Sep 4 23:45:43.006893 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 4 23:45:43.009633 systemd[1]: Reached target sysinit.target - System Initialization. Sep 4 23:45:43.012147 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Sep 4 23:45:43.014956 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Sep 4 23:45:43.018009 systemd[1]: Started logrotate.timer - Daily rotation of log files. Sep 4 23:45:43.020558 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Sep 4 23:45:43.023746 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Sep 4 23:45:43.026872 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Sep 4 23:45:43.026926 systemd[1]: Reached target paths.target - Path Units. Sep 4 23:45:43.029078 systemd[1]: Reached target timers.target - Timer Units. Sep 4 23:45:43.032525 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Sep 4 23:45:43.037825 systemd[1]: Starting docker.socket - Docker Socket for the API... Sep 4 23:45:43.044953 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). 
Sep 4 23:45:43.048223 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Sep 4 23:45:43.051109 systemd[1]: Reached target ssh-access.target - SSH Access Available. Sep 4 23:45:43.058017 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Sep 4 23:45:43.061309 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Sep 4 23:45:43.065480 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Sep 4 23:45:43.068766 systemd[1]: Listening on docker.socket - Docker Socket for the API. Sep 4 23:45:43.072538 systemd[1]: Reached target sockets.target - Socket Units. Sep 4 23:45:43.075255 systemd[1]: Reached target basic.target - Basic System. Sep 4 23:45:43.077551 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Sep 4 23:45:43.077656 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Sep 4 23:45:43.085814 systemd[1]: Starting containerd.service - containerd container runtime... Sep 4 23:45:43.092236 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Sep 4 23:45:43.111249 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Sep 4 23:45:43.116880 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Sep 4 23:45:43.126944 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Sep 4 23:45:43.130782 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Sep 4 23:45:43.145521 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Sep 4 23:45:43.153527 systemd[1]: Started ntpd.service - Network Time Service. Sep 4 23:45:43.160866 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... 
Sep 4 23:45:43.168388 systemd[1]: Starting setup-oem.service - Setup OEM... Sep 4 23:45:43.182624 jq[1835]: false Sep 4 23:45:43.194954 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Sep 4 23:45:43.199656 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Sep 4 23:45:43.207813 systemd[1]: Starting systemd-logind.service - User Login Management... Sep 4 23:45:43.211719 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Sep 4 23:45:43.212672 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Sep 4 23:45:43.216377 systemd[1]: Starting update-engine.service - Update Engine... Sep 4 23:45:43.221807 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Sep 4 23:45:43.231327 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Sep 4 23:45:43.231865 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Sep 4 23:45:43.288301 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Sep 4 23:45:43.290719 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Sep 4 23:45:43.304455 systemd[1]: Started dbus.service - D-Bus System Message Bus. 
Sep 4 23:45:43.303786 dbus-daemon[1834]: [system] SELinux support is enabled Sep 4 23:45:43.314441 dbus-daemon[1834]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1776 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Sep 4 23:45:43.314847 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Sep 4 23:45:43.314895 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Sep 4 23:45:43.317826 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Sep 4 23:45:43.317860 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. 
Sep 4 23:45:43.337179 extend-filesystems[1836]: Found loop4 Sep 4 23:45:43.337179 extend-filesystems[1836]: Found loop5 Sep 4 23:45:43.337179 extend-filesystems[1836]: Found loop6 Sep 4 23:45:43.337179 extend-filesystems[1836]: Found loop7 Sep 4 23:45:43.337179 extend-filesystems[1836]: Found nvme0n1 Sep 4 23:45:43.337179 extend-filesystems[1836]: Found nvme0n1p1 Sep 4 23:45:43.337179 extend-filesystems[1836]: Found nvme0n1p2 Sep 4 23:45:43.337179 extend-filesystems[1836]: Found nvme0n1p3 Sep 4 23:45:43.390058 extend-filesystems[1836]: Found usr Sep 4 23:45:43.390058 extend-filesystems[1836]: Found nvme0n1p4 Sep 4 23:45:43.390058 extend-filesystems[1836]: Found nvme0n1p6 Sep 4 23:45:43.390058 extend-filesystems[1836]: Found nvme0n1p7 Sep 4 23:45:43.390058 extend-filesystems[1836]: Found nvme0n1p9 Sep 4 23:45:43.390058 extend-filesystems[1836]: Checking size of /dev/nvme0n1p9 Sep 4 23:45:43.433936 jq[1849]: true Sep 4 23:45:43.354992 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Sep 4 23:45:43.340874 dbus-daemon[1834]: [system] Successfully activated service 'org.freedesktop.systemd1' Sep 4 23:45:43.380322 systemd[1]: motdgen.service: Deactivated successfully. Sep 4 23:45:43.380803 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Sep 4 23:45:43.408940 (ntainerd)[1864]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Sep 4 23:45:43.445526 extend-filesystems[1836]: Resized partition /dev/nvme0n1p9 Sep 4 23:45:43.457204 tar[1861]: linux-arm64/helm Sep 4 23:45:43.462850 extend-filesystems[1885]: resize2fs 1.47.1 (20-May-2024) Sep 4 23:45:43.476718 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks Sep 4 23:45:43.501388 jq[1875]: true Sep 4 23:45:43.577587 update_engine[1848]: I20250904 23:45:43.577325 1848 main.cc:92] Flatcar Update Engine starting Sep 4 23:45:43.578815 systemd[1]: Finished setup-oem.service - Setup OEM. 
Sep 4 23:45:43.586867 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 36 scanned by (udev-worker) (1610) Sep 4 23:45:43.595633 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915 Sep 4 23:45:43.597414 ntpd[1838]: ntpd 4.2.8p17@1.4004-o Thu Sep 4 21:39:02 UTC 2025 (1): Starting Sep 4 23:45:43.597478 ntpd[1838]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Sep 4 23:45:43.624690 update_engine[1848]: I20250904 23:45:43.602295 1848 update_check_scheduler.cc:74] Next update check in 8m54s Sep 4 23:45:43.624761 ntpd[1838]: 4 Sep 23:45:43 ntpd[1838]: ntpd 4.2.8p17@1.4004-o Thu Sep 4 21:39:02 UTC 2025 (1): Starting Sep 4 23:45:43.624761 ntpd[1838]: 4 Sep 23:45:43 ntpd[1838]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Sep 4 23:45:43.624761 ntpd[1838]: 4 Sep 23:45:43 ntpd[1838]: ---------------------------------------------------- Sep 4 23:45:43.624761 ntpd[1838]: 4 Sep 23:45:43 ntpd[1838]: ntp-4 is maintained by Network Time Foundation, Sep 4 23:45:43.624761 ntpd[1838]: 4 Sep 23:45:43 ntpd[1838]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Sep 4 23:45:43.624761 ntpd[1838]: 4 Sep 23:45:43 ntpd[1838]: corporation. Support and training for ntp-4 are Sep 4 23:45:43.624761 ntpd[1838]: 4 Sep 23:45:43 ntpd[1838]: available at https://www.nwtime.org/support Sep 4 23:45:43.624761 ntpd[1838]: 4 Sep 23:45:43 ntpd[1838]: ---------------------------------------------------- Sep 4 23:45:43.600219 systemd[1]: Started update-engine.service - Update Engine. Sep 4 23:45:43.610768 ntpd[1838]: ---------------------------------------------------- Sep 4 23:45:43.663870 extend-filesystems[1885]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Sep 4 23:45:43.663870 extend-filesystems[1885]: old_desc_blocks = 1, new_desc_blocks = 1 Sep 4 23:45:43.663870 extend-filesystems[1885]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long. 
Sep 4 23:45:43.701875 ntpd[1838]: 4 Sep 23:45:43 ntpd[1838]: proto: precision = 0.096 usec (-23) Sep 4 23:45:43.701875 ntpd[1838]: 4 Sep 23:45:43 ntpd[1838]: basedate set to 2025-08-23 Sep 4 23:45:43.701875 ntpd[1838]: 4 Sep 23:45:43 ntpd[1838]: gps base set to 2025-08-24 (week 2381) Sep 4 23:45:43.701875 ntpd[1838]: 4 Sep 23:45:43 ntpd[1838]: Listen and drop on 0 v6wildcard [::]:123 Sep 4 23:45:43.701875 ntpd[1838]: 4 Sep 23:45:43 ntpd[1838]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Sep 4 23:45:43.701875 ntpd[1838]: 4 Sep 23:45:43 ntpd[1838]: Listen normally on 2 lo 127.0.0.1:123 Sep 4 23:45:43.701875 ntpd[1838]: 4 Sep 23:45:43 ntpd[1838]: Listen normally on 3 eth0 172.31.23.55:123 Sep 4 23:45:43.701875 ntpd[1838]: 4 Sep 23:45:43 ntpd[1838]: Listen normally on 4 lo [::1]:123 Sep 4 23:45:43.701875 ntpd[1838]: 4 Sep 23:45:43 ntpd[1838]: bind(21) AF_INET6 fe80::441:a2ff:fe8c:767d%2#123 flags 0x11 failed: Cannot assign requested address Sep 4 23:45:43.701875 ntpd[1838]: 4 Sep 23:45:43 ntpd[1838]: unable to create socket on eth0 (5) for fe80::441:a2ff:fe8c:767d%2#123 Sep 4 23:45:43.701875 ntpd[1838]: 4 Sep 23:45:43 ntpd[1838]: failed to init interface for address fe80::441:a2ff:fe8c:767d%2 Sep 4 23:45:43.701875 ntpd[1838]: 4 Sep 23:45:43 ntpd[1838]: Listening on routing socket on fd #21 for interface updates Sep 4 23:45:43.630074 systemd[1]: Started locksmithd.service - Cluster reboot manager. Sep 4 23:45:43.610861 ntpd[1838]: ntp-4 is maintained by Network Time Foundation, Sep 4 23:45:43.702781 extend-filesystems[1836]: Resized filesystem in /dev/nvme0n1p9 Sep 4 23:45:43.634902 systemd[1]: extend-filesystems.service: Deactivated successfully. Sep 4 23:45:43.610889 ntpd[1838]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Sep 4 23:45:43.636684 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Sep 4 23:45:43.610910 ntpd[1838]: corporation. 
Support and training for ntp-4 are Sep 4 23:45:43.677322 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Sep 4 23:45:43.610929 ntpd[1838]: available at https://www.nwtime.org/support Sep 4 23:45:43.610947 ntpd[1838]: ---------------------------------------------------- Sep 4 23:45:43.625103 ntpd[1838]: proto: precision = 0.096 usec (-23) Sep 4 23:45:43.634061 ntpd[1838]: basedate set to 2025-08-23 Sep 4 23:45:43.634097 ntpd[1838]: gps base set to 2025-08-24 (week 2381) Sep 4 23:45:43.650430 ntpd[1838]: Listen and drop on 0 v6wildcard [::]:123 Sep 4 23:45:43.650510 ntpd[1838]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Sep 4 23:45:43.661843 ntpd[1838]: Listen normally on 2 lo 127.0.0.1:123 Sep 4 23:45:43.661915 ntpd[1838]: Listen normally on 3 eth0 172.31.23.55:123 Sep 4 23:45:43.661986 ntpd[1838]: Listen normally on 4 lo [::1]:123 Sep 4 23:45:43.662062 ntpd[1838]: bind(21) AF_INET6 fe80::441:a2ff:fe8c:767d%2#123 flags 0x11 failed: Cannot assign requested address Sep 4 23:45:43.662101 ntpd[1838]: unable to create socket on eth0 (5) for fe80::441:a2ff:fe8c:767d%2#123 Sep 4 23:45:43.662128 ntpd[1838]: failed to init interface for address fe80::441:a2ff:fe8c:767d%2 Sep 4 23:45:43.662178 ntpd[1838]: Listening on routing socket on fd #21 for interface updates Sep 4 23:45:43.737880 ntpd[1838]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Sep 4 23:45:43.740883 ntpd[1838]: 4 Sep 23:45:43 ntpd[1838]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Sep 4 23:45:43.740883 ntpd[1838]: 4 Sep 23:45:43 ntpd[1838]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Sep 4 23:45:43.737952 ntpd[1838]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Sep 4 23:45:43.826079 coreos-metadata[1833]: Sep 04 23:45:43.824 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Sep 4 23:45:43.846692 coreos-metadata[1833]: Sep 04 23:45:43.845 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Sep 4 23:45:43.847574 
coreos-metadata[1833]: Sep 04 23:45:43.847 INFO Fetch successful Sep 4 23:45:43.847703 coreos-metadata[1833]: Sep 04 23:45:43.847 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Sep 4 23:45:43.849710 coreos-metadata[1833]: Sep 04 23:45:43.849 INFO Fetch successful Sep 4 23:45:43.849710 coreos-metadata[1833]: Sep 04 23:45:43.849 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Sep 4 23:45:43.858772 coreos-metadata[1833]: Sep 04 23:45:43.858 INFO Fetch successful Sep 4 23:45:43.858772 coreos-metadata[1833]: Sep 04 23:45:43.858 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Sep 4 23:45:43.862744 coreos-metadata[1833]: Sep 04 23:45:43.862 INFO Fetch successful Sep 4 23:45:43.862744 coreos-metadata[1833]: Sep 04 23:45:43.862 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Sep 4 23:45:43.866696 coreos-metadata[1833]: Sep 04 23:45:43.866 INFO Fetch failed with 404: resource not found Sep 4 23:45:43.866696 coreos-metadata[1833]: Sep 04 23:45:43.866 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Sep 4 23:45:43.870406 coreos-metadata[1833]: Sep 04 23:45:43.870 INFO Fetch successful Sep 4 23:45:43.870406 coreos-metadata[1833]: Sep 04 23:45:43.870 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Sep 4 23:45:43.873953 coreos-metadata[1833]: Sep 04 23:45:43.873 INFO Fetch successful Sep 4 23:45:43.873953 coreos-metadata[1833]: Sep 04 23:45:43.873 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Sep 4 23:45:43.882681 coreos-metadata[1833]: Sep 04 23:45:43.882 INFO Fetch successful Sep 4 23:45:43.882681 coreos-metadata[1833]: Sep 04 23:45:43.882 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Sep 4 23:45:43.883477 coreos-metadata[1833]: Sep 04 23:45:43.883 INFO Fetch successful Sep 4 
23:45:43.883477 coreos-metadata[1833]: Sep 04 23:45:43.883 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Sep 4 23:45:43.888702 coreos-metadata[1833]: Sep 04 23:45:43.888 INFO Fetch successful Sep 4 23:45:43.914416 bash[1955]: Updated "/home/core/.ssh/authorized_keys" Sep 4 23:45:43.928224 locksmithd[1898]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 4 23:45:43.943403 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Sep 4 23:45:43.968873 systemd[1]: Starting sshkeys.service... Sep 4 23:45:44.004501 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Sep 4 23:45:44.007546 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Sep 4 23:45:44.024261 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Sep 4 23:45:44.098504 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Sep 4 23:45:44.127778 systemd-logind[1847]: Watching system buttons on /dev/input/event0 (Power Button) Sep 4 23:45:44.127823 systemd-logind[1847]: Watching system buttons on /dev/input/event1 (Sleep Button) Sep 4 23:45:44.133111 systemd-logind[1847]: New seat seat0. Sep 4 23:45:44.143379 systemd[1]: Started systemd-logind.service - User Login Management. Sep 4 23:45:44.208087 systemd[1]: Started systemd-hostnamed.service - Hostname Service. 
Sep 4 23:45:44.212897 dbus-daemon[1834]: [system] Successfully activated service 'org.freedesktop.hostname1' Sep 4 23:45:44.217111 dbus-daemon[1834]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.7' (uid=0 pid=1868 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Sep 4 23:45:44.265747 systemd[1]: Starting polkit.service - Authorization Manager... Sep 4 23:45:44.381804 systemd-networkd[1776]: eth0: Gained IPv6LL Sep 4 23:45:44.399220 polkitd[2011]: Started polkitd version 121 Sep 4 23:45:44.400377 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Sep 4 23:45:44.406156 systemd[1]: Reached target network-online.target - Network is Online. Sep 4 23:45:44.414778 containerd[1864]: time="2025-09-04T23:45:44.414051059Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Sep 4 23:45:44.429272 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. Sep 4 23:45:44.439098 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 23:45:44.446106 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Sep 4 23:45:44.510618 containerd[1864]: time="2025-09-04T23:45:44.507988955Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Sep 4 23:45:44.512519 containerd[1864]: time="2025-09-04T23:45:44.511301831Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.103-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Sep 4 23:45:44.512519 containerd[1864]: time="2025-09-04T23:45:44.511377611Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." 
type=io.containerd.event.v1 Sep 4 23:45:44.512519 containerd[1864]: time="2025-09-04T23:45:44.511413659Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Sep 4 23:45:44.512519 containerd[1864]: time="2025-09-04T23:45:44.512185499Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Sep 4 23:45:44.512519 containerd[1864]: time="2025-09-04T23:45:44.512232563Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Sep 4 23:45:44.512519 containerd[1864]: time="2025-09-04T23:45:44.512358155Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Sep 4 23:45:44.512519 containerd[1864]: time="2025-09-04T23:45:44.512385419Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Sep 4 23:45:44.512905 containerd[1864]: time="2025-09-04T23:45:44.512786855Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 4 23:45:44.512905 containerd[1864]: time="2025-09-04T23:45:44.512820227Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Sep 4 23:45:44.512905 containerd[1864]: time="2025-09-04T23:45:44.512851595Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Sep 4 23:45:44.512905 containerd[1864]: time="2025-09-04T23:45:44.512874815Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." 
type=io.containerd.snapshotter.v1 Sep 4 23:45:44.513078 containerd[1864]: time="2025-09-04T23:45:44.513039947Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Sep 4 23:45:44.517732 containerd[1864]: time="2025-09-04T23:45:44.513430499Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Sep 4 23:45:44.517732 containerd[1864]: time="2025-09-04T23:45:44.513748607Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 4 23:45:44.517732 containerd[1864]: time="2025-09-04T23:45:44.513865283Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Sep 4 23:45:44.517732 containerd[1864]: time="2025-09-04T23:45:44.514232279Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Sep 4 23:45:44.517732 containerd[1864]: time="2025-09-04T23:45:44.514337111Z" level=info msg="metadata content store policy set" policy=shared Sep 4 23:45:44.524266 containerd[1864]: time="2025-09-04T23:45:44.522956315Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Sep 4 23:45:44.524266 containerd[1864]: time="2025-09-04T23:45:44.523083659Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Sep 4 23:45:44.525886 containerd[1864]: time="2025-09-04T23:45:44.524656559Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Sep 4 23:45:44.525886 containerd[1864]: time="2025-09-04T23:45:44.524763887Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." 
type=io.containerd.streaming.v1 Sep 4 23:45:44.525886 containerd[1864]: time="2025-09-04T23:45:44.524803811Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Sep 4 23:45:44.525886 containerd[1864]: time="2025-09-04T23:45:44.525085007Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Sep 4 23:45:44.526617 containerd[1864]: time="2025-09-04T23:45:44.526257719Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Sep 4 23:45:44.526617 containerd[1864]: time="2025-09-04T23:45:44.526529591Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Sep 4 23:45:44.526617 containerd[1864]: time="2025-09-04T23:45:44.526567163Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Sep 4 23:45:44.535707 containerd[1864]: time="2025-09-04T23:45:44.528803099Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Sep 4 23:45:44.535707 containerd[1864]: time="2025-09-04T23:45:44.528877031Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Sep 4 23:45:44.535707 containerd[1864]: time="2025-09-04T23:45:44.528911135Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Sep 4 23:45:44.535707 containerd[1864]: time="2025-09-04T23:45:44.528941663Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Sep 4 23:45:44.535707 containerd[1864]: time="2025-09-04T23:45:44.528973799Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." 
type=io.containerd.service.v1 Sep 4 23:45:44.535707 containerd[1864]: time="2025-09-04T23:45:44.529007099Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Sep 4 23:45:44.535707 containerd[1864]: time="2025-09-04T23:45:44.529038671Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Sep 4 23:45:44.535707 containerd[1864]: time="2025-09-04T23:45:44.529070963Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Sep 4 23:45:44.535707 containerd[1864]: time="2025-09-04T23:45:44.529101779Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Sep 4 23:45:44.535707 containerd[1864]: time="2025-09-04T23:45:44.529145879Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Sep 4 23:45:44.535707 containerd[1864]: time="2025-09-04T23:45:44.529177235Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Sep 4 23:45:44.535707 containerd[1864]: time="2025-09-04T23:45:44.529206659Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Sep 4 23:45:44.535707 containerd[1864]: time="2025-09-04T23:45:44.529244231Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Sep 4 23:45:44.535707 containerd[1864]: time="2025-09-04T23:45:44.529276247Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Sep 4 23:45:44.541056 containerd[1864]: time="2025-09-04T23:45:44.529308395Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Sep 4 23:45:44.541056 containerd[1864]: time="2025-09-04T23:45:44.529336691Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." 
type=io.containerd.grpc.v1
Sep 4 23:45:44.541056 containerd[1864]: time="2025-09-04T23:45:44.529366439Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Sep 4 23:45:44.541056 containerd[1864]: time="2025-09-04T23:45:44.529401503Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Sep 4 23:45:44.541056 containerd[1864]: time="2025-09-04T23:45:44.529435235Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Sep 4 23:45:44.541056 containerd[1864]: time="2025-09-04T23:45:44.529465283Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Sep 4 23:45:44.541056 containerd[1864]: time="2025-09-04T23:45:44.529502759Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Sep 4 23:45:44.541056 containerd[1864]: time="2025-09-04T23:45:44.529531967Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Sep 4 23:45:44.541056 containerd[1864]: time="2025-09-04T23:45:44.529563695Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Sep 4 23:45:44.541056 containerd[1864]: time="2025-09-04T23:45:44.531709751Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Sep 4 23:45:44.541056 containerd[1864]: time="2025-09-04T23:45:44.531775235Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Sep 4 23:45:44.541056 containerd[1864]: time="2025-09-04T23:45:44.531805163Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Sep 4 23:45:44.541056 containerd[1864]: time="2025-09-04T23:45:44.534043379Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Sep 4 23:45:44.541056 containerd[1864]: time="2025-09-04T23:45:44.534228071Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Sep 4 23:45:44.554083 containerd[1864]: time="2025-09-04T23:45:44.534254135Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Sep 4 23:45:44.554083 containerd[1864]: time="2025-09-04T23:45:44.534300947Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Sep 4 23:45:44.554083 containerd[1864]: time="2025-09-04T23:45:44.534329015Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Sep 4 23:45:44.554083 containerd[1864]: time="2025-09-04T23:45:44.534360491Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Sep 4 23:45:44.554083 containerd[1864]: time="2025-09-04T23:45:44.534384263Z" level=info msg="NRI interface is disabled by configuration."
Sep 4 23:45:44.554083 containerd[1864]: time="2025-09-04T23:45:44.534413303Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Sep 4 23:45:44.541692 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Sep 4 23:45:44.554642 containerd[1864]: time="2025-09-04T23:45:44.535155011Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Sep 4 23:45:44.554642 containerd[1864]: time="2025-09-04T23:45:44.535250435Z" level=info msg="Connect containerd service"
Sep 4 23:45:44.554642 containerd[1864]: time="2025-09-04T23:45:44.535325435Z" level=info msg="using legacy CRI server"
Sep 4 23:45:44.554642 containerd[1864]: time="2025-09-04T23:45:44.535343819Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Sep 4 23:45:44.554642 containerd[1864]: time="2025-09-04T23:45:44.536451215Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Sep 4 23:45:44.554642 containerd[1864]: time="2025-09-04T23:45:44.540132599Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Sep 4 23:45:44.554642 containerd[1864]: time="2025-09-04T23:45:44.540446579Z" level=info msg="Start subscribing containerd event"
Sep 4 23:45:44.554642 containerd[1864]: time="2025-09-04T23:45:44.540513467Z" level=info msg="Start recovering state"
Sep 4 23:45:44.554642 containerd[1864]: time="2025-09-04T23:45:44.551922515Z" level=info msg="Start event monitor"
Sep 4 23:45:44.554642 containerd[1864]: time="2025-09-04T23:45:44.552176051Z" level=info msg="Start snapshots syncer"
Sep 4 23:45:44.554642 containerd[1864]: time="2025-09-04T23:45:44.552466451Z" level=info msg="Start cni network conf syncer for default"
Sep 4 23:45:44.554642 containerd[1864]: time="2025-09-04T23:45:44.552695111Z" level=info msg="Start streaming server"
Sep 4 23:45:44.555324 containerd[1864]: time="2025-09-04T23:45:44.554922599Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Sep 4 23:45:44.567663 containerd[1864]: time="2025-09-04T23:45:44.555753647Z" level=info msg=serving... address=/run/containerd/containerd.sock
Sep 4 23:45:44.557116 systemd[1]: Started containerd.service - containerd container runtime.
Sep 4 23:45:44.573084 containerd[1864]: time="2025-09-04T23:45:44.569018747Z" level=info msg="containerd successfully booted in 0.163018s"
Sep 4 23:45:44.597392 polkitd[2011]: Loading rules from directory /etc/polkit-1/rules.d
Sep 4 23:45:44.597508 polkitd[2011]: Loading rules from directory /usr/share/polkit-1/rules.d
Sep 4 23:45:44.602030 polkitd[2011]: Finished loading, compiling and executing 2 rules
Sep 4 23:45:44.606539 dbus-daemon[1834]: [system] Successfully activated service 'org.freedesktop.PolicyKit1'
Sep 4 23:45:44.612750 coreos-metadata[1994]: Sep 04 23:45:44.608 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1
Sep 4 23:45:44.608841 systemd[1]: Started polkit.service - Authorization Manager.
Sep 4 23:45:44.614355 polkitd[2011]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Sep 4 23:45:44.617785 coreos-metadata[1994]: Sep 04 23:45:44.617 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1
Sep 4 23:45:44.620171 coreos-metadata[1994]: Sep 04 23:45:44.619 INFO Fetch successful
Sep 4 23:45:44.620171 coreos-metadata[1994]: Sep 04 23:45:44.619 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1
Sep 4 23:45:44.623704 coreos-metadata[1994]: Sep 04 23:45:44.622 INFO Fetch successful
Sep 4 23:45:44.636860 unknown[1994]: wrote ssh authorized keys file for user: core
Sep 4 23:45:44.703777 systemd-hostnamed[1868]: Hostname set to (transient)
Sep 4 23:45:44.703957 systemd-resolved[1777]: System hostname changed to 'ip-172-31-23-55'.
Sep 4 23:45:44.708858 update-ssh-keys[2052]: Updated "/home/core/.ssh/authorized_keys"
Sep 4 23:45:44.710685 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Sep 4 23:45:44.720743 systemd[1]: Finished sshkeys.service.
Sep 4 23:45:44.736012 amazon-ssm-agent[2016]: Initializing new seelog logger
Sep 4 23:45:44.736012 amazon-ssm-agent[2016]: New Seelog Logger Creation Complete
Sep 4 23:45:44.737140 amazon-ssm-agent[2016]: 2025/09/04 23:45:44 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Sep 4 23:45:44.737140 amazon-ssm-agent[2016]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Sep 4 23:45:44.737140 amazon-ssm-agent[2016]: 2025/09/04 23:45:44 processing appconfig overrides
Sep 4 23:45:44.737417 amazon-ssm-agent[2016]: 2025/09/04 23:45:44 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Sep 4 23:45:44.737417 amazon-ssm-agent[2016]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Sep 4 23:45:44.737417 amazon-ssm-agent[2016]: 2025/09/04 23:45:44 processing appconfig overrides
Sep 4 23:45:44.739428 amazon-ssm-agent[2016]: 2025/09/04 23:45:44 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Sep 4 23:45:44.739428 amazon-ssm-agent[2016]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Sep 4 23:45:44.739428 amazon-ssm-agent[2016]: 2025/09/04 23:45:44 processing appconfig overrides
Sep 4 23:45:44.739428 amazon-ssm-agent[2016]: 2025-09-04 23:45:44 INFO Proxy environment variables:
Sep 4 23:45:44.742019 amazon-ssm-agent[2016]: 2025/09/04 23:45:44 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Sep 4 23:45:44.742019 amazon-ssm-agent[2016]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Sep 4 23:45:44.742019 amazon-ssm-agent[2016]: 2025/09/04 23:45:44 processing appconfig overrides
Sep 4 23:45:44.841703 amazon-ssm-agent[2016]: 2025-09-04 23:45:44 INFO http_proxy:
Sep 4 23:45:44.942421 amazon-ssm-agent[2016]: 2025-09-04 23:45:44 INFO no_proxy:
Sep 4 23:45:45.043388 amazon-ssm-agent[2016]: 2025-09-04 23:45:44 INFO https_proxy:
Sep 4 23:45:45.141890 amazon-ssm-agent[2016]: 2025-09-04 23:45:44 INFO Checking if agent identity type OnPrem can be assumed
Sep 4 23:45:45.241255 amazon-ssm-agent[2016]: 2025-09-04 23:45:44 INFO Checking if agent identity type EC2 can be assumed
Sep 4 23:45:45.341083 amazon-ssm-agent[2016]: 2025-09-04 23:45:44 INFO Agent will take identity from EC2
Sep 4 23:45:45.440000 amazon-ssm-agent[2016]: 2025-09-04 23:45:44 INFO [amazon-ssm-agent] using named pipe channel for IPC
Sep 4 23:45:45.505531 tar[1861]: linux-arm64/LICENSE
Sep 4 23:45:45.506054 tar[1861]: linux-arm64/README.md
Sep 4 23:45:45.542613 amazon-ssm-agent[2016]: 2025-09-04 23:45:44 INFO [amazon-ssm-agent] using named pipe channel for IPC
Sep 4 23:45:45.549795 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Sep 4 23:45:45.640398 amazon-ssm-agent[2016]: 2025-09-04 23:45:44 INFO [amazon-ssm-agent] using named pipe channel for IPC
Sep 4 23:45:45.739623 amazon-ssm-agent[2016]: 2025-09-04 23:45:44 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0
Sep 4 23:45:45.839499 amazon-ssm-agent[2016]: 2025-09-04 23:45:44 INFO [amazon-ssm-agent] OS: linux, Arch: arm64
Sep 4 23:45:45.940001 amazon-ssm-agent[2016]: 2025-09-04 23:45:44 INFO [amazon-ssm-agent] Starting Core Agent
Sep 4 23:45:46.042063 amazon-ssm-agent[2016]: 2025-09-04 23:45:44 INFO [amazon-ssm-agent] registrar detected. Attempting registration
Sep 4 23:45:46.141320 amazon-ssm-agent[2016]: 2025-09-04 23:45:44 INFO [Registrar] Starting registrar module
Sep 4 23:45:46.209005 sshd_keygen[1886]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Sep 4 23:45:46.241427 amazon-ssm-agent[2016]: 2025-09-04 23:45:44 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration
Sep 4 23:45:46.271802 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Sep 4 23:45:46.285055 systemd[1]: Starting issuegen.service - Generate /run/issue...
Sep 4 23:45:46.293320 amazon-ssm-agent[2016]: 2025-09-04 23:45:46 INFO [EC2Identity] EC2 registration was successful.
Sep 4 23:45:46.293320 amazon-ssm-agent[2016]: 2025-09-04 23:45:46 INFO [CredentialRefresher] credentialRefresher has started
Sep 4 23:45:46.293493 amazon-ssm-agent[2016]: 2025-09-04 23:45:46 INFO [CredentialRefresher] Starting credentials refresher loop
Sep 4 23:45:46.293493 amazon-ssm-agent[2016]: 2025-09-04 23:45:46 INFO EC2RoleProvider Successfully connected with instance profile role credentials
Sep 4 23:45:46.294391 systemd[1]: Started sshd@0-172.31.23.55:22-139.178.89.65:53672.service - OpenSSH per-connection server daemon (139.178.89.65:53672).
Sep 4 23:45:46.312936 systemd[1]: issuegen.service: Deactivated successfully.
Sep 4 23:45:46.313496 systemd[1]: Finished issuegen.service - Generate /run/issue.
Sep 4 23:45:46.328198 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Sep 4 23:45:46.342768 amazon-ssm-agent[2016]: 2025-09-04 23:45:46 INFO [CredentialRefresher] Next credential rotation will be in 31.208307305133335 minutes
Sep 4 23:45:46.374442 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Sep 4 23:45:46.389419 systemd[1]: Started getty@tty1.service - Getty on tty1.
Sep 4 23:45:46.397855 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Sep 4 23:45:46.400937 systemd[1]: Reached target getty.target - Login Prompts.
Sep 4 23:45:46.543884 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 4 23:45:46.548478 systemd[1]: Reached target multi-user.target - Multi-User System.
Sep 4 23:45:46.557734 systemd[1]: Startup finished in 1.101s (kernel) + 8.912s (initrd) + 8.901s (userspace) = 18.916s.
Sep 4 23:45:46.560204 (kubelet)[2087]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 4 23:45:46.591352 sshd[2073]: Accepted publickey for core from 139.178.89.65 port 53672 ssh2: RSA SHA256:wIWgASbTOEOY+RJ7C8r7IgT0+0t1r6NIK7DD/aRqsNo
Sep 4 23:45:46.596651 sshd-session[2073]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 23:45:46.612142 ntpd[1838]: Listen normally on 6 eth0 [fe80::441:a2ff:fe8c:767d%2]:123
Sep 4 23:45:46.613239 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Sep 4 23:45:46.616765 ntpd[1838]: 4 Sep 23:45:46 ntpd[1838]: Listen normally on 6 eth0 [fe80::441:a2ff:fe8c:767d%2]:123
Sep 4 23:45:46.620195 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Sep 4 23:45:46.639705 systemd-logind[1847]: New session 1 of user core.
Sep 4 23:45:46.664341 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Sep 4 23:45:46.675154 systemd[1]: Starting user@500.service - User Manager for UID 500...
Sep 4 23:45:46.697534 (systemd)[2094]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Sep 4 23:45:46.703218 systemd-logind[1847]: New session c1 of user core.
Sep 4 23:45:47.006572 systemd[2094]: Queued start job for default target default.target.
Sep 4 23:45:47.017749 systemd[2094]: Created slice app.slice - User Application Slice.
Sep 4 23:45:47.017811 systemd[2094]: Reached target paths.target - Paths.
Sep 4 23:45:47.018009 systemd[2094]: Reached target timers.target - Timers.
Sep 4 23:45:47.020550 systemd[2094]: Starting dbus.socket - D-Bus User Message Bus Socket...
Sep 4 23:45:47.063444 systemd[2094]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Sep 4 23:45:47.063880 systemd[2094]: Reached target sockets.target - Sockets.
Sep 4 23:45:47.063972 systemd[2094]: Reached target basic.target - Basic System.
Sep 4 23:45:47.064056 systemd[2094]: Reached target default.target - Main User Target.
Sep 4 23:45:47.064114 systemd[2094]: Startup finished in 346ms.
Sep 4 23:45:47.064967 systemd[1]: Started user@500.service - User Manager for UID 500.
Sep 4 23:45:47.075925 systemd[1]: Started session-1.scope - Session 1 of User core.
Sep 4 23:45:47.242769 systemd[1]: Started sshd@1-172.31.23.55:22-139.178.89.65:53676.service - OpenSSH per-connection server daemon (139.178.89.65:53676).
Sep 4 23:45:47.323639 amazon-ssm-agent[2016]: 2025-09-04 23:45:47 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process
Sep 4 23:45:47.424582 amazon-ssm-agent[2016]: 2025-09-04 23:45:47 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2113) started
Sep 4 23:45:47.443403 sshd[2110]: Accepted publickey for core from 139.178.89.65 port 53676 ssh2: RSA SHA256:wIWgASbTOEOY+RJ7C8r7IgT0+0t1r6NIK7DD/aRqsNo
Sep 4 23:45:47.447337 sshd-session[2110]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 23:45:47.466641 systemd-logind[1847]: New session 2 of user core.
Sep 4 23:45:47.475238 systemd[1]: Started session-2.scope - Session 2 of User core.
Sep 4 23:45:47.580058 amazon-ssm-agent[2016]: 2025-09-04 23:45:47 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds
Sep 4 23:45:47.642125 sshd[2119]: Connection closed by 139.178.89.65 port 53676
Sep 4 23:45:47.643039 sshd-session[2110]: pam_unix(sshd:session): session closed for user core
Sep 4 23:45:47.650721 systemd[1]: session-2.scope: Deactivated successfully.
Sep 4 23:45:47.656027 systemd-logind[1847]: Session 2 logged out. Waiting for processes to exit.
Sep 4 23:45:47.656878 systemd[1]: sshd@1-172.31.23.55:22-139.178.89.65:53676.service: Deactivated successfully.
Sep 4 23:45:47.664991 systemd-logind[1847]: Removed session 2.
Sep 4 23:45:47.683085 kubelet[2087]: E0904 23:45:47.682997 2087 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 4 23:45:47.690248 systemd[1]: Started sshd@2-172.31.23.55:22-139.178.89.65:53686.service - OpenSSH per-connection server daemon (139.178.89.65:53686).
Sep 4 23:45:47.692183 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 4 23:45:47.692530 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 4 23:45:47.693083 systemd[1]: kubelet.service: Consumed 1.506s CPU time, 255.6M memory peak.
Sep 4 23:45:47.874632 sshd[2128]: Accepted publickey for core from 139.178.89.65 port 53686 ssh2: RSA SHA256:wIWgASbTOEOY+RJ7C8r7IgT0+0t1r6NIK7DD/aRqsNo
Sep 4 23:45:47.876320 sshd-session[2128]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 23:45:47.885313 systemd-logind[1847]: New session 3 of user core.
Sep 4 23:45:47.892835 systemd[1]: Started session-3.scope - Session 3 of User core.
Sep 4 23:45:48.013622 sshd[2131]: Connection closed by 139.178.89.65 port 53686
Sep 4 23:45:48.014429 sshd-session[2128]: pam_unix(sshd:session): session closed for user core
Sep 4 23:45:48.019558 systemd-logind[1847]: Session 3 logged out. Waiting for processes to exit.
Sep 4 23:45:48.022135 systemd[1]: sshd@2-172.31.23.55:22-139.178.89.65:53686.service: Deactivated successfully.
Sep 4 23:45:48.025315 systemd[1]: session-3.scope: Deactivated successfully.
Sep 4 23:45:48.028368 systemd-logind[1847]: Removed session 3.
Sep 4 23:45:48.059071 systemd[1]: Started sshd@3-172.31.23.55:22-139.178.89.65:53688.service - OpenSSH per-connection server daemon (139.178.89.65:53688).
Sep 4 23:45:48.237854 sshd[2137]: Accepted publickey for core from 139.178.89.65 port 53688 ssh2: RSA SHA256:wIWgASbTOEOY+RJ7C8r7IgT0+0t1r6NIK7DD/aRqsNo
Sep 4 23:45:48.240677 sshd-session[2137]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 23:45:48.249549 systemd-logind[1847]: New session 4 of user core.
Sep 4 23:45:48.255871 systemd[1]: Started session-4.scope - Session 4 of User core.
Sep 4 23:45:48.382246 sshd[2139]: Connection closed by 139.178.89.65 port 53688
Sep 4 23:45:48.381524 sshd-session[2137]: pam_unix(sshd:session): session closed for user core
Sep 4 23:45:48.387294 systemd[1]: sshd@3-172.31.23.55:22-139.178.89.65:53688.service: Deactivated successfully.
Sep 4 23:45:48.387907 systemd-logind[1847]: Session 4 logged out. Waiting for processes to exit.
Sep 4 23:45:48.390189 systemd[1]: session-4.scope: Deactivated successfully.
Sep 4 23:45:48.394076 systemd-logind[1847]: Removed session 4.
Sep 4 23:45:48.424305 systemd[1]: Started sshd@4-172.31.23.55:22-139.178.89.65:53696.service - OpenSSH per-connection server daemon (139.178.89.65:53696).
Sep 4 23:45:48.598700 sshd[2145]: Accepted publickey for core from 139.178.89.65 port 53696 ssh2: RSA SHA256:wIWgASbTOEOY+RJ7C8r7IgT0+0t1r6NIK7DD/aRqsNo
Sep 4 23:45:48.603046 sshd-session[2145]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 23:45:48.615989 systemd-logind[1847]: New session 5 of user core.
Sep 4 23:45:48.622878 systemd[1]: Started session-5.scope - Session 5 of User core.
Sep 4 23:45:48.761714 sudo[2148]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Sep 4 23:45:48.762346 sudo[2148]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 4 23:45:48.777307 sudo[2148]: pam_unix(sudo:session): session closed for user root
Sep 4 23:45:48.800661 sshd[2147]: Connection closed by 139.178.89.65 port 53696
Sep 4 23:45:48.801718 sshd-session[2145]: pam_unix(sshd:session): session closed for user core
Sep 4 23:45:48.807851 systemd-logind[1847]: Session 5 logged out. Waiting for processes to exit.
Sep 4 23:45:48.808960 systemd[1]: sshd@4-172.31.23.55:22-139.178.89.65:53696.service: Deactivated successfully.
Sep 4 23:45:48.812205 systemd[1]: session-5.scope: Deactivated successfully.
Sep 4 23:45:48.816863 systemd-logind[1847]: Removed session 5.
Sep 4 23:45:48.847115 systemd[1]: Started sshd@5-172.31.23.55:22-139.178.89.65:53702.service - OpenSSH per-connection server daemon (139.178.89.65:53702).
Sep 4 23:45:49.029211 sshd[2154]: Accepted publickey for core from 139.178.89.65 port 53702 ssh2: RSA SHA256:wIWgASbTOEOY+RJ7C8r7IgT0+0t1r6NIK7DD/aRqsNo
Sep 4 23:45:49.031736 sshd-session[2154]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 23:45:49.039662 systemd-logind[1847]: New session 6 of user core.
Sep 4 23:45:49.048868 systemd[1]: Started session-6.scope - Session 6 of User core.
Sep 4 23:45:49.152431 sudo[2158]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Sep 4 23:45:49.153082 sudo[2158]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 4 23:45:49.159028 sudo[2158]: pam_unix(sudo:session): session closed for user root
Sep 4 23:45:49.169649 sudo[2157]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Sep 4 23:45:49.170272 sudo[2157]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 4 23:45:49.194175 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Sep 4 23:45:49.240443 augenrules[2180]: No rules
Sep 4 23:45:49.242735 systemd[1]: audit-rules.service: Deactivated successfully.
Sep 4 23:45:49.243177 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Sep 4 23:45:49.246029 sudo[2157]: pam_unix(sudo:session): session closed for user root
Sep 4 23:45:49.269579 sshd[2156]: Connection closed by 139.178.89.65 port 53702
Sep 4 23:45:49.269392 sshd-session[2154]: pam_unix(sshd:session): session closed for user core
Sep 4 23:45:49.274675 systemd[1]: sshd@5-172.31.23.55:22-139.178.89.65:53702.service: Deactivated successfully.
Sep 4 23:45:49.278043 systemd[1]: session-6.scope: Deactivated successfully.
Sep 4 23:45:49.281851 systemd-logind[1847]: Session 6 logged out. Waiting for processes to exit.
Sep 4 23:45:49.284307 systemd-logind[1847]: Removed session 6.
Sep 4 23:45:49.311121 systemd[1]: Started sshd@6-172.31.23.55:22-139.178.89.65:53714.service - OpenSSH per-connection server daemon (139.178.89.65:53714).
Sep 4 23:45:49.502800 sshd[2189]: Accepted publickey for core from 139.178.89.65 port 53714 ssh2: RSA SHA256:wIWgASbTOEOY+RJ7C8r7IgT0+0t1r6NIK7DD/aRqsNo
Sep 4 23:45:49.505225 sshd-session[2189]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 23:45:49.513968 systemd-logind[1847]: New session 7 of user core.
Sep 4 23:45:49.520858 systemd[1]: Started session-7.scope - Session 7 of User core.
Sep 4 23:45:49.624673 sudo[2192]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Sep 4 23:45:49.625330 sudo[2192]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 4 23:45:50.448091 systemd[1]: Starting docker.service - Docker Application Container Engine...
Sep 4 23:45:50.452008 (dockerd)[2210]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Sep 4 23:45:51.034133 systemd-resolved[1777]: Clock change detected. Flushing caches.
Sep 4 23:45:51.397571 dockerd[2210]: time="2025-09-04T23:45:51.396474665Z" level=info msg="Starting up"
Sep 4 23:45:51.588046 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport2988425946-merged.mount: Deactivated successfully.
Sep 4 23:45:51.684378 dockerd[2210]: time="2025-09-04T23:45:51.684219811Z" level=info msg="Loading containers: start."
Sep 4 23:45:51.976558 kernel: Initializing XFRM netlink socket
Sep 4 23:45:52.009013 (udev-worker)[2236]: Network interface NamePolicy= disabled on kernel command line.
Sep 4 23:45:52.098057 systemd-networkd[1776]: docker0: Link UP
Sep 4 23:45:52.139055 dockerd[2210]: time="2025-09-04T23:45:52.139003817Z" level=info msg="Loading containers: done."
Sep 4 23:45:52.171563 dockerd[2210]: time="2025-09-04T23:45:52.170473589Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Sep 4 23:45:52.171563 dockerd[2210]: time="2025-09-04T23:45:52.170687825Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1
Sep 4 23:45:52.171563 dockerd[2210]: time="2025-09-04T23:45:52.171048197Z" level=info msg="Daemon has completed initialization"
Sep 4 23:45:52.237367 dockerd[2210]: time="2025-09-04T23:45:52.237275813Z" level=info msg="API listen on /run/docker.sock"
Sep 4 23:45:52.237832 systemd[1]: Started docker.service - Docker Application Container Engine.
Sep 4 23:45:52.581063 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck26107807-merged.mount: Deactivated successfully.
Sep 4 23:45:53.307778 containerd[1864]: time="2025-09-04T23:45:53.307726291Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.12\""
Sep 4 23:45:53.960931 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3264578261.mount: Deactivated successfully.
Sep 4 23:45:55.359367 containerd[1864]: time="2025-09-04T23:45:55.357331365Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 23:45:55.360886 containerd[1864]: time="2025-09-04T23:45:55.360383577Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.12: active requests=0, bytes read=25652441"
Sep 4 23:45:55.364532 containerd[1864]: time="2025-09-04T23:45:55.362968365Z" level=info msg="ImageCreate event name:\"sha256:25d00c9505e8a4a7a6c827030f878b50e58bbf63322e01a7d92807bcb4db6b3d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 23:45:55.371311 containerd[1864]: time="2025-09-04T23:45:55.371252757Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:e9011c3bee8c06ecabd7816e119dca4e448c92f7a78acd891de3d2db1dc6c234\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 23:45:55.373625 containerd[1864]: time="2025-09-04T23:45:55.372895737Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.12\" with image id \"sha256:25d00c9505e8a4a7a6c827030f878b50e58bbf63322e01a7d92807bcb4db6b3d\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.12\", repo digest \"registry.k8s.io/kube-apiserver@sha256:e9011c3bee8c06ecabd7816e119dca4e448c92f7a78acd891de3d2db1dc6c234\", size \"25649241\" in 2.065105138s"
Sep 4 23:45:55.373852 containerd[1864]: time="2025-09-04T23:45:55.373817265Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.12\" returns image reference \"sha256:25d00c9505e8a4a7a6c827030f878b50e58bbf63322e01a7d92807bcb4db6b3d\""
Sep 4 23:45:55.376444 containerd[1864]: time="2025-09-04T23:45:55.376400601Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.12\""
Sep 4 23:45:56.736598 containerd[1864]: time="2025-09-04T23:45:56.736541064Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 23:45:56.739862 containerd[1864]: time="2025-09-04T23:45:56.739800624Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.12: active requests=0, bytes read=22460309"
Sep 4 23:45:56.740797 containerd[1864]: time="2025-09-04T23:45:56.740740440Z" level=info msg="ImageCreate event name:\"sha256:04df324666956d4cb57096c0edff6bfe1d75e71fb8f508dec8818f2842f821e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 23:45:56.745945 containerd[1864]: time="2025-09-04T23:45:56.745877388Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:d2862f94d87320267fddbd55db26556a267aa802e51d6b60f25786b4c428afc8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 23:45:56.748471 containerd[1864]: time="2025-09-04T23:45:56.748416600Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.12\" with image id \"sha256:04df324666956d4cb57096c0edff6bfe1d75e71fb8f508dec8818f2842f821e1\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.12\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:d2862f94d87320267fddbd55db26556a267aa802e51d6b60f25786b4c428afc8\", size \"23997423\" in 1.371761995s"
Sep 4 23:45:56.748693 containerd[1864]: time="2025-09-04T23:45:56.748659696Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.12\" returns image reference \"sha256:04df324666956d4cb57096c0edff6bfe1d75e71fb8f508dec8818f2842f821e1\""
Sep 4 23:45:56.750318 containerd[1864]: time="2025-09-04T23:45:56.750024168Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.12\""
Sep 4 23:45:57.909548 containerd[1864]: time="2025-09-04T23:45:57.907930022Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 23:45:57.911235 containerd[1864]: time="2025-09-04T23:45:57.911171966Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.12: active requests=0, bytes read=17125903"
Sep 4 23:45:57.912748 containerd[1864]: time="2025-09-04T23:45:57.912674474Z" level=info msg="ImageCreate event name:\"sha256:00b0619122c2d4fd3b5e102e9850d8c732e08a386b9c172c409b3a5cd552e07d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 23:45:57.917874 containerd[1864]: time="2025-09-04T23:45:57.917822798Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:152943b7e30244f4415fd0a5860a2dccd91660fe983d30a28a10edb0cc8f6756\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 23:45:57.920110 containerd[1864]: time="2025-09-04T23:45:57.920048978Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.12\" with image id \"sha256:00b0619122c2d4fd3b5e102e9850d8c732e08a386b9c172c409b3a5cd552e07d\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.12\", repo digest \"registry.k8s.io/kube-scheduler@sha256:152943b7e30244f4415fd0a5860a2dccd91660fe983d30a28a10edb0cc8f6756\", size \"18663035\" in 1.169963598s"
Sep 4 23:45:57.920244 containerd[1864]: time="2025-09-04T23:45:57.920107034Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.12\" returns image reference \"sha256:00b0619122c2d4fd3b5e102e9850d8c732e08a386b9c172c409b3a5cd552e07d\""
Sep 4 23:45:57.921760 containerd[1864]: time="2025-09-04T23:45:57.921716198Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.12\""
Sep 4 23:45:58.363075 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Sep 4 23:45:58.373423 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 4 23:45:58.818881 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 4 23:45:58.825845 (kubelet)[2474]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 4 23:45:58.935123 kubelet[2474]: E0904 23:45:58.934924 2474 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 4 23:45:58.945078 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 4 23:45:58.947700 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 4 23:45:58.948728 systemd[1]: kubelet.service: Consumed 341ms CPU time, 106.9M memory peak.
Sep 4 23:45:59.390976 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount930923179.mount: Deactivated successfully.
Sep 4 23:45:59.959079 containerd[1864]: time="2025-09-04T23:45:59.958989184Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 23:45:59.960976 containerd[1864]: time="2025-09-04T23:45:59.960891580Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.12: active requests=0, bytes read=26916095"
Sep 4 23:45:59.963690 containerd[1864]: time="2025-09-04T23:45:59.963615484Z" level=info msg="ImageCreate event name:\"sha256:25c7652bd0d893b147dce9135dc6a68c37da76f9a20dceec1d520782031b2f36\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 23:45:59.968229 containerd[1864]: time="2025-09-04T23:45:59.968160976Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:90aa6b5f4065937521ff8438bc705317485d0be3f8b00a07145e697d92cc2cc6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 23:45:59.970158 containerd[1864]: time="2025-09-04T23:45:59.969703432Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.12\" with image id \"sha256:25c7652bd0d893b147dce9135dc6a68c37da76f9a20dceec1d520782031b2f36\", repo tag \"registry.k8s.io/kube-proxy:v1.31.12\", repo digest \"registry.k8s.io/kube-proxy@sha256:90aa6b5f4065937521ff8438bc705317485d0be3f8b00a07145e697d92cc2cc6\", size \"26915114\" in 2.047783498s"
Sep 4 23:45:59.970158 containerd[1864]: time="2025-09-04T23:45:59.969752380Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.12\" returns image reference \"sha256:25c7652bd0d893b147dce9135dc6a68c37da76f9a20dceec1d520782031b2f36\""
Sep 4 23:45:59.970888 containerd[1864]: time="2025-09-04T23:45:59.970617928Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\""
Sep 4 23:46:00.515855 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3985982024.mount: Deactivated successfully.
Sep 4 23:46:01.719671 containerd[1864]: time="2025-09-04T23:46:01.719613629Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 23:46:01.722425 containerd[1864]: time="2025-09-04T23:46:01.722340497Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951622"
Sep 4 23:46:01.724324 containerd[1864]: time="2025-09-04T23:46:01.724252481Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 23:46:01.731550 containerd[1864]: time="2025-09-04T23:46:01.730680953Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 23:46:01.739381 containerd[1864]: time="2025-09-04T23:46:01.739304909Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.768629117s"
Sep 4 23:46:01.739381 containerd[1864]: time="2025-09-04T23:46:01.739374113Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\""
Sep 4 23:46:01.740787 containerd[1864]: time="2025-09-04T23:46:01.740747813Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Sep 4 23:46:02.218374 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount816058520.mount: Deactivated successfully.
Sep 4 23:46:02.231915 containerd[1864]: time="2025-09-04T23:46:02.231834135Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 23:46:02.234077 containerd[1864]: time="2025-09-04T23:46:02.233660967Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268703"
Sep 4 23:46:02.237546 containerd[1864]: time="2025-09-04T23:46:02.236214867Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 23:46:02.242401 containerd[1864]: time="2025-09-04T23:46:02.242352759Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 23:46:02.243686 containerd[1864]: time="2025-09-04T23:46:02.243627063Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest
\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 502.712738ms" Sep 4 23:46:02.243686 containerd[1864]: time="2025-09-04T23:46:02.243681159Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Sep 4 23:46:02.244566 containerd[1864]: time="2025-09-04T23:46:02.244488387Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Sep 4 23:46:02.808342 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3888833996.mount: Deactivated successfully. Sep 4 23:46:04.829857 containerd[1864]: time="2025-09-04T23:46:04.829734344Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 23:46:04.833111 containerd[1864]: time="2025-09-04T23:46:04.832585700Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=66537161" Sep 4 23:46:04.834798 containerd[1864]: time="2025-09-04T23:46:04.834635912Z" level=info msg="ImageCreate event name:\"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 23:46:04.844457 containerd[1864]: time="2025-09-04T23:46:04.843705080Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 23:46:04.847772 containerd[1864]: time="2025-09-04T23:46:04.847697420Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"66535646\" in 2.603121985s" Sep 4 23:46:04.847772 
containerd[1864]: time="2025-09-04T23:46:04.847763012Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\"" Sep 4 23:46:08.959031 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Sep 4 23:46:08.969990 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 23:46:09.332971 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 23:46:09.337235 (kubelet)[2623]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 4 23:46:09.441566 kubelet[2623]: E0904 23:46:09.439780 2623 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 4 23:46:09.445856 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 4 23:46:09.446180 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 4 23:46:09.446756 systemd[1]: kubelet.service: Consumed 291ms CPU time, 107.5M memory peak. Sep 4 23:46:11.823793 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 23:46:11.824833 systemd[1]: kubelet.service: Consumed 291ms CPU time, 107.5M memory peak. Sep 4 23:46:11.834034 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 23:46:11.894833 systemd[1]: Reload requested from client PID 2638 ('systemctl') (unit session-7.scope)... Sep 4 23:46:11.894870 systemd[1]: Reloading... Sep 4 23:46:12.191579 zram_generator::config[2690]: No configuration found. 
Sep 4 23:46:12.407248 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 4 23:46:12.641191 systemd[1]: Reloading finished in 745 ms. Sep 4 23:46:12.734690 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 23:46:12.744606 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 23:46:12.746836 systemd[1]: kubelet.service: Deactivated successfully. Sep 4 23:46:12.748760 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 23:46:12.748933 systemd[1]: kubelet.service: Consumed 229ms CPU time, 94.9M memory peak. Sep 4 23:46:12.760040 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 23:46:13.079794 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 23:46:13.090675 (kubelet)[2749]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 4 23:46:13.169193 kubelet[2749]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 4 23:46:13.171537 kubelet[2749]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Sep 4 23:46:13.171537 kubelet[2749]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Sep 4 23:46:13.171537 kubelet[2749]: I0904 23:46:13.169999 2749 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 4 23:46:14.901696 kubelet[2749]: I0904 23:46:14.901624 2749 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Sep 4 23:46:14.901696 kubelet[2749]: I0904 23:46:14.901679 2749 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 4 23:46:14.902311 kubelet[2749]: I0904 23:46:14.902147 2749 server.go:934] "Client rotation is on, will bootstrap in background" Sep 4 23:46:14.939597 kubelet[2749]: E0904 23:46:14.939497 2749 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.31.23.55:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.23.55:6443: connect: connection refused" logger="UnhandledError" Sep 4 23:46:14.941144 kubelet[2749]: I0904 23:46:14.940894 2749 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 4 23:46:14.959117 kubelet[2749]: E0904 23:46:14.959050 2749 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 4 23:46:14.959243 kubelet[2749]: I0904 23:46:14.959123 2749 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Sep 4 23:46:14.966759 kubelet[2749]: I0904 23:46:14.966710 2749 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 4 23:46:14.968588 kubelet[2749]: I0904 23:46:14.968531 2749 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Sep 4 23:46:14.968935 kubelet[2749]: I0904 23:46:14.968872 2749 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 4 23:46:14.969226 kubelet[2749]: I0904 23:46:14.968924 2749 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-23-55","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPoli
cyOptions":null,"CgroupVersion":2} Sep 4 23:46:14.969397 kubelet[2749]: I0904 23:46:14.969351 2749 topology_manager.go:138] "Creating topology manager with none policy" Sep 4 23:46:14.969397 kubelet[2749]: I0904 23:46:14.969373 2749 container_manager_linux.go:300] "Creating device plugin manager" Sep 4 23:46:14.969870 kubelet[2749]: I0904 23:46:14.969827 2749 state_mem.go:36] "Initialized new in-memory state store" Sep 4 23:46:14.975346 kubelet[2749]: I0904 23:46:14.975075 2749 kubelet.go:408] "Attempting to sync node with API server" Sep 4 23:46:14.975346 kubelet[2749]: I0904 23:46:14.975131 2749 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 4 23:46:14.975346 kubelet[2749]: I0904 23:46:14.975168 2749 kubelet.go:314] "Adding apiserver pod source" Sep 4 23:46:14.975346 kubelet[2749]: I0904 23:46:14.975216 2749 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 4 23:46:14.980543 kubelet[2749]: W0904 23:46:14.978947 2749 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.23.55:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-23-55&limit=500&resourceVersion=0": dial tcp 172.31.23.55:6443: connect: connection refused Sep 4 23:46:14.980543 kubelet[2749]: E0904 23:46:14.979057 2749 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.23.55:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-23-55&limit=500&resourceVersion=0\": dial tcp 172.31.23.55:6443: connect: connection refused" logger="UnhandledError" Sep 4 23:46:14.982330 kubelet[2749]: I0904 23:46:14.982277 2749 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Sep 4 23:46:14.983546 kubelet[2749]: I0904 23:46:14.983475 2749 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet 
mode" Sep 4 23:46:14.983886 kubelet[2749]: W0904 23:46:14.983850 2749 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Sep 4 23:46:14.985793 kubelet[2749]: I0904 23:46:14.985746 2749 server.go:1274] "Started kubelet" Sep 4 23:46:14.986033 kubelet[2749]: W0904 23:46:14.985962 2749 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.23.55:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.23.55:6443: connect: connection refused Sep 4 23:46:14.986103 kubelet[2749]: E0904 23:46:14.986064 2749 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.23.55:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.23.55:6443: connect: connection refused" logger="UnhandledError" Sep 4 23:46:14.992706 kubelet[2749]: I0904 23:46:14.992643 2749 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Sep 4 23:46:14.993705 kubelet[2749]: I0904 23:46:14.993611 2749 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 4 23:46:14.994324 kubelet[2749]: I0904 23:46:14.994168 2749 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 4 23:46:14.994945 kubelet[2749]: I0904 23:46:14.994917 2749 server.go:449] "Adding debug handlers to kubelet server" Sep 4 23:46:14.996921 kubelet[2749]: E0904 23:46:14.994714 2749 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.23.55:6443/api/v1/namespaces/default/events\": dial tcp 172.31.23.55:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-23-55.18623916dce0faee default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-23-55,UID:ip-172-31-23-55,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-23-55,},FirstTimestamp:2025-09-04 23:46:14.985710318 +0000 UTC m=+1.886389690,LastTimestamp:2025-09-04 23:46:14.985710318 +0000 UTC m=+1.886389690,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-23-55,}" Sep 4 23:46:15.000673 kubelet[2749]: I0904 23:46:15.000637 2749 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 4 23:46:15.003382 kubelet[2749]: E0904 23:46:15.003325 2749 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 4 23:46:15.003865 kubelet[2749]: I0904 23:46:15.003761 2749 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 4 23:46:15.009118 kubelet[2749]: E0904 23:46:15.009067 2749 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ip-172-31-23-55\" not found" Sep 4 23:46:15.009284 kubelet[2749]: I0904 23:46:15.009131 2749 volume_manager.go:289] "Starting Kubelet Volume Manager" Sep 4 23:46:15.009492 kubelet[2749]: I0904 23:46:15.009464 2749 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Sep 4 23:46:15.010202 kubelet[2749]: I0904 23:46:15.010041 2749 reconciler.go:26] "Reconciler: start to sync state" Sep 4 23:46:15.012027 kubelet[2749]: W0904 23:46:15.011289 2749 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.23.55:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.23.55:6443: connect: connection refused Sep 4 23:46:15.012027 kubelet[2749]: 
E0904 23:46:15.011392 2749 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.23.55:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.23.55:6443: connect: connection refused" logger="UnhandledError" Sep 4 23:46:15.012027 kubelet[2749]: I0904 23:46:15.011776 2749 factory.go:221] Registration of the systemd container factory successfully Sep 4 23:46:15.012027 kubelet[2749]: I0904 23:46:15.011908 2749 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 4 23:46:15.014667 kubelet[2749]: I0904 23:46:15.014617 2749 factory.go:221] Registration of the containerd container factory successfully Sep 4 23:46:15.046933 kubelet[2749]: I0904 23:46:15.046848 2749 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 4 23:46:15.050433 kubelet[2749]: I0904 23:46:15.050380 2749 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Sep 4 23:46:15.050433 kubelet[2749]: I0904 23:46:15.050428 2749 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 4 23:46:15.050851 kubelet[2749]: I0904 23:46:15.050464 2749 kubelet.go:2321] "Starting kubelet main sync loop" Sep 4 23:46:15.050851 kubelet[2749]: E0904 23:46:15.050619 2749 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 4 23:46:15.062800 kubelet[2749]: E0904 23:46:15.062730 2749 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.23.55:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-23-55?timeout=10s\": dial tcp 172.31.23.55:6443: connect: connection refused" interval="200ms" Sep 4 23:46:15.066116 kubelet[2749]: W0904 23:46:15.065716 2749 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.23.55:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.23.55:6443: connect: connection refused Sep 4 23:46:15.066116 kubelet[2749]: E0904 23:46:15.065809 2749 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.23.55:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.23.55:6443: connect: connection refused" logger="UnhandledError" Sep 4 23:46:15.066677 kubelet[2749]: I0904 23:46:15.066621 2749 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 4 23:46:15.066866 kubelet[2749]: I0904 23:46:15.066654 2749 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 4 23:46:15.066866 kubelet[2749]: I0904 23:46:15.066812 2749 state_mem.go:36] "Initialized new in-memory state store" Sep 4 23:46:15.070911 kubelet[2749]: I0904 23:46:15.070853 2749 policy_none.go:49] "None policy: Start" Sep 4 
23:46:15.073570 kubelet[2749]: I0904 23:46:15.073090 2749 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 4 23:46:15.073570 kubelet[2749]: I0904 23:46:15.073133 2749 state_mem.go:35] "Initializing new in-memory state store" Sep 4 23:46:15.085892 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Sep 4 23:46:15.104572 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Sep 4 23:46:15.110496 kubelet[2749]: E0904 23:46:15.109857 2749 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ip-172-31-23-55\" not found" Sep 4 23:46:15.111403 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Sep 4 23:46:15.121204 kubelet[2749]: I0904 23:46:15.121143 2749 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 4 23:46:15.121467 kubelet[2749]: I0904 23:46:15.121437 2749 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 4 23:46:15.121574 kubelet[2749]: I0904 23:46:15.121467 2749 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 4 23:46:15.123021 kubelet[2749]: I0904 23:46:15.122758 2749 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 4 23:46:15.125719 kubelet[2749]: E0904 23:46:15.125677 2749 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-23-55\" not found" Sep 4 23:46:15.162809 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Sep 4 23:46:15.186958 systemd[1]: Created slice kubepods-burstable-pod0e83f1f5c2e474415512d8ba1ec48d28.slice - libcontainer container kubepods-burstable-pod0e83f1f5c2e474415512d8ba1ec48d28.slice. 
Sep 4 23:46:15.217042 systemd[1]: Created slice kubepods-burstable-podefd267465fbdd4fe97d5d28b8718af84.slice - libcontainer container kubepods-burstable-podefd267465fbdd4fe97d5d28b8718af84.slice. Sep 4 23:46:15.224875 kubelet[2749]: I0904 23:46:15.224818 2749 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-23-55" Sep 4 23:46:15.226604 kubelet[2749]: E0904 23:46:15.226367 2749 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.23.55:6443/api/v1/nodes\": dial tcp 172.31.23.55:6443: connect: connection refused" node="ip-172-31-23-55" Sep 4 23:46:15.231661 systemd[1]: Created slice kubepods-burstable-pode7339a588e0e65e0b2c3ddceaddfd9a9.slice - libcontainer container kubepods-burstable-pode7339a588e0e65e0b2c3ddceaddfd9a9.slice. Sep 4 23:46:15.263702 kubelet[2749]: E0904 23:46:15.263646 2749 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.23.55:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-23-55?timeout=10s\": dial tcp 172.31.23.55:6443: connect: connection refused" interval="400ms" Sep 4 23:46:15.311086 kubelet[2749]: I0904 23:46:15.311027 2749 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0e83f1f5c2e474415512d8ba1ec48d28-ca-certs\") pod \"kube-apiserver-ip-172-31-23-55\" (UID: \"0e83f1f5c2e474415512d8ba1ec48d28\") " pod="kube-system/kube-apiserver-ip-172-31-23-55" Sep 4 23:46:15.311209 kubelet[2749]: I0904 23:46:15.311088 2749 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0e83f1f5c2e474415512d8ba1ec48d28-k8s-certs\") pod \"kube-apiserver-ip-172-31-23-55\" (UID: \"0e83f1f5c2e474415512d8ba1ec48d28\") " pod="kube-system/kube-apiserver-ip-172-31-23-55" Sep 4 23:46:15.311209 kubelet[2749]: I0904 23:46:15.311132 2749 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0e83f1f5c2e474415512d8ba1ec48d28-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-23-55\" (UID: \"0e83f1f5c2e474415512d8ba1ec48d28\") " pod="kube-system/kube-apiserver-ip-172-31-23-55" Sep 4 23:46:15.311209 kubelet[2749]: I0904 23:46:15.311173 2749 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/efd267465fbdd4fe97d5d28b8718af84-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-23-55\" (UID: \"efd267465fbdd4fe97d5d28b8718af84\") " pod="kube-system/kube-controller-manager-ip-172-31-23-55" Sep 4 23:46:15.311374 kubelet[2749]: I0904 23:46:15.311210 2749 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e7339a588e0e65e0b2c3ddceaddfd9a9-kubeconfig\") pod \"kube-scheduler-ip-172-31-23-55\" (UID: \"e7339a588e0e65e0b2c3ddceaddfd9a9\") " pod="kube-system/kube-scheduler-ip-172-31-23-55" Sep 4 23:46:15.311374 kubelet[2749]: I0904 23:46:15.311247 2749 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/efd267465fbdd4fe97d5d28b8718af84-ca-certs\") pod \"kube-controller-manager-ip-172-31-23-55\" (UID: \"efd267465fbdd4fe97d5d28b8718af84\") " pod="kube-system/kube-controller-manager-ip-172-31-23-55" Sep 4 23:46:15.311374 kubelet[2749]: I0904 23:46:15.311280 2749 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/efd267465fbdd4fe97d5d28b8718af84-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-23-55\" (UID: \"efd267465fbdd4fe97d5d28b8718af84\") " 
pod="kube-system/kube-controller-manager-ip-172-31-23-55" Sep 4 23:46:15.311374 kubelet[2749]: I0904 23:46:15.311313 2749 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/efd267465fbdd4fe97d5d28b8718af84-k8s-certs\") pod \"kube-controller-manager-ip-172-31-23-55\" (UID: \"efd267465fbdd4fe97d5d28b8718af84\") " pod="kube-system/kube-controller-manager-ip-172-31-23-55" Sep 4 23:46:15.311374 kubelet[2749]: I0904 23:46:15.311347 2749 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/efd267465fbdd4fe97d5d28b8718af84-kubeconfig\") pod \"kube-controller-manager-ip-172-31-23-55\" (UID: \"efd267465fbdd4fe97d5d28b8718af84\") " pod="kube-system/kube-controller-manager-ip-172-31-23-55" Sep 4 23:46:15.428691 kubelet[2749]: I0904 23:46:15.428549 2749 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-23-55" Sep 4 23:46:15.429478 kubelet[2749]: E0904 23:46:15.429399 2749 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.23.55:6443/api/v1/nodes\": dial tcp 172.31.23.55:6443: connect: connection refused" node="ip-172-31-23-55" Sep 4 23:46:15.509704 containerd[1864]: time="2025-09-04T23:46:15.509240069Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-23-55,Uid:0e83f1f5c2e474415512d8ba1ec48d28,Namespace:kube-system,Attempt:0,}" Sep 4 23:46:15.523229 containerd[1864]: time="2025-09-04T23:46:15.522872093Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-23-55,Uid:efd267465fbdd4fe97d5d28b8718af84,Namespace:kube-system,Attempt:0,}" Sep 4 23:46:15.537743 containerd[1864]: time="2025-09-04T23:46:15.537686861Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-scheduler-ip-172-31-23-55,Uid:e7339a588e0e65e0b2c3ddceaddfd9a9,Namespace:kube-system,Attempt:0,}" Sep 4 23:46:15.665582 kubelet[2749]: E0904 23:46:15.665425 2749 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.23.55:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-23-55?timeout=10s\": dial tcp 172.31.23.55:6443: connect: connection refused" interval="800ms" Sep 4 23:46:15.832484 kubelet[2749]: I0904 23:46:15.832389 2749 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-23-55" Sep 4 23:46:15.832975 kubelet[2749]: E0904 23:46:15.832906 2749 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.23.55:6443/api/v1/nodes\": dial tcp 172.31.23.55:6443: connect: connection refused" node="ip-172-31-23-55" Sep 4 23:46:16.030324 kubelet[2749]: W0904 23:46:16.030235 2749 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.23.55:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.23.55:6443: connect: connection refused Sep 4 23:46:16.030915 kubelet[2749]: E0904 23:46:16.030333 2749 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.23.55:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.23.55:6443: connect: connection refused" logger="UnhandledError" Sep 4 23:46:16.034450 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2897748190.mount: Deactivated successfully. 
Sep 4 23:46:16.047041 containerd[1864]: time="2025-09-04T23:46:16.046976920Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 4 23:46:16.055596 containerd[1864]: time="2025-09-04T23:46:16.055472152Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173" Sep 4 23:46:16.057468 containerd[1864]: time="2025-09-04T23:46:16.057406084Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 4 23:46:16.060300 containerd[1864]: time="2025-09-04T23:46:16.060057640Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 4 23:46:16.063778 containerd[1864]: time="2025-09-04T23:46:16.063704380Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 4 23:46:16.066037 containerd[1864]: time="2025-09-04T23:46:16.065959300Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 4 23:46:16.068442 containerd[1864]: time="2025-09-04T23:46:16.067777036Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 4 23:46:16.070741 containerd[1864]: time="2025-09-04T23:46:16.070672372Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 4 23:46:16.075428 
containerd[1864]: time="2025-09-04T23:46:16.075365704Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 566.015043ms" Sep 4 23:46:16.080570 containerd[1864]: time="2025-09-04T23:46:16.080486572Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 557.507439ms" Sep 4 23:46:16.085610 containerd[1864]: time="2025-09-04T23:46:16.083406652Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 545.360631ms" Sep 4 23:46:16.210542 kubelet[2749]: W0904 23:46:16.210414 2749 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.23.55:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.23.55:6443: connect: connection refused Sep 4 23:46:16.210728 kubelet[2749]: E0904 23:46:16.210546 2749 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.23.55:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.23.55:6443: connect: connection refused" logger="UnhandledError" Sep 4 23:46:16.280537 containerd[1864]: time="2025-09-04T23:46:16.279959237Z" level=info 
msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 23:46:16.283723 containerd[1864]: time="2025-09-04T23:46:16.281747081Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 23:46:16.283723 containerd[1864]: time="2025-09-04T23:46:16.281807381Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 23:46:16.283723 containerd[1864]: time="2025-09-04T23:46:16.281960333Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 23:46:16.290555 containerd[1864]: time="2025-09-04T23:46:16.289762733Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 23:46:16.290555 containerd[1864]: time="2025-09-04T23:46:16.290019737Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 23:46:16.290555 containerd[1864]: time="2025-09-04T23:46:16.290061425Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 23:46:16.292566 containerd[1864]: time="2025-09-04T23:46:16.292421441Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 23:46:16.292947 containerd[1864]: time="2025-09-04T23:46:16.291241721Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 23:46:16.293132 containerd[1864]: time="2025-09-04T23:46:16.293017733Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 23:46:16.293132 containerd[1864]: time="2025-09-04T23:46:16.293059337Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 23:46:16.294771 containerd[1864]: time="2025-09-04T23:46:16.293350409Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 23:46:16.336920 kubelet[2749]: W0904 23:46:16.336313 2749 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.23.55:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-23-55&limit=500&resourceVersion=0": dial tcp 172.31.23.55:6443: connect: connection refused Sep 4 23:46:16.336920 kubelet[2749]: E0904 23:46:16.336414 2749 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.23.55:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-23-55&limit=500&resourceVersion=0\": dial tcp 172.31.23.55:6443: connect: connection refused" logger="UnhandledError" Sep 4 23:46:16.346162 systemd[1]: Started cri-containerd-5bc3b635a57bba17e5b7a6ae4f665a08b67ee823ae705c339a7a54d04f5be0a8.scope - libcontainer container 5bc3b635a57bba17e5b7a6ae4f665a08b67ee823ae705c339a7a54d04f5be0a8. Sep 4 23:46:16.360187 systemd[1]: Started cri-containerd-ab6c2a8065cee30aab150e081be81242403419aea5b6ec0389d6f61607adb78a.scope - libcontainer container ab6c2a8065cee30aab150e081be81242403419aea5b6ec0389d6f61607adb78a. Sep 4 23:46:16.374911 systemd[1]: Started cri-containerd-d6a2684b2f5aac3e70e890aedaa24160b6f87247d027b1190b509de681704a38.scope - libcontainer container d6a2684b2f5aac3e70e890aedaa24160b6f87247d027b1190b509de681704a38. 
Sep 4 23:46:16.418870 kubelet[2749]: W0904 23:46:16.418682 2749 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.23.55:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.23.55:6443: connect: connection refused Sep 4 23:46:16.418870 kubelet[2749]: E0904 23:46:16.418752 2749 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.23.55:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.23.55:6443: connect: connection refused" logger="UnhandledError" Sep 4 23:46:16.462557 containerd[1864]: time="2025-09-04T23:46:16.461029350Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-23-55,Uid:e7339a588e0e65e0b2c3ddceaddfd9a9,Namespace:kube-system,Attempt:0,} returns sandbox id \"ab6c2a8065cee30aab150e081be81242403419aea5b6ec0389d6f61607adb78a\"" Sep 4 23:46:16.467904 kubelet[2749]: E0904 23:46:16.467662 2749 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.23.55:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-23-55?timeout=10s\": dial tcp 172.31.23.55:6443: connect: connection refused" interval="1.6s" Sep 4 23:46:16.487292 containerd[1864]: time="2025-09-04T23:46:16.486755586Z" level=info msg="CreateContainer within sandbox \"ab6c2a8065cee30aab150e081be81242403419aea5b6ec0389d6f61607adb78a\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 4 23:46:16.491653 containerd[1864]: time="2025-09-04T23:46:16.491581470Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-23-55,Uid:0e83f1f5c2e474415512d8ba1ec48d28,Namespace:kube-system,Attempt:0,} returns sandbox id \"d6a2684b2f5aac3e70e890aedaa24160b6f87247d027b1190b509de681704a38\"" Sep 4 23:46:16.509814 containerd[1864]: 
time="2025-09-04T23:46:16.509759646Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-23-55,Uid:efd267465fbdd4fe97d5d28b8718af84,Namespace:kube-system,Attempt:0,} returns sandbox id \"5bc3b635a57bba17e5b7a6ae4f665a08b67ee823ae705c339a7a54d04f5be0a8\"" Sep 4 23:46:16.518004 containerd[1864]: time="2025-09-04T23:46:16.517946166Z" level=info msg="CreateContainer within sandbox \"d6a2684b2f5aac3e70e890aedaa24160b6f87247d027b1190b509de681704a38\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 4 23:46:16.518484 containerd[1864]: time="2025-09-04T23:46:16.518252178Z" level=info msg="CreateContainer within sandbox \"5bc3b635a57bba17e5b7a6ae4f665a08b67ee823ae705c339a7a54d04f5be0a8\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 4 23:46:16.526647 containerd[1864]: time="2025-09-04T23:46:16.525703038Z" level=info msg="CreateContainer within sandbox \"ab6c2a8065cee30aab150e081be81242403419aea5b6ec0389d6f61607adb78a\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"65f5cfe00a549bd39398e2a15d4f17b1638287b0ae3c7a1eacb70146cb81712d\"" Sep 4 23:46:16.527025 containerd[1864]: time="2025-09-04T23:46:16.526856634Z" level=info msg="StartContainer for \"65f5cfe00a549bd39398e2a15d4f17b1638287b0ae3c7a1eacb70146cb81712d\"" Sep 4 23:46:16.577836 containerd[1864]: time="2025-09-04T23:46:16.576853362Z" level=info msg="CreateContainer within sandbox \"5bc3b635a57bba17e5b7a6ae4f665a08b67ee823ae705c339a7a54d04f5be0a8\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"c0a13de979c2b281aa2cce7846c9d84c3eed03ca73d5c61f6cfac2f57dc09144\"" Sep 4 23:46:16.581194 containerd[1864]: time="2025-09-04T23:46:16.581143086Z" level=info msg="StartContainer for \"c0a13de979c2b281aa2cce7846c9d84c3eed03ca73d5c61f6cfac2f57dc09144\"" Sep 4 23:46:16.584863 systemd[1]: Started 
cri-containerd-65f5cfe00a549bd39398e2a15d4f17b1638287b0ae3c7a1eacb70146cb81712d.scope - libcontainer container 65f5cfe00a549bd39398e2a15d4f17b1638287b0ae3c7a1eacb70146cb81712d. Sep 4 23:46:16.587225 containerd[1864]: time="2025-09-04T23:46:16.586300782Z" level=info msg="CreateContainer within sandbox \"d6a2684b2f5aac3e70e890aedaa24160b6f87247d027b1190b509de681704a38\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"dd5d44c87ebdc021582f9d58969918fa3deb3946ae00d064094ec91bdaa6fa69\"" Sep 4 23:46:16.590661 containerd[1864]: time="2025-09-04T23:46:16.588639258Z" level=info msg="StartContainer for \"dd5d44c87ebdc021582f9d58969918fa3deb3946ae00d064094ec91bdaa6fa69\"" Sep 4 23:46:16.636914 kubelet[2749]: I0904 23:46:16.636842 2749 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-23-55" Sep 4 23:46:16.637378 kubelet[2749]: E0904 23:46:16.637337 2749 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.23.55:6443/api/v1/nodes\": dial tcp 172.31.23.55:6443: connect: connection refused" node="ip-172-31-23-55" Sep 4 23:46:16.658820 systemd[1]: Started cri-containerd-c0a13de979c2b281aa2cce7846c9d84c3eed03ca73d5c61f6cfac2f57dc09144.scope - libcontainer container c0a13de979c2b281aa2cce7846c9d84c3eed03ca73d5c61f6cfac2f57dc09144. Sep 4 23:46:16.707348 systemd[1]: Started cri-containerd-dd5d44c87ebdc021582f9d58969918fa3deb3946ae00d064094ec91bdaa6fa69.scope - libcontainer container dd5d44c87ebdc021582f9d58969918fa3deb3946ae00d064094ec91bdaa6fa69. 
Sep 4 23:46:16.727957 containerd[1864]: time="2025-09-04T23:46:16.727748131Z" level=info msg="StartContainer for \"65f5cfe00a549bd39398e2a15d4f17b1638287b0ae3c7a1eacb70146cb81712d\" returns successfully" Sep 4 23:46:16.805997 containerd[1864]: time="2025-09-04T23:46:16.805440344Z" level=info msg="StartContainer for \"c0a13de979c2b281aa2cce7846c9d84c3eed03ca73d5c61f6cfac2f57dc09144\" returns successfully" Sep 4 23:46:16.844376 containerd[1864]: time="2025-09-04T23:46:16.842840552Z" level=info msg="StartContainer for \"dd5d44c87ebdc021582f9d58969918fa3deb3946ae00d064094ec91bdaa6fa69\" returns successfully" Sep 4 23:46:17.034586 kubelet[2749]: E0904 23:46:17.033376 2749 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.31.23.55:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.23.55:6443: connect: connection refused" logger="UnhandledError" Sep 4 23:46:18.242215 kubelet[2749]: I0904 23:46:18.241312 2749 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-23-55" Sep 4 23:46:20.322970 kubelet[2749]: E0904 23:46:20.322903 2749 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-23-55\" not found" node="ip-172-31-23-55" Sep 4 23:46:20.418526 kubelet[2749]: E0904 23:46:20.417860 2749 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ip-172-31-23-55.18623916dce0faee default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-23-55,UID:ip-172-31-23-55,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-23-55,},FirstTimestamp:2025-09-04 23:46:14.985710318 +0000 UTC 
m=+1.886389690,LastTimestamp:2025-09-04 23:46:14.985710318 +0000 UTC m=+1.886389690,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-23-55,}" Sep 4 23:46:20.432333 kubelet[2749]: I0904 23:46:20.431943 2749 kubelet_node_status.go:75] "Successfully registered node" node="ip-172-31-23-55" Sep 4 23:46:20.432333 kubelet[2749]: E0904 23:46:20.431997 2749 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"ip-172-31-23-55\": node \"ip-172-31-23-55\" not found" Sep 4 23:46:20.517394 kubelet[2749]: E0904 23:46:20.517254 2749 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ip-172-31-23-55.18623916dded69f7 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-23-55,UID:ip-172-31-23-55,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:ip-172-31-23-55,},FirstTimestamp:2025-09-04 23:46:15.003302391 +0000 UTC m=+1.903981799,LastTimestamp:2025-09-04 23:46:15.003302391 +0000 UTC m=+1.903981799,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-23-55,}" Sep 4 23:46:20.982884 kubelet[2749]: I0904 23:46:20.982835 2749 apiserver.go:52] "Watching apiserver" Sep 4 23:46:21.010080 kubelet[2749]: I0904 23:46:21.009991 2749 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Sep 4 23:46:22.573557 systemd[1]: Reload requested from client PID 3024 ('systemctl') (unit session-7.scope)... Sep 4 23:46:22.574047 systemd[1]: Reloading... Sep 4 23:46:22.779574 zram_generator::config[3078]: No configuration found. 
Sep 4 23:46:23.000946 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 4 23:46:23.267899 systemd[1]: Reloading finished in 693 ms. Sep 4 23:46:23.319641 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 23:46:23.338687 systemd[1]: kubelet.service: Deactivated successfully. Sep 4 23:46:23.339185 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 23:46:23.339281 systemd[1]: kubelet.service: Consumed 2.634s CPU time, 129.8M memory peak. Sep 4 23:46:23.348254 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 23:46:23.725992 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 23:46:23.743250 (kubelet)[3129]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 4 23:46:23.837973 kubelet[3129]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 4 23:46:23.838417 kubelet[3129]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Sep 4 23:46:23.838417 kubelet[3129]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Sep 4 23:46:23.838417 kubelet[3129]: I0904 23:46:23.838128 3129 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 4 23:46:23.852707 kubelet[3129]: I0904 23:46:23.851484 3129 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Sep 4 23:46:23.852707 kubelet[3129]: I0904 23:46:23.851589 3129 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 4 23:46:23.852707 kubelet[3129]: I0904 23:46:23.852072 3129 server.go:934] "Client rotation is on, will bootstrap in background" Sep 4 23:46:23.855359 kubelet[3129]: I0904 23:46:23.855322 3129 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Sep 4 23:46:23.859935 kubelet[3129]: I0904 23:46:23.859873 3129 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 4 23:46:23.868184 kubelet[3129]: E0904 23:46:23.868116 3129 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 4 23:46:23.868623 kubelet[3129]: I0904 23:46:23.868594 3129 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Sep 4 23:46:23.874952 kubelet[3129]: I0904 23:46:23.874904 3129 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 4 23:46:23.875566 kubelet[3129]: I0904 23:46:23.875326 3129 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Sep 4 23:46:23.875818 kubelet[3129]: I0904 23:46:23.875767 3129 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 4 23:46:23.876381 kubelet[3129]: I0904 23:46:23.875921 3129 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-23-55","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPoli
cyOptions":null,"CgroupVersion":2} Sep 4 23:46:23.876662 kubelet[3129]: I0904 23:46:23.876638 3129 topology_manager.go:138] "Creating topology manager with none policy" Sep 4 23:46:23.878019 kubelet[3129]: I0904 23:46:23.876842 3129 container_manager_linux.go:300] "Creating device plugin manager" Sep 4 23:46:23.878019 kubelet[3129]: I0904 23:46:23.876928 3129 state_mem.go:36] "Initialized new in-memory state store" Sep 4 23:46:23.878019 kubelet[3129]: I0904 23:46:23.877123 3129 kubelet.go:408] "Attempting to sync node with API server" Sep 4 23:46:23.878019 kubelet[3129]: I0904 23:46:23.877151 3129 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 4 23:46:23.878019 kubelet[3129]: I0904 23:46:23.877188 3129 kubelet.go:314] "Adding apiserver pod source" Sep 4 23:46:23.878019 kubelet[3129]: I0904 23:46:23.877220 3129 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 4 23:46:23.888267 kubelet[3129]: I0904 23:46:23.888227 3129 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Sep 4 23:46:23.889302 kubelet[3129]: I0904 23:46:23.889256 3129 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 4 23:46:23.890375 kubelet[3129]: I0904 23:46:23.890338 3129 server.go:1274] "Started kubelet" Sep 4 23:46:23.896565 kubelet[3129]: I0904 23:46:23.895769 3129 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 4 23:46:23.905455 kubelet[3129]: I0904 23:46:23.905396 3129 volume_manager.go:289] "Starting Kubelet Volume Manager" Sep 4 23:46:23.908620 kubelet[3129]: I0904 23:46:23.908529 3129 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Sep 4 23:46:23.909111 kubelet[3129]: E0904 23:46:23.909080 3129 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ip-172-31-23-55\" not found" Sep 4 23:46:23.909737 kubelet[3129]: I0904 23:46:23.909705 3129 
desired_state_of_world_populator.go:147] "Desired state populator starts to run" Sep 4 23:46:23.910566 kubelet[3129]: I0904 23:46:23.910079 3129 reconciler.go:26] "Reconciler: start to sync state" Sep 4 23:46:23.912609 kubelet[3129]: I0904 23:46:23.910307 3129 server.go:449] "Adding debug handlers to kubelet server" Sep 4 23:46:23.918544 kubelet[3129]: I0904 23:46:23.910378 3129 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 4 23:46:23.918544 kubelet[3129]: I0904 23:46:23.917871 3129 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 4 23:46:23.918544 kubelet[3129]: I0904 23:46:23.912505 3129 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 4 23:46:23.929442 sudo[3146]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Sep 4 23:46:23.931398 sudo[3146]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Sep 4 23:46:23.939501 kubelet[3129]: I0904 23:46:23.939442 3129 factory.go:221] Registration of the containerd container factory successfully Sep 4 23:46:23.939501 kubelet[3129]: I0904 23:46:23.939481 3129 factory.go:221] Registration of the systemd container factory successfully Sep 4 23:46:23.945563 kubelet[3129]: I0904 23:46:23.942431 3129 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 4 23:46:23.946020 kubelet[3129]: I0904 23:46:23.945800 3129 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 4 23:46:23.950711 kubelet[3129]: I0904 23:46:23.950668 3129 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Sep 4 23:46:23.952378 kubelet[3129]: I0904 23:46:23.950890 3129 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 4 23:46:23.952378 kubelet[3129]: I0904 23:46:23.950929 3129 kubelet.go:2321] "Starting kubelet main sync loop" Sep 4 23:46:23.952378 kubelet[3129]: E0904 23:46:23.950992 3129 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 4 23:46:23.981177 kubelet[3129]: E0904 23:46:23.980734 3129 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 4 23:46:24.021571 kubelet[3129]: E0904 23:46:24.018174 3129 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ip-172-31-23-55\" not found" Sep 4 23:46:24.055299 kubelet[3129]: E0904 23:46:24.055237 3129 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 4 23:46:24.134613 kubelet[3129]: I0904 23:46:24.134271 3129 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 4 23:46:24.134613 kubelet[3129]: I0904 23:46:24.134301 3129 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 4 23:46:24.134613 kubelet[3129]: I0904 23:46:24.134335 3129 state_mem.go:36] "Initialized new in-memory state store" Sep 4 23:46:24.134973 kubelet[3129]: I0904 23:46:24.134946 3129 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 4 23:46:24.135174 kubelet[3129]: I0904 23:46:24.135061 3129 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 4 23:46:24.135174 kubelet[3129]: I0904 23:46:24.135103 3129 policy_none.go:49] "None policy: Start" Sep 4 23:46:24.137613 kubelet[3129]: I0904 23:46:24.137066 3129 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 4 23:46:24.137613 kubelet[3129]: I0904 23:46:24.137113 3129 state_mem.go:35] "Initializing new 
in-memory state store" Sep 4 23:46:24.137613 kubelet[3129]: I0904 23:46:24.137368 3129 state_mem.go:75] "Updated machine memory state" Sep 4 23:46:24.152997 kubelet[3129]: I0904 23:46:24.151874 3129 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 4 23:46:24.152997 kubelet[3129]: I0904 23:46:24.152165 3129 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 4 23:46:24.152997 kubelet[3129]: I0904 23:46:24.152184 3129 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 4 23:46:24.153851 kubelet[3129]: I0904 23:46:24.153824 3129 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 4 23:46:24.273004 kubelet[3129]: I0904 23:46:24.272001 3129 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-23-55" Sep 4 23:46:24.275459 kubelet[3129]: E0904 23:46:24.273934 3129 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ip-172-31-23-55\" already exists" pod="kube-system/kube-controller-manager-ip-172-31-23-55" Sep 4 23:46:24.276705 kubelet[3129]: E0904 23:46:24.275746 3129 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ip-172-31-23-55\" already exists" pod="kube-system/kube-apiserver-ip-172-31-23-55" Sep 4 23:46:24.305644 kubelet[3129]: I0904 23:46:24.303595 3129 kubelet_node_status.go:111] "Node was previously registered" node="ip-172-31-23-55" Sep 4 23:46:24.305644 kubelet[3129]: I0904 23:46:24.303703 3129 kubelet_node_status.go:75] "Successfully registered node" node="ip-172-31-23-55" Sep 4 23:46:24.317547 kubelet[3129]: I0904 23:46:24.317475 3129 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/efd267465fbdd4fe97d5d28b8718af84-ca-certs\") pod \"kube-controller-manager-ip-172-31-23-55\" (UID: \"efd267465fbdd4fe97d5d28b8718af84\") 
" pod="kube-system/kube-controller-manager-ip-172-31-23-55" Sep 4 23:46:24.317956 kubelet[3129]: I0904 23:46:24.317736 3129 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/efd267465fbdd4fe97d5d28b8718af84-k8s-certs\") pod \"kube-controller-manager-ip-172-31-23-55\" (UID: \"efd267465fbdd4fe97d5d28b8718af84\") " pod="kube-system/kube-controller-manager-ip-172-31-23-55" Sep 4 23:46:24.317956 kubelet[3129]: I0904 23:46:24.317780 3129 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/efd267465fbdd4fe97d5d28b8718af84-kubeconfig\") pod \"kube-controller-manager-ip-172-31-23-55\" (UID: \"efd267465fbdd4fe97d5d28b8718af84\") " pod="kube-system/kube-controller-manager-ip-172-31-23-55" Sep 4 23:46:24.317956 kubelet[3129]: I0904 23:46:24.317815 3129 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0e83f1f5c2e474415512d8ba1ec48d28-ca-certs\") pod \"kube-apiserver-ip-172-31-23-55\" (UID: \"0e83f1f5c2e474415512d8ba1ec48d28\") " pod="kube-system/kube-apiserver-ip-172-31-23-55" Sep 4 23:46:24.317956 kubelet[3129]: I0904 23:46:24.317860 3129 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/efd267465fbdd4fe97d5d28b8718af84-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-23-55\" (UID: \"efd267465fbdd4fe97d5d28b8718af84\") " pod="kube-system/kube-controller-manager-ip-172-31-23-55" Sep 4 23:46:24.317956 kubelet[3129]: I0904 23:46:24.317904 3129 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/efd267465fbdd4fe97d5d28b8718af84-usr-share-ca-certificates\") pod 
\"kube-controller-manager-ip-172-31-23-55\" (UID: \"efd267465fbdd4fe97d5d28b8718af84\") " pod="kube-system/kube-controller-manager-ip-172-31-23-55" Sep 4 23:46:24.319151 kubelet[3129]: I0904 23:46:24.317940 3129 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e7339a588e0e65e0b2c3ddceaddfd9a9-kubeconfig\") pod \"kube-scheduler-ip-172-31-23-55\" (UID: \"e7339a588e0e65e0b2c3ddceaddfd9a9\") " pod="kube-system/kube-scheduler-ip-172-31-23-55" Sep 4 23:46:24.319151 kubelet[3129]: I0904 23:46:24.317974 3129 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0e83f1f5c2e474415512d8ba1ec48d28-k8s-certs\") pod \"kube-apiserver-ip-172-31-23-55\" (UID: \"0e83f1f5c2e474415512d8ba1ec48d28\") " pod="kube-system/kube-apiserver-ip-172-31-23-55" Sep 4 23:46:24.319151 kubelet[3129]: I0904 23:46:24.318011 3129 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0e83f1f5c2e474415512d8ba1ec48d28-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-23-55\" (UID: \"0e83f1f5c2e474415512d8ba1ec48d28\") " pod="kube-system/kube-apiserver-ip-172-31-23-55" Sep 4 23:46:24.872726 sudo[3146]: pam_unix(sudo:session): session closed for user root Sep 4 23:46:24.890035 kubelet[3129]: I0904 23:46:24.888214 3129 apiserver.go:52] "Watching apiserver" Sep 4 23:46:24.910732 kubelet[3129]: I0904 23:46:24.910676 3129 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Sep 4 23:46:25.104866 kubelet[3129]: E0904 23:46:25.104378 3129 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ip-172-31-23-55\" already exists" pod="kube-system/kube-controller-manager-ip-172-31-23-55" Sep 4 23:46:25.111530 kubelet[3129]: E0904 
23:46:25.109105 3129 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ip-172-31-23-55\" already exists" pod="kube-system/kube-apiserver-ip-172-31-23-55" Sep 4 23:46:25.158022 kubelet[3129]: I0904 23:46:25.157833 3129 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-23-55" podStartSLOduration=4.157811005 podStartE2EDuration="4.157811005s" podCreationTimestamp="2025-09-04 23:46:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-04 23:46:25.139651753 +0000 UTC m=+1.387481528" watchObservedRunningTime="2025-09-04 23:46:25.157811005 +0000 UTC m=+1.405640744" Sep 4 23:46:25.181537 kubelet[3129]: I0904 23:46:25.179652 3129 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-23-55" podStartSLOduration=1.179627881 podStartE2EDuration="1.179627881s" podCreationTimestamp="2025-09-04 23:46:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-04 23:46:25.159026437 +0000 UTC m=+1.406856212" watchObservedRunningTime="2025-09-04 23:46:25.179627881 +0000 UTC m=+1.427457632" Sep 4 23:46:25.204085 kubelet[3129]: I0904 23:46:25.203982 3129 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-23-55" podStartSLOduration=3.203961133 podStartE2EDuration="3.203961133s" podCreationTimestamp="2025-09-04 23:46:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-04 23:46:25.180218533 +0000 UTC m=+1.428048296" watchObservedRunningTime="2025-09-04 23:46:25.203961133 +0000 UTC m=+1.451790884" Sep 4 23:46:27.795280 sudo[2192]: pam_unix(sudo:session): session closed for user root Sep 4 23:46:27.818889 
sshd[2191]: Connection closed by 139.178.89.65 port 53714 Sep 4 23:46:27.819772 sshd-session[2189]: pam_unix(sshd:session): session closed for user core Sep 4 23:46:27.826249 systemd[1]: sshd@6-172.31.23.55:22-139.178.89.65:53714.service: Deactivated successfully. Sep 4 23:46:27.831957 systemd[1]: session-7.scope: Deactivated successfully. Sep 4 23:46:27.832355 systemd[1]: session-7.scope: Consumed 10.862s CPU time, 261.8M memory peak. Sep 4 23:46:27.836427 systemd-logind[1847]: Session 7 logged out. Waiting for processes to exit. Sep 4 23:46:27.841751 systemd-logind[1847]: Removed session 7. Sep 4 23:46:28.212002 kubelet[3129]: I0904 23:46:28.211871 3129 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 4 23:46:28.213683 containerd[1864]: time="2025-09-04T23:46:28.212958160Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Sep 4 23:46:28.214247 kubelet[3129]: I0904 23:46:28.213291 3129 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 4 23:46:29.062769 update_engine[1848]: I20250904 23:46:29.062680 1848 update_attempter.cc:509] Updating boot flags... Sep 4 23:46:29.198019 systemd[1]: Created slice kubepods-besteffort-pode8f7ca08_71b3_447a_a07f_c52c851fb485.slice - libcontainer container kubepods-besteffort-pode8f7ca08_71b3_447a_a07f_c52c851fb485.slice. 
Sep 4 23:46:29.234936 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 36 scanned by (udev-worker) (3216) Sep 4 23:46:29.250552 kubelet[3129]: I0904 23:46:29.249585 3129 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a9f268b8-2829-476e-8608-eafa29be8c59-cilium-config-path\") pod \"cilium-5rmvn\" (UID: \"a9f268b8-2829-476e-8608-eafa29be8c59\") " pod="kube-system/cilium-5rmvn" Sep 4 23:46:29.250552 kubelet[3129]: I0904 23:46:29.249651 3129 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/a9f268b8-2829-476e-8608-eafa29be8c59-cni-path\") pod \"cilium-5rmvn\" (UID: \"a9f268b8-2829-476e-8608-eafa29be8c59\") " pod="kube-system/cilium-5rmvn" Sep 4 23:46:29.250552 kubelet[3129]: I0904 23:46:29.249692 3129 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/e8f7ca08-71b3-447a-a07f-c52c851fb485-kube-proxy\") pod \"kube-proxy-kkw5l\" (UID: \"e8f7ca08-71b3-447a-a07f-c52c851fb485\") " pod="kube-system/kube-proxy-kkw5l" Sep 4 23:46:29.250552 kubelet[3129]: I0904 23:46:29.249730 3129 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e8f7ca08-71b3-447a-a07f-c52c851fb485-lib-modules\") pod \"kube-proxy-kkw5l\" (UID: \"e8f7ca08-71b3-447a-a07f-c52c851fb485\") " pod="kube-system/kube-proxy-kkw5l" Sep 4 23:46:29.250552 kubelet[3129]: I0904 23:46:29.249766 3129 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/a9f268b8-2829-476e-8608-eafa29be8c59-host-proc-sys-kernel\") pod \"cilium-5rmvn\" (UID: \"a9f268b8-2829-476e-8608-eafa29be8c59\") " 
pod="kube-system/cilium-5rmvn" Sep 4 23:46:29.250552 kubelet[3129]: I0904 23:46:29.249834 3129 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/a9f268b8-2829-476e-8608-eafa29be8c59-cilium-run\") pod \"cilium-5rmvn\" (UID: \"a9f268b8-2829-476e-8608-eafa29be8c59\") " pod="kube-system/cilium-5rmvn" Sep 4 23:46:29.252348 kubelet[3129]: I0904 23:46:29.249871 3129 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/a9f268b8-2829-476e-8608-eafa29be8c59-cilium-cgroup\") pod \"cilium-5rmvn\" (UID: \"a9f268b8-2829-476e-8608-eafa29be8c59\") " pod="kube-system/cilium-5rmvn" Sep 4 23:46:29.252348 kubelet[3129]: I0904 23:46:29.249911 3129 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/a9f268b8-2829-476e-8608-eafa29be8c59-clustermesh-secrets\") pod \"cilium-5rmvn\" (UID: \"a9f268b8-2829-476e-8608-eafa29be8c59\") " pod="kube-system/cilium-5rmvn" Sep 4 23:46:29.252348 kubelet[3129]: I0904 23:46:29.249945 3129 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/a9f268b8-2829-476e-8608-eafa29be8c59-hubble-tls\") pod \"cilium-5rmvn\" (UID: \"a9f268b8-2829-476e-8608-eafa29be8c59\") " pod="kube-system/cilium-5rmvn" Sep 4 23:46:29.252348 kubelet[3129]: I0904 23:46:29.249982 3129 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a9f268b8-2829-476e-8608-eafa29be8c59-lib-modules\") pod \"cilium-5rmvn\" (UID: \"a9f268b8-2829-476e-8608-eafa29be8c59\") " pod="kube-system/cilium-5rmvn" Sep 4 23:46:29.252348 kubelet[3129]: I0904 23:46:29.250020 3129 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/a9f268b8-2829-476e-8608-eafa29be8c59-host-proc-sys-net\") pod \"cilium-5rmvn\" (UID: \"a9f268b8-2829-476e-8608-eafa29be8c59\") " pod="kube-system/cilium-5rmvn" Sep 4 23:46:29.252348 kubelet[3129]: I0904 23:46:29.250055 3129 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/a9f268b8-2829-476e-8608-eafa29be8c59-hostproc\") pod \"cilium-5rmvn\" (UID: \"a9f268b8-2829-476e-8608-eafa29be8c59\") " pod="kube-system/cilium-5rmvn" Sep 4 23:46:29.252762 kubelet[3129]: I0904 23:46:29.250092 3129 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e8f7ca08-71b3-447a-a07f-c52c851fb485-xtables-lock\") pod \"kube-proxy-kkw5l\" (UID: \"e8f7ca08-71b3-447a-a07f-c52c851fb485\") " pod="kube-system/kube-proxy-kkw5l" Sep 4 23:46:29.252762 kubelet[3129]: I0904 23:46:29.250136 3129 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a9f268b8-2829-476e-8608-eafa29be8c59-etc-cni-netd\") pod \"cilium-5rmvn\" (UID: \"a9f268b8-2829-476e-8608-eafa29be8c59\") " pod="kube-system/cilium-5rmvn" Sep 4 23:46:29.252762 kubelet[3129]: I0904 23:46:29.250170 3129 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5j6wd\" (UniqueName: \"kubernetes.io/projected/e8f7ca08-71b3-447a-a07f-c52c851fb485-kube-api-access-5j6wd\") pod \"kube-proxy-kkw5l\" (UID: \"e8f7ca08-71b3-447a-a07f-c52c851fb485\") " pod="kube-system/kube-proxy-kkw5l" Sep 4 23:46:29.252762 kubelet[3129]: I0904 23:46:29.250211 3129 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: 
\"kubernetes.io/host-path/a9f268b8-2829-476e-8608-eafa29be8c59-xtables-lock\") pod \"cilium-5rmvn\" (UID: \"a9f268b8-2829-476e-8608-eafa29be8c59\") " pod="kube-system/cilium-5rmvn" Sep 4 23:46:29.252762 kubelet[3129]: I0904 23:46:29.250245 3129 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8tlwz\" (UniqueName: \"kubernetes.io/projected/a9f268b8-2829-476e-8608-eafa29be8c59-kube-api-access-8tlwz\") pod \"cilium-5rmvn\" (UID: \"a9f268b8-2829-476e-8608-eafa29be8c59\") " pod="kube-system/cilium-5rmvn" Sep 4 23:46:29.259136 kubelet[3129]: I0904 23:46:29.250287 3129 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/a9f268b8-2829-476e-8608-eafa29be8c59-bpf-maps\") pod \"cilium-5rmvn\" (UID: \"a9f268b8-2829-476e-8608-eafa29be8c59\") " pod="kube-system/cilium-5rmvn" Sep 4 23:46:29.258729 systemd[1]: Created slice kubepods-burstable-poda9f268b8_2829_476e_8608_eafa29be8c59.slice - libcontainer container kubepods-burstable-poda9f268b8_2829_476e_8608_eafa29be8c59.slice. Sep 4 23:46:29.320480 systemd[1]: Created slice kubepods-besteffort-pod332cccdd_0b37_4087_93f8_562afdfefb68.slice - libcontainer container kubepods-besteffort-pod332cccdd_0b37_4087_93f8_562afdfefb68.slice. 
Sep 4 23:46:29.356759 kubelet[3129]: I0904 23:46:29.356433 3129 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/332cccdd-0b37-4087-93f8-562afdfefb68-cilium-config-path\") pod \"cilium-operator-5d85765b45-xvhrq\" (UID: \"332cccdd-0b37-4087-93f8-562afdfefb68\") " pod="kube-system/cilium-operator-5d85765b45-xvhrq" Sep 4 23:46:29.359558 kubelet[3129]: I0904 23:46:29.357585 3129 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x5vqh\" (UniqueName: \"kubernetes.io/projected/332cccdd-0b37-4087-93f8-562afdfefb68-kube-api-access-x5vqh\") pod \"cilium-operator-5d85765b45-xvhrq\" (UID: \"332cccdd-0b37-4087-93f8-562afdfefb68\") " pod="kube-system/cilium-operator-5d85765b45-xvhrq" Sep 4 23:46:29.561975 containerd[1864]: time="2025-09-04T23:46:29.561457111Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-kkw5l,Uid:e8f7ca08-71b3-447a-a07f-c52c851fb485,Namespace:kube-system,Attempt:0,}" Sep 4 23:46:29.583391 containerd[1864]: time="2025-09-04T23:46:29.582575995Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-5rmvn,Uid:a9f268b8-2829-476e-8608-eafa29be8c59,Namespace:kube-system,Attempt:0,}" Sep 4 23:46:29.645558 containerd[1864]: time="2025-09-04T23:46:29.639107887Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-xvhrq,Uid:332cccdd-0b37-4087-93f8-562afdfefb68,Namespace:kube-system,Attempt:0,}" Sep 4 23:46:29.830733 containerd[1864]: time="2025-09-04T23:46:29.827278652Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 23:46:29.830733 containerd[1864]: time="2025-09-04T23:46:29.827365172Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 23:46:29.830733 containerd[1864]: time="2025-09-04T23:46:29.827405480Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 23:46:29.830733 containerd[1864]: time="2025-09-04T23:46:29.829998296Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 23:46:29.848598 containerd[1864]: time="2025-09-04T23:46:29.847552496Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 23:46:29.848598 containerd[1864]: time="2025-09-04T23:46:29.847644188Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 23:46:29.848598 containerd[1864]: time="2025-09-04T23:46:29.847671236Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 23:46:29.848598 containerd[1864]: time="2025-09-04T23:46:29.847873304Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 23:46:29.859630 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 36 scanned by (udev-worker) (3215) Sep 4 23:46:29.907893 systemd[1]: Started cri-containerd-bfe270524622bf5b8dd45eab0048db36d1b325b3e40bcb76699dcdb4d0319a52.scope - libcontainer container bfe270524622bf5b8dd45eab0048db36d1b325b3e40bcb76699dcdb4d0319a52. Sep 4 23:46:29.975008 systemd[1]: Started cri-containerd-aa639642951d711793319f993d19b778e86135f13db40658dcdf5dfcccc8d53e.scope - libcontainer container aa639642951d711793319f993d19b778e86135f13db40658dcdf5dfcccc8d53e. 
Sep 4 23:46:29.993124 containerd[1864]: time="2025-09-04T23:46:29.990673053Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 23:46:29.993124 containerd[1864]: time="2025-09-04T23:46:29.990772689Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 23:46:29.993124 containerd[1864]: time="2025-09-04T23:46:29.990802317Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 23:46:29.998203 containerd[1864]: time="2025-09-04T23:46:29.995844765Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 23:46:30.083978 systemd[1]: Started cri-containerd-9a4711f2b1c3172e31bf2ccaaac33950600df6ce9e4ee0a53643bce7ed315cec.scope - libcontainer container 9a4711f2b1c3172e31bf2ccaaac33950600df6ce9e4ee0a53643bce7ed315cec. 
Sep 4 23:46:30.095185 containerd[1864]: time="2025-09-04T23:46:30.094355322Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-5rmvn,Uid:a9f268b8-2829-476e-8608-eafa29be8c59,Namespace:kube-system,Attempt:0,} returns sandbox id \"bfe270524622bf5b8dd45eab0048db36d1b325b3e40bcb76699dcdb4d0319a52\"" Sep 4 23:46:30.112547 containerd[1864]: time="2025-09-04T23:46:30.110946762Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Sep 4 23:46:30.228975 containerd[1864]: time="2025-09-04T23:46:30.228788070Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-kkw5l,Uid:e8f7ca08-71b3-447a-a07f-c52c851fb485,Namespace:kube-system,Attempt:0,} returns sandbox id \"aa639642951d711793319f993d19b778e86135f13db40658dcdf5dfcccc8d53e\"" Sep 4 23:46:30.245427 containerd[1864]: time="2025-09-04T23:46:30.245359686Z" level=info msg="CreateContainer within sandbox \"aa639642951d711793319f993d19b778e86135f13db40658dcdf5dfcccc8d53e\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 4 23:46:30.321610 containerd[1864]: time="2025-09-04T23:46:30.321506815Z" level=info msg="CreateContainer within sandbox \"aa639642951d711793319f993d19b778e86135f13db40658dcdf5dfcccc8d53e\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"310584161344f853f279f584cc5d94976c199d4371840e1e5c6b20c648d406b0\"" Sep 4 23:46:30.324562 containerd[1864]: time="2025-09-04T23:46:30.323673967Z" level=info msg="StartContainer for \"310584161344f853f279f584cc5d94976c199d4371840e1e5c6b20c648d406b0\"" Sep 4 23:46:30.354009 containerd[1864]: time="2025-09-04T23:46:30.353953315Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-xvhrq,Uid:332cccdd-0b37-4087-93f8-562afdfefb68,Namespace:kube-system,Attempt:0,} returns sandbox id \"9a4711f2b1c3172e31bf2ccaaac33950600df6ce9e4ee0a53643bce7ed315cec\"" Sep 4 23:46:30.388987 systemd[1]: Started 
cri-containerd-310584161344f853f279f584cc5d94976c199d4371840e1e5c6b20c648d406b0.scope - libcontainer container 310584161344f853f279f584cc5d94976c199d4371840e1e5c6b20c648d406b0. Sep 4 23:46:30.471339 containerd[1864]: time="2025-09-04T23:46:30.471202351Z" level=info msg="StartContainer for \"310584161344f853f279f584cc5d94976c199d4371840e1e5c6b20c648d406b0\" returns successfully" Sep 4 23:46:31.177024 kubelet[3129]: I0904 23:46:31.176930 3129 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-kkw5l" podStartSLOduration=2.176909203 podStartE2EDuration="2.176909203s" podCreationTimestamp="2025-09-04 23:46:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-04 23:46:31.175478059 +0000 UTC m=+7.423307858" watchObservedRunningTime="2025-09-04 23:46:31.176909203 +0000 UTC m=+7.424738942" Sep 4 23:46:37.109428 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3607499813.mount: Deactivated successfully. 
Sep 4 23:46:40.906550 containerd[1864]: time="2025-09-04T23:46:40.906460927Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 23:46:40.911066 containerd[1864]: time="2025-09-04T23:46:40.911015767Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710" Sep 4 23:46:40.913828 containerd[1864]: time="2025-09-04T23:46:40.913785931Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 23:46:40.918120 containerd[1864]: time="2025-09-04T23:46:40.918073039Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 10.807050821s" Sep 4 23:46:40.918383 containerd[1864]: time="2025-09-04T23:46:40.918353311Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Sep 4 23:46:40.923137 containerd[1864]: time="2025-09-04T23:46:40.923079283Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Sep 4 23:46:40.924881 containerd[1864]: time="2025-09-04T23:46:40.924788455Z" level=info msg="CreateContainer within sandbox \"bfe270524622bf5b8dd45eab0048db36d1b325b3e40bcb76699dcdb4d0319a52\" for container 
&ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 4 23:46:40.953119 containerd[1864]: time="2025-09-04T23:46:40.953055739Z" level=info msg="CreateContainer within sandbox \"bfe270524622bf5b8dd45eab0048db36d1b325b3e40bcb76699dcdb4d0319a52\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"caeb4c72a9d02f611f6f39e9ec48bbb8f4fa55f5489a85beaf8af07ab66ae81c\"" Sep 4 23:46:40.955831 containerd[1864]: time="2025-09-04T23:46:40.955782415Z" level=info msg="StartContainer for \"caeb4c72a9d02f611f6f39e9ec48bbb8f4fa55f5489a85beaf8af07ab66ae81c\"" Sep 4 23:46:41.011018 systemd[1]: Started cri-containerd-caeb4c72a9d02f611f6f39e9ec48bbb8f4fa55f5489a85beaf8af07ab66ae81c.scope - libcontainer container caeb4c72a9d02f611f6f39e9ec48bbb8f4fa55f5489a85beaf8af07ab66ae81c. Sep 4 23:46:41.065335 containerd[1864]: time="2025-09-04T23:46:41.065262148Z" level=info msg="StartContainer for \"caeb4c72a9d02f611f6f39e9ec48bbb8f4fa55f5489a85beaf8af07ab66ae81c\" returns successfully" Sep 4 23:46:41.094301 systemd[1]: cri-containerd-caeb4c72a9d02f611f6f39e9ec48bbb8f4fa55f5489a85beaf8af07ab66ae81c.scope: Deactivated successfully. Sep 4 23:46:41.135061 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-caeb4c72a9d02f611f6f39e9ec48bbb8f4fa55f5489a85beaf8af07ab66ae81c-rootfs.mount: Deactivated successfully. 
Sep 4 23:46:42.353446 containerd[1864]: time="2025-09-04T23:46:42.353113542Z" level=info msg="shim disconnected" id=caeb4c72a9d02f611f6f39e9ec48bbb8f4fa55f5489a85beaf8af07ab66ae81c namespace=k8s.io Sep 4 23:46:42.353446 containerd[1864]: time="2025-09-04T23:46:42.353185542Z" level=warning msg="cleaning up after shim disconnected" id=caeb4c72a9d02f611f6f39e9ec48bbb8f4fa55f5489a85beaf8af07ab66ae81c namespace=k8s.io Sep 4 23:46:42.353446 containerd[1864]: time="2025-09-04T23:46:42.353205390Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 4 23:46:42.889757 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1471563883.mount: Deactivated successfully. Sep 4 23:46:43.211108 containerd[1864]: time="2025-09-04T23:46:43.210989935Z" level=info msg="CreateContainer within sandbox \"bfe270524622bf5b8dd45eab0048db36d1b325b3e40bcb76699dcdb4d0319a52\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Sep 4 23:46:43.248425 containerd[1864]: time="2025-09-04T23:46:43.248360491Z" level=info msg="CreateContainer within sandbox \"bfe270524622bf5b8dd45eab0048db36d1b325b3e40bcb76699dcdb4d0319a52\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"0908aea0c2cdc3ec915188bb5005a1aa86cfdd4dc4a58e0773533dac9428fae5\"" Sep 4 23:46:43.249485 containerd[1864]: time="2025-09-04T23:46:43.249407251Z" level=info msg="StartContainer for \"0908aea0c2cdc3ec915188bb5005a1aa86cfdd4dc4a58e0773533dac9428fae5\"" Sep 4 23:46:43.249780 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1195025606.mount: Deactivated successfully. Sep 4 23:46:43.325820 systemd[1]: Started cri-containerd-0908aea0c2cdc3ec915188bb5005a1aa86cfdd4dc4a58e0773533dac9428fae5.scope - libcontainer container 0908aea0c2cdc3ec915188bb5005a1aa86cfdd4dc4a58e0773533dac9428fae5. 
Sep 4 23:46:43.377869 containerd[1864]: time="2025-09-04T23:46:43.377789443Z" level=info msg="StartContainer for \"0908aea0c2cdc3ec915188bb5005a1aa86cfdd4dc4a58e0773533dac9428fae5\" returns successfully" Sep 4 23:46:43.403738 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 4 23:46:43.404250 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 4 23:46:43.406025 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Sep 4 23:46:43.412194 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 4 23:46:43.413809 systemd[1]: cri-containerd-0908aea0c2cdc3ec915188bb5005a1aa86cfdd4dc4a58e0773533dac9428fae5.scope: Deactivated successfully. Sep 4 23:46:43.455892 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 4 23:46:43.499193 containerd[1864]: time="2025-09-04T23:46:43.497584868Z" level=info msg="shim disconnected" id=0908aea0c2cdc3ec915188bb5005a1aa86cfdd4dc4a58e0773533dac9428fae5 namespace=k8s.io Sep 4 23:46:43.499193 containerd[1864]: time="2025-09-04T23:46:43.497769344Z" level=warning msg="cleaning up after shim disconnected" id=0908aea0c2cdc3ec915188bb5005a1aa86cfdd4dc4a58e0773533dac9428fae5 namespace=k8s.io Sep 4 23:46:43.499193 containerd[1864]: time="2025-09-04T23:46:43.497791700Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 4 23:46:43.875273 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0908aea0c2cdc3ec915188bb5005a1aa86cfdd4dc4a58e0773533dac9428fae5-rootfs.mount: Deactivated successfully. Sep 4 23:46:44.221241 containerd[1864]: time="2025-09-04T23:46:44.220961144Z" level=info msg="CreateContainer within sandbox \"bfe270524622bf5b8dd45eab0048db36d1b325b3e40bcb76699dcdb4d0319a52\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Sep 4 23:46:44.275914 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2710146958.mount: Deactivated successfully. 
Sep 4 23:46:44.289467 containerd[1864]: time="2025-09-04T23:46:44.288877460Z" level=info msg="CreateContainer within sandbox \"bfe270524622bf5b8dd45eab0048db36d1b325b3e40bcb76699dcdb4d0319a52\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"da16c75466bb9e21e82ca85b2d9cb5cca132dc8bde63911c9259650c2072d111\"" Sep 4 23:46:44.306889 containerd[1864]: time="2025-09-04T23:46:44.306819944Z" level=info msg="StartContainer for \"da16c75466bb9e21e82ca85b2d9cb5cca132dc8bde63911c9259650c2072d111\"" Sep 4 23:46:44.414376 systemd[1]: Started cri-containerd-da16c75466bb9e21e82ca85b2d9cb5cca132dc8bde63911c9259650c2072d111.scope - libcontainer container da16c75466bb9e21e82ca85b2d9cb5cca132dc8bde63911c9259650c2072d111. Sep 4 23:46:44.516884 containerd[1864]: time="2025-09-04T23:46:44.515654841Z" level=info msg="StartContainer for \"da16c75466bb9e21e82ca85b2d9cb5cca132dc8bde63911c9259650c2072d111\" returns successfully" Sep 4 23:46:44.523267 systemd[1]: cri-containerd-da16c75466bb9e21e82ca85b2d9cb5cca132dc8bde63911c9259650c2072d111.scope: Deactivated successfully. 
Sep 4 23:46:44.672603 containerd[1864]: time="2025-09-04T23:46:44.672452242Z" level=info msg="shim disconnected" id=da16c75466bb9e21e82ca85b2d9cb5cca132dc8bde63911c9259650c2072d111 namespace=k8s.io Sep 4 23:46:44.672603 containerd[1864]: time="2025-09-04T23:46:44.672588058Z" level=warning msg="cleaning up after shim disconnected" id=da16c75466bb9e21e82ca85b2d9cb5cca132dc8bde63911c9259650c2072d111 namespace=k8s.io Sep 4 23:46:44.672603 containerd[1864]: time="2025-09-04T23:46:44.672611698Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 4 23:46:44.721302 containerd[1864]: time="2025-09-04T23:46:44.721226254Z" level=warning msg="cleanup warnings time=\"2025-09-04T23:46:44Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Sep 4 23:46:44.855060 containerd[1864]: time="2025-09-04T23:46:44.853222091Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 23:46:44.856154 containerd[1864]: time="2025-09-04T23:46:44.856087043Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306" Sep 4 23:46:44.858866 containerd[1864]: time="2025-09-04T23:46:44.858822635Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 23:46:44.861386 containerd[1864]: time="2025-09-04T23:46:44.861322151Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo 
digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 3.938172764s" Sep 4 23:46:44.861568 containerd[1864]: time="2025-09-04T23:46:44.861383507Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Sep 4 23:46:44.867097 containerd[1864]: time="2025-09-04T23:46:44.867018071Z" level=info msg="CreateContainer within sandbox \"9a4711f2b1c3172e31bf2ccaaac33950600df6ce9e4ee0a53643bce7ed315cec\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Sep 4 23:46:44.877273 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-da16c75466bb9e21e82ca85b2d9cb5cca132dc8bde63911c9259650c2072d111-rootfs.mount: Deactivated successfully. Sep 4 23:46:44.899171 containerd[1864]: time="2025-09-04T23:46:44.899093591Z" level=info msg="CreateContainer within sandbox \"9a4711f2b1c3172e31bf2ccaaac33950600df6ce9e4ee0a53643bce7ed315cec\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"99b770eeaabba095c15d32514e2b5e0e9e3bf8c68b6b2337917374dcdfdd94b9\"" Sep 4 23:46:44.900428 containerd[1864]: time="2025-09-04T23:46:44.900364499Z" level=info msg="StartContainer for \"99b770eeaabba095c15d32514e2b5e0e9e3bf8c68b6b2337917374dcdfdd94b9\"" Sep 4 23:46:44.952144 systemd[1]: run-containerd-runc-k8s.io-99b770eeaabba095c15d32514e2b5e0e9e3bf8c68b6b2337917374dcdfdd94b9-runc.cdmNJd.mount: Deactivated successfully. Sep 4 23:46:44.961849 systemd[1]: Started cri-containerd-99b770eeaabba095c15d32514e2b5e0e9e3bf8c68b6b2337917374dcdfdd94b9.scope - libcontainer container 99b770eeaabba095c15d32514e2b5e0e9e3bf8c68b6b2337917374dcdfdd94b9. 
Sep 4 23:46:45.026273 containerd[1864]: time="2025-09-04T23:46:45.025993904Z" level=info msg="StartContainer for \"99b770eeaabba095c15d32514e2b5e0e9e3bf8c68b6b2337917374dcdfdd94b9\" returns successfully" Sep 4 23:46:45.255085 containerd[1864]: time="2025-09-04T23:46:45.254888949Z" level=info msg="CreateContainer within sandbox \"bfe270524622bf5b8dd45eab0048db36d1b325b3e40bcb76699dcdb4d0319a52\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Sep 4 23:46:45.294378 containerd[1864]: time="2025-09-04T23:46:45.294270273Z" level=info msg="CreateContainer within sandbox \"bfe270524622bf5b8dd45eab0048db36d1b325b3e40bcb76699dcdb4d0319a52\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"48a22fb8ed8d4ba48db6d42da0040a320ac3ad775c091f1be25c2bc17d376917\"" Sep 4 23:46:45.300053 containerd[1864]: time="2025-09-04T23:46:45.297609129Z" level=info msg="StartContainer for \"48a22fb8ed8d4ba48db6d42da0040a320ac3ad775c091f1be25c2bc17d376917\"" Sep 4 23:46:45.410853 systemd[1]: Started cri-containerd-48a22fb8ed8d4ba48db6d42da0040a320ac3ad775c091f1be25c2bc17d376917.scope - libcontainer container 48a22fb8ed8d4ba48db6d42da0040a320ac3ad775c091f1be25c2bc17d376917. Sep 4 23:46:45.503765 systemd[1]: cri-containerd-48a22fb8ed8d4ba48db6d42da0040a320ac3ad775c091f1be25c2bc17d376917.scope: Deactivated successfully. 
Sep 4 23:46:45.513481 containerd[1864]: time="2025-09-04T23:46:45.513119038Z" level=info msg="StartContainer for \"48a22fb8ed8d4ba48db6d42da0040a320ac3ad775c091f1be25c2bc17d376917\" returns successfully" Sep 4 23:46:45.601606 containerd[1864]: time="2025-09-04T23:46:45.601467203Z" level=info msg="shim disconnected" id=48a22fb8ed8d4ba48db6d42da0040a320ac3ad775c091f1be25c2bc17d376917 namespace=k8s.io Sep 4 23:46:45.602610 containerd[1864]: time="2025-09-04T23:46:45.601632851Z" level=warning msg="cleaning up after shim disconnected" id=48a22fb8ed8d4ba48db6d42da0040a320ac3ad775c091f1be25c2bc17d376917 namespace=k8s.io Sep 4 23:46:45.602610 containerd[1864]: time="2025-09-04T23:46:45.601660955Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 4 23:46:46.268798 containerd[1864]: time="2025-09-04T23:46:46.268502770Z" level=info msg="CreateContainer within sandbox \"bfe270524622bf5b8dd45eab0048db36d1b325b3e40bcb76699dcdb4d0319a52\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Sep 4 23:46:46.319125 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount347203475.mount: Deactivated successfully. 
Sep 4 23:46:46.325164 containerd[1864]: time="2025-09-04T23:46:46.324994690Z" level=info msg="CreateContainer within sandbox \"bfe270524622bf5b8dd45eab0048db36d1b325b3e40bcb76699dcdb4d0319a52\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"bd0b4e2b5a77fc2ba219e3c7622f4e83faa6a55b81cb0ff24dd079be1320a8d4\"" Sep 4 23:46:46.325884 containerd[1864]: time="2025-09-04T23:46:46.325808530Z" level=info msg="StartContainer for \"bd0b4e2b5a77fc2ba219e3c7622f4e83faa6a55b81cb0ff24dd079be1320a8d4\"" Sep 4 23:46:46.373695 kubelet[3129]: I0904 23:46:46.373616 3129 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-xvhrq" podStartSLOduration=2.867490006 podStartE2EDuration="17.373595674s" podCreationTimestamp="2025-09-04 23:46:29 +0000 UTC" firstStartedPulling="2025-09-04 23:46:30.357595711 +0000 UTC m=+6.605425462" lastFinishedPulling="2025-09-04 23:46:44.863701403 +0000 UTC m=+21.111531130" observedRunningTime="2025-09-04 23:46:45.343284273 +0000 UTC m=+21.591114108" watchObservedRunningTime="2025-09-04 23:46:46.373595674 +0000 UTC m=+22.621425425" Sep 4 23:46:46.421037 systemd[1]: Started cri-containerd-bd0b4e2b5a77fc2ba219e3c7622f4e83faa6a55b81cb0ff24dd079be1320a8d4.scope - libcontainer container bd0b4e2b5a77fc2ba219e3c7622f4e83faa6a55b81cb0ff24dd079be1320a8d4. Sep 4 23:46:46.580386 containerd[1864]: time="2025-09-04T23:46:46.580202255Z" level=info msg="StartContainer for \"bd0b4e2b5a77fc2ba219e3c7622f4e83faa6a55b81cb0ff24dd079be1320a8d4\" returns successfully" Sep 4 23:46:46.934780 kubelet[3129]: I0904 23:46:46.932624 3129 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Sep 4 23:46:47.051121 systemd[1]: Created slice kubepods-burstable-podafd2a26e_28f4_47ea_beb9_a7a212d756fd.slice - libcontainer container kubepods-burstable-podafd2a26e_28f4_47ea_beb9_a7a212d756fd.slice. 
Sep 4 23:46:47.077934 systemd[1]: Created slice kubepods-burstable-pod9d2ae859_6b31_447a_ad48_90801f5dd8d3.slice - libcontainer container kubepods-burstable-pod9d2ae859_6b31_447a_ad48_90801f5dd8d3.slice. Sep 4 23:46:47.108850 kubelet[3129]: I0904 23:46:47.107473 3129 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9d2ae859-6b31-447a-ad48-90801f5dd8d3-config-volume\") pod \"coredns-7c65d6cfc9-tmhr4\" (UID: \"9d2ae859-6b31-447a-ad48-90801f5dd8d3\") " pod="kube-system/coredns-7c65d6cfc9-tmhr4" Sep 4 23:46:47.108850 kubelet[3129]: I0904 23:46:47.107561 3129 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/afd2a26e-28f4-47ea-beb9-a7a212d756fd-config-volume\") pod \"coredns-7c65d6cfc9-h79dx\" (UID: \"afd2a26e-28f4-47ea-beb9-a7a212d756fd\") " pod="kube-system/coredns-7c65d6cfc9-h79dx" Sep 4 23:46:47.108850 kubelet[3129]: I0904 23:46:47.107610 3129 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p48lm\" (UniqueName: \"kubernetes.io/projected/9d2ae859-6b31-447a-ad48-90801f5dd8d3-kube-api-access-p48lm\") pod \"coredns-7c65d6cfc9-tmhr4\" (UID: \"9d2ae859-6b31-447a-ad48-90801f5dd8d3\") " pod="kube-system/coredns-7c65d6cfc9-tmhr4" Sep 4 23:46:47.108850 kubelet[3129]: I0904 23:46:47.107651 3129 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nlzdn\" (UniqueName: \"kubernetes.io/projected/afd2a26e-28f4-47ea-beb9-a7a212d756fd-kube-api-access-nlzdn\") pod \"coredns-7c65d6cfc9-h79dx\" (UID: \"afd2a26e-28f4-47ea-beb9-a7a212d756fd\") " pod="kube-system/coredns-7c65d6cfc9-h79dx" Sep 4 23:46:47.361586 containerd[1864]: time="2025-09-04T23:46:47.361500059Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-7c65d6cfc9-h79dx,Uid:afd2a26e-28f4-47ea-beb9-a7a212d756fd,Namespace:kube-system,Attempt:0,}" Sep 4 23:46:47.391550 containerd[1864]: time="2025-09-04T23:46:47.388892735Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-tmhr4,Uid:9d2ae859-6b31-447a-ad48-90801f5dd8d3,Namespace:kube-system,Attempt:0,}" Sep 4 23:46:50.015692 (udev-worker)[4114]: Network interface NamePolicy= disabled on kernel command line. Sep 4 23:46:50.017050 systemd-networkd[1776]: cilium_host: Link UP Sep 4 23:46:50.017983 systemd-networkd[1776]: cilium_net: Link UP Sep 4 23:46:50.018498 systemd-networkd[1776]: cilium_net: Gained carrier Sep 4 23:46:50.019036 (udev-worker)[4116]: Network interface NamePolicy= disabled on kernel command line. Sep 4 23:46:50.020194 systemd-networkd[1776]: cilium_host: Gained carrier Sep 4 23:46:50.020425 systemd-networkd[1776]: cilium_net: Gained IPv6LL Sep 4 23:46:50.020758 systemd-networkd[1776]: cilium_host: Gained IPv6LL Sep 4 23:46:50.203418 systemd-networkd[1776]: cilium_vxlan: Link UP Sep 4 23:46:50.203431 systemd-networkd[1776]: cilium_vxlan: Gained carrier Sep 4 23:46:50.762667 kernel: NET: Registered PF_ALG protocol family Sep 4 23:46:51.869847 systemd-networkd[1776]: cilium_vxlan: Gained IPv6LL Sep 4 23:46:52.123765 systemd-networkd[1776]: lxc_health: Link UP Sep 4 23:46:52.135045 systemd-networkd[1776]: lxc_health: Gained carrier Sep 4 23:46:52.477659 kernel: eth0: renamed from tmp3d51f Sep 4 23:46:52.485208 systemd-networkd[1776]: lxce477bc8e862b: Link UP Sep 4 23:46:52.485898 systemd-networkd[1776]: lxce477bc8e862b: Gained carrier Sep 4 23:46:52.546935 systemd-networkd[1776]: lxc913576311fc9: Link UP Sep 4 23:46:52.556690 kernel: eth0: renamed from tmp224c6 Sep 4 23:46:52.561653 systemd-networkd[1776]: lxc913576311fc9: Gained carrier Sep 4 23:46:53.619553 kubelet[3129]: I0904 23:46:53.619276 3129 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="kube-system/cilium-5rmvn" podStartSLOduration=13.807359289 podStartE2EDuration="24.619255278s" podCreationTimestamp="2025-09-04 23:46:29 +0000 UTC" firstStartedPulling="2025-09-04 23:46:30.107828958 +0000 UTC m=+6.355658685" lastFinishedPulling="2025-09-04 23:46:40.919724947 +0000 UTC m=+17.167554674" observedRunningTime="2025-09-04 23:46:47.338285747 +0000 UTC m=+23.586115486" watchObservedRunningTime="2025-09-04 23:46:53.619255278 +0000 UTC m=+29.867085029" Sep 4 23:46:53.660801 systemd-networkd[1776]: lxc913576311fc9: Gained IPv6LL Sep 4 23:46:53.852882 systemd-networkd[1776]: lxc_health: Gained IPv6LL Sep 4 23:46:54.045847 systemd-networkd[1776]: lxce477bc8e862b: Gained IPv6LL Sep 4 23:46:57.032075 ntpd[1838]: Listen normally on 7 cilium_host 192.168.0.152:123 Sep 4 23:46:57.032204 ntpd[1838]: Listen normally on 8 cilium_net [fe80::b8e1:e0ff:fe75:89a%4]:123 Sep 4 23:46:57.032285 ntpd[1838]: Listen normally on 9 cilium_host [fe80::7cf4:deff:fe14:4457%5]:123 Sep 4 23:46:57.032353 ntpd[1838]: Listen normally on 10 cilium_vxlan [fe80::e84b:89ff:fec6:a4c4%6]:123 Sep 4 23:46:57.032421
ntpd[1838]: Listen normally on 11 lxc_health [fe80::c815:58ff:fe95:2f2%8]:123 Sep 4 23:46:57.032488 ntpd[1838]: Listen normally on 12 lxce477bc8e862b [fe80::5cc0:d1ff:fe3d:4f04%10]:123 Sep 4 23:46:57.032618 ntpd[1838]: Listen normally on 13 lxc913576311fc9 [fe80::f88e:38ff:fe55:cbaa%12]:123 Sep 4 23:47:00.859663 containerd[1864]: time="2025-09-04T23:47:00.856435430Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 23:47:00.859663 containerd[1864]: time="2025-09-04T23:47:00.857370002Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 23:47:00.859663 containerd[1864]: time="2025-09-04T23:47:00.857421998Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 23:47:00.859663 containerd[1864]: time="2025-09-04T23:47:00.859286990Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 23:47:00.929847 systemd[1]: Started cri-containerd-224c646ec9db5c4500a31409cdaf32dd4be83a44e290af53c0bc9eb705a4736b.scope - libcontainer container 224c646ec9db5c4500a31409cdaf32dd4be83a44e290af53c0bc9eb705a4736b. Sep 4 23:47:00.959435 containerd[1864]: time="2025-09-04T23:47:00.959252583Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 23:47:00.959435 containerd[1864]: time="2025-09-04T23:47:00.959369907Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 23:47:00.959747 containerd[1864]: time="2025-09-04T23:47:00.959400483Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 23:47:00.960076 containerd[1864]: time="2025-09-04T23:47:00.959966535Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 23:47:01.026868 systemd[1]: Started cri-containerd-3d51ffb3a7cb71dc7ef82eaced0b0093a2a1ffb451934f3a42a044fbd20643f9.scope - libcontainer container 3d51ffb3a7cb71dc7ef82eaced0b0093a2a1ffb451934f3a42a044fbd20643f9. Sep 4 23:47:01.067098 containerd[1864]: time="2025-09-04T23:47:01.066677603Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-tmhr4,Uid:9d2ae859-6b31-447a-ad48-90801f5dd8d3,Namespace:kube-system,Attempt:0,} returns sandbox id \"224c646ec9db5c4500a31409cdaf32dd4be83a44e290af53c0bc9eb705a4736b\"" Sep 4 23:47:01.075290 containerd[1864]: time="2025-09-04T23:47:01.075218183Z" level=info msg="CreateContainer within sandbox \"224c646ec9db5c4500a31409cdaf32dd4be83a44e290af53c0bc9eb705a4736b\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 4 23:47:01.109844 containerd[1864]: time="2025-09-04T23:47:01.109563384Z" level=info msg="CreateContainer within sandbox \"224c646ec9db5c4500a31409cdaf32dd4be83a44e290af53c0bc9eb705a4736b\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"d9163afa7160927c2a76bd204cf810812f0937fbcc72b2c87b5c5057c7d83958\"" Sep 4 23:47:01.113154 containerd[1864]: time="2025-09-04T23:47:01.110795052Z" level=info msg="StartContainer for \"d9163afa7160927c2a76bd204cf810812f0937fbcc72b2c87b5c5057c7d83958\"" Sep 4 23:47:01.170775 containerd[1864]: time="2025-09-04T23:47:01.170704224Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-h79dx,Uid:afd2a26e-28f4-47ea-beb9-a7a212d756fd,Namespace:kube-system,Attempt:0,} returns sandbox id \"3d51ffb3a7cb71dc7ef82eaced0b0093a2a1ffb451934f3a42a044fbd20643f9\"" Sep 4 23:47:01.187475 containerd[1864]: time="2025-09-04T23:47:01.187401324Z" level=info 
msg="CreateContainer within sandbox \"3d51ffb3a7cb71dc7ef82eaced0b0093a2a1ffb451934f3a42a044fbd20643f9\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 4 23:47:01.200675 systemd[1]: Started cri-containerd-d9163afa7160927c2a76bd204cf810812f0937fbcc72b2c87b5c5057c7d83958.scope - libcontainer container d9163afa7160927c2a76bd204cf810812f0937fbcc72b2c87b5c5057c7d83958. Sep 4 23:47:01.239937 containerd[1864]: time="2025-09-04T23:47:01.239665488Z" level=info msg="CreateContainer within sandbox \"3d51ffb3a7cb71dc7ef82eaced0b0093a2a1ffb451934f3a42a044fbd20643f9\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"457dc4b8c02b34487f6a78e7a0bbd5097e8f415af39c0f536b633e5aeee417aa\"" Sep 4 23:47:01.243482 containerd[1864]: time="2025-09-04T23:47:01.242954328Z" level=info msg="StartContainer for \"457dc4b8c02b34487f6a78e7a0bbd5097e8f415af39c0f536b633e5aeee417aa\"" Sep 4 23:47:01.302570 containerd[1864]: time="2025-09-04T23:47:01.302333965Z" level=info msg="StartContainer for \"d9163afa7160927c2a76bd204cf810812f0937fbcc72b2c87b5c5057c7d83958\" returns successfully" Sep 4 23:47:01.353445 systemd[1]: Started cri-containerd-457dc4b8c02b34487f6a78e7a0bbd5097e8f415af39c0f536b633e5aeee417aa.scope - libcontainer container 457dc4b8c02b34487f6a78e7a0bbd5097e8f415af39c0f536b633e5aeee417aa. 
Sep 4 23:47:01.461847 containerd[1864]: time="2025-09-04T23:47:01.460897117Z" level=info msg="StartContainer for \"457dc4b8c02b34487f6a78e7a0bbd5097e8f415af39c0f536b633e5aeee417aa\" returns successfully" Sep 4 23:47:02.372306 kubelet[3129]: I0904 23:47:02.372207 3129 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-tmhr4" podStartSLOduration=33.372182642 podStartE2EDuration="33.372182642s" podCreationTimestamp="2025-09-04 23:46:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-04 23:47:01.374174809 +0000 UTC m=+37.622004560" watchObservedRunningTime="2025-09-04 23:47:02.372182642 +0000 UTC m=+38.620012381" Sep 4 23:47:02.395907 kubelet[3129]: I0904 23:47:02.394321 3129 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-h79dx" podStartSLOduration=33.394297814 podStartE2EDuration="33.394297814s" podCreationTimestamp="2025-09-04 23:46:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-04 23:47:02.373188686 +0000 UTC m=+38.621018449" watchObservedRunningTime="2025-09-04 23:47:02.394297814 +0000 UTC m=+38.642127553" Sep 4 23:47:03.870027 systemd[1]: Started sshd@7-172.31.23.55:22-139.178.89.65:41386.service - OpenSSH per-connection server daemon (139.178.89.65:41386). Sep 4 23:47:04.056900 sshd[4692]: Accepted publickey for core from 139.178.89.65 port 41386 ssh2: RSA SHA256:wIWgASbTOEOY+RJ7C8r7IgT0+0t1r6NIK7DD/aRqsNo Sep 4 23:47:04.058249 sshd-session[4692]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 23:47:04.065946 systemd-logind[1847]: New session 8 of user core. Sep 4 23:47:04.072843 systemd[1]: Started session-8.scope - Session 8 of User core. 
Sep 4 23:47:04.339934 sshd[4694]: Connection closed by 139.178.89.65 port 41386 Sep 4 23:47:04.340853 sshd-session[4692]: pam_unix(sshd:session): session closed for user core Sep 4 23:47:04.348245 systemd[1]: sshd@7-172.31.23.55:22-139.178.89.65:41386.service: Deactivated successfully. Sep 4 23:47:04.352804 systemd[1]: session-8.scope: Deactivated successfully. Sep 4 23:47:04.354720 systemd-logind[1847]: Session 8 logged out. Waiting for processes to exit. Sep 4 23:47:04.359657 systemd-logind[1847]: Removed session 8. Sep 4 23:47:09.387024 systemd[1]: Started sshd@8-172.31.23.55:22-139.178.89.65:41400.service - OpenSSH per-connection server daemon (139.178.89.65:41400). Sep 4 23:47:09.576909 sshd[4709]: Accepted publickey for core from 139.178.89.65 port 41400 ssh2: RSA SHA256:wIWgASbTOEOY+RJ7C8r7IgT0+0t1r6NIK7DD/aRqsNo Sep 4 23:47:09.579576 sshd-session[4709]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 23:47:09.588896 systemd-logind[1847]: New session 9 of user core. Sep 4 23:47:09.594808 systemd[1]: Started session-9.scope - Session 9 of User core. Sep 4 23:47:09.840273 sshd[4711]: Connection closed by 139.178.89.65 port 41400 Sep 4 23:47:09.840146 sshd-session[4709]: pam_unix(sshd:session): session closed for user core Sep 4 23:47:09.845619 systemd-logind[1847]: Session 9 logged out. Waiting for processes to exit. Sep 4 23:47:09.847127 systemd[1]: sshd@8-172.31.23.55:22-139.178.89.65:41400.service: Deactivated successfully. Sep 4 23:47:09.851297 systemd[1]: session-9.scope: Deactivated successfully. Sep 4 23:47:09.857210 systemd-logind[1847]: Removed session 9. Sep 4 23:47:14.882071 systemd[1]: Started sshd@9-172.31.23.55:22-139.178.89.65:44912.service - OpenSSH per-connection server daemon (139.178.89.65:44912). 
Sep 4 23:47:15.062797 sshd[4724]: Accepted publickey for core from 139.178.89.65 port 44912 ssh2: RSA SHA256:wIWgASbTOEOY+RJ7C8r7IgT0+0t1r6NIK7DD/aRqsNo Sep 4 23:47:15.065409 sshd-session[4724]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 23:47:15.075704 systemd-logind[1847]: New session 10 of user core. Sep 4 23:47:15.079831 systemd[1]: Started session-10.scope - Session 10 of User core. Sep 4 23:47:15.327784 sshd[4726]: Connection closed by 139.178.89.65 port 44912 Sep 4 23:47:15.328648 sshd-session[4724]: pam_unix(sshd:session): session closed for user core Sep 4 23:47:15.335174 systemd[1]: sshd@9-172.31.23.55:22-139.178.89.65:44912.service: Deactivated successfully. Sep 4 23:47:15.340207 systemd[1]: session-10.scope: Deactivated successfully. Sep 4 23:47:15.342064 systemd-logind[1847]: Session 10 logged out. Waiting for processes to exit. Sep 4 23:47:15.343819 systemd-logind[1847]: Removed session 10. Sep 4 23:47:20.377009 systemd[1]: Started sshd@10-172.31.23.55:22-139.178.89.65:50176.service - OpenSSH per-connection server daemon (139.178.89.65:50176). Sep 4 23:47:20.560333 sshd[4738]: Accepted publickey for core from 139.178.89.65 port 50176 ssh2: RSA SHA256:wIWgASbTOEOY+RJ7C8r7IgT0+0t1r6NIK7DD/aRqsNo Sep 4 23:47:20.563757 sshd-session[4738]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 23:47:20.572851 systemd-logind[1847]: New session 11 of user core. Sep 4 23:47:20.581835 systemd[1]: Started session-11.scope - Session 11 of User core. Sep 4 23:47:20.829623 sshd[4740]: Connection closed by 139.178.89.65 port 50176 Sep 4 23:47:20.832833 sshd-session[4738]: pam_unix(sshd:session): session closed for user core Sep 4 23:47:20.839452 systemd[1]: sshd@10-172.31.23.55:22-139.178.89.65:50176.service: Deactivated successfully. Sep 4 23:47:20.846413 systemd[1]: session-11.scope: Deactivated successfully. Sep 4 23:47:20.848501 systemd-logind[1847]: Session 11 logged out. 
Waiting for processes to exit. Sep 4 23:47:20.852348 systemd-logind[1847]: Removed session 11. Sep 4 23:47:25.875985 systemd[1]: Started sshd@11-172.31.23.55:22-139.178.89.65:50188.service - OpenSSH per-connection server daemon (139.178.89.65:50188). Sep 4 23:47:26.079150 sshd[4755]: Accepted publickey for core from 139.178.89.65 port 50188 ssh2: RSA SHA256:wIWgASbTOEOY+RJ7C8r7IgT0+0t1r6NIK7DD/aRqsNo Sep 4 23:47:26.082417 sshd-session[4755]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 23:47:26.091410 systemd-logind[1847]: New session 12 of user core. Sep 4 23:47:26.099807 systemd[1]: Started session-12.scope - Session 12 of User core. Sep 4 23:47:26.340698 sshd[4757]: Connection closed by 139.178.89.65 port 50188 Sep 4 23:47:26.341752 sshd-session[4755]: pam_unix(sshd:session): session closed for user core Sep 4 23:47:26.348140 systemd[1]: sshd@11-172.31.23.55:22-139.178.89.65:50188.service: Deactivated successfully. Sep 4 23:47:26.353467 systemd[1]: session-12.scope: Deactivated successfully. Sep 4 23:47:26.355121 systemd-logind[1847]: Session 12 logged out. Waiting for processes to exit. Sep 4 23:47:26.357338 systemd-logind[1847]: Removed session 12. Sep 4 23:47:26.378078 systemd[1]: Started sshd@12-172.31.23.55:22-139.178.89.65:50198.service - OpenSSH per-connection server daemon (139.178.89.65:50198). Sep 4 23:47:26.571090 sshd[4770]: Accepted publickey for core from 139.178.89.65 port 50198 ssh2: RSA SHA256:wIWgASbTOEOY+RJ7C8r7IgT0+0t1r6NIK7DD/aRqsNo Sep 4 23:47:26.573544 sshd-session[4770]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 23:47:26.583308 systemd-logind[1847]: New session 13 of user core. Sep 4 23:47:26.588865 systemd[1]: Started session-13.scope - Session 13 of User core. 
Sep 4 23:47:26.902197 sshd[4772]: Connection closed by 139.178.89.65 port 50198 Sep 4 23:47:26.904329 sshd-session[4770]: pam_unix(sshd:session): session closed for user core Sep 4 23:47:26.915922 systemd[1]: sshd@12-172.31.23.55:22-139.178.89.65:50198.service: Deactivated successfully. Sep 4 23:47:26.923506 systemd[1]: session-13.scope: Deactivated successfully. Sep 4 23:47:26.929186 systemd-logind[1847]: Session 13 logged out. Waiting for processes to exit. Sep 4 23:47:26.952245 systemd[1]: Started sshd@13-172.31.23.55:22-139.178.89.65:50210.service - OpenSSH per-connection server daemon (139.178.89.65:50210). Sep 4 23:47:26.955595 systemd-logind[1847]: Removed session 13. Sep 4 23:47:27.154860 sshd[4781]: Accepted publickey for core from 139.178.89.65 port 50210 ssh2: RSA SHA256:wIWgASbTOEOY+RJ7C8r7IgT0+0t1r6NIK7DD/aRqsNo Sep 4 23:47:27.157717 sshd-session[4781]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 23:47:27.166213 systemd-logind[1847]: New session 14 of user core. Sep 4 23:47:27.174740 systemd[1]: Started session-14.scope - Session 14 of User core. Sep 4 23:47:27.426800 sshd[4784]: Connection closed by 139.178.89.65 port 50210 Sep 4 23:47:27.427903 sshd-session[4781]: pam_unix(sshd:session): session closed for user core Sep 4 23:47:27.434303 systemd[1]: sshd@13-172.31.23.55:22-139.178.89.65:50210.service: Deactivated successfully. Sep 4 23:47:27.439021 systemd[1]: session-14.scope: Deactivated successfully. Sep 4 23:47:27.442895 systemd-logind[1847]: Session 14 logged out. Waiting for processes to exit. Sep 4 23:47:27.445613 systemd-logind[1847]: Removed session 14. Sep 4 23:47:32.470055 systemd[1]: Started sshd@14-172.31.23.55:22-139.178.89.65:60184.service - OpenSSH per-connection server daemon (139.178.89.65:60184). 
Sep 4 23:47:32.654856 sshd[4798]: Accepted publickey for core from 139.178.89.65 port 60184 ssh2: RSA SHA256:wIWgASbTOEOY+RJ7C8r7IgT0+0t1r6NIK7DD/aRqsNo Sep 4 23:47:32.657579 sshd-session[4798]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 23:47:32.666570 systemd-logind[1847]: New session 15 of user core. Sep 4 23:47:32.673813 systemd[1]: Started session-15.scope - Session 15 of User core. Sep 4 23:47:32.911585 sshd[4800]: Connection closed by 139.178.89.65 port 60184 Sep 4 23:47:32.911316 sshd-session[4798]: pam_unix(sshd:session): session closed for user core Sep 4 23:47:32.916899 systemd[1]: sshd@14-172.31.23.55:22-139.178.89.65:60184.service: Deactivated successfully. Sep 4 23:47:32.921248 systemd[1]: session-15.scope: Deactivated successfully. Sep 4 23:47:32.926723 systemd-logind[1847]: Session 15 logged out. Waiting for processes to exit. Sep 4 23:47:32.929612 systemd-logind[1847]: Removed session 15. Sep 4 23:47:37.953254 systemd[1]: Started sshd@15-172.31.23.55:22-139.178.89.65:60188.service - OpenSSH per-connection server daemon (139.178.89.65:60188). Sep 4 23:47:38.148153 sshd[4812]: Accepted publickey for core from 139.178.89.65 port 60188 ssh2: RSA SHA256:wIWgASbTOEOY+RJ7C8r7IgT0+0t1r6NIK7DD/aRqsNo Sep 4 23:47:38.150853 sshd-session[4812]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 23:47:38.159849 systemd-logind[1847]: New session 16 of user core. Sep 4 23:47:38.166800 systemd[1]: Started session-16.scope - Session 16 of User core. Sep 4 23:47:38.407226 sshd[4814]: Connection closed by 139.178.89.65 port 60188 Sep 4 23:47:38.408143 sshd-session[4812]: pam_unix(sshd:session): session closed for user core Sep 4 23:47:38.414317 systemd-logind[1847]: Session 16 logged out. Waiting for processes to exit. Sep 4 23:47:38.415848 systemd[1]: sshd@15-172.31.23.55:22-139.178.89.65:60188.service: Deactivated successfully. 
Sep 4 23:47:38.420679 systemd[1]: session-16.scope: Deactivated successfully. Sep 4 23:47:38.422870 systemd-logind[1847]: Removed session 16. Sep 4 23:47:43.449047 systemd[1]: Started sshd@16-172.31.23.55:22-139.178.89.65:54896.service - OpenSSH per-connection server daemon (139.178.89.65:54896). Sep 4 23:47:43.639458 sshd[4826]: Accepted publickey for core from 139.178.89.65 port 54896 ssh2: RSA SHA256:wIWgASbTOEOY+RJ7C8r7IgT0+0t1r6NIK7DD/aRqsNo Sep 4 23:47:43.645049 sshd-session[4826]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 23:47:43.656597 systemd-logind[1847]: New session 17 of user core. Sep 4 23:47:43.663820 systemd[1]: Started session-17.scope - Session 17 of User core. Sep 4 23:47:43.906581 sshd[4828]: Connection closed by 139.178.89.65 port 54896 Sep 4 23:47:43.906424 sshd-session[4826]: pam_unix(sshd:session): session closed for user core Sep 4 23:47:43.911267 systemd[1]: sshd@16-172.31.23.55:22-139.178.89.65:54896.service: Deactivated successfully. Sep 4 23:47:43.916800 systemd[1]: session-17.scope: Deactivated successfully. Sep 4 23:47:43.921871 systemd-logind[1847]: Session 17 logged out. Waiting for processes to exit. Sep 4 23:47:43.923902 systemd-logind[1847]: Removed session 17. Sep 4 23:47:43.951105 systemd[1]: Started sshd@17-172.31.23.55:22-139.178.89.65:54904.service - OpenSSH per-connection server daemon (139.178.89.65:54904). Sep 4 23:47:44.147256 sshd[4839]: Accepted publickey for core from 139.178.89.65 port 54904 ssh2: RSA SHA256:wIWgASbTOEOY+RJ7C8r7IgT0+0t1r6NIK7DD/aRqsNo Sep 4 23:47:44.150245 sshd-session[4839]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 23:47:44.159309 systemd-logind[1847]: New session 18 of user core. Sep 4 23:47:44.163800 systemd[1]: Started session-18.scope - Session 18 of User core. 
Sep 4 23:47:44.489723 sshd[4841]: Connection closed by 139.178.89.65 port 54904 Sep 4 23:47:44.489803 sshd-session[4839]: pam_unix(sshd:session): session closed for user core Sep 4 23:47:44.495554 systemd[1]: sshd@17-172.31.23.55:22-139.178.89.65:54904.service: Deactivated successfully. Sep 4 23:47:44.495694 systemd-logind[1847]: Session 18 logged out. Waiting for processes to exit. Sep 4 23:47:44.499966 systemd[1]: session-18.scope: Deactivated successfully. Sep 4 23:47:44.504872 systemd-logind[1847]: Removed session 18. Sep 4 23:47:44.534046 systemd[1]: Started sshd@18-172.31.23.55:22-139.178.89.65:54920.service - OpenSSH per-connection server daemon (139.178.89.65:54920). Sep 4 23:47:44.736598 sshd[4851]: Accepted publickey for core from 139.178.89.65 port 54920 ssh2: RSA SHA256:wIWgASbTOEOY+RJ7C8r7IgT0+0t1r6NIK7DD/aRqsNo Sep 4 23:47:44.739082 sshd-session[4851]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 23:47:44.748032 systemd-logind[1847]: New session 19 of user core. Sep 4 23:47:44.754785 systemd[1]: Started session-19.scope - Session 19 of User core. Sep 4 23:47:47.171554 sshd[4853]: Connection closed by 139.178.89.65 port 54920 Sep 4 23:47:47.171400 sshd-session[4851]: pam_unix(sshd:session): session closed for user core Sep 4 23:47:47.182128 systemd[1]: sshd@18-172.31.23.55:22-139.178.89.65:54920.service: Deactivated successfully. Sep 4 23:47:47.187445 systemd[1]: session-19.scope: Deactivated successfully. Sep 4 23:47:47.194857 systemd-logind[1847]: Session 19 logged out. Waiting for processes to exit. Sep 4 23:47:47.227081 systemd[1]: Started sshd@19-172.31.23.55:22-139.178.89.65:54932.service - OpenSSH per-connection server daemon (139.178.89.65:54932). Sep 4 23:47:47.229630 systemd-logind[1847]: Removed session 19. 
Sep 4 23:47:47.420425 sshd[4870]: Accepted publickey for core from 139.178.89.65 port 54932 ssh2: RSA SHA256:wIWgASbTOEOY+RJ7C8r7IgT0+0t1r6NIK7DD/aRqsNo Sep 4 23:47:47.422938 sshd-session[4870]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 23:47:47.432562 systemd-logind[1847]: New session 20 of user core. Sep 4 23:47:47.440811 systemd[1]: Started session-20.scope - Session 20 of User core. Sep 4 23:47:47.919456 sshd[4873]: Connection closed by 139.178.89.65 port 54932 Sep 4 23:47:47.920389 sshd-session[4870]: pam_unix(sshd:session): session closed for user core Sep 4 23:47:47.929000 systemd[1]: sshd@19-172.31.23.55:22-139.178.89.65:54932.service: Deactivated successfully. Sep 4 23:47:47.934931 systemd[1]: session-20.scope: Deactivated successfully. Sep 4 23:47:47.937041 systemd-logind[1847]: Session 20 logged out. Waiting for processes to exit. Sep 4 23:47:47.940035 systemd-logind[1847]: Removed session 20. Sep 4 23:47:47.964123 systemd[1]: Started sshd@20-172.31.23.55:22-139.178.89.65:54948.service - OpenSSH per-connection server daemon (139.178.89.65:54948). Sep 4 23:47:48.153339 sshd[4883]: Accepted publickey for core from 139.178.89.65 port 54948 ssh2: RSA SHA256:wIWgASbTOEOY+RJ7C8r7IgT0+0t1r6NIK7DD/aRqsNo Sep 4 23:47:48.156084 sshd-session[4883]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 23:47:48.164188 systemd-logind[1847]: New session 21 of user core. Sep 4 23:47:48.172829 systemd[1]: Started session-21.scope - Session 21 of User core. Sep 4 23:47:48.415427 sshd[4885]: Connection closed by 139.178.89.65 port 54948 Sep 4 23:47:48.415197 sshd-session[4883]: pam_unix(sshd:session): session closed for user core Sep 4 23:47:48.421055 systemd-logind[1847]: Session 21 logged out. Waiting for processes to exit. Sep 4 23:47:48.421751 systemd[1]: sshd@20-172.31.23.55:22-139.178.89.65:54948.service: Deactivated successfully. 
Sep 4 23:47:48.427883 systemd[1]: session-21.scope: Deactivated successfully. Sep 4 23:47:48.433585 systemd-logind[1847]: Removed session 21. Sep 4 23:47:53.458152 systemd[1]: Started sshd@21-172.31.23.55:22-139.178.89.65:35244.service - OpenSSH per-connection server daemon (139.178.89.65:35244). Sep 4 23:47:53.645562 sshd[4898]: Accepted publickey for core from 139.178.89.65 port 35244 ssh2: RSA SHA256:wIWgASbTOEOY+RJ7C8r7IgT0+0t1r6NIK7DD/aRqsNo Sep 4 23:47:53.648384 sshd-session[4898]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 23:47:53.660870 systemd-logind[1847]: New session 22 of user core. Sep 4 23:47:53.668832 systemd[1]: Started session-22.scope - Session 22 of User core. Sep 4 23:47:53.906026 sshd[4900]: Connection closed by 139.178.89.65 port 35244 Sep 4 23:47:53.907225 sshd-session[4898]: pam_unix(sshd:session): session closed for user core Sep 4 23:47:53.914262 systemd[1]: sshd@21-172.31.23.55:22-139.178.89.65:35244.service: Deactivated successfully. Sep 4 23:47:53.920157 systemd[1]: session-22.scope: Deactivated successfully. Sep 4 23:47:53.922155 systemd-logind[1847]: Session 22 logged out. Waiting for processes to exit. Sep 4 23:47:53.924382 systemd-logind[1847]: Removed session 22. Sep 4 23:47:58.951000 systemd[1]: Started sshd@22-172.31.23.55:22-139.178.89.65:35248.service - OpenSSH per-connection server daemon (139.178.89.65:35248). Sep 4 23:47:59.132683 sshd[4915]: Accepted publickey for core from 139.178.89.65 port 35248 ssh2: RSA SHA256:wIWgASbTOEOY+RJ7C8r7IgT0+0t1r6NIK7DD/aRqsNo Sep 4 23:47:59.135193 sshd-session[4915]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 23:47:59.143793 systemd-logind[1847]: New session 23 of user core. Sep 4 23:47:59.151783 systemd[1]: Started session-23.scope - Session 23 of User core. 
Sep 4 23:47:59.387066 sshd[4917]: Connection closed by 139.178.89.65 port 35248 Sep 4 23:47:59.388018 sshd-session[4915]: pam_unix(sshd:session): session closed for user core Sep 4 23:47:59.394310 systemd-logind[1847]: Session 23 logged out. Waiting for processes to exit. Sep 4 23:47:59.394632 systemd[1]: sshd@22-172.31.23.55:22-139.178.89.65:35248.service: Deactivated successfully. Sep 4 23:47:59.398288 systemd[1]: session-23.scope: Deactivated successfully. Sep 4 23:47:59.402404 systemd-logind[1847]: Removed session 23. Sep 4 23:48:04.433094 systemd[1]: Started sshd@23-172.31.23.55:22-139.178.89.65:39092.service - OpenSSH per-connection server daemon (139.178.89.65:39092). Sep 4 23:48:04.623538 sshd[4931]: Accepted publickey for core from 139.178.89.65 port 39092 ssh2: RSA SHA256:wIWgASbTOEOY+RJ7C8r7IgT0+0t1r6NIK7DD/aRqsNo Sep 4 23:48:04.626003 sshd-session[4931]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 23:48:04.638644 systemd-logind[1847]: New session 24 of user core. Sep 4 23:48:04.648795 systemd[1]: Started session-24.scope - Session 24 of User core. Sep 4 23:48:04.891447 sshd[4933]: Connection closed by 139.178.89.65 port 39092 Sep 4 23:48:04.892600 sshd-session[4931]: pam_unix(sshd:session): session closed for user core Sep 4 23:48:04.898024 systemd[1]: sshd@23-172.31.23.55:22-139.178.89.65:39092.service: Deactivated successfully. Sep 4 23:48:04.902179 systemd[1]: session-24.scope: Deactivated successfully. Sep 4 23:48:04.906346 systemd-logind[1847]: Session 24 logged out. Waiting for processes to exit. Sep 4 23:48:04.908406 systemd-logind[1847]: Removed session 24. Sep 4 23:48:09.937981 systemd[1]: Started sshd@24-172.31.23.55:22-139.178.89.65:58832.service - OpenSSH per-connection server daemon (139.178.89.65:58832). 
Sep 4 23:48:10.123542 sshd[4945]: Accepted publickey for core from 139.178.89.65 port 58832 ssh2: RSA SHA256:wIWgASbTOEOY+RJ7C8r7IgT0+0t1r6NIK7DD/aRqsNo Sep 4 23:48:10.126122 sshd-session[4945]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 23:48:10.135717 systemd-logind[1847]: New session 25 of user core. Sep 4 23:48:10.142827 systemd[1]: Started session-25.scope - Session 25 of User core. Sep 4 23:48:10.380391 sshd[4947]: Connection closed by 139.178.89.65 port 58832 Sep 4 23:48:10.380170 sshd-session[4945]: pam_unix(sshd:session): session closed for user core Sep 4 23:48:10.387581 systemd-logind[1847]: Session 25 logged out. Waiting for processes to exit. Sep 4 23:48:10.388620 systemd[1]: sshd@24-172.31.23.55:22-139.178.89.65:58832.service: Deactivated successfully. Sep 4 23:48:10.393920 systemd[1]: session-25.scope: Deactivated successfully. Sep 4 23:48:10.397091 systemd-logind[1847]: Removed session 25. Sep 4 23:48:10.422029 systemd[1]: Started sshd@25-172.31.23.55:22-139.178.89.65:58838.service - OpenSSH per-connection server daemon (139.178.89.65:58838). Sep 4 23:48:10.605497 sshd[4959]: Accepted publickey for core from 139.178.89.65 port 58838 ssh2: RSA SHA256:wIWgASbTOEOY+RJ7C8r7IgT0+0t1r6NIK7DD/aRqsNo Sep 4 23:48:10.608009 sshd-session[4959]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 23:48:10.617419 systemd-logind[1847]: New session 26 of user core. Sep 4 23:48:10.620802 systemd[1]: Started session-26.scope - Session 26 of User core. 
Sep 4 23:48:15.615655 containerd[1864]: time="2025-09-04T23:48:15.615432290Z" level=info msg="StopContainer for \"99b770eeaabba095c15d32514e2b5e0e9e3bf8c68b6b2337917374dcdfdd94b9\" with timeout 30 (s)" Sep 4 23:48:15.619665 containerd[1864]: time="2025-09-04T23:48:15.619115666Z" level=info msg="Stop container \"99b770eeaabba095c15d32514e2b5e0e9e3bf8c68b6b2337917374dcdfdd94b9\" with signal terminated" Sep 4 23:48:15.667690 systemd[1]: run-containerd-runc-k8s.io-bd0b4e2b5a77fc2ba219e3c7622f4e83faa6a55b81cb0ff24dd079be1320a8d4-runc.ynaoCz.mount: Deactivated successfully. Sep 4 23:48:15.672734 systemd[1]: cri-containerd-99b770eeaabba095c15d32514e2b5e0e9e3bf8c68b6b2337917374dcdfdd94b9.scope: Deactivated successfully. Sep 4 23:48:15.712564 containerd[1864]: time="2025-09-04T23:48:15.712391738Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 4 23:48:15.738005 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-99b770eeaabba095c15d32514e2b5e0e9e3bf8c68b6b2337917374dcdfdd94b9-rootfs.mount: Deactivated successfully. 
Sep 4 23:48:15.746098 containerd[1864]: time="2025-09-04T23:48:15.745973366Z" level=info msg="StopContainer for \"bd0b4e2b5a77fc2ba219e3c7622f4e83faa6a55b81cb0ff24dd079be1320a8d4\" with timeout 2 (s)" Sep 4 23:48:15.747637 containerd[1864]: time="2025-09-04T23:48:15.747200978Z" level=info msg="Stop container \"bd0b4e2b5a77fc2ba219e3c7622f4e83faa6a55b81cb0ff24dd079be1320a8d4\" with signal terminated" Sep 4 23:48:15.752350 containerd[1864]: time="2025-09-04T23:48:15.752125910Z" level=info msg="shim disconnected" id=99b770eeaabba095c15d32514e2b5e0e9e3bf8c68b6b2337917374dcdfdd94b9 namespace=k8s.io Sep 4 23:48:15.752350 containerd[1864]: time="2025-09-04T23:48:15.752291630Z" level=warning msg="cleaning up after shim disconnected" id=99b770eeaabba095c15d32514e2b5e0e9e3bf8c68b6b2337917374dcdfdd94b9 namespace=k8s.io Sep 4 23:48:15.752730 containerd[1864]: time="2025-09-04T23:48:15.752314106Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 4 23:48:15.765204 systemd-networkd[1776]: lxc_health: Link DOWN Sep 4 23:48:15.765219 systemd-networkd[1776]: lxc_health: Lost carrier Sep 4 23:48:15.789240 systemd[1]: cri-containerd-bd0b4e2b5a77fc2ba219e3c7622f4e83faa6a55b81cb0ff24dd079be1320a8d4.scope: Deactivated successfully. Sep 4 23:48:15.791697 systemd[1]: cri-containerd-bd0b4e2b5a77fc2ba219e3c7622f4e83faa6a55b81cb0ff24dd079be1320a8d4.scope: Consumed 14.528s CPU time, 126.5M memory peak, 128K read from disk, 12.9M written to disk. 
Sep 4 23:48:15.807015 containerd[1864]: time="2025-09-04T23:48:15.806953827Z" level=info msg="StopContainer for \"99b770eeaabba095c15d32514e2b5e0e9e3bf8c68b6b2337917374dcdfdd94b9\" returns successfully" Sep 4 23:48:15.809441 containerd[1864]: time="2025-09-04T23:48:15.809243895Z" level=info msg="StopPodSandbox for \"9a4711f2b1c3172e31bf2ccaaac33950600df6ce9e4ee0a53643bce7ed315cec\"" Sep 4 23:48:15.809810 containerd[1864]: time="2025-09-04T23:48:15.809672763Z" level=info msg="Container to stop \"99b770eeaabba095c15d32514e2b5e0e9e3bf8c68b6b2337917374dcdfdd94b9\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 4 23:48:15.816762 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-9a4711f2b1c3172e31bf2ccaaac33950600df6ce9e4ee0a53643bce7ed315cec-shm.mount: Deactivated successfully. Sep 4 23:48:15.836100 systemd[1]: cri-containerd-9a4711f2b1c3172e31bf2ccaaac33950600df6ce9e4ee0a53643bce7ed315cec.scope: Deactivated successfully. Sep 4 23:48:15.845184 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bd0b4e2b5a77fc2ba219e3c7622f4e83faa6a55b81cb0ff24dd079be1320a8d4-rootfs.mount: Deactivated successfully. 
Sep 4 23:48:15.861883 containerd[1864]: time="2025-09-04T23:48:15.861701091Z" level=info msg="shim disconnected" id=bd0b4e2b5a77fc2ba219e3c7622f4e83faa6a55b81cb0ff24dd079be1320a8d4 namespace=k8s.io Sep 4 23:48:15.861883 containerd[1864]: time="2025-09-04T23:48:15.861779679Z" level=warning msg="cleaning up after shim disconnected" id=bd0b4e2b5a77fc2ba219e3c7622f4e83faa6a55b81cb0ff24dd079be1320a8d4 namespace=k8s.io Sep 4 23:48:15.861883 containerd[1864]: time="2025-09-04T23:48:15.861800739Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 4 23:48:15.896615 containerd[1864]: time="2025-09-04T23:48:15.895242987Z" level=info msg="shim disconnected" id=9a4711f2b1c3172e31bf2ccaaac33950600df6ce9e4ee0a53643bce7ed315cec namespace=k8s.io Sep 4 23:48:15.896615 containerd[1864]: time="2025-09-04T23:48:15.895367019Z" level=warning msg="cleaning up after shim disconnected" id=9a4711f2b1c3172e31bf2ccaaac33950600df6ce9e4ee0a53643bce7ed315cec namespace=k8s.io Sep 4 23:48:15.896615 containerd[1864]: time="2025-09-04T23:48:15.895389351Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 4 23:48:15.902755 containerd[1864]: time="2025-09-04T23:48:15.902590995Z" level=info msg="StopContainer for \"bd0b4e2b5a77fc2ba219e3c7622f4e83faa6a55b81cb0ff24dd079be1320a8d4\" returns successfully" Sep 4 23:48:15.904237 containerd[1864]: time="2025-09-04T23:48:15.903952503Z" level=info msg="StopPodSandbox for \"bfe270524622bf5b8dd45eab0048db36d1b325b3e40bcb76699dcdb4d0319a52\"" Sep 4 23:48:15.904237 containerd[1864]: time="2025-09-04T23:48:15.904016391Z" level=info msg="Container to stop \"caeb4c72a9d02f611f6f39e9ec48bbb8f4fa55f5489a85beaf8af07ab66ae81c\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 4 23:48:15.904237 containerd[1864]: time="2025-09-04T23:48:15.904040511Z" level=info msg="Container to stop \"0908aea0c2cdc3ec915188bb5005a1aa86cfdd4dc4a58e0773533dac9428fae5\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 
4 23:48:15.904237 containerd[1864]: time="2025-09-04T23:48:15.904063407Z" level=info msg="Container to stop \"bd0b4e2b5a77fc2ba219e3c7622f4e83faa6a55b81cb0ff24dd079be1320a8d4\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 4 23:48:15.904237 containerd[1864]: time="2025-09-04T23:48:15.904088583Z" level=info msg="Container to stop \"da16c75466bb9e21e82ca85b2d9cb5cca132dc8bde63911c9259650c2072d111\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 4 23:48:15.904237 containerd[1864]: time="2025-09-04T23:48:15.904111959Z" level=info msg="Container to stop \"48a22fb8ed8d4ba48db6d42da0040a320ac3ad775c091f1be25c2bc17d376917\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 4 23:48:15.920254 systemd[1]: cri-containerd-bfe270524622bf5b8dd45eab0048db36d1b325b3e40bcb76699dcdb4d0319a52.scope: Deactivated successfully. Sep 4 23:48:15.947780 containerd[1864]: time="2025-09-04T23:48:15.947711631Z" level=info msg="TearDown network for sandbox \"9a4711f2b1c3172e31bf2ccaaac33950600df6ce9e4ee0a53643bce7ed315cec\" successfully" Sep 4 23:48:15.947780 containerd[1864]: time="2025-09-04T23:48:15.947766327Z" level=info msg="StopPodSandbox for \"9a4711f2b1c3172e31bf2ccaaac33950600df6ce9e4ee0a53643bce7ed315cec\" returns successfully" Sep 4 23:48:16.009813 containerd[1864]: time="2025-09-04T23:48:16.009713916Z" level=info msg="shim disconnected" id=bfe270524622bf5b8dd45eab0048db36d1b325b3e40bcb76699dcdb4d0319a52 namespace=k8s.io Sep 4 23:48:16.009813 containerd[1864]: time="2025-09-04T23:48:16.009795864Z" level=warning msg="cleaning up after shim disconnected" id=bfe270524622bf5b8dd45eab0048db36d1b325b3e40bcb76699dcdb4d0319a52 namespace=k8s.io Sep 4 23:48:16.009813 containerd[1864]: time="2025-09-04T23:48:16.009817416Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 4 23:48:16.033993 containerd[1864]: time="2025-09-04T23:48:16.033935220Z" level=info msg="TearDown network for sandbox 
\"bfe270524622bf5b8dd45eab0048db36d1b325b3e40bcb76699dcdb4d0319a52\" successfully" Sep 4 23:48:16.033993 containerd[1864]: time="2025-09-04T23:48:16.033986664Z" level=info msg="StopPodSandbox for \"bfe270524622bf5b8dd45eab0048db36d1b325b3e40bcb76699dcdb4d0319a52\" returns successfully" Sep 4 23:48:16.099161 kubelet[3129]: I0904 23:48:16.099104 3129 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/332cccdd-0b37-4087-93f8-562afdfefb68-cilium-config-path\") pod \"332cccdd-0b37-4087-93f8-562afdfefb68\" (UID: \"332cccdd-0b37-4087-93f8-562afdfefb68\") " Sep 4 23:48:16.099867 kubelet[3129]: I0904 23:48:16.099197 3129 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x5vqh\" (UniqueName: \"kubernetes.io/projected/332cccdd-0b37-4087-93f8-562afdfefb68-kube-api-access-x5vqh\") pod \"332cccdd-0b37-4087-93f8-562afdfefb68\" (UID: \"332cccdd-0b37-4087-93f8-562afdfefb68\") " Sep 4 23:48:16.105101 kubelet[3129]: I0904 23:48:16.104969 3129 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/332cccdd-0b37-4087-93f8-562afdfefb68-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "332cccdd-0b37-4087-93f8-562afdfefb68" (UID: "332cccdd-0b37-4087-93f8-562afdfefb68"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 4 23:48:16.106840 kubelet[3129]: I0904 23:48:16.106747 3129 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/332cccdd-0b37-4087-93f8-562afdfefb68-kube-api-access-x5vqh" (OuterVolumeSpecName: "kube-api-access-x5vqh") pod "332cccdd-0b37-4087-93f8-562afdfefb68" (UID: "332cccdd-0b37-4087-93f8-562afdfefb68"). InnerVolumeSpecName "kube-api-access-x5vqh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 4 23:48:16.201040 kubelet[3129]: I0904 23:48:16.200103 3129 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/a9f268b8-2829-476e-8608-eafa29be8c59-hubble-tls\") pod \"a9f268b8-2829-476e-8608-eafa29be8c59\" (UID: \"a9f268b8-2829-476e-8608-eafa29be8c59\") " Sep 4 23:48:16.201040 kubelet[3129]: I0904 23:48:16.200164 3129 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/a9f268b8-2829-476e-8608-eafa29be8c59-hostproc\") pod \"a9f268b8-2829-476e-8608-eafa29be8c59\" (UID: \"a9f268b8-2829-476e-8608-eafa29be8c59\") " Sep 4 23:48:16.201040 kubelet[3129]: I0904 23:48:16.200199 3129 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/a9f268b8-2829-476e-8608-eafa29be8c59-cni-path\") pod \"a9f268b8-2829-476e-8608-eafa29be8c59\" (UID: \"a9f268b8-2829-476e-8608-eafa29be8c59\") " Sep 4 23:48:16.201040 kubelet[3129]: I0904 23:48:16.200233 3129 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a9f268b8-2829-476e-8608-eafa29be8c59-lib-modules\") pod \"a9f268b8-2829-476e-8608-eafa29be8c59\" (UID: \"a9f268b8-2829-476e-8608-eafa29be8c59\") " Sep 4 23:48:16.201040 kubelet[3129]: I0904 23:48:16.200266 3129 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/a9f268b8-2829-476e-8608-eafa29be8c59-host-proc-sys-kernel\") pod \"a9f268b8-2829-476e-8608-eafa29be8c59\" (UID: \"a9f268b8-2829-476e-8608-eafa29be8c59\") " Sep 4 23:48:16.201040 kubelet[3129]: I0904 23:48:16.200303 3129 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: 
\"kubernetes.io/secret/a9f268b8-2829-476e-8608-eafa29be8c59-clustermesh-secrets\") pod \"a9f268b8-2829-476e-8608-eafa29be8c59\" (UID: \"a9f268b8-2829-476e-8608-eafa29be8c59\") " Sep 4 23:48:16.201495 kubelet[3129]: I0904 23:48:16.200336 3129 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/a9f268b8-2829-476e-8608-eafa29be8c59-host-proc-sys-net\") pod \"a9f268b8-2829-476e-8608-eafa29be8c59\" (UID: \"a9f268b8-2829-476e-8608-eafa29be8c59\") " Sep 4 23:48:16.201495 kubelet[3129]: I0904 23:48:16.200369 3129 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a9f268b8-2829-476e-8608-eafa29be8c59-xtables-lock\") pod \"a9f268b8-2829-476e-8608-eafa29be8c59\" (UID: \"a9f268b8-2829-476e-8608-eafa29be8c59\") " Sep 4 23:48:16.201495 kubelet[3129]: I0904 23:48:16.200401 3129 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/a9f268b8-2829-476e-8608-eafa29be8c59-cilium-run\") pod \"a9f268b8-2829-476e-8608-eafa29be8c59\" (UID: \"a9f268b8-2829-476e-8608-eafa29be8c59\") " Sep 4 23:48:16.201495 kubelet[3129]: I0904 23:48:16.200432 3129 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/a9f268b8-2829-476e-8608-eafa29be8c59-cilium-cgroup\") pod \"a9f268b8-2829-476e-8608-eafa29be8c59\" (UID: \"a9f268b8-2829-476e-8608-eafa29be8c59\") " Sep 4 23:48:16.201495 kubelet[3129]: I0904 23:48:16.200464 3129 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a9f268b8-2829-476e-8608-eafa29be8c59-etc-cni-netd\") pod \"a9f268b8-2829-476e-8608-eafa29be8c59\" (UID: \"a9f268b8-2829-476e-8608-eafa29be8c59\") " Sep 4 23:48:16.201495 kubelet[3129]: I0904 23:48:16.200500 3129 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tlwz\" (UniqueName: \"kubernetes.io/projected/a9f268b8-2829-476e-8608-eafa29be8c59-kube-api-access-8tlwz\") pod \"a9f268b8-2829-476e-8608-eafa29be8c59\" (UID: \"a9f268b8-2829-476e-8608-eafa29be8c59\") " Sep 4 23:48:16.202772 kubelet[3129]: I0904 23:48:16.200569 3129 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/a9f268b8-2829-476e-8608-eafa29be8c59-bpf-maps\") pod \"a9f268b8-2829-476e-8608-eafa29be8c59\" (UID: \"a9f268b8-2829-476e-8608-eafa29be8c59\") " Sep 4 23:48:16.202772 kubelet[3129]: I0904 23:48:16.200608 3129 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a9f268b8-2829-476e-8608-eafa29be8c59-cilium-config-path\") pod \"a9f268b8-2829-476e-8608-eafa29be8c59\" (UID: \"a9f268b8-2829-476e-8608-eafa29be8c59\") " Sep 4 23:48:16.202772 kubelet[3129]: I0904 23:48:16.200667 3129 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x5vqh\" (UniqueName: \"kubernetes.io/projected/332cccdd-0b37-4087-93f8-562afdfefb68-kube-api-access-x5vqh\") on node \"ip-172-31-23-55\" DevicePath \"\"" Sep 4 23:48:16.202772 kubelet[3129]: I0904 23:48:16.200693 3129 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/332cccdd-0b37-4087-93f8-562afdfefb68-cilium-config-path\") on node \"ip-172-31-23-55\" DevicePath \"\"" Sep 4 23:48:16.202772 kubelet[3129]: I0904 23:48:16.202346 3129 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a9f268b8-2829-476e-8608-eafa29be8c59-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "a9f268b8-2829-476e-8608-eafa29be8c59" (UID: "a9f268b8-2829-476e-8608-eafa29be8c59"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 4 23:48:16.202772 kubelet[3129]: I0904 23:48:16.202422 3129 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a9f268b8-2829-476e-8608-eafa29be8c59-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "a9f268b8-2829-476e-8608-eafa29be8c59" (UID: "a9f268b8-2829-476e-8608-eafa29be8c59"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 4 23:48:16.204055 kubelet[3129]: I0904 23:48:16.202463 3129 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a9f268b8-2829-476e-8608-eafa29be8c59-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "a9f268b8-2829-476e-8608-eafa29be8c59" (UID: "a9f268b8-2829-476e-8608-eafa29be8c59"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 4 23:48:16.204055 kubelet[3129]: I0904 23:48:16.202873 3129 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a9f268b8-2829-476e-8608-eafa29be8c59-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "a9f268b8-2829-476e-8608-eafa29be8c59" (UID: "a9f268b8-2829-476e-8608-eafa29be8c59"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 4 23:48:16.204055 kubelet[3129]: I0904 23:48:16.203468 3129 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a9f268b8-2829-476e-8608-eafa29be8c59-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "a9f268b8-2829-476e-8608-eafa29be8c59" (UID: "a9f268b8-2829-476e-8608-eafa29be8c59"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 4 23:48:16.205673 kubelet[3129]: I0904 23:48:16.205599 3129 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a9f268b8-2829-476e-8608-eafa29be8c59-hostproc" (OuterVolumeSpecName: "hostproc") pod "a9f268b8-2829-476e-8608-eafa29be8c59" (UID: "a9f268b8-2829-476e-8608-eafa29be8c59"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 4 23:48:16.205821 kubelet[3129]: I0904 23:48:16.205691 3129 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a9f268b8-2829-476e-8608-eafa29be8c59-cni-path" (OuterVolumeSpecName: "cni-path") pod "a9f268b8-2829-476e-8608-eafa29be8c59" (UID: "a9f268b8-2829-476e-8608-eafa29be8c59"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 4 23:48:16.205821 kubelet[3129]: I0904 23:48:16.205729 3129 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a9f268b8-2829-476e-8608-eafa29be8c59-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "a9f268b8-2829-476e-8608-eafa29be8c59" (UID: "a9f268b8-2829-476e-8608-eafa29be8c59"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 4 23:48:16.205821 kubelet[3129]: I0904 23:48:16.205767 3129 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a9f268b8-2829-476e-8608-eafa29be8c59-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "a9f268b8-2829-476e-8608-eafa29be8c59" (UID: "a9f268b8-2829-476e-8608-eafa29be8c59"). InnerVolumeSpecName "host-proc-sys-kernel". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 4 23:48:16.206077 kubelet[3129]: I0904 23:48:16.206048 3129 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a9f268b8-2829-476e-8608-eafa29be8c59-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "a9f268b8-2829-476e-8608-eafa29be8c59" (UID: "a9f268b8-2829-476e-8608-eafa29be8c59"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 4 23:48:16.209106 kubelet[3129]: I0904 23:48:16.208968 3129 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a9f268b8-2829-476e-8608-eafa29be8c59-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "a9f268b8-2829-476e-8608-eafa29be8c59" (UID: "a9f268b8-2829-476e-8608-eafa29be8c59"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 4 23:48:16.213917 kubelet[3129]: I0904 23:48:16.213712 3129 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a9f268b8-2829-476e-8608-eafa29be8c59-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "a9f268b8-2829-476e-8608-eafa29be8c59" (UID: "a9f268b8-2829-476e-8608-eafa29be8c59"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 4 23:48:16.213917 kubelet[3129]: I0904 23:48:16.213869 3129 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a9f268b8-2829-476e-8608-eafa29be8c59-kube-api-access-8tlwz" (OuterVolumeSpecName: "kube-api-access-8tlwz") pod "a9f268b8-2829-476e-8608-eafa29be8c59" (UID: "a9f268b8-2829-476e-8608-eafa29be8c59"). InnerVolumeSpecName "kube-api-access-8tlwz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 4 23:48:16.214793 kubelet[3129]: I0904 23:48:16.214714 3129 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a9f268b8-2829-476e-8608-eafa29be8c59-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "a9f268b8-2829-476e-8608-eafa29be8c59" (UID: "a9f268b8-2829-476e-8608-eafa29be8c59"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 4 23:48:16.301284 kubelet[3129]: I0904 23:48:16.301214 3129 reconciler_common.go:293] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/a9f268b8-2829-476e-8608-eafa29be8c59-cni-path\") on node \"ip-172-31-23-55\" DevicePath \"\"" Sep 4 23:48:16.301284 kubelet[3129]: I0904 23:48:16.301274 3129 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a9f268b8-2829-476e-8608-eafa29be8c59-lib-modules\") on node \"ip-172-31-23-55\" DevicePath \"\"" Sep 4 23:48:16.301490 kubelet[3129]: I0904 23:48:16.301298 3129 reconciler_common.go:293] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/a9f268b8-2829-476e-8608-eafa29be8c59-hostproc\") on node \"ip-172-31-23-55\" DevicePath \"\"" Sep 4 23:48:16.301490 kubelet[3129]: I0904 23:48:16.301321 3129 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/a9f268b8-2829-476e-8608-eafa29be8c59-host-proc-sys-kernel\") on node \"ip-172-31-23-55\" DevicePath \"\"" Sep 4 23:48:16.301490 kubelet[3129]: I0904 23:48:16.301348 3129 reconciler_common.go:293] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/a9f268b8-2829-476e-8608-eafa29be8c59-clustermesh-secrets\") on node \"ip-172-31-23-55\" DevicePath \"\"" Sep 4 23:48:16.301490 kubelet[3129]: I0904 23:48:16.301369 3129 reconciler_common.go:293] "Volume detached for volume 
\"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/a9f268b8-2829-476e-8608-eafa29be8c59-host-proc-sys-net\") on node \"ip-172-31-23-55\" DevicePath \"\"" Sep 4 23:48:16.301490 kubelet[3129]: I0904 23:48:16.301390 3129 reconciler_common.go:293] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/a9f268b8-2829-476e-8608-eafa29be8c59-cilium-run\") on node \"ip-172-31-23-55\" DevicePath \"\"" Sep 4 23:48:16.301490 kubelet[3129]: I0904 23:48:16.301410 3129 reconciler_common.go:293] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/a9f268b8-2829-476e-8608-eafa29be8c59-cilium-cgroup\") on node \"ip-172-31-23-55\" DevicePath \"\"" Sep 4 23:48:16.301490 kubelet[3129]: I0904 23:48:16.301430 3129 reconciler_common.go:293] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a9f268b8-2829-476e-8608-eafa29be8c59-xtables-lock\") on node \"ip-172-31-23-55\" DevicePath \"\"" Sep 4 23:48:16.301490 kubelet[3129]: I0904 23:48:16.301449 3129 reconciler_common.go:293] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a9f268b8-2829-476e-8608-eafa29be8c59-etc-cni-netd\") on node \"ip-172-31-23-55\" DevicePath \"\"" Sep 4 23:48:16.301981 kubelet[3129]: I0904 23:48:16.301468 3129 reconciler_common.go:293] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/a9f268b8-2829-476e-8608-eafa29be8c59-bpf-maps\") on node \"ip-172-31-23-55\" DevicePath \"\"" Sep 4 23:48:16.301981 kubelet[3129]: I0904 23:48:16.301488 3129 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a9f268b8-2829-476e-8608-eafa29be8c59-cilium-config-path\") on node \"ip-172-31-23-55\" DevicePath \"\"" Sep 4 23:48:16.301981 kubelet[3129]: I0904 23:48:16.301536 3129 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tlwz\" (UniqueName: 
\"kubernetes.io/projected/a9f268b8-2829-476e-8608-eafa29be8c59-kube-api-access-8tlwz\") on node \"ip-172-31-23-55\" DevicePath \"\"" Sep 4 23:48:16.301981 kubelet[3129]: I0904 23:48:16.301562 3129 reconciler_common.go:293] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/a9f268b8-2829-476e-8608-eafa29be8c59-hubble-tls\") on node \"ip-172-31-23-55\" DevicePath \"\"" Sep 4 23:48:16.541721 kubelet[3129]: I0904 23:48:16.541678 3129 scope.go:117] "RemoveContainer" containerID="bd0b4e2b5a77fc2ba219e3c7622f4e83faa6a55b81cb0ff24dd079be1320a8d4" Sep 4 23:48:16.550027 containerd[1864]: time="2025-09-04T23:48:16.549277826Z" level=info msg="RemoveContainer for \"bd0b4e2b5a77fc2ba219e3c7622f4e83faa6a55b81cb0ff24dd079be1320a8d4\"" Sep 4 23:48:16.563543 containerd[1864]: time="2025-09-04T23:48:16.562064270Z" level=info msg="RemoveContainer for \"bd0b4e2b5a77fc2ba219e3c7622f4e83faa6a55b81cb0ff24dd079be1320a8d4\" returns successfully" Sep 4 23:48:16.566274 kubelet[3129]: I0904 23:48:16.566227 3129 scope.go:117] "RemoveContainer" containerID="48a22fb8ed8d4ba48db6d42da0040a320ac3ad775c091f1be25c2bc17d376917" Sep 4 23:48:16.570647 systemd[1]: Removed slice kubepods-burstable-poda9f268b8_2829_476e_8608_eafa29be8c59.slice - libcontainer container kubepods-burstable-poda9f268b8_2829_476e_8608_eafa29be8c59.slice. Sep 4 23:48:16.570931 containerd[1864]: time="2025-09-04T23:48:16.570869654Z" level=info msg="RemoveContainer for \"48a22fb8ed8d4ba48db6d42da0040a320ac3ad775c091f1be25c2bc17d376917\"" Sep 4 23:48:16.571281 systemd[1]: kubepods-burstable-poda9f268b8_2829_476e_8608_eafa29be8c59.slice: Consumed 14.700s CPU time, 127M memory peak, 128K read from disk, 12.9M written to disk. Sep 4 23:48:16.578062 systemd[1]: Removed slice kubepods-besteffort-pod332cccdd_0b37_4087_93f8_562afdfefb68.slice - libcontainer container kubepods-besteffort-pod332cccdd_0b37_4087_93f8_562afdfefb68.slice. 
Sep 4 23:48:16.584248 containerd[1864]: time="2025-09-04T23:48:16.584020550Z" level=info msg="RemoveContainer for \"48a22fb8ed8d4ba48db6d42da0040a320ac3ad775c091f1be25c2bc17d376917\" returns successfully"
Sep 4 23:48:16.586980 kubelet[3129]: I0904 23:48:16.586220 3129 scope.go:117] "RemoveContainer" containerID="da16c75466bb9e21e82ca85b2d9cb5cca132dc8bde63911c9259650c2072d111"
Sep 4 23:48:16.592917 containerd[1864]: time="2025-09-04T23:48:16.592371987Z" level=info msg="RemoveContainer for \"da16c75466bb9e21e82ca85b2d9cb5cca132dc8bde63911c9259650c2072d111\""
Sep 4 23:48:16.601501 containerd[1864]: time="2025-09-04T23:48:16.601448811Z" level=info msg="RemoveContainer for \"da16c75466bb9e21e82ca85b2d9cb5cca132dc8bde63911c9259650c2072d111\" returns successfully"
Sep 4 23:48:16.602203 kubelet[3129]: I0904 23:48:16.602163 3129 scope.go:117] "RemoveContainer" containerID="0908aea0c2cdc3ec915188bb5005a1aa86cfdd4dc4a58e0773533dac9428fae5"
Sep 4 23:48:16.607997 containerd[1864]: time="2025-09-04T23:48:16.607594599Z" level=info msg="RemoveContainer for \"0908aea0c2cdc3ec915188bb5005a1aa86cfdd4dc4a58e0773533dac9428fae5\""
Sep 4 23:48:16.621696 containerd[1864]: time="2025-09-04T23:48:16.621128955Z" level=info msg="RemoveContainer for \"0908aea0c2cdc3ec915188bb5005a1aa86cfdd4dc4a58e0773533dac9428fae5\" returns successfully"
Sep 4 23:48:16.623742 kubelet[3129]: I0904 23:48:16.621550 3129 scope.go:117] "RemoveContainer" containerID="caeb4c72a9d02f611f6f39e9ec48bbb8f4fa55f5489a85beaf8af07ab66ae81c"
Sep 4 23:48:16.626304 containerd[1864]: time="2025-09-04T23:48:16.625616115Z" level=info msg="RemoveContainer for \"caeb4c72a9d02f611f6f39e9ec48bbb8f4fa55f5489a85beaf8af07ab66ae81c\""
Sep 4 23:48:16.633605 containerd[1864]: time="2025-09-04T23:48:16.633477879Z" level=info msg="RemoveContainer for \"caeb4c72a9d02f611f6f39e9ec48bbb8f4fa55f5489a85beaf8af07ab66ae81c\" returns successfully"
Sep 4 23:48:16.634974 kubelet[3129]: I0904 23:48:16.634754 3129 scope.go:117] "RemoveContainer" containerID="bd0b4e2b5a77fc2ba219e3c7622f4e83faa6a55b81cb0ff24dd079be1320a8d4"
Sep 4 23:48:16.635582 containerd[1864]: time="2025-09-04T23:48:16.635225067Z" level=error msg="ContainerStatus for \"bd0b4e2b5a77fc2ba219e3c7622f4e83faa6a55b81cb0ff24dd079be1320a8d4\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"bd0b4e2b5a77fc2ba219e3c7622f4e83faa6a55b81cb0ff24dd079be1320a8d4\": not found"
Sep 4 23:48:16.636299 kubelet[3129]: E0904 23:48:16.636056 3129 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"bd0b4e2b5a77fc2ba219e3c7622f4e83faa6a55b81cb0ff24dd079be1320a8d4\": not found" containerID="bd0b4e2b5a77fc2ba219e3c7622f4e83faa6a55b81cb0ff24dd079be1320a8d4"
Sep 4 23:48:16.636755 kubelet[3129]: I0904 23:48:16.636151 3129 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"bd0b4e2b5a77fc2ba219e3c7622f4e83faa6a55b81cb0ff24dd079be1320a8d4"} err="failed to get container status \"bd0b4e2b5a77fc2ba219e3c7622f4e83faa6a55b81cb0ff24dd079be1320a8d4\": rpc error: code = NotFound desc = an error occurred when try to find container \"bd0b4e2b5a77fc2ba219e3c7622f4e83faa6a55b81cb0ff24dd079be1320a8d4\": not found"
Sep 4 23:48:16.636755 kubelet[3129]: I0904 23:48:16.636448 3129 scope.go:117] "RemoveContainer" containerID="48a22fb8ed8d4ba48db6d42da0040a320ac3ad775c091f1be25c2bc17d376917"
Sep 4 23:48:16.639090 containerd[1864]: time="2025-09-04T23:48:16.638755743Z" level=error msg="ContainerStatus for \"48a22fb8ed8d4ba48db6d42da0040a320ac3ad775c091f1be25c2bc17d376917\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"48a22fb8ed8d4ba48db6d42da0040a320ac3ad775c091f1be25c2bc17d376917\": not found"
Sep 4 23:48:16.640058 kubelet[3129]: E0904 23:48:16.639749 3129 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"48a22fb8ed8d4ba48db6d42da0040a320ac3ad775c091f1be25c2bc17d376917\": not found" containerID="48a22fb8ed8d4ba48db6d42da0040a320ac3ad775c091f1be25c2bc17d376917"
Sep 4 23:48:16.640058 kubelet[3129]: I0904 23:48:16.639990 3129 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"48a22fb8ed8d4ba48db6d42da0040a320ac3ad775c091f1be25c2bc17d376917"} err="failed to get container status \"48a22fb8ed8d4ba48db6d42da0040a320ac3ad775c091f1be25c2bc17d376917\": rpc error: code = NotFound desc = an error occurred when try to find container \"48a22fb8ed8d4ba48db6d42da0040a320ac3ad775c091f1be25c2bc17d376917\": not found"
Sep 4 23:48:16.640364 kubelet[3129]: I0904 23:48:16.640030 3129 scope.go:117] "RemoveContainer" containerID="da16c75466bb9e21e82ca85b2d9cb5cca132dc8bde63911c9259650c2072d111"
Sep 4 23:48:16.641003 containerd[1864]: time="2025-09-04T23:48:16.640949055Z" level=error msg="ContainerStatus for \"da16c75466bb9e21e82ca85b2d9cb5cca132dc8bde63911c9259650c2072d111\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"da16c75466bb9e21e82ca85b2d9cb5cca132dc8bde63911c9259650c2072d111\": not found"
Sep 4 23:48:16.644531 kubelet[3129]: E0904 23:48:16.642203 3129 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"da16c75466bb9e21e82ca85b2d9cb5cca132dc8bde63911c9259650c2072d111\": not found" containerID="da16c75466bb9e21e82ca85b2d9cb5cca132dc8bde63911c9259650c2072d111"
Sep 4 23:48:16.644531 kubelet[3129]: I0904 23:48:16.642262 3129 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"da16c75466bb9e21e82ca85b2d9cb5cca132dc8bde63911c9259650c2072d111"} err="failed to get container status \"da16c75466bb9e21e82ca85b2d9cb5cca132dc8bde63911c9259650c2072d111\": rpc error: code = NotFound desc = an error occurred when try to find container \"da16c75466bb9e21e82ca85b2d9cb5cca132dc8bde63911c9259650c2072d111\": not found"
Sep 4 23:48:16.644531 kubelet[3129]: I0904 23:48:16.642300 3129 scope.go:117] "RemoveContainer" containerID="0908aea0c2cdc3ec915188bb5005a1aa86cfdd4dc4a58e0773533dac9428fae5"
Sep 4 23:48:16.644531 kubelet[3129]: E0904 23:48:16.642843 3129 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"0908aea0c2cdc3ec915188bb5005a1aa86cfdd4dc4a58e0773533dac9428fae5\": not found" containerID="0908aea0c2cdc3ec915188bb5005a1aa86cfdd4dc4a58e0773533dac9428fae5"
Sep 4 23:48:16.644531 kubelet[3129]: I0904 23:48:16.642880 3129 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"0908aea0c2cdc3ec915188bb5005a1aa86cfdd4dc4a58e0773533dac9428fae5"} err="failed to get container status \"0908aea0c2cdc3ec915188bb5005a1aa86cfdd4dc4a58e0773533dac9428fae5\": rpc error: code = NotFound desc = an error occurred when try to find container \"0908aea0c2cdc3ec915188bb5005a1aa86cfdd4dc4a58e0773533dac9428fae5\": not found"
Sep 4 23:48:16.644531 kubelet[3129]: I0904 23:48:16.642908 3129 scope.go:117] "RemoveContainer" containerID="caeb4c72a9d02f611f6f39e9ec48bbb8f4fa55f5489a85beaf8af07ab66ae81c"
Sep 4 23:48:16.644955 containerd[1864]: time="2025-09-04T23:48:16.642639987Z" level=error msg="ContainerStatus for \"0908aea0c2cdc3ec915188bb5005a1aa86cfdd4dc4a58e0773533dac9428fae5\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"0908aea0c2cdc3ec915188bb5005a1aa86cfdd4dc4a58e0773533dac9428fae5\": not found"
Sep 4 23:48:16.646293 containerd[1864]: time="2025-09-04T23:48:16.645594435Z" level=error msg="ContainerStatus for \"caeb4c72a9d02f611f6f39e9ec48bbb8f4fa55f5489a85beaf8af07ab66ae81c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"caeb4c72a9d02f611f6f39e9ec48bbb8f4fa55f5489a85beaf8af07ab66ae81c\": not found"
Sep 4 23:48:16.646673 kubelet[3129]: E0904 23:48:16.645844 3129 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"caeb4c72a9d02f611f6f39e9ec48bbb8f4fa55f5489a85beaf8af07ab66ae81c\": not found" containerID="caeb4c72a9d02f611f6f39e9ec48bbb8f4fa55f5489a85beaf8af07ab66ae81c"
Sep 4 23:48:16.646673 kubelet[3129]: I0904 23:48:16.645892 3129 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"caeb4c72a9d02f611f6f39e9ec48bbb8f4fa55f5489a85beaf8af07ab66ae81c"} err="failed to get container status \"caeb4c72a9d02f611f6f39e9ec48bbb8f4fa55f5489a85beaf8af07ab66ae81c\": rpc error: code = NotFound desc = an error occurred when try to find container \"caeb4c72a9d02f611f6f39e9ec48bbb8f4fa55f5489a85beaf8af07ab66ae81c\": not found"
Sep 4 23:48:16.646673 kubelet[3129]: I0904 23:48:16.645927 3129 scope.go:117] "RemoveContainer" containerID="99b770eeaabba095c15d32514e2b5e0e9e3bf8c68b6b2337917374dcdfdd94b9"
Sep 4 23:48:16.650554 containerd[1864]: time="2025-09-04T23:48:16.650044215Z" level=info msg="RemoveContainer for \"99b770eeaabba095c15d32514e2b5e0e9e3bf8c68b6b2337917374dcdfdd94b9\""
Sep 4 23:48:16.656729 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9a4711f2b1c3172e31bf2ccaaac33950600df6ce9e4ee0a53643bce7ed315cec-rootfs.mount: Deactivated successfully.
Sep 4 23:48:16.656926 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bfe270524622bf5b8dd45eab0048db36d1b325b3e40bcb76699dcdb4d0319a52-rootfs.mount: Deactivated successfully.
Sep 4 23:48:16.657065 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-bfe270524622bf5b8dd45eab0048db36d1b325b3e40bcb76699dcdb4d0319a52-shm.mount: Deactivated successfully.
Sep 4 23:48:16.657249 systemd[1]: var-lib-kubelet-pods-332cccdd\x2d0b37\x2d4087\x2d93f8\x2d562afdfefb68-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dx5vqh.mount: Deactivated successfully.
Sep 4 23:48:16.657408 systemd[1]: var-lib-kubelet-pods-a9f268b8\x2d2829\x2d476e\x2d8608\x2deafa29be8c59-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d8tlwz.mount: Deactivated successfully.
Sep 4 23:48:16.657615 systemd[1]: var-lib-kubelet-pods-a9f268b8\x2d2829\x2d476e\x2d8608\x2deafa29be8c59-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Sep 4 23:48:16.657754 systemd[1]: var-lib-kubelet-pods-a9f268b8\x2d2829\x2d476e\x2d8608\x2deafa29be8c59-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Sep 4 23:48:16.662880 containerd[1864]: time="2025-09-04T23:48:16.662709147Z" level=info msg="RemoveContainer for \"99b770eeaabba095c15d32514e2b5e0e9e3bf8c68b6b2337917374dcdfdd94b9\" returns successfully"
Sep 4 23:48:16.664249 kubelet[3129]: I0904 23:48:16.663762 3129 scope.go:117] "RemoveContainer" containerID="99b770eeaabba095c15d32514e2b5e0e9e3bf8c68b6b2337917374dcdfdd94b9"
Sep 4 23:48:16.665843 containerd[1864]: time="2025-09-04T23:48:16.664888023Z" level=error msg="ContainerStatus for \"99b770eeaabba095c15d32514e2b5e0e9e3bf8c68b6b2337917374dcdfdd94b9\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"99b770eeaabba095c15d32514e2b5e0e9e3bf8c68b6b2337917374dcdfdd94b9\": not found"
Sep 4 23:48:16.666017 kubelet[3129]: E0904 23:48:16.665428 3129 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"99b770eeaabba095c15d32514e2b5e0e9e3bf8c68b6b2337917374dcdfdd94b9\": not found" containerID="99b770eeaabba095c15d32514e2b5e0e9e3bf8c68b6b2337917374dcdfdd94b9"
Sep 4 23:48:16.666017 kubelet[3129]: I0904 23:48:16.665477 3129 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"99b770eeaabba095c15d32514e2b5e0e9e3bf8c68b6b2337917374dcdfdd94b9"} err="failed to get container status \"99b770eeaabba095c15d32514e2b5e0e9e3bf8c68b6b2337917374dcdfdd94b9\": rpc error: code = NotFound desc = an error occurred when try to find container \"99b770eeaabba095c15d32514e2b5e0e9e3bf8c68b6b2337917374dcdfdd94b9\": not found"
Sep 4 23:48:17.557186 sshd[4961]: Connection closed by 139.178.89.65 port 58838
Sep 4 23:48:17.558826 sshd-session[4959]: pam_unix(sshd:session): session closed for user core
Sep 4 23:48:17.573089 systemd[1]: sshd@25-172.31.23.55:22-139.178.89.65:58838.service: Deactivated successfully.
Sep 4 23:48:17.577456 systemd[1]: session-26.scope: Deactivated successfully.
Sep 4 23:48:17.578039 systemd[1]: session-26.scope: Consumed 4.226s CPU time, 25.7M memory peak.
Sep 4 23:48:17.579794 systemd-logind[1847]: Session 26 logged out. Waiting for processes to exit.
Sep 4 23:48:17.601988 systemd[1]: Started sshd@26-172.31.23.55:22-139.178.89.65:58854.service - OpenSSH per-connection server daemon (139.178.89.65:58854).
Sep 4 23:48:17.603947 systemd-logind[1847]: Removed session 26.
Sep 4 23:48:17.789305 sshd[5123]: Accepted publickey for core from 139.178.89.65 port 58854 ssh2: RSA SHA256:wIWgASbTOEOY+RJ7C8r7IgT0+0t1r6NIK7DD/aRqsNo
Sep 4 23:48:17.791841 sshd-session[5123]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 23:48:17.799685 systemd-logind[1847]: New session 27 of user core.
Sep 4 23:48:17.809798 systemd[1]: Started session-27.scope - Session 27 of User core.
Sep 4 23:48:17.958905 kubelet[3129]: I0904 23:48:17.958773 3129 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="332cccdd-0b37-4087-93f8-562afdfefb68" path="/var/lib/kubelet/pods/332cccdd-0b37-4087-93f8-562afdfefb68/volumes"
Sep 4 23:48:17.963715 kubelet[3129]: I0904 23:48:17.963067 3129 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a9f268b8-2829-476e-8608-eafa29be8c59" path="/var/lib/kubelet/pods/a9f268b8-2829-476e-8608-eafa29be8c59/volumes"
Sep 4 23:48:18.032055 ntpd[1838]: Deleting interface #11 lxc_health, fe80::c815:58ff:fe95:2f2%8#123, interface stats: received=0, sent=0, dropped=0, active_time=81 secs
Sep 4 23:48:18.032969 ntpd[1838]: 4 Sep 23:48:18 ntpd[1838]: Deleting interface #11 lxc_health, fe80::c815:58ff:fe95:2f2%8#123, interface stats: received=0, sent=0, dropped=0, active_time=81 secs
Sep 4 23:48:19.006849 sshd[5126]: Connection closed by 139.178.89.65 port 58854
Sep 4 23:48:19.005832 sshd-session[5123]: pam_unix(sshd:session): session closed for user core
Sep 4 23:48:19.016951 systemd[1]: sshd@26-172.31.23.55:22-139.178.89.65:58854.service: Deactivated successfully.
Sep 4 23:48:19.024886 systemd[1]: session-27.scope: Deactivated successfully.
Sep 4 23:48:19.029030 systemd-logind[1847]: Session 27 logged out. Waiting for processes to exit.
Sep 4 23:48:19.034606 kubelet[3129]: E0904 23:48:19.033498 3129 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="a9f268b8-2829-476e-8608-eafa29be8c59" containerName="mount-cgroup"
Sep 4 23:48:19.035307 kubelet[3129]: E0904 23:48:19.034701 3129 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="332cccdd-0b37-4087-93f8-562afdfefb68" containerName="cilium-operator"
Sep 4 23:48:19.035449 systemd-logind[1847]: Removed session 27.
Sep 4 23:48:19.037402 kubelet[3129]: E0904 23:48:19.034727 3129 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="a9f268b8-2829-476e-8608-eafa29be8c59" containerName="cilium-agent"
Sep 4 23:48:19.037402 kubelet[3129]: E0904 23:48:19.036189 3129 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="a9f268b8-2829-476e-8608-eafa29be8c59" containerName="apply-sysctl-overwrites"
Sep 4 23:48:19.037402 kubelet[3129]: E0904 23:48:19.036576 3129 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="a9f268b8-2829-476e-8608-eafa29be8c59" containerName="mount-bpf-fs"
Sep 4 23:48:19.037402 kubelet[3129]: E0904 23:48:19.036607 3129 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="a9f268b8-2829-476e-8608-eafa29be8c59" containerName="clean-cilium-state"
Sep 4 23:48:19.037402 kubelet[3129]: I0904 23:48:19.036697 3129 memory_manager.go:354] "RemoveStaleState removing state" podUID="a9f268b8-2829-476e-8608-eafa29be8c59" containerName="cilium-agent"
Sep 4 23:48:19.037402 kubelet[3129]: I0904 23:48:19.036716 3129 memory_manager.go:354] "RemoveStaleState removing state" podUID="332cccdd-0b37-4087-93f8-562afdfefb68" containerName="cilium-operator"
Sep 4 23:48:19.074077 systemd[1]: Started sshd@27-172.31.23.55:22-139.178.89.65:58870.service - OpenSSH per-connection server daemon (139.178.89.65:58870).
Sep 4 23:48:19.097632 systemd[1]: Created slice kubepods-burstable-podfc81a2dc_d961_4a13_9211_4f77e99e392f.slice - libcontainer container kubepods-burstable-podfc81a2dc_d961_4a13_9211_4f77e99e392f.slice.
Sep 4 23:48:19.120872 kubelet[3129]: I0904 23:48:19.120824 3129 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/fc81a2dc-d961-4a13-9211-4f77e99e392f-bpf-maps\") pod \"cilium-hq726\" (UID: \"fc81a2dc-d961-4a13-9211-4f77e99e392f\") " pod="kube-system/cilium-hq726"
Sep 4 23:48:19.122757 kubelet[3129]: I0904 23:48:19.121672 3129 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/fc81a2dc-d961-4a13-9211-4f77e99e392f-hostproc\") pod \"cilium-hq726\" (UID: \"fc81a2dc-d961-4a13-9211-4f77e99e392f\") " pod="kube-system/cilium-hq726"
Sep 4 23:48:19.122757 kubelet[3129]: I0904 23:48:19.121733 3129 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8xpvr\" (UniqueName: \"kubernetes.io/projected/fc81a2dc-d961-4a13-9211-4f77e99e392f-kube-api-access-8xpvr\") pod \"cilium-hq726\" (UID: \"fc81a2dc-d961-4a13-9211-4f77e99e392f\") " pod="kube-system/cilium-hq726"
Sep 4 23:48:19.122757 kubelet[3129]: I0904 23:48:19.121774 3129 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/fc81a2dc-d961-4a13-9211-4f77e99e392f-cilium-ipsec-secrets\") pod \"cilium-hq726\" (UID: \"fc81a2dc-d961-4a13-9211-4f77e99e392f\") " pod="kube-system/cilium-hq726"
Sep 4 23:48:19.122757 kubelet[3129]: I0904 23:48:19.121813 3129 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/fc81a2dc-d961-4a13-9211-4f77e99e392f-etc-cni-netd\") pod \"cilium-hq726\" (UID: \"fc81a2dc-d961-4a13-9211-4f77e99e392f\") " pod="kube-system/cilium-hq726"
Sep 4 23:48:19.122757 kubelet[3129]: I0904 23:48:19.121854 3129 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/fc81a2dc-d961-4a13-9211-4f77e99e392f-host-proc-sys-kernel\") pod \"cilium-hq726\" (UID: \"fc81a2dc-d961-4a13-9211-4f77e99e392f\") " pod="kube-system/cilium-hq726"
Sep 4 23:48:19.122757 kubelet[3129]: I0904 23:48:19.121889 3129 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/fc81a2dc-d961-4a13-9211-4f77e99e392f-cilium-run\") pod \"cilium-hq726\" (UID: \"fc81a2dc-d961-4a13-9211-4f77e99e392f\") " pod="kube-system/cilium-hq726"
Sep 4 23:48:19.123210 kubelet[3129]: I0904 23:48:19.121922 3129 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/fc81a2dc-d961-4a13-9211-4f77e99e392f-cilium-cgroup\") pod \"cilium-hq726\" (UID: \"fc81a2dc-d961-4a13-9211-4f77e99e392f\") " pod="kube-system/cilium-hq726"
Sep 4 23:48:19.123210 kubelet[3129]: I0904 23:48:19.121959 3129 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/fc81a2dc-d961-4a13-9211-4f77e99e392f-cilium-config-path\") pod \"cilium-hq726\" (UID: \"fc81a2dc-d961-4a13-9211-4f77e99e392f\") " pod="kube-system/cilium-hq726"
Sep 4 23:48:19.123210 kubelet[3129]: I0904 23:48:19.121996 3129 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/fc81a2dc-d961-4a13-9211-4f77e99e392f-hubble-tls\") pod \"cilium-hq726\" (UID: \"fc81a2dc-d961-4a13-9211-4f77e99e392f\") " pod="kube-system/cilium-hq726"
Sep 4 23:48:19.123210 kubelet[3129]: I0904 23:48:19.122029 3129 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/fc81a2dc-d961-4a13-9211-4f77e99e392f-clustermesh-secrets\") pod \"cilium-hq726\" (UID: \"fc81a2dc-d961-4a13-9211-4f77e99e392f\") " pod="kube-system/cilium-hq726"
Sep 4 23:48:19.123210 kubelet[3129]: I0904 23:48:19.122067 3129 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/fc81a2dc-d961-4a13-9211-4f77e99e392f-cni-path\") pod \"cilium-hq726\" (UID: \"fc81a2dc-d961-4a13-9211-4f77e99e392f\") " pod="kube-system/cilium-hq726"
Sep 4 23:48:19.123210 kubelet[3129]: I0904 23:48:19.122110 3129 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/fc81a2dc-d961-4a13-9211-4f77e99e392f-host-proc-sys-net\") pod \"cilium-hq726\" (UID: \"fc81a2dc-d961-4a13-9211-4f77e99e392f\") " pod="kube-system/cilium-hq726"
Sep 4 23:48:19.123546 kubelet[3129]: I0904 23:48:19.122145 3129 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fc81a2dc-d961-4a13-9211-4f77e99e392f-lib-modules\") pod \"cilium-hq726\" (UID: \"fc81a2dc-d961-4a13-9211-4f77e99e392f\") " pod="kube-system/cilium-hq726"
Sep 4 23:48:19.123546 kubelet[3129]: I0904 23:48:19.122178 3129 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fc81a2dc-d961-4a13-9211-4f77e99e392f-xtables-lock\") pod \"cilium-hq726\" (UID: \"fc81a2dc-d961-4a13-9211-4f77e99e392f\") " pod="kube-system/cilium-hq726"
Sep 4 23:48:19.193831 kubelet[3129]: E0904 23:48:19.193739 3129 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Sep 4 23:48:19.339684 sshd[5136]: Accepted publickey for core from 139.178.89.65 port 58870 ssh2: RSA SHA256:wIWgASbTOEOY+RJ7C8r7IgT0+0t1r6NIK7DD/aRqsNo
Sep 4 23:48:19.343082 sshd-session[5136]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 23:48:19.353907 systemd-logind[1847]: New session 28 of user core.
Sep 4 23:48:19.362918 systemd[1]: Started session-28.scope - Session 28 of User core.
Sep 4 23:48:19.410567 containerd[1864]: time="2025-09-04T23:48:19.410373100Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-hq726,Uid:fc81a2dc-d961-4a13-9211-4f77e99e392f,Namespace:kube-system,Attempt:0,}"
Sep 4 23:48:19.467351 containerd[1864]: time="2025-09-04T23:48:19.467081993Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 4 23:48:19.467351 containerd[1864]: time="2025-09-04T23:48:19.467238257Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 4 23:48:19.467351 containerd[1864]: time="2025-09-04T23:48:19.467277713Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 4 23:48:19.467781 containerd[1864]: time="2025-09-04T23:48:19.467440061Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 4 23:48:19.492715 sshd[5142]: Connection closed by 139.178.89.65 port 58870
Sep 4 23:48:19.493609 sshd-session[5136]: pam_unix(sshd:session): session closed for user core
Sep 4 23:48:19.499307 systemd[1]: Started cri-containerd-44cb4d3ef856ab7b6c657939075fe821e84e4a0a3ac7cc3a23fef537678e4b63.scope - libcontainer container 44cb4d3ef856ab7b6c657939075fe821e84e4a0a3ac7cc3a23fef537678e4b63.
Sep 4 23:48:19.506403 systemd[1]: sshd@27-172.31.23.55:22-139.178.89.65:58870.service: Deactivated successfully.
Sep 4 23:48:19.514926 systemd[1]: session-28.scope: Deactivated successfully.
Sep 4 23:48:19.519451 systemd-logind[1847]: Session 28 logged out. Waiting for processes to exit.
Sep 4 23:48:19.541039 systemd[1]: Started sshd@28-172.31.23.55:22-139.178.89.65:58882.service - OpenSSH per-connection server daemon (139.178.89.65:58882).
Sep 4 23:48:19.545337 systemd-logind[1847]: Removed session 28.
Sep 4 23:48:19.589410 containerd[1864]: time="2025-09-04T23:48:19.589187393Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-hq726,Uid:fc81a2dc-d961-4a13-9211-4f77e99e392f,Namespace:kube-system,Attempt:0,} returns sandbox id \"44cb4d3ef856ab7b6c657939075fe821e84e4a0a3ac7cc3a23fef537678e4b63\""
Sep 4 23:48:19.602196 containerd[1864]: time="2025-09-04T23:48:19.600577721Z" level=info msg="CreateContainer within sandbox \"44cb4d3ef856ab7b6c657939075fe821e84e4a0a3ac7cc3a23fef537678e4b63\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Sep 4 23:48:19.630503 containerd[1864]: time="2025-09-04T23:48:19.630417882Z" level=info msg="CreateContainer within sandbox \"44cb4d3ef856ab7b6c657939075fe821e84e4a0a3ac7cc3a23fef537678e4b63\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"e3a5030eb669f403466ffa1a04b79b0ce2c44434c711ef489151d593dc4980e5\""
Sep 4 23:48:19.632364 containerd[1864]: time="2025-09-04T23:48:19.631988694Z" level=info msg="StartContainer for \"e3a5030eb669f403466ffa1a04b79b0ce2c44434c711ef489151d593dc4980e5\""
Sep 4 23:48:19.686852 systemd[1]: Started cri-containerd-e3a5030eb669f403466ffa1a04b79b0ce2c44434c711ef489151d593dc4980e5.scope - libcontainer container e3a5030eb669f403466ffa1a04b79b0ce2c44434c711ef489151d593dc4980e5.
Sep 4 23:48:19.744892 containerd[1864]: time="2025-09-04T23:48:19.744839694Z" level=info msg="StartContainer for \"e3a5030eb669f403466ffa1a04b79b0ce2c44434c711ef489151d593dc4980e5\" returns successfully"
Sep 4 23:48:19.763348 systemd[1]: cri-containerd-e3a5030eb669f403466ffa1a04b79b0ce2c44434c711ef489151d593dc4980e5.scope: Deactivated successfully.
Sep 4 23:48:19.770692 sshd[5183]: Accepted publickey for core from 139.178.89.65 port 58882 ssh2: RSA SHA256:wIWgASbTOEOY+RJ7C8r7IgT0+0t1r6NIK7DD/aRqsNo
Sep 4 23:48:19.778058 sshd-session[5183]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 23:48:19.793396 systemd-logind[1847]: New session 29 of user core.
Sep 4 23:48:19.802302 systemd[1]: Started session-29.scope - Session 29 of User core.
Sep 4 23:48:19.847978 containerd[1864]: time="2025-09-04T23:48:19.847874155Z" level=info msg="shim disconnected" id=e3a5030eb669f403466ffa1a04b79b0ce2c44434c711ef489151d593dc4980e5 namespace=k8s.io
Sep 4 23:48:19.848303 containerd[1864]: time="2025-09-04T23:48:19.848275687Z" level=warning msg="cleaning up after shim disconnected" id=e3a5030eb669f403466ffa1a04b79b0ce2c44434c711ef489151d593dc4980e5 namespace=k8s.io
Sep 4 23:48:19.848422 containerd[1864]: time="2025-09-04T23:48:19.848397223Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 4 23:48:20.596353 containerd[1864]: time="2025-09-04T23:48:20.595943778Z" level=info msg="CreateContainer within sandbox \"44cb4d3ef856ab7b6c657939075fe821e84e4a0a3ac7cc3a23fef537678e4b63\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Sep 4 23:48:20.624829 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3288925259.mount: Deactivated successfully.
Sep 4 23:48:20.630603 containerd[1864]: time="2025-09-04T23:48:20.630502555Z" level=info msg="CreateContainer within sandbox \"44cb4d3ef856ab7b6c657939075fe821e84e4a0a3ac7cc3a23fef537678e4b63\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"150ab3dd284c55229d5095d44b6eb18852d0c4a93c2a42e98502a5bbdc73704d\""
Sep 4 23:48:20.635591 containerd[1864]: time="2025-09-04T23:48:20.633530359Z" level=info msg="StartContainer for \"150ab3dd284c55229d5095d44b6eb18852d0c4a93c2a42e98502a5bbdc73704d\""
Sep 4 23:48:20.703813 systemd[1]: Started cri-containerd-150ab3dd284c55229d5095d44b6eb18852d0c4a93c2a42e98502a5bbdc73704d.scope - libcontainer container 150ab3dd284c55229d5095d44b6eb18852d0c4a93c2a42e98502a5bbdc73704d.
Sep 4 23:48:20.759687 containerd[1864]: time="2025-09-04T23:48:20.759614011Z" level=info msg="StartContainer for \"150ab3dd284c55229d5095d44b6eb18852d0c4a93c2a42e98502a5bbdc73704d\" returns successfully"
Sep 4 23:48:20.777919 systemd[1]: cri-containerd-150ab3dd284c55229d5095d44b6eb18852d0c4a93c2a42e98502a5bbdc73704d.scope: Deactivated successfully.
Sep 4 23:48:20.838421 containerd[1864]: time="2025-09-04T23:48:20.838151120Z" level=info msg="shim disconnected" id=150ab3dd284c55229d5095d44b6eb18852d0c4a93c2a42e98502a5bbdc73704d namespace=k8s.io
Sep 4 23:48:20.838982 containerd[1864]: time="2025-09-04T23:48:20.838389956Z" level=warning msg="cleaning up after shim disconnected" id=150ab3dd284c55229d5095d44b6eb18852d0c4a93c2a42e98502a5bbdc73704d namespace=k8s.io
Sep 4 23:48:20.838982 containerd[1864]: time="2025-09-04T23:48:20.838744520Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 4 23:48:21.237938 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-150ab3dd284c55229d5095d44b6eb18852d0c4a93c2a42e98502a5bbdc73704d-rootfs.mount: Deactivated successfully.
Sep 4 23:48:21.596331 containerd[1864]: time="2025-09-04T23:48:21.595975399Z" level=info msg="CreateContainer within sandbox \"44cb4d3ef856ab7b6c657939075fe821e84e4a0a3ac7cc3a23fef537678e4b63\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Sep 4 23:48:21.630621 containerd[1864]: time="2025-09-04T23:48:21.630555932Z" level=info msg="CreateContainer within sandbox \"44cb4d3ef856ab7b6c657939075fe821e84e4a0a3ac7cc3a23fef537678e4b63\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"16ecb0252361c82ccaba385d664b12f1ab8119454e04b292a58d9c8c442d95ac\""
Sep 4 23:48:21.633869 containerd[1864]: time="2025-09-04T23:48:21.633803516Z" level=info msg="StartContainer for \"16ecb0252361c82ccaba385d664b12f1ab8119454e04b292a58d9c8c442d95ac\""
Sep 4 23:48:21.705820 systemd[1]: Started cri-containerd-16ecb0252361c82ccaba385d664b12f1ab8119454e04b292a58d9c8c442d95ac.scope - libcontainer container 16ecb0252361c82ccaba385d664b12f1ab8119454e04b292a58d9c8c442d95ac.
Sep 4 23:48:21.767762 containerd[1864]: time="2025-09-04T23:48:21.767683472Z" level=info msg="StartContainer for \"16ecb0252361c82ccaba385d664b12f1ab8119454e04b292a58d9c8c442d95ac\" returns successfully"
Sep 4 23:48:21.772763 systemd[1]: cri-containerd-16ecb0252361c82ccaba385d664b12f1ab8119454e04b292a58d9c8c442d95ac.scope: Deactivated successfully.
Sep 4 23:48:21.830587 containerd[1864]: time="2025-09-04T23:48:21.830210493Z" level=info msg="shim disconnected" id=16ecb0252361c82ccaba385d664b12f1ab8119454e04b292a58d9c8c442d95ac namespace=k8s.io
Sep 4 23:48:21.830587 containerd[1864]: time="2025-09-04T23:48:21.830284821Z" level=warning msg="cleaning up after shim disconnected" id=16ecb0252361c82ccaba385d664b12f1ab8119454e04b292a58d9c8c442d95ac namespace=k8s.io
Sep 4 23:48:21.830587 containerd[1864]: time="2025-09-04T23:48:21.830306397Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 4 23:48:22.238135 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-16ecb0252361c82ccaba385d664b12f1ab8119454e04b292a58d9c8c442d95ac-rootfs.mount: Deactivated successfully.
Sep 4 23:48:22.601441 containerd[1864]: time="2025-09-04T23:48:22.601267736Z" level=info msg="CreateContainer within sandbox \"44cb4d3ef856ab7b6c657939075fe821e84e4a0a3ac7cc3a23fef537678e4b63\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Sep 4 23:48:22.635158 containerd[1864]: time="2025-09-04T23:48:22.634819293Z" level=info msg="CreateContainer within sandbox \"44cb4d3ef856ab7b6c657939075fe821e84e4a0a3ac7cc3a23fef537678e4b63\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"cb322fb9d6f29a22582473734cd901d3c9baf0bb16ca414b892bc022fdc5eee0\""
Sep 4 23:48:22.637011 containerd[1864]: time="2025-09-04T23:48:22.636739737Z" level=info msg="StartContainer for \"cb322fb9d6f29a22582473734cd901d3c9baf0bb16ca414b892bc022fdc5eee0\""
Sep 4 23:48:22.710814 systemd[1]: Started cri-containerd-cb322fb9d6f29a22582473734cd901d3c9baf0bb16ca414b892bc022fdc5eee0.scope - libcontainer container cb322fb9d6f29a22582473734cd901d3c9baf0bb16ca414b892bc022fdc5eee0.
Sep 4 23:48:22.762094 systemd[1]: cri-containerd-cb322fb9d6f29a22582473734cd901d3c9baf0bb16ca414b892bc022fdc5eee0.scope: Deactivated successfully.
Sep 4 23:48:22.766806 containerd[1864]: time="2025-09-04T23:48:22.766655217Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfc81a2dc_d961_4a13_9211_4f77e99e392f.slice/cri-containerd-cb322fb9d6f29a22582473734cd901d3c9baf0bb16ca414b892bc022fdc5eee0.scope/memory.events\": no such file or directory"
Sep 4 23:48:22.769850 containerd[1864]: time="2025-09-04T23:48:22.769757985Z" level=info msg="StartContainer for \"cb322fb9d6f29a22582473734cd901d3c9baf0bb16ca414b892bc022fdc5eee0\" returns successfully"
Sep 4 23:48:22.812191 containerd[1864]: time="2025-09-04T23:48:22.812076201Z" level=info msg="shim disconnected" id=cb322fb9d6f29a22582473734cd901d3c9baf0bb16ca414b892bc022fdc5eee0 namespace=k8s.io
Sep 4 23:48:22.812191 containerd[1864]: time="2025-09-04T23:48:22.812181885Z" level=warning msg="cleaning up after shim disconnected" id=cb322fb9d6f29a22582473734cd901d3c9baf0bb16ca414b892bc022fdc5eee0 namespace=k8s.io
Sep 4 23:48:22.812585 containerd[1864]: time="2025-09-04T23:48:22.812203173Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 4 23:48:23.238107 systemd[1]: run-containerd-runc-k8s.io-cb322fb9d6f29a22582473734cd901d3c9baf0bb16ca414b892bc022fdc5eee0-runc.MAQdYn.mount: Deactivated successfully.
Sep 4 23:48:23.238302 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cb322fb9d6f29a22582473734cd901d3c9baf0bb16ca414b892bc022fdc5eee0-rootfs.mount: Deactivated successfully.
Sep 4 23:48:23.608423 containerd[1864]: time="2025-09-04T23:48:23.608266617Z" level=info msg="CreateContainer within sandbox \"44cb4d3ef856ab7b6c657939075fe821e84e4a0a3ac7cc3a23fef537678e4b63\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Sep 4 23:48:23.647253 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2100489623.mount: Deactivated successfully.
Sep 4 23:48:23.654233 containerd[1864]: time="2025-09-04T23:48:23.652343914Z" level=info msg="CreateContainer within sandbox \"44cb4d3ef856ab7b6c657939075fe821e84e4a0a3ac7cc3a23fef537678e4b63\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"beaa2a5af942b60fd5070fabec9f95325c08163f37f1c4dcf3fe1c8d47a351c6\""
Sep 4 23:48:23.656836 containerd[1864]: time="2025-09-04T23:48:23.656776270Z" level=info msg="StartContainer for \"beaa2a5af942b60fd5070fabec9f95325c08163f37f1c4dcf3fe1c8d47a351c6\""
Sep 4 23:48:23.718861 systemd[1]: Started cri-containerd-beaa2a5af942b60fd5070fabec9f95325c08163f37f1c4dcf3fe1c8d47a351c6.scope - libcontainer container beaa2a5af942b60fd5070fabec9f95325c08163f37f1c4dcf3fe1c8d47a351c6.
Sep 4 23:48:23.779620 containerd[1864]: time="2025-09-04T23:48:23.779547766Z" level=info msg="StartContainer for \"beaa2a5af942b60fd5070fabec9f95325c08163f37f1c4dcf3fe1c8d47a351c6\" returns successfully"
Sep 4 23:48:23.977429 containerd[1864]: time="2025-09-04T23:48:23.977380739Z" level=info msg="StopPodSandbox for \"bfe270524622bf5b8dd45eab0048db36d1b325b3e40bcb76699dcdb4d0319a52\""
Sep 4 23:48:23.977925 containerd[1864]: time="2025-09-04T23:48:23.977680151Z" level=info msg="TearDown network for sandbox \"bfe270524622bf5b8dd45eab0048db36d1b325b3e40bcb76699dcdb4d0319a52\" successfully"
Sep 4 23:48:23.977925 containerd[1864]: time="2025-09-04T23:48:23.977708327Z" level=info msg="StopPodSandbox for \"bfe270524622bf5b8dd45eab0048db36d1b325b3e40bcb76699dcdb4d0319a52\" returns successfully"
Sep 4 23:48:23.978807 containerd[1864]: time="2025-09-04T23:48:23.978578579Z" level=info msg="RemovePodSandbox for \"bfe270524622bf5b8dd45eab0048db36d1b325b3e40bcb76699dcdb4d0319a52\""
Sep 4 23:48:23.978807 containerd[1864]: time="2025-09-04T23:48:23.978629219Z" level=info msg="Forcibly stopping sandbox \"bfe270524622bf5b8dd45eab0048db36d1b325b3e40bcb76699dcdb4d0319a52\""
Sep 4 23:48:23.978807 containerd[1864]: time="2025-09-04T23:48:23.978725927Z" level=info msg="TearDown network for sandbox \"bfe270524622bf5b8dd45eab0048db36d1b325b3e40bcb76699dcdb4d0319a52\" successfully"
Sep 4 23:48:23.988318 containerd[1864]: time="2025-09-04T23:48:23.987822287Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"bfe270524622bf5b8dd45eab0048db36d1b325b3e40bcb76699dcdb4d0319a52\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Sep 4 23:48:23.988318 containerd[1864]: time="2025-09-04T23:48:23.987918275Z" level=info msg="RemovePodSandbox \"bfe270524622bf5b8dd45eab0048db36d1b325b3e40bcb76699dcdb4d0319a52\" returns successfully"
Sep 4 23:48:23.989697 containerd[1864]: time="2025-09-04T23:48:23.988978991Z" level=info msg="StopPodSandbox for \"9a4711f2b1c3172e31bf2ccaaac33950600df6ce9e4ee0a53643bce7ed315cec\""
Sep 4 23:48:23.989697 containerd[1864]: time="2025-09-04T23:48:23.989124191Z" level=info msg="TearDown network for sandbox \"9a4711f2b1c3172e31bf2ccaaac33950600df6ce9e4ee0a53643bce7ed315cec\" successfully"
Sep 4 23:48:23.989697 containerd[1864]: time="2025-09-04T23:48:23.989147951Z" level=info msg="StopPodSandbox for \"9a4711f2b1c3172e31bf2ccaaac33950600df6ce9e4ee0a53643bce7ed315cec\" returns successfully"
Sep 4 23:48:23.989697 containerd[1864]: time="2025-09-04T23:48:23.989661623Z" level=info msg="RemovePodSandbox for \"9a4711f2b1c3172e31bf2ccaaac33950600df6ce9e4ee0a53643bce7ed315cec\""
Sep 4 23:48:23.989697 containerd[1864]: time="2025-09-04T23:48:23.989699699Z" level=info msg="Forcibly stopping sandbox \"9a4711f2b1c3172e31bf2ccaaac33950600df6ce9e4ee0a53643bce7ed315cec\""
Sep 4 23:48:23.990031 containerd[1864]: time="2025-09-04T23:48:23.989791151Z" level=info msg="TearDown network for sandbox \"9a4711f2b1c3172e31bf2ccaaac33950600df6ce9e4ee0a53643bce7ed315cec\" successfully"
Sep 4 23:48:23.999314 containerd[1864]: time="2025-09-04T23:48:23.998943803Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9a4711f2b1c3172e31bf2ccaaac33950600df6ce9e4ee0a53643bce7ed315cec\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Sep 4 23:48:23.999314 containerd[1864]: time="2025-09-04T23:48:23.999055835Z" level=info msg="RemovePodSandbox \"9a4711f2b1c3172e31bf2ccaaac33950600df6ce9e4ee0a53643bce7ed315cec\" returns successfully"
Sep 4 23:48:24.610569 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce))
Sep 4 23:48:28.539926 systemd[1]: run-containerd-runc-k8s.io-beaa2a5af942b60fd5070fabec9f95325c08163f37f1c4dcf3fe1c8d47a351c6-runc.zeC0fu.mount: Deactivated successfully.
Sep 4 23:48:28.907731 systemd-networkd[1776]: lxc_health: Link UP
Sep 4 23:48:28.926218 (udev-worker)[5987]: Network interface NamePolicy= disabled on kernel command line.
Sep 4 23:48:28.928573 systemd-networkd[1776]: lxc_health: Gained carrier
Sep 4 23:48:29.454338 kubelet[3129]: I0904 23:48:29.453914 3129 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-hq726" podStartSLOduration=11.45389033 podStartE2EDuration="11.45389033s" podCreationTimestamp="2025-09-04 23:48:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-04 23:48:24.662805959 +0000 UTC m=+120.910635722" watchObservedRunningTime="2025-09-04 23:48:29.45389033 +0000 UTC m=+125.701720081"
Sep 4 23:48:30.301293 systemd-networkd[1776]: lxc_health: Gained IPv6LL
Sep 4 23:48:33.032160 ntpd[1838]: Listen normally on 14 lxc_health [fe80::9034:d7ff:fe66:116b%14]:123
Sep 4 23:48:33.032762 ntpd[1838]: 4 Sep 23:48:33 ntpd[1838]: Listen normally on 14 lxc_health [fe80::9034:d7ff:fe66:116b%14]:123
Sep 4 23:48:33.078567 systemd[1]: run-containerd-runc-k8s.io-beaa2a5af942b60fd5070fabec9f95325c08163f37f1c4dcf3fe1c8d47a351c6-runc.g2BQfK.mount: Deactivated successfully.
Sep 4 23:48:35.600197 sshd[5239]: Connection closed by 139.178.89.65 port 58882
Sep 4 23:48:35.602857 sshd-session[5183]: pam_unix(sshd:session): session closed for user core
Sep 4 23:48:35.612015 systemd-logind[1847]: Session 29 logged out. Waiting for processes to exit.
Sep 4 23:48:35.615134 systemd[1]: sshd@28-172.31.23.55:22-139.178.89.65:58882.service: Deactivated successfully.
Sep 4 23:48:35.622288 systemd[1]: session-29.scope: Deactivated successfully.
Sep 4 23:48:35.625335 systemd-logind[1847]: Removed session 29.
Sep 4 23:48:50.043303 systemd[1]: cri-containerd-c0a13de979c2b281aa2cce7846c9d84c3eed03ca73d5c61f6cfac2f57dc09144.scope: Deactivated successfully.
Sep 4 23:48:50.043913 systemd[1]: cri-containerd-c0a13de979c2b281aa2cce7846c9d84c3eed03ca73d5c61f6cfac2f57dc09144.scope: Consumed 5.291s CPU time, 53.3M memory peak.
Sep 4 23:48:50.088602 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c0a13de979c2b281aa2cce7846c9d84c3eed03ca73d5c61f6cfac2f57dc09144-rootfs.mount: Deactivated successfully.
Sep 4 23:48:50.098182 containerd[1864]: time="2025-09-04T23:48:50.097852929Z" level=info msg="shim disconnected" id=c0a13de979c2b281aa2cce7846c9d84c3eed03ca73d5c61f6cfac2f57dc09144 namespace=k8s.io
Sep 4 23:48:50.098182 containerd[1864]: time="2025-09-04T23:48:50.097933137Z" level=warning msg="cleaning up after shim disconnected" id=c0a13de979c2b281aa2cce7846c9d84c3eed03ca73d5c61f6cfac2f57dc09144 namespace=k8s.io
Sep 4 23:48:50.098182 containerd[1864]: time="2025-09-04T23:48:50.097952961Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 4 23:48:50.697927 kubelet[3129]: I0904 23:48:50.697379 3129 scope.go:117] "RemoveContainer" containerID="c0a13de979c2b281aa2cce7846c9d84c3eed03ca73d5c61f6cfac2f57dc09144"
Sep 4 23:48:50.700407 containerd[1864]: time="2025-09-04T23:48:50.700325148Z" level=info msg="CreateContainer within sandbox \"5bc3b635a57bba17e5b7a6ae4f665a08b67ee823ae705c339a7a54d04f5be0a8\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Sep 4 23:48:50.725851 containerd[1864]: time="2025-09-04T23:48:50.725768508Z" level=info msg="CreateContainer within sandbox \"5bc3b635a57bba17e5b7a6ae4f665a08b67ee823ae705c339a7a54d04f5be0a8\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"eed0771bc23517072bcaefaf56950591ebee8ac4be57dd8f8e5d37e647a9b87b\""
Sep 4 23:48:50.728548 containerd[1864]: time="2025-09-04T23:48:50.726579480Z" level=info msg="StartContainer for \"eed0771bc23517072bcaefaf56950591ebee8ac4be57dd8f8e5d37e647a9b87b\""
Sep 4 23:48:50.785838 systemd[1]: Started cri-containerd-eed0771bc23517072bcaefaf56950591ebee8ac4be57dd8f8e5d37e647a9b87b.scope - libcontainer container eed0771bc23517072bcaefaf56950591ebee8ac4be57dd8f8e5d37e647a9b87b.
Sep 4 23:48:50.853120 containerd[1864]: time="2025-09-04T23:48:50.852949993Z" level=info msg="StartContainer for \"eed0771bc23517072bcaefaf56950591ebee8ac4be57dd8f8e5d37e647a9b87b\" returns successfully"
Sep 4 23:48:53.868718 systemd[1]: cri-containerd-65f5cfe00a549bd39398e2a15d4f17b1638287b0ae3c7a1eacb70146cb81712d.scope: Deactivated successfully.
Sep 4 23:48:53.870048 systemd[1]: cri-containerd-65f5cfe00a549bd39398e2a15d4f17b1638287b0ae3c7a1eacb70146cb81712d.scope: Consumed 3.178s CPU time, 20.8M memory peak.
Sep 4 23:48:53.913966 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-65f5cfe00a549bd39398e2a15d4f17b1638287b0ae3c7a1eacb70146cb81712d-rootfs.mount: Deactivated successfully.
Sep 4 23:48:53.924350 containerd[1864]: time="2025-09-04T23:48:53.924254404Z" level=info msg="shim disconnected" id=65f5cfe00a549bd39398e2a15d4f17b1638287b0ae3c7a1eacb70146cb81712d namespace=k8s.io
Sep 4 23:48:53.924350 containerd[1864]: time="2025-09-04T23:48:53.924329920Z" level=warning msg="cleaning up after shim disconnected" id=65f5cfe00a549bd39398e2a15d4f17b1638287b0ae3c7a1eacb70146cb81712d namespace=k8s.io
Sep 4 23:48:53.924350 containerd[1864]: time="2025-09-04T23:48:53.924353380Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 4 23:48:54.711844 kubelet[3129]: I0904 23:48:54.711801 3129 scope.go:117] "RemoveContainer" containerID="65f5cfe00a549bd39398e2a15d4f17b1638287b0ae3c7a1eacb70146cb81712d"
Sep 4 23:48:54.714901 containerd[1864]: time="2025-09-04T23:48:54.714846244Z" level=info msg="CreateContainer within sandbox \"ab6c2a8065cee30aab150e081be81242403419aea5b6ec0389d6f61607adb78a\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Sep 4 23:48:54.747401 containerd[1864]: time="2025-09-04T23:48:54.747227044Z" level=info msg="CreateContainer within sandbox \"ab6c2a8065cee30aab150e081be81242403419aea5b6ec0389d6f61607adb78a\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"0ad377d31c33973209eac7ce58038b9088a1f9c104956ab48c6946e48b43ada1\""
Sep 4 23:48:54.748058 containerd[1864]: time="2025-09-04T23:48:54.748005592Z" level=info msg="StartContainer for \"0ad377d31c33973209eac7ce58038b9088a1f9c104956ab48c6946e48b43ada1\""
Sep 4 23:48:54.799839 systemd[1]: Started cri-containerd-0ad377d31c33973209eac7ce58038b9088a1f9c104956ab48c6946e48b43ada1.scope - libcontainer container 0ad377d31c33973209eac7ce58038b9088a1f9c104956ab48c6946e48b43ada1.
Sep 4 23:48:54.875857 containerd[1864]: time="2025-09-04T23:48:54.875735105Z" level=info msg="StartContainer for \"0ad377d31c33973209eac7ce58038b9088a1f9c104956ab48c6946e48b43ada1\" returns successfully"
Sep 4 23:48:56.908680 kubelet[3129]: E0904 23:48:56.908576 3129 controller.go:195] "Failed to update lease" err="Put \"https://172.31.23.55:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-23-55?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Sep 4 23:49:06.909658 kubelet[3129]: E0904 23:49:06.909561 3129 controller.go:195] "Failed to update lease" err="Put \"https://172.31.23.55:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-23-55?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"