Sep 10 23:48:27.164731 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083] Sep 10 23:48:27.164779 kernel: Linux version 6.12.46-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT Wed Sep 10 22:24:03 -00 2025 Sep 10 23:48:27.164805 kernel: KASLR disabled due to lack of seed Sep 10 23:48:27.164821 kernel: efi: EFI v2.7 by EDK II Sep 10 23:48:27.164837 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7a731a98 MEMRESERVE=0x78557598 Sep 10 23:48:27.164852 kernel: secureboot: Secure boot disabled Sep 10 23:48:27.164869 kernel: ACPI: Early table checksum verification disabled Sep 10 23:48:27.164884 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON) Sep 10 23:48:27.164899 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001 01000013) Sep 10 23:48:27.164914 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001) Sep 10 23:48:27.164929 kernel: ACPI: DSDT 0x0000000078640000 00159D (v02 AMAZON AMZNDSDT 00000001 INTL 20160527) Sep 10 23:48:27.164949 kernel: ACPI: FACS 0x0000000078630000 000040 Sep 10 23:48:27.164964 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001) Sep 10 23:48:27.164979 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001) Sep 10 23:48:27.164997 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001) Sep 10 23:48:27.165012 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001) Sep 10 23:48:27.165032 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001) Sep 10 23:48:27.165048 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001) Sep 10 23:48:27.165064 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001) Sep 10 23:48:27.165080 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200 Sep 10 23:48:27.165097 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200') Sep 10 23:48:27.165113 kernel: printk: legacy bootconsole [uart0] enabled Sep 10 23:48:27.165129 kernel: ACPI: Use ACPI SPCR as default console: No Sep 10 23:48:27.165146 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff] Sep 10 23:48:27.165161 kernel: NODE_DATA(0) allocated [mem 0x4b584ca00-0x4b5853fff] Sep 10 23:48:27.165177 kernel: Zone ranges: Sep 10 23:48:27.165193 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff] Sep 10 23:48:27.165213 kernel: DMA32 empty Sep 10 23:48:27.165229 kernel: Normal [mem 0x0000000100000000-0x00000004b5ffffff] Sep 10 23:48:27.165245 kernel: Device empty Sep 10 23:48:27.165260 kernel: Movable zone start for each node Sep 10 23:48:27.165275 kernel: Early memory node ranges Sep 10 23:48:27.165291 kernel: node 0: [mem 0x0000000040000000-0x000000007862ffff] Sep 10 23:48:27.165307 kernel: node 0: [mem 0x0000000078630000-0x000000007863ffff] Sep 10 23:48:27.166456 kernel: node 0: [mem 0x0000000078640000-0x00000000786effff] Sep 10 23:48:27.166511 kernel: node 0: [mem 0x00000000786f0000-0x000000007872ffff] Sep 10 23:48:27.166529 kernel: node 0: [mem 0x0000000078730000-0x000000007bbfffff] Sep 10 23:48:27.166546 kernel: node 0: [mem 0x000000007bc00000-0x000000007bfdffff] Sep 10 23:48:27.166564 kernel: node 0: [mem 0x000000007bfe0000-0x000000007fffffff] Sep 10 23:48:27.166592 
kernel: node 0: [mem 0x0000000400000000-0x00000004b5ffffff] Sep 10 23:48:27.166617 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000004b5ffffff] Sep 10 23:48:27.166634 kernel: On node 0, zone Normal: 8192 pages in unavailable ranges Sep 10 23:48:27.166652 kernel: cma: Reserved 16 MiB at 0x000000007f000000 on node -1 Sep 10 23:48:27.166671 kernel: psci: probing for conduit method from ACPI. Sep 10 23:48:27.166693 kernel: psci: PSCIv1.0 detected in firmware. Sep 10 23:48:27.166710 kernel: psci: Using standard PSCI v0.2 function IDs Sep 10 23:48:27.166727 kernel: psci: Trusted OS migration not required Sep 10 23:48:27.166744 kernel: psci: SMC Calling Convention v1.1 Sep 10 23:48:27.166761 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000001) Sep 10 23:48:27.166777 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168 Sep 10 23:48:27.166795 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096 Sep 10 23:48:27.166812 kernel: pcpu-alloc: [0] 0 [0] 1 Sep 10 23:48:27.166830 kernel: Detected PIPT I-cache on CPU0 Sep 10 23:48:27.166846 kernel: CPU features: detected: GIC system register CPU interface Sep 10 23:48:27.166864 kernel: CPU features: detected: Spectre-v2 Sep 10 23:48:27.166887 kernel: CPU features: detected: Spectre-v3a Sep 10 23:48:27.166904 kernel: CPU features: detected: Spectre-BHB Sep 10 23:48:27.166921 kernel: CPU features: detected: ARM erratum 1742098 Sep 10 23:48:27.166938 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923 Sep 10 23:48:27.166956 kernel: alternatives: applying boot alternatives Sep 10 23:48:27.166977 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=dd9c14cce645c634e06a91b09405eea80057f02909b9267c482dc457df1cddec Sep 10 23:48:27.166996 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Sep 10 23:48:27.167015 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Sep 10 23:48:27.167032 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Sep 10 23:48:27.167050 kernel: Fallback order for Node 0: 0 Sep 10 23:48:27.167074 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1007616 Sep 10 23:48:27.167092 kernel: Policy zone: Normal Sep 10 23:48:27.167109 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Sep 10 23:48:27.167127 kernel: software IO TLB: area num 2. Sep 10 23:48:27.167144 kernel: software IO TLB: mapped [mem 0x000000006c5f0000-0x00000000705f0000] (64MB) Sep 10 23:48:27.167162 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Sep 10 23:48:27.167179 kernel: rcu: Preemptible hierarchical RCU implementation. Sep 10 23:48:27.167198 kernel: rcu: RCU event tracing is enabled. Sep 10 23:48:27.167216 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Sep 10 23:48:27.167233 kernel: Trampoline variant of Tasks RCU enabled. Sep 10 23:48:27.167251 kernel: Tracing variant of Tasks RCU enabled. Sep 10 23:48:27.167268 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Sep 10 23:48:27.167291 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Sep 10 23:48:27.167309 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Sep 10 23:48:27.167374 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Sep 10 23:48:27.167422 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Sep 10 23:48:27.167445 kernel: GICv3: 96 SPIs implemented Sep 10 23:48:27.167463 kernel: GICv3: 0 Extended SPIs implemented Sep 10 23:48:27.167480 kernel: Root IRQ handler: gic_handle_irq Sep 10 23:48:27.167497 kernel: GICv3: GICv3 features: 16 PPIs Sep 10 23:48:27.167514 kernel: GICv3: GICD_CTRL.DS=1, SCR_EL3.FIQ=0 Sep 10 23:48:27.167530 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000 Sep 10 23:48:27.167547 kernel: ITS [mem 0x10080000-0x1009ffff] Sep 10 23:48:27.167565 kernel: ITS@0x0000000010080000: allocated 8192 Devices @4000f0000 (indirect, esz 8, psz 64K, shr 1) Sep 10 23:48:27.167593 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @400100000 (flat, esz 8, psz 64K, shr 1) Sep 10 23:48:27.167610 kernel: GICv3: using LPI property table @0x0000000400110000 Sep 10 23:48:27.167627 kernel: ITS: Using hypervisor restricted LPI range [128] Sep 10 23:48:27.167645 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000400120000 Sep 10 23:48:27.167663 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Sep 10 23:48:27.167680 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt). Sep 10 23:48:27.167698 kernel: clocksource: arch_sys_counter: mask: 0x1ffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns Sep 10 23:48:27.167716 kernel: sched_clock: 57 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns Sep 10 23:48:27.167733 kernel: Console: colour dummy device 80x25 Sep 10 23:48:27.167751 kernel: printk: legacy console [tty1] enabled Sep 10 23:48:27.167769 kernel: ACPI: Core revision 20240827 Sep 10 23:48:27.167797 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 166.66 BogoMIPS (lpj=83333) Sep 10 23:48:27.167815 kernel: pid_max: default: 32768 minimum: 301 Sep 10 23:48:27.167832 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Sep 10 23:48:27.167849 kernel: landlock: Up and running. Sep 10 23:48:27.167867 kernel: SELinux: Initializing. Sep 10 23:48:27.167885 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Sep 10 23:48:27.167902 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Sep 10 23:48:27.167920 kernel: rcu: Hierarchical SRCU implementation. Sep 10 23:48:27.167938 kernel: rcu: Max phase no-delay instances is 400. Sep 10 23:48:27.167962 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Sep 10 23:48:27.167979 kernel: Remapping and enabling EFI services. Sep 10 23:48:27.167996 kernel: smp: Bringing up secondary CPUs ... Sep 10 23:48:27.168012 kernel: Detected PIPT I-cache on CPU1 Sep 10 23:48:27.168030 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000 Sep 10 23:48:27.168047 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000400130000 Sep 10 23:48:27.168064 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083] Sep 10 23:48:27.168082 kernel: smp: Brought up 1 node, 2 CPUs Sep 10 23:48:27.168099 kernel: SMP: Total of 2 processors activated. 
Sep 10 23:48:27.168130 kernel: CPU: All CPU(s) started at EL1 Sep 10 23:48:27.168149 kernel: CPU features: detected: 32-bit EL0 Support Sep 10 23:48:27.168170 kernel: CPU features: detected: 32-bit EL1 Support Sep 10 23:48:27.168189 kernel: CPU features: detected: CRC32 instructions Sep 10 23:48:27.168206 kernel: alternatives: applying system-wide alternatives Sep 10 23:48:27.168224 kernel: Memory: 3797032K/4030464K available (11136K kernel code, 2436K rwdata, 9084K rodata, 38976K init, 1038K bss, 212088K reserved, 16384K cma-reserved) Sep 10 23:48:27.168243 kernel: devtmpfs: initialized Sep 10 23:48:27.168267 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Sep 10 23:48:27.168285 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Sep 10 23:48:27.168303 kernel: 17040 pages in range for non-PLT usage Sep 10 23:48:27.168321 kernel: 508560 pages in range for PLT usage Sep 10 23:48:27.170428 kernel: pinctrl core: initialized pinctrl subsystem Sep 10 23:48:27.170449 kernel: SMBIOS 3.0.0 present. Sep 10 23:48:27.170467 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018 Sep 10 23:48:27.170485 kernel: DMI: Memory slots populated: 0/0 Sep 10 23:48:27.170504 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Sep 10 23:48:27.170533 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Sep 10 23:48:27.170552 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Sep 10 23:48:27.170570 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Sep 10 23:48:27.170588 kernel: audit: initializing netlink subsys (disabled) Sep 10 23:48:27.170605 kernel: audit: type=2000 audit(0.228:1): state=initialized audit_enabled=0 res=1 Sep 10 23:48:27.170623 kernel: thermal_sys: Registered thermal governor 'step_wise' Sep 10 23:48:27.170641 kernel: cpuidle: using governor menu Sep 10 23:48:27.170659 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. 
Sep 10 23:48:27.170676 kernel: ASID allocator initialised with 65536 entries Sep 10 23:48:27.170698 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Sep 10 23:48:27.170717 kernel: Serial: AMBA PL011 UART driver Sep 10 23:48:27.170734 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Sep 10 23:48:27.170752 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Sep 10 23:48:27.170769 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Sep 10 23:48:27.170787 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Sep 10 23:48:27.170804 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Sep 10 23:48:27.170822 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Sep 10 23:48:27.170839 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Sep 10 23:48:27.170860 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Sep 10 23:48:27.170878 kernel: ACPI: Added _OSI(Module Device) Sep 10 23:48:27.170895 kernel: ACPI: Added _OSI(Processor Device) Sep 10 23:48:27.170912 kernel: ACPI: Added _OSI(Processor Aggregator Device) Sep 10 23:48:27.170929 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Sep 10 23:48:27.170947 kernel: ACPI: Interpreter enabled Sep 10 23:48:27.170964 kernel: ACPI: Using GIC for interrupt routing Sep 10 23:48:27.170982 kernel: ACPI: MCFG table detected, 1 entries Sep 10 23:48:27.171000 kernel: ACPI: CPU0 has been hot-added Sep 10 23:48:27.171021 kernel: ACPI: CPU1 has been hot-added Sep 10 23:48:27.171040 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-0f]) Sep 10 23:48:27.172293 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Sep 10 23:48:27.172667 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] Sep 10 23:48:27.172885 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Sep 10 23:48:27.173086 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x20ffffff] reserved by PNP0C02:00 Sep 10 23:48:27.173286 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x20ffffff] for [bus 00-0f] Sep 10 23:48:27.173496 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io 0x0000-0xffff window] Sep 10 23:48:27.173522 kernel: acpiphp: Slot [1] registered Sep 10 23:48:27.173542 kernel: acpiphp: Slot [2] registered Sep 10 23:48:27.173560 kernel: acpiphp: Slot [3] registered Sep 10 23:48:27.173578 kernel: acpiphp: Slot [4] registered Sep 10 23:48:27.173596 kernel: acpiphp: Slot [5] registered Sep 10 23:48:27.173613 kernel: acpiphp: Slot [6] registered Sep 10 23:48:27.173632 kernel: acpiphp: Slot [7] registered Sep 10 23:48:27.173650 kernel: acpiphp: Slot [8] registered Sep 10 23:48:27.173668 kernel: acpiphp: Slot [9] registered Sep 10 23:48:27.173696 kernel: acpiphp: Slot [10] registered Sep 10 23:48:27.173714 kernel: acpiphp: Slot [11] registered Sep 10 23:48:27.173732 kernel: acpiphp: Slot [12] registered Sep 10 23:48:27.173749 kernel: acpiphp: Slot [13] registered Sep 10 23:48:27.173768 kernel: acpiphp: Slot [14] registered Sep 10 23:48:27.173786 kernel: acpiphp: Slot [15] registered Sep 10 23:48:27.173804 kernel: acpiphp: Slot [16] registered Sep 10 23:48:27.173821 kernel: acpiphp: Slot [17] registered Sep 10 23:48:27.173840 kernel: acpiphp: Slot [18] registered Sep 10 23:48:27.173863 kernel: acpiphp: Slot [19] registered Sep 10 23:48:27.173882 kernel: acpiphp: Slot [20] registered Sep 10 23:48:27.173900 kernel: acpiphp: Slot [21] registered Sep 10 
23:48:27.173918 kernel: acpiphp: Slot [22] registered Sep 10 23:48:27.173936 kernel: acpiphp: Slot [23] registered Sep 10 23:48:27.173953 kernel: acpiphp: Slot [24] registered Sep 10 23:48:27.173971 kernel: acpiphp: Slot [25] registered Sep 10 23:48:27.173988 kernel: acpiphp: Slot [26] registered Sep 10 23:48:27.174006 kernel: acpiphp: Slot [27] registered Sep 10 23:48:27.174023 kernel: acpiphp: Slot [28] registered Sep 10 23:48:27.174045 kernel: acpiphp: Slot [29] registered Sep 10 23:48:27.174065 kernel: acpiphp: Slot [30] registered Sep 10 23:48:27.174083 kernel: acpiphp: Slot [31] registered Sep 10 23:48:27.174101 kernel: PCI host bridge to bus 0000:00 Sep 10 23:48:27.174420 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window] Sep 10 23:48:27.174643 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window] Sep 10 23:48:27.174835 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window] Sep 10 23:48:27.175037 kernel: pci_bus 0000:00: root bus resource [bus 00-0f] Sep 10 23:48:27.175293 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000 conventional PCI endpoint Sep 10 23:48:27.175676 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003 conventional PCI endpoint Sep 10 23:48:27.175921 kernel: pci 0000:00:01.0: BAR 0 [mem 0x80118000-0x80118fff] Sep 10 23:48:27.176159 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802 PCIe Root Complex Integrated Endpoint Sep 10 23:48:27.176445 kernel: pci 0000:00:04.0: BAR 0 [mem 0x80114000-0x80117fff] Sep 10 23:48:27.176682 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold Sep 10 23:48:27.176961 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000 PCIe Root Complex Integrated Endpoint Sep 10 23:48:27.177205 kernel: pci 0000:00:05.0: BAR 0 [mem 0x80110000-0x80113fff] Sep 10 23:48:27.177587 kernel: pci 0000:00:05.0: BAR 2 [mem 0x80000000-0x800fffff pref] Sep 10 23:48:27.177835 kernel: pci 0000:00:05.0: BAR 4 [mem 0x80100000-0x8010ffff] Sep 10 23:48:27.178079 kernel: pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold Sep 10 23:48:27.178305 kernel: pci 0000:00:05.0: BAR 2 [mem 0x80000000-0x800fffff pref]: assigned Sep 10 23:48:27.179742 kernel: pci 0000:00:05.0: BAR 4 [mem 0x80100000-0x8010ffff]: assigned Sep 10 23:48:27.179998 kernel: pci 0000:00:04.0: BAR 0 [mem 0x80110000-0x80113fff]: assigned Sep 10 23:48:27.180216 kernel: pci 0000:00:05.0: BAR 0 [mem 0x80114000-0x80117fff]: assigned Sep 10 23:48:27.181861 kernel: pci 0000:00:01.0: BAR 0 [mem 0x80118000-0x80118fff]: assigned Sep 10 23:48:27.182154 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window] Sep 10 23:48:27.184501 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] Sep 10 23:48:27.184742 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window] Sep 10 23:48:27.184780 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 Sep 10 23:48:27.184800 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 Sep 10 23:48:27.184820 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 Sep 10 23:48:27.184838 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 Sep 10 23:48:27.184856 kernel: iommu: Default domain type: Translated Sep 10 23:48:27.184873 kernel: iommu: DMA domain TLB invalidation policy: strict mode Sep 10 23:48:27.184891 kernel: efivars: Registered efivars operations Sep 10 23:48:27.184910 kernel: vgaarb: loaded Sep 10 23:48:27.184928 kernel: clocksource: Switched to clocksource arch_sys_counter 
Sep 10 23:48:27.184947 kernel: VFS: Disk quotas dquot_6.6.0 Sep 10 23:48:27.184972 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Sep 10 23:48:27.184995 kernel: pnp: PnP ACPI init Sep 10 23:48:27.185234 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved Sep 10 23:48:27.185265 kernel: pnp: PnP ACPI: found 1 devices Sep 10 23:48:27.185283 kernel: NET: Registered PF_INET protocol family Sep 10 23:48:27.185301 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Sep 10 23:48:27.185320 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Sep 10 23:48:27.185556 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Sep 10 23:48:27.185589 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Sep 10 23:48:27.185608 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Sep 10 23:48:27.185628 kernel: TCP: Hash tables configured (established 32768 bind 32768) Sep 10 23:48:27.185648 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Sep 10 23:48:27.185667 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Sep 10 23:48:27.185686 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Sep 10 23:48:27.185705 kernel: PCI: CLS 0 bytes, default 64 Sep 10 23:48:27.185722 kernel: kvm [1]: HYP mode not available Sep 10 23:48:27.185741 kernel: Initialise system trusted keyrings Sep 10 23:48:27.185767 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Sep 10 23:48:27.185785 kernel: Key type asymmetric registered Sep 10 23:48:27.185803 kernel: Asymmetric key parser 'x509' registered Sep 10 23:48:27.185821 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Sep 10 23:48:27.185839 kernel: io scheduler mq-deadline registered Sep 10 23:48:27.185856 kernel: io scheduler kyber registered Sep 10 23:48:27.185874 kernel: io scheduler bfq registered Sep 10 23:48:27.186185 kernel: pl061_gpio ARMH0061:00: PL061 GPIO chip registered Sep 10 23:48:27.186232 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Sep 10 23:48:27.186251 kernel: ACPI: button: Power Button [PWRB] Sep 10 23:48:27.186269 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input1 Sep 10 23:48:27.186287 kernel: ACPI: button: Sleep Button [SLPB] Sep 10 23:48:27.186305 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Sep 10 23:48:27.186359 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37 Sep 10 23:48:27.187820 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012) Sep 10 23:48:27.187857 kernel: printk: legacy console [ttyS0] disabled Sep 10 23:48:27.187877 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A Sep 10 23:48:27.187910 kernel: printk: legacy console [ttyS0] enabled Sep 10 23:48:27.187929 kernel: printk: legacy bootconsole [uart0] disabled Sep 10 23:48:27.187947 kernel: thunder_xcv, ver 1.0 Sep 10 23:48:27.187966 kernel: thunder_bgx, ver 1.0 Sep 10 23:48:27.187987 kernel: nicpf, ver 1.0 Sep 10 23:48:27.188006 kernel: nicvf, ver 1.0 Sep 10 23:48:27.188274 kernel: rtc-efi rtc-efi.0: registered as rtc0 Sep 10 23:48:27.189696 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-09-10T23:48:26 UTC (1757548106) Sep 10 23:48:27.189796 kernel: hid: raw HID events driver (C) Jiri Kosina Sep 10 23:48:27.189843 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 
(0,80000003) counters available Sep 10 23:48:27.189879 kernel: watchdog: NMI not fully supported Sep 10 23:48:27.189900 kernel: NET: Registered PF_INET6 protocol family Sep 10 23:48:27.189921 kernel: watchdog: Hard watchdog permanently disabled Sep 10 23:48:27.189940 kernel: Segment Routing with IPv6 Sep 10 23:48:27.189962 kernel: In-situ OAM (IOAM) with IPv6 Sep 10 23:48:27.189980 kernel: NET: Registered PF_PACKET protocol family Sep 10 23:48:27.190000 kernel: Key type dns_resolver registered Sep 10 23:48:27.190024 kernel: registered taskstats version 1 Sep 10 23:48:27.190044 kernel: Loading compiled-in X.509 certificates Sep 10 23:48:27.190062 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.46-flatcar: 3c20aab1105575c84ea94c1a59a27813fcebdea7' Sep 10 23:48:27.190081 kernel: Demotion targets for Node 0: null Sep 10 23:48:27.190099 kernel: Key type .fscrypt registered Sep 10 23:48:27.190117 kernel: Key type fscrypt-provisioning registered Sep 10 23:48:27.190135 kernel: ima: No TPM chip found, activating TPM-bypass! Sep 10 23:48:27.190153 kernel: ima: Allocated hash algorithm: sha1 Sep 10 23:48:27.190172 kernel: ima: No architecture policies found Sep 10 23:48:27.190196 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Sep 10 23:48:27.190215 kernel: clk: Disabling unused clocks Sep 10 23:48:27.190234 kernel: PM: genpd: Disabling unused power domains Sep 10 23:48:27.190253 kernel: Warning: unable to open an initial console. Sep 10 23:48:27.190273 kernel: Freeing unused kernel memory: 38976K Sep 10 23:48:27.190292 kernel: Run /init as init process Sep 10 23:48:27.190310 kernel: with arguments: Sep 10 23:48:27.192421 kernel: /init Sep 10 23:48:27.192455 kernel: with environment: Sep 10 23:48:27.192473 kernel: HOME=/ Sep 10 23:48:27.192507 kernel: TERM=linux Sep 10 23:48:27.192525 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Sep 10 23:48:27.192547 systemd[1]: Successfully made /usr/ read-only. Sep 10 23:48:27.192575 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Sep 10 23:48:27.192597 systemd[1]: Detected virtualization amazon. Sep 10 23:48:27.192618 systemd[1]: Detected architecture arm64. Sep 10 23:48:27.192637 systemd[1]: Running in initrd. Sep 10 23:48:27.192662 systemd[1]: No hostname configured, using default hostname. Sep 10 23:48:27.192684 systemd[1]: Hostname set to . Sep 10 23:48:27.192703 systemd[1]: Initializing machine ID from VM UUID. Sep 10 23:48:27.192722 systemd[1]: Queued start job for default target initrd.target. Sep 10 23:48:27.192742 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 10 23:48:27.192762 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 10 23:48:27.192783 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Sep 10 23:48:27.192804 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 10 23:48:27.192829 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Sep 10 23:48:27.192851 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... 
Sep 10 23:48:27.192873 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Sep 10 23:48:27.192896 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Sep 10 23:48:27.192918 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 10 23:48:27.192940 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 10 23:48:27.192959 systemd[1]: Reached target paths.target - Path Units. Sep 10 23:48:27.192984 systemd[1]: Reached target slices.target - Slice Units. Sep 10 23:48:27.193004 systemd[1]: Reached target swap.target - Swaps. Sep 10 23:48:27.193024 systemd[1]: Reached target timers.target - Timer Units. Sep 10 23:48:27.193043 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Sep 10 23:48:27.193100 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 10 23:48:27.193126 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Sep 10 23:48:27.193147 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Sep 10 23:48:27.193167 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 10 23:48:27.193194 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 10 23:48:27.193214 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 10 23:48:27.193234 systemd[1]: Reached target sockets.target - Socket Units. Sep 10 23:48:27.193254 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Sep 10 23:48:27.193274 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 10 23:48:27.193293 systemd[1]: Finished network-cleanup.service - Network Cleanup. Sep 10 23:48:27.193313 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Sep 10 23:48:27.193375 systemd[1]: Starting systemd-fsck-usr.service... Sep 10 23:48:27.193398 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 10 23:48:27.193426 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 10 23:48:27.193446 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 10 23:48:27.193465 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Sep 10 23:48:27.193486 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 10 23:48:27.193510 systemd[1]: Finished systemd-fsck-usr.service. Sep 10 23:48:27.193530 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Sep 10 23:48:27.193615 systemd-journald[258]: Collecting audit messages is disabled. Sep 10 23:48:27.193661 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Sep 10 23:48:27.193687 systemd-journald[258]: Journal started Sep 10 23:48:27.193737 systemd-journald[258]: Runtime Journal (/run/log/journal/ec294c7debb3f3d0e138347bc362c48c) is 8M, max 75.3M, 67.3M free. Sep 10 23:48:27.195504 systemd[1]: Started systemd-journald.service - Journal Service. 
Sep 10 23:48:27.152350 systemd-modules-load[260]: Inserted module 'overlay' Sep 10 23:48:27.206404 kernel: Bridge firewalling registered Sep 10 23:48:27.205473 systemd-modules-load[260]: Inserted module 'br_netfilter' Sep 10 23:48:27.210451 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 10 23:48:27.217469 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 10 23:48:27.220672 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 10 23:48:27.230822 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 10 23:48:27.245154 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 10 23:48:27.260460 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 10 23:48:27.276729 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 10 23:48:27.302488 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 10 23:48:27.315277 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 10 23:48:27.325574 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Sep 10 23:48:27.341374 systemd-tmpfiles[285]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Sep 10 23:48:27.350448 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 10 23:48:27.362096 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 10 23:48:27.374375 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 10 23:48:27.398907 dracut-cmdline[297]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=dd9c14cce645c634e06a91b09405eea80057f02909b9267c482dc457df1cddec Sep 10 23:48:27.479015 systemd-resolved[302]: Positive Trust Anchors: Sep 10 23:48:27.479050 systemd-resolved[302]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 10 23:48:27.479117 systemd-resolved[302]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 10 23:48:27.581371 kernel: SCSI subsystem initialized Sep 10 23:48:27.589364 kernel: Loading iSCSI transport class v2.0-870. Sep 10 23:48:27.603603 kernel: iscsi: registered transport (tcp) Sep 10 23:48:27.626767 kernel: iscsi: registered transport (qla4xxx) Sep 10 23:48:27.626856 kernel: QLogic iSCSI HBA Driver Sep 10 23:48:27.666569 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... 
Sep 10 23:48:27.702654 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Sep 10 23:48:27.714898 systemd[1]: Reached target network-pre.target - Preparation for Network. Sep 10 23:48:27.740512 kernel: random: crng init done Sep 10 23:48:27.741072 systemd-resolved[302]: Defaulting to hostname 'linux'. Sep 10 23:48:27.745178 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 10 23:48:27.752297 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 10 23:48:27.827495 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Sep 10 23:48:27.832064 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Sep 10 23:48:27.936405 kernel: raid6: neonx8 gen() 6322 MB/s Sep 10 23:48:27.953410 kernel: raid6: neonx4 gen() 6312 MB/s Sep 10 23:48:27.971382 kernel: raid6: neonx2 gen() 5274 MB/s Sep 10 23:48:27.988385 kernel: raid6: neonx1 gen() 3906 MB/s Sep 10 23:48:28.006379 kernel: raid6: int64x8 gen() 3577 MB/s Sep 10 23:48:28.024379 kernel: raid6: int64x4 gen() 3662 MB/s Sep 10 23:48:28.041393 kernel: raid6: int64x2 gen() 3534 MB/s Sep 10 23:48:28.059751 kernel: raid6: int64x1 gen() 2743 MB/s Sep 10 23:48:28.059825 kernel: raid6: using algorithm neonx8 gen() 6322 MB/s Sep 10 23:48:28.079078 kernel: raid6: .... xor() 4703 MB/s, rmw enabled Sep 10 23:48:28.079158 kernel: raid6: using neon recovery algorithm Sep 10 23:48:28.087390 kernel: xor: measuring software checksum speed Sep 10 23:48:28.090205 kernel: 8regs : 11096 MB/sec Sep 10 23:48:28.090272 kernel: 32regs : 13060 MB/sec Sep 10 23:48:28.092741 kernel: arm64_neon : 8447 MB/sec Sep 10 23:48:28.092816 kernel: xor: using function: 32regs (13060 MB/sec) Sep 10 23:48:28.189392 kernel: Btrfs loaded, zoned=no, fsverity=no Sep 10 23:48:28.203437 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Sep 10 23:48:28.212864 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 10 23:48:28.275192 systemd-udevd[508]: Using default interface naming scheme 'v255'. Sep 10 23:48:28.288505 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 10 23:48:28.298367 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Sep 10 23:48:28.337655 dracut-pre-trigger[512]: rd.md=0: removing MD RAID activation Sep 10 23:48:28.388428 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Sep 10 23:48:28.396276 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 10 23:48:28.537101 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 10 23:48:28.547870 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Sep 10 23:48:28.704569 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Sep 10 23:48:28.712799 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012) Sep 10 23:48:28.720306 kernel: ena 0000:00:05.0: ENA device version: 0.10 Sep 10 23:48:28.720947 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1 Sep 10 23:48:28.737387 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80114000, mac addr 06:bb:03:8c:ab:c3 Sep 10 23:48:28.737790 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 10 23:48:28.740453 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. 
Sep 10 23:48:28.753453 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35 Sep 10 23:48:28.743624 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Sep 10 23:48:28.750949 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 10 23:48:28.763645 kernel: nvme nvme0: pci function 0000:00:04.0 Sep 10 23:48:28.760828 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Sep 10 23:48:28.768926 (udev-worker)[577]: Network interface NamePolicy= disabled on kernel command line. Sep 10 23:48:28.779444 kernel: nvme nvme0: 2/0/0 default/read/poll queues Sep 10 23:48:28.787724 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Sep 10 23:48:28.787792 kernel: GPT:9289727 != 16777215 Sep 10 23:48:28.787829 kernel: GPT:Alternate GPT header not at the end of the disk. Sep 10 23:48:28.787854 kernel: GPT:9289727 != 16777215 Sep 10 23:48:28.790246 kernel: GPT: Use GNU Parted to correct GPT errors. Sep 10 23:48:28.791448 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Sep 10 23:48:28.816615 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 10 23:48:28.840370 kernel: nvme nvme0: using unchecked data buffer Sep 10 23:48:28.958752 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT. Sep 10 23:48:29.021276 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Sep 10 23:48:29.049304 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM. Sep 10 23:48:29.075936 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Sep 10 23:48:29.099289 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A. Sep 10 23:48:29.105042 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A. Sep 10 23:48:29.105396 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Sep 10 23:48:29.105585 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 10 23:48:29.105967 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 10 23:48:29.111853 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Sep 10 23:48:29.130693 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Sep 10 23:48:29.157934 disk-uuid[685]: Primary Header is updated. Sep 10 23:48:29.157934 disk-uuid[685]: Secondary Entries is updated. Sep 10 23:48:29.157934 disk-uuid[685]: Secondary Header is updated. Sep 10 23:48:29.171642 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Sep 10 23:48:29.182055 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Sep 10 23:48:30.191393 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Sep 10 23:48:30.193370 disk-uuid[686]: The operation has completed successfully. Sep 10 23:48:30.392068 systemd[1]: disk-uuid.service: Deactivated successfully. Sep 10 23:48:30.393457 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Sep 10 23:48:30.479393 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Sep 10 23:48:30.517185 sh[953]: Success Sep 10 23:48:30.549983 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Sep 10 23:48:30.550113 kernel: device-mapper: uevent: version 1.0.3 Sep 10 23:48:30.550145 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Sep 10 23:48:30.564957 kernel: device-mapper: verity: sha256 using shash "sha256-ce" Sep 10 23:48:30.683933 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Sep 10 23:48:30.693847 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Sep 10 23:48:30.718470 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Sep 10 23:48:30.739480 kernel: BTRFS: device fsid 3b17f37f-d395-4116-a46d-e07f86112ade devid 1 transid 39 /dev/mapper/usr (254:0) scanned by mount (976) Sep 10 23:48:30.744126 kernel: BTRFS info (device dm-0): first mount of filesystem 3b17f37f-d395-4116-a46d-e07f86112ade Sep 10 23:48:30.744206 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Sep 10 23:48:30.857441 kernel: BTRFS info (device dm-0): enabling ssd optimizations Sep 10 23:48:30.857528 kernel: BTRFS info (device dm-0): disabling log replay at mount time Sep 10 23:48:30.858775 kernel: BTRFS info (device dm-0): enabling free space tree Sep 10 23:48:30.870717 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Sep 10 23:48:30.872034 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Sep 10 23:48:30.872848 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Sep 10 23:48:30.874188 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Sep 10 23:48:30.888583 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Sep 10 23:48:30.947364 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (1009) Sep 10 23:48:30.952869 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 538ffae8-60fb-4c82-9100-efc4d2404f73 Sep 10 23:48:30.952948 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Sep 10 23:48:30.961435 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Sep 10 23:48:30.961519 kernel: BTRFS info (device nvme0n1p6): enabling free space tree Sep 10 23:48:30.970364 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 538ffae8-60fb-4c82-9100-efc4d2404f73 Sep 10 23:48:30.973443 systemd[1]: Finished ignition-setup.service - Ignition (setup). Sep 10 23:48:30.981015 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Sep 10 23:48:31.100459 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 10 23:48:31.108955 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 10 23:48:31.187599 systemd-networkd[1145]: lo: Link UP Sep 10 23:48:31.187616 systemd-networkd[1145]: lo: Gained carrier Sep 10 23:48:31.193082 systemd-networkd[1145]: Enumeration completed Sep 10 23:48:31.194026 systemd-networkd[1145]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 10 23:48:31.194034 systemd-networkd[1145]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 10 23:48:31.195135 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 10 23:48:31.203758 systemd[1]: Reached target network.target - Network. 
Sep 10 23:48:31.212788 systemd-networkd[1145]: eth0: Link UP Sep 10 23:48:31.212795 systemd-networkd[1145]: eth0: Gained carrier Sep 10 23:48:31.212818 systemd-networkd[1145]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 10 23:48:31.241464 systemd-networkd[1145]: eth0: DHCPv4 address 172.31.30.159/20, gateway 172.31.16.1 acquired from 172.31.16.1 Sep 10 23:48:31.492450 ignition[1065]: Ignition 2.21.0 Sep 10 23:48:31.492472 ignition[1065]: Stage: fetch-offline Sep 10 23:48:31.493319 ignition[1065]: no configs at "/usr/lib/ignition/base.d" Sep 10 23:48:31.493883 ignition[1065]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Sep 10 23:48:31.494874 ignition[1065]: Ignition finished successfully Sep 10 23:48:31.505431 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Sep 10 23:48:31.510127 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Sep 10 23:48:31.545158 ignition[1157]: Ignition 2.21.0 Sep 10 23:48:31.545408 ignition[1157]: Stage: fetch Sep 10 23:48:31.545945 ignition[1157]: no configs at "/usr/lib/ignition/base.d" Sep 10 23:48:31.545978 ignition[1157]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Sep 10 23:48:31.546271 ignition[1157]: PUT http://169.254.169.254/latest/api/token: attempt #1 Sep 10 23:48:31.565044 ignition[1157]: PUT result: OK Sep 10 23:48:31.569365 ignition[1157]: parsed url from cmdline: "" Sep 10 23:48:31.569389 ignition[1157]: no config URL provided Sep 10 23:48:31.569405 ignition[1157]: reading system config file "/usr/lib/ignition/user.ign" Sep 10 23:48:31.569432 ignition[1157]: no config at "/usr/lib/ignition/user.ign" Sep 10 23:48:31.569471 ignition[1157]: PUT http://169.254.169.254/latest/api/token: attempt #1 Sep 10 23:48:31.573893 ignition[1157]: PUT result: OK Sep 10 23:48:31.575912 ignition[1157]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1 Sep 10 23:48:31.582973 ignition[1157]: GET result: OK Sep 10 23:48:31.583234 ignition[1157]: parsing config with SHA512: f6e94ca97b1374ea4540a684b692f8bad5d987d03c5889a492650ef26b6e029b5258592a481eeb9d95bf0d1e2de1fa3f0d1ae98a882c182ee9fd0d592e4ef4c3 Sep 10 23:48:31.598752 unknown[1157]: fetched base config from "system" Sep 10 23:48:31.599542 ignition[1157]: fetch: fetch complete Sep 10 23:48:31.598775 unknown[1157]: fetched base config from "system" Sep 10 23:48:31.599564 ignition[1157]: fetch: fetch passed Sep 10 23:48:31.598787 unknown[1157]: fetched user config from "aws" Sep 10 23:48:31.599672 ignition[1157]: Ignition finished successfully Sep 10 23:48:31.607868 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Sep 10 23:48:31.618582 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Sep 10 23:48:31.681143 ignition[1163]: Ignition 2.21.0 Sep 10 23:48:31.681166 ignition[1163]: Stage: kargs Sep 10 23:48:31.684761 ignition[1163]: no configs at "/usr/lib/ignition/base.d" Sep 10 23:48:31.684803 ignition[1163]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Sep 10 23:48:31.687090 ignition[1163]: PUT http://169.254.169.254/latest/api/token: attempt #1 Sep 10 23:48:31.693069 ignition[1163]: PUT result: OK Sep 10 23:48:31.698033 ignition[1163]: kargs: kargs passed Sep 10 23:48:31.698224 ignition[1163]: Ignition finished successfully Sep 10 23:48:31.704090 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Sep 10 23:48:31.710292 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
Sep 10 23:48:31.751513 ignition[1169]: Ignition 2.21.0 Sep 10 23:48:31.751544 ignition[1169]: Stage: disks Sep 10 23:48:31.752193 ignition[1169]: no configs at "/usr/lib/ignition/base.d" Sep 10 23:48:31.752219 ignition[1169]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Sep 10 23:48:31.752436 ignition[1169]: PUT http://169.254.169.254/latest/api/token: attempt #1 Sep 10 23:48:31.755247 ignition[1169]: PUT result: OK Sep 10 23:48:31.771266 ignition[1169]: disks: disks passed Sep 10 23:48:31.771738 ignition[1169]: Ignition finished successfully Sep 10 23:48:31.777629 systemd[1]: Finished ignition-disks.service - Ignition (disks). Sep 10 23:48:31.784397 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Sep 10 23:48:31.789425 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Sep 10 23:48:31.793306 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 10 23:48:31.804093 systemd[1]: Reached target sysinit.target - System Initialization. Sep 10 23:48:31.804217 systemd[1]: Reached target basic.target - Basic System. Sep 10 23:48:31.813166 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Sep 10 23:48:31.876110 systemd-fsck[1178]: ROOT: clean, 15/553520 files, 52789/553472 blocks Sep 10 23:48:31.884888 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Sep 10 23:48:31.896771 systemd[1]: Mounting sysroot.mount - /sysroot... Sep 10 23:48:32.051380 kernel: EXT4-fs (nvme0n1p9): mounted filesystem fcae628f-5f9a-4539-a638-93fb1399b5d7 r/w with ordered data mode. Quota mode: none. Sep 10 23:48:32.052720 systemd[1]: Mounted sysroot.mount - /sysroot. Sep 10 23:48:32.057880 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Sep 10 23:48:32.063632 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 10 23:48:32.075775 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Sep 10 23:48:32.078535 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Sep 10 23:48:32.080299 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Sep 10 23:48:32.082768 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Sep 10 23:48:32.108017 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Sep 10 23:48:32.114820 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Sep 10 23:48:32.133396 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (1197) Sep 10 23:48:32.133470 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 538ffae8-60fb-4c82-9100-efc4d2404f73 Sep 10 23:48:32.137609 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Sep 10 23:48:32.146769 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Sep 10 23:48:32.146856 kernel: BTRFS info (device nvme0n1p6): enabling free space tree Sep 10 23:48:32.150188 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Sep 10 23:48:32.330541 systemd-networkd[1145]: eth0: Gained IPv6LL Sep 10 23:48:32.497644 initrd-setup-root[1221]: cut: /sysroot/etc/passwd: No such file or directory Sep 10 23:48:32.534148 initrd-setup-root[1228]: cut: /sysroot/etc/group: No such file or directory Sep 10 23:48:32.546150 initrd-setup-root[1235]: cut: /sysroot/etc/shadow: No such file or directory Sep 10 23:48:32.558798 initrd-setup-root[1242]: cut: /sysroot/etc/gshadow: No such file or directory Sep 10 23:48:32.973666 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Sep 10 23:48:32.979800 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Sep 10 23:48:32.986018 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Sep 10 23:48:33.014109 systemd[1]: sysroot-oem.mount: Deactivated successfully. Sep 10 23:48:33.019433 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 538ffae8-60fb-4c82-9100-efc4d2404f73 Sep 10 23:48:33.056564 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Sep 10 23:48:33.071586 ignition[1310]: INFO : Ignition 2.21.0 Sep 10 23:48:33.073785 ignition[1310]: INFO : Stage: mount Sep 10 23:48:33.076130 ignition[1310]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 10 23:48:33.076130 ignition[1310]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Sep 10 23:48:33.082250 ignition[1310]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Sep 10 23:48:33.086279 ignition[1310]: INFO : PUT result: OK Sep 10 23:48:33.092764 ignition[1310]: INFO : mount: mount passed Sep 10 23:48:33.094699 ignition[1310]: INFO : Ignition finished successfully Sep 10 23:48:33.099791 systemd[1]: Finished ignition-mount.service - Ignition (mount). Sep 10 23:48:33.108062 systemd[1]: Starting ignition-files.service - Ignition (files)... Sep 10 23:48:33.140303 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 10 23:48:33.181398 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (1323) Sep 10 23:48:33.185525 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 538ffae8-60fb-4c82-9100-efc4d2404f73 Sep 10 23:48:33.185618 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Sep 10 23:48:33.193897 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Sep 10 23:48:33.194000 kernel: BTRFS info (device nvme0n1p6): enabling free space tree Sep 10 23:48:33.198896 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Sep 10 23:48:33.248933 ignition[1341]: INFO : Ignition 2.21.0 Sep 10 23:48:33.248933 ignition[1341]: INFO : Stage: files Sep 10 23:48:33.252867 ignition[1341]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 10 23:48:33.252867 ignition[1341]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Sep 10 23:48:33.258543 ignition[1341]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Sep 10 23:48:33.267310 ignition[1341]: INFO : PUT result: OK Sep 10 23:48:33.273237 ignition[1341]: DEBUG : files: compiled without relabeling support, skipping Sep 10 23:48:33.287125 ignition[1341]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Sep 10 23:48:33.287125 ignition[1341]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Sep 10 23:48:33.298579 ignition[1341]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Sep 10 23:48:33.301894 ignition[1341]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Sep 10 23:48:33.305787 unknown[1341]: wrote ssh authorized keys file for user: core Sep 10 23:48:33.308574 ignition[1341]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Sep 10 23:48:33.323411 ignition[1341]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" Sep 10 23:48:33.323411 ignition[1341]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1 Sep 10 23:48:33.398356 ignition[1341]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Sep 10 23:48:33.718887 ignition[1341]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" Sep 10 23:48:33.718887 ignition[1341]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Sep 10 23:48:33.718887 ignition[1341]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1 Sep 10 23:48:34.019198 ignition[1341]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Sep 10 23:48:34.464417 ignition[1341]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Sep 10 23:48:34.464417 ignition[1341]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Sep 10 23:48:34.464417 ignition[1341]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Sep 10 23:48:34.464417 ignition[1341]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Sep 10 23:48:34.464417 ignition[1341]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Sep 10 23:48:34.485924 ignition[1341]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 10 23:48:34.485924 ignition[1341]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 10 23:48:34.485924 ignition[1341]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 
10 23:48:34.485924 ignition[1341]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 10 23:48:34.485924 ignition[1341]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Sep 10 23:48:34.507902 ignition[1341]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Sep 10 23:48:34.507902 ignition[1341]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Sep 10 23:48:34.507902 ignition[1341]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Sep 10 23:48:34.507902 ignition[1341]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Sep 10 23:48:34.507902 ignition[1341]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-arm64.raw: attempt #1 Sep 10 23:48:34.997629 ignition[1341]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Sep 10 23:48:37.383083 ignition[1341]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Sep 10 23:48:37.388444 ignition[1341]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Sep 10 23:48:37.394408 ignition[1341]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 10 23:48:37.401965 ignition[1341]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 10 23:48:37.401965 ignition[1341]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Sep 10 23:48:37.410063 ignition[1341]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service" Sep 10 23:48:37.410063 ignition[1341]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service" Sep 10 23:48:37.410063 ignition[1341]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json" Sep 10 23:48:37.410063 ignition[1341]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json" Sep 10 23:48:37.410063 ignition[1341]: INFO : files: files passed Sep 10 23:48:37.410063 ignition[1341]: INFO : Ignition finished successfully Sep 10 23:48:37.419995 systemd[1]: Finished ignition-files.service - Ignition (files). Sep 10 23:48:37.428676 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Sep 10 23:48:37.439836 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Sep 10 23:48:37.465456 systemd[1]: ignition-quench.service: Deactivated successfully. Sep 10 23:48:37.467974 systemd[1]: Finished ignition-quench.service - Ignition (record completion). 
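As its first step, the Ignition files stage above authenticates to the EC2 instance metadata service: the PUT to http://169.254.169.254/latest/api/token ("attempt #1", "PUT result: OK") obtains an IMDSv2 session token that is then presented on the metadata requests that follow. A minimal stdlib-only Python sketch of that handshake, for illustration only; the two X-aws-ec2-metadata-token* header names are the standard IMDSv2 ones, and the TTL value is an arbitrary choice here, not something read from this log.

```python
import urllib.request

IMDS = "http://169.254.169.254"

def imds_token(ttl_seconds: int = 21600) -> str:
    # PUT /latest/api/token with a TTL header yields an IMDSv2 session token.
    req = urllib.request.Request(
        f"{IMDS}/latest/api/token",
        method="PUT",
        headers={"X-aws-ec2-metadata-token-ttl-seconds": str(ttl_seconds)},
    )
    with urllib.request.urlopen(req, timeout=2) as resp:
        return resp.read().decode()

def imds_get(path: str, token: str) -> str:
    # Subsequent reads present the token in the X-aws-ec2-metadata-token header.
    req = urllib.request.Request(
        f"{IMDS}{path}",
        headers={"X-aws-ec2-metadata-token": token},
    )
    with urllib.request.urlopen(req, timeout=2) as resp:
        return resp.read().decode()

if __name__ == "__main__":
    # Only reachable from inside an EC2 instance.
    tok = imds_token()
    print(imds_get("/2021-01-03/meta-data/public-keys", tok))
```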
Sep 10 23:48:37.486632 initrd-setup-root-after-ignition[1369]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 10 23:48:37.490362 initrd-setup-root-after-ignition[1369]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Sep 10 23:48:37.493841 initrd-setup-root-after-ignition[1373]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 10 23:48:37.500106 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 10 23:48:37.506376 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Sep 10 23:48:37.510497 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Sep 10 23:48:37.599038 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Sep 10 23:48:37.599540 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Sep 10 23:48:37.605257 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Sep 10 23:48:37.609841 systemd[1]: Reached target initrd.target - Initrd Default Target. Sep 10 23:48:37.614360 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Sep 10 23:48:37.615751 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Sep 10 23:48:37.659023 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 10 23:48:37.665501 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Sep 10 23:48:37.702932 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Sep 10 23:48:37.704150 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 10 23:48:37.704469 systemd[1]: Stopped target timers.target - Timer Units. Sep 10 23:48:37.704762 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Sep 10 23:48:37.704986 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 10 23:48:37.705847 systemd[1]: Stopped target initrd.target - Initrd Default Target. Sep 10 23:48:37.706214 systemd[1]: Stopped target basic.target - Basic System. Sep 10 23:48:37.706925 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Sep 10 23:48:37.707297 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Sep 10 23:48:37.712486 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Sep 10 23:48:37.712967 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Sep 10 23:48:37.713368 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Sep 10 23:48:37.713690 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Sep 10 23:48:37.714067 systemd[1]: Stopped target sysinit.target - System Initialization. Sep 10 23:48:37.714444 systemd[1]: Stopped target local-fs.target - Local File Systems. Sep 10 23:48:37.714773 systemd[1]: Stopped target swap.target - Swaps. Sep 10 23:48:37.715078 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Sep 10 23:48:37.715291 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Sep 10 23:48:37.720002 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Sep 10 23:48:37.720382 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 10 23:48:37.720637 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. 
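The two grep complaints straight after Ignition finishes are harmless: the root-filesystem-completion step probes /sysroot/etc/flatcar/enabled-sysext.conf and /sysroot/usr/share/flatcar/enabled-sysext.conf, and on this first boot neither file exists yet, which is expected when no optional Flatcar sysexts have been enabled through that mechanism. A trivial sketch of the same existence check, with the paths copied from the messages above:

```python
from pathlib import Path

candidates = [
    Path("/sysroot/etc/flatcar/enabled-sysext.conf"),
    Path("/sysroot/usr/share/flatcar/enabled-sysext.conf"),
]
for path in candidates:
    print(path, "exists" if path.exists() else "missing")
```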
Sep 10 23:48:37.747858 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 10 23:48:37.790785 systemd[1]: dracut-initqueue.service: Deactivated successfully. Sep 10 23:48:37.791046 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Sep 10 23:48:37.798485 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Sep 10 23:48:37.798986 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 10 23:48:37.806885 systemd[1]: ignition-files.service: Deactivated successfully. Sep 10 23:48:37.807365 systemd[1]: Stopped ignition-files.service - Ignition (files). Sep 10 23:48:37.815096 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Sep 10 23:48:37.817591 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Sep 10 23:48:37.819597 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Sep 10 23:48:37.858547 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Sep 10 23:48:37.862722 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Sep 10 23:48:37.866734 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Sep 10 23:48:37.874706 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Sep 10 23:48:37.877694 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Sep 10 23:48:37.899634 systemd[1]: initrd-cleanup.service: Deactivated successfully. Sep 10 23:48:37.902188 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Sep 10 23:48:37.914443 ignition[1393]: INFO : Ignition 2.21.0 Sep 10 23:48:37.916654 ignition[1393]: INFO : Stage: umount Sep 10 23:48:37.919208 ignition[1393]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 10 23:48:37.919208 ignition[1393]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Sep 10 23:48:37.919208 ignition[1393]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Sep 10 23:48:37.928148 ignition[1393]: INFO : PUT result: OK Sep 10 23:48:37.937404 ignition[1393]: INFO : umount: umount passed Sep 10 23:48:37.937404 ignition[1393]: INFO : Ignition finished successfully Sep 10 23:48:37.935640 systemd[1]: sysroot-boot.mount: Deactivated successfully. Sep 10 23:48:37.942210 systemd[1]: ignition-mount.service: Deactivated successfully. Sep 10 23:48:37.943079 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Sep 10 23:48:37.953581 systemd[1]: sysroot-boot.service: Deactivated successfully. Sep 10 23:48:37.955404 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Sep 10 23:48:37.959151 systemd[1]: ignition-disks.service: Deactivated successfully. Sep 10 23:48:37.959939 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Sep 10 23:48:37.963059 systemd[1]: ignition-kargs.service: Deactivated successfully. Sep 10 23:48:37.963181 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Sep 10 23:48:37.967174 systemd[1]: ignition-fetch.service: Deactivated successfully. Sep 10 23:48:37.967287 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Sep 10 23:48:37.973532 systemd[1]: Stopped target network.target - Network. Sep 10 23:48:37.976059 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Sep 10 23:48:37.976192 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Sep 10 23:48:37.984045 systemd[1]: Stopped target paths.target - Path Units. 
Sep 10 23:48:37.990437 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Sep 10 23:48:37.990579 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 10 23:48:37.997802 systemd[1]: Stopped target slices.target - Slice Units. Sep 10 23:48:38.002911 systemd[1]: Stopped target sockets.target - Socket Units. Sep 10 23:48:38.004004 systemd[1]: iscsid.socket: Deactivated successfully. Sep 10 23:48:38.004094 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Sep 10 23:48:38.004601 systemd[1]: iscsiuio.socket: Deactivated successfully. Sep 10 23:48:38.004689 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 10 23:48:38.018639 systemd[1]: ignition-setup.service: Deactivated successfully. Sep 10 23:48:38.018766 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Sep 10 23:48:38.022164 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Sep 10 23:48:38.022270 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Sep 10 23:48:38.028206 systemd[1]: initrd-setup-root.service: Deactivated successfully. Sep 10 23:48:38.028371 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Sep 10 23:48:38.031962 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Sep 10 23:48:38.038140 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Sep 10 23:48:38.060209 systemd[1]: systemd-resolved.service: Deactivated successfully. Sep 10 23:48:38.060662 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Sep 10 23:48:38.082577 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Sep 10 23:48:38.084776 systemd[1]: systemd-networkd.service: Deactivated successfully. Sep 10 23:48:38.085110 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Sep 10 23:48:38.093033 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Sep 10 23:48:38.094060 systemd[1]: Stopped target network-pre.target - Preparation for Network. Sep 10 23:48:38.098824 systemd[1]: systemd-networkd.socket: Deactivated successfully. Sep 10 23:48:38.098982 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Sep 10 23:48:38.112105 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Sep 10 23:48:38.119523 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Sep 10 23:48:38.119661 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 10 23:48:38.125477 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 10 23:48:38.125588 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 10 23:48:38.134087 systemd[1]: systemd-modules-load.service: Deactivated successfully. Sep 10 23:48:38.134179 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Sep 10 23:48:38.137278 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Sep 10 23:48:38.137713 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 10 23:48:38.152519 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 10 23:48:38.162018 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Sep 10 23:48:38.162153 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. 
Sep 10 23:48:38.191062 systemd[1]: systemd-udevd.service: Deactivated successfully. Sep 10 23:48:38.191379 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 10 23:48:38.194939 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Sep 10 23:48:38.195054 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Sep 10 23:48:38.199518 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Sep 10 23:48:38.199590 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Sep 10 23:48:38.204564 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Sep 10 23:48:38.204748 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Sep 10 23:48:38.215318 systemd[1]: dracut-cmdline.service: Deactivated successfully. Sep 10 23:48:38.215468 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Sep 10 23:48:38.222753 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 10 23:48:38.222850 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 10 23:48:38.231792 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Sep 10 23:48:38.244172 systemd[1]: systemd-network-generator.service: Deactivated successfully. Sep 10 23:48:38.244297 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Sep 10 23:48:38.250722 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Sep 10 23:48:38.250815 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 10 23:48:38.261704 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 10 23:48:38.261791 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 10 23:48:38.270109 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully. Sep 10 23:48:38.270230 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Sep 10 23:48:38.270317 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Sep 10 23:48:38.271095 systemd[1]: network-cleanup.service: Deactivated successfully. Sep 10 23:48:38.271454 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Sep 10 23:48:38.290344 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Sep 10 23:48:38.291022 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Sep 10 23:48:38.299234 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Sep 10 23:48:38.306602 systemd[1]: Starting initrd-switch-root.service - Switch Root... Sep 10 23:48:38.345251 systemd[1]: Switching root. Sep 10 23:48:38.409601 systemd-journald[258]: Journal stopped Sep 10 23:48:40.947560 systemd-journald[258]: Received SIGTERM from PID 1 (systemd). 
Sep 10 23:48:40.947690 kernel: SELinux: policy capability network_peer_controls=1 Sep 10 23:48:40.947740 kernel: SELinux: policy capability open_perms=1 Sep 10 23:48:40.947770 kernel: SELinux: policy capability extended_socket_class=1 Sep 10 23:48:40.947799 kernel: SELinux: policy capability always_check_network=0 Sep 10 23:48:40.947836 kernel: SELinux: policy capability cgroup_seclabel=1 Sep 10 23:48:40.947867 kernel: SELinux: policy capability nnp_nosuid_transition=1 Sep 10 23:48:40.947897 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Sep 10 23:48:40.947924 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Sep 10 23:48:40.947954 kernel: SELinux: policy capability userspace_initial_context=0 Sep 10 23:48:40.947982 kernel: audit: type=1403 audit(1757548118.922:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Sep 10 23:48:40.948027 systemd[1]: Successfully loaded SELinux policy in 73.502ms. Sep 10 23:48:40.948071 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 28.026ms. Sep 10 23:48:40.948107 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Sep 10 23:48:40.948139 systemd[1]: Detected virtualization amazon. Sep 10 23:48:40.948172 systemd[1]: Detected architecture arm64. Sep 10 23:48:40.948202 systemd[1]: Detected first boot. Sep 10 23:48:40.948233 systemd[1]: Initializing machine ID from VM UUID. Sep 10 23:48:40.948264 zram_generator::config[1437]: No configuration found. Sep 10 23:48:40.948296 kernel: NET: Registered PF_VSOCK protocol family Sep 10 23:48:40.954565 systemd[1]: Populated /etc with preset unit settings. Sep 10 23:48:40.954643 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Sep 10 23:48:40.954676 systemd[1]: initrd-switch-root.service: Deactivated successfully. Sep 10 23:48:40.954713 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Sep 10 23:48:40.954747 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Sep 10 23:48:40.954782 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Sep 10 23:48:40.954816 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Sep 10 23:48:40.954850 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Sep 10 23:48:40.954890 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Sep 10 23:48:40.954924 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Sep 10 23:48:40.954958 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Sep 10 23:48:40.955003 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Sep 10 23:48:40.955037 systemd[1]: Created slice user.slice - User and Session Slice. Sep 10 23:48:40.955068 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 10 23:48:40.955100 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 10 23:48:40.955132 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. 
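The early userspace lines above also carry useful timing data: systemd reports loading the SELinux policy in 73.502ms and relabeling /dev/, /dev/shm/ and /run/ in 28.026ms. When comparing boots it can help to pull those figures out of a saved log; a small sketch, assuming the journal text is available in the plain-text form shown here.

```python
import re

# Matches the two timing messages systemd prints during early SELinux setup.
PATTERNS = {
    "policy_load_ms": re.compile(r"Successfully loaded SELinux policy in ([\d.]+)ms"),
    "relabel_ms": re.compile(r"Relabeled \S+.* in ([\d.]+)ms"),
}

def selinux_timings(log_text: str) -> dict:
    timings = {}
    for name, pattern in PATTERNS.items():
        match = pattern.search(log_text)
        if match:
            timings[name] = float(match.group(1))
    return timings

sample = (
    "systemd[1]: Successfully loaded SELinux policy in 73.502ms. "
    "systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 28.026ms."
)
print(selinux_timings(sample))  # {'policy_load_ms': 73.502, 'relabel_ms': 28.026}
```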
Sep 10 23:48:40.955165 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Sep 10 23:48:40.955203 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Sep 10 23:48:40.955234 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 10 23:48:40.955263 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Sep 10 23:48:40.955293 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 10 23:48:40.957245 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 10 23:48:40.957300 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Sep 10 23:48:40.957365 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Sep 10 23:48:40.957411 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Sep 10 23:48:40.957442 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Sep 10 23:48:40.957471 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 10 23:48:40.957504 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 10 23:48:40.957534 systemd[1]: Reached target slices.target - Slice Units. Sep 10 23:48:40.957566 systemd[1]: Reached target swap.target - Swaps. Sep 10 23:48:40.957598 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Sep 10 23:48:40.957629 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Sep 10 23:48:40.957663 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Sep 10 23:48:40.957700 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 10 23:48:40.957737 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 10 23:48:40.957770 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 10 23:48:40.957804 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Sep 10 23:48:40.957838 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Sep 10 23:48:40.957872 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Sep 10 23:48:40.957902 systemd[1]: Mounting media.mount - External Media Directory... Sep 10 23:48:40.957931 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Sep 10 23:48:40.957960 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Sep 10 23:48:40.957994 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Sep 10 23:48:40.958035 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Sep 10 23:48:40.958071 systemd[1]: Reached target machines.target - Containers. Sep 10 23:48:40.958103 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Sep 10 23:48:40.958135 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 10 23:48:40.958165 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 10 23:48:40.958195 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Sep 10 23:48:40.958227 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... 
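Several unit names in this part of the log look mangled at first sight (dev-disk-by\x2dlabel-OEM.device, run-credentials-systemd\x2dresolved.service.mount, system-serial\x2dgetty.slice). That is systemd's unit-name escaping for names derived from paths: '/' separators become '-', and characters that would otherwise be ambiguous in a unit name, including a literal '-', are hex-encoded as \xNN. A rough approximation of the path case, for illustration only; the authoritative rules are in systemd.unit(5) and systemd-escape(1).

```python
def escape_component(component: str) -> str:
    # Keep alphanumerics, ':', '_' and non-leading '.'; hex-escape everything else.
    out = []
    for i, ch in enumerate(component):
        if ch.isalnum() or ch in ":_" or (ch == "." and i > 0):
            out.append(ch)
        else:
            out.append("\\x%02x" % ord(ch))
    return "".join(out)

def systemd_escape_path(path: str) -> str:
    # '/' separators turn into '-'; each path component is escaped individually.
    parts = [p for p in path.strip("/").split("/") if p]
    return "-".join(escape_component(p) for p in parts) if parts else "-"

print(systemd_escape_path("/dev/disk/by-label/OEM") + ".device")
# dev-disk-by\x2dlabel-OEM.device
print(systemd_escape_path("/run/credentials/systemd-resolved.service") + ".mount")
# run-credentials-systemd\x2dresolved.service.mount
```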
Sep 10 23:48:40.958263 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 10 23:48:40.958299 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 10 23:48:40.958365 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Sep 10 23:48:40.958402 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 10 23:48:40.958431 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Sep 10 23:48:40.958464 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Sep 10 23:48:40.958495 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Sep 10 23:48:40.958525 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Sep 10 23:48:40.958555 systemd[1]: Stopped systemd-fsck-usr.service. Sep 10 23:48:40.958594 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 10 23:48:40.958628 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 10 23:48:40.958658 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 10 23:48:40.958686 kernel: loop: module loaded Sep 10 23:48:40.958727 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Sep 10 23:48:40.958756 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Sep 10 23:48:40.958785 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Sep 10 23:48:40.958812 kernel: fuse: init (API version 7.41) Sep 10 23:48:40.958842 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 10 23:48:40.958874 systemd[1]: verity-setup.service: Deactivated successfully. Sep 10 23:48:40.958909 systemd[1]: Stopped verity-setup.service. Sep 10 23:48:40.958948 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Sep 10 23:48:40.958980 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Sep 10 23:48:40.959016 systemd[1]: Mounted media.mount - External Media Directory. Sep 10 23:48:40.959047 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Sep 10 23:48:40.959077 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Sep 10 23:48:40.959107 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Sep 10 23:48:40.959137 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 10 23:48:40.959165 systemd[1]: modprobe@configfs.service: Deactivated successfully. Sep 10 23:48:40.959198 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Sep 10 23:48:40.959232 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 10 23:48:40.959265 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 10 23:48:40.959292 kernel: ACPI: bus type drm_connector registered Sep 10 23:48:40.969563 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 10 23:48:40.969617 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 10 23:48:40.969648 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 10 23:48:40.969739 systemd-journald[1516]: Collecting audit messages is disabled. 
Sep 10 23:48:40.969806 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 10 23:48:40.969838 systemd[1]: modprobe@fuse.service: Deactivated successfully. Sep 10 23:48:40.969868 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Sep 10 23:48:40.969899 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 10 23:48:40.969928 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 10 23:48:40.969960 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 10 23:48:40.969993 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Sep 10 23:48:40.970025 systemd-journald[1516]: Journal started Sep 10 23:48:40.970078 systemd-journald[1516]: Runtime Journal (/run/log/journal/ec294c7debb3f3d0e138347bc362c48c) is 8M, max 75.3M, 67.3M free. Sep 10 23:48:40.299303 systemd[1]: Queued start job for default target multi-user.target. Sep 10 23:48:40.322444 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. Sep 10 23:48:40.323258 systemd[1]: systemd-journald.service: Deactivated successfully. Sep 10 23:48:40.984026 systemd[1]: Started systemd-journald.service - Journal Service. Sep 10 23:48:40.990445 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Sep 10 23:48:40.998418 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Sep 10 23:48:41.013418 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Sep 10 23:48:41.034910 systemd[1]: Reached target network-pre.target - Preparation for Network. Sep 10 23:48:41.041702 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Sep 10 23:48:41.053500 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Sep 10 23:48:41.055977 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Sep 10 23:48:41.056040 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 10 23:48:41.062510 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Sep 10 23:48:41.073538 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Sep 10 23:48:41.078779 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 10 23:48:41.085313 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Sep 10 23:48:41.098278 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Sep 10 23:48:41.104095 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 10 23:48:41.113716 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Sep 10 23:48:41.117872 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 10 23:48:41.124190 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 10 23:48:41.135831 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Sep 10 23:48:41.150697 systemd[1]: Starting systemd-sysusers.service - Create System Users... 
Sep 10 23:48:41.160947 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Sep 10 23:48:41.166223 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Sep 10 23:48:41.188957 systemd-journald[1516]: Time spent on flushing to /var/log/journal/ec294c7debb3f3d0e138347bc362c48c is 118.191ms for 935 entries. Sep 10 23:48:41.188957 systemd-journald[1516]: System Journal (/var/log/journal/ec294c7debb3f3d0e138347bc362c48c) is 8M, max 195.6M, 187.6M free. Sep 10 23:48:41.347629 systemd-journald[1516]: Received client request to flush runtime journal. Sep 10 23:48:41.347749 kernel: loop0: detected capacity change from 0 to 211168 Sep 10 23:48:41.347798 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Sep 10 23:48:41.206567 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Sep 10 23:48:41.212726 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Sep 10 23:48:41.229725 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Sep 10 23:48:41.285461 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 10 23:48:41.319173 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 10 23:48:41.354817 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Sep 10 23:48:41.365616 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Sep 10 23:48:41.371405 systemd[1]: Finished systemd-sysusers.service - Create System Users. Sep 10 23:48:41.387441 kernel: loop1: detected capacity change from 0 to 61240 Sep 10 23:48:41.376891 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Sep 10 23:48:41.392581 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 10 23:48:41.453850 kernel: loop2: detected capacity change from 0 to 107312 Sep 10 23:48:41.482115 systemd-tmpfiles[1590]: ACLs are not supported, ignoring. Sep 10 23:48:41.482147 systemd-tmpfiles[1590]: ACLs are not supported, ignoring. Sep 10 23:48:41.493229 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 10 23:48:41.576369 kernel: loop3: detected capacity change from 0 to 138376 Sep 10 23:48:41.703362 kernel: loop4: detected capacity change from 0 to 211168 Sep 10 23:48:41.741357 kernel: loop5: detected capacity change from 0 to 61240 Sep 10 23:48:41.764256 kernel: loop6: detected capacity change from 0 to 107312 Sep 10 23:48:41.779371 kernel: loop7: detected capacity change from 0 to 138376 Sep 10 23:48:41.794737 (sd-merge)[1598]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'. Sep 10 23:48:41.795751 (sd-merge)[1598]: Merged extensions into '/usr'. Sep 10 23:48:41.808062 systemd[1]: Reload requested from client PID 1571 ('systemd-sysext') (unit systemd-sysext.service)... Sep 10 23:48:41.808095 systemd[1]: Reloading... Sep 10 23:48:41.991386 zram_generator::config[1624]: No configuration found. Sep 10 23:48:42.217641 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 10 23:48:42.425125 systemd[1]: Reloading finished in 616 ms. Sep 10 23:48:42.449107 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. 
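The "Merged extensions into '/usr'" step is systemd-sysext picking up the extension images staged earlier: the Flatcar-provided containerd-flatcar, docker-flatcar and oem-ami images plus the kubernetes image that Ignition downloaded to /opt/extensions and exposed through the /etc/extensions/kubernetes.raw symlink (ops (a)/(b) above). A small pathlib sketch of that link layout, purely illustrative; the demo root below is made up, and this is not how Ignition itself writes the link.

```python
from pathlib import Path

# Illustrative prefix only; during the initramfs the real root is mounted at /sysroot.
root = Path("/tmp/sysroot-demo")

image = root / "opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
link = root / "etc/extensions/kubernetes.raw"

image.parent.mkdir(parents=True, exist_ok=True)
image.touch()  # stand-in for the downloaded sysext image
link.parent.mkdir(parents=True, exist_ok=True)
if not link.is_symlink():
    # systemd-sysext scans /etc/extensions (and /run, /var/lib/extensions) for *.raw
    link.symlink_to("/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw")

for entry in sorted((root / "etc/extensions").iterdir()):
    print(entry.name, "->", entry.readlink() if entry.is_symlink() else "(regular file)")
```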
Sep 10 23:48:42.467885 systemd[1]: Starting ensure-sysext.service... Sep 10 23:48:42.474200 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 10 23:48:42.511450 systemd[1]: Reload requested from client PID 1675 ('systemctl') (unit ensure-sysext.service)... Sep 10 23:48:42.511483 systemd[1]: Reloading... Sep 10 23:48:42.598286 systemd-tmpfiles[1676]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Sep 10 23:48:42.598418 systemd-tmpfiles[1676]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Sep 10 23:48:42.599100 systemd-tmpfiles[1676]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Sep 10 23:48:42.599805 systemd-tmpfiles[1676]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Sep 10 23:48:42.606766 systemd-tmpfiles[1676]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Sep 10 23:48:42.609452 systemd-tmpfiles[1676]: ACLs are not supported, ignoring. Sep 10 23:48:42.609638 systemd-tmpfiles[1676]: ACLs are not supported, ignoring. Sep 10 23:48:42.633897 systemd-tmpfiles[1676]: Detected autofs mount point /boot during canonicalization of boot. Sep 10 23:48:42.633929 systemd-tmpfiles[1676]: Skipping /boot Sep 10 23:48:42.684397 ldconfig[1566]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Sep 10 23:48:42.691264 systemd-tmpfiles[1676]: Detected autofs mount point /boot during canonicalization of boot. Sep 10 23:48:42.693544 systemd-tmpfiles[1676]: Skipping /boot Sep 10 23:48:42.716365 zram_generator::config[1710]: No configuration found. Sep 10 23:48:42.920313 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 10 23:48:43.104955 systemd[1]: Reloading finished in 592 ms. Sep 10 23:48:43.120610 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Sep 10 23:48:43.123893 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Sep 10 23:48:43.143620 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 10 23:48:43.161762 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 10 23:48:43.170748 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Sep 10 23:48:43.176914 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Sep 10 23:48:43.189053 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 10 23:48:43.199921 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 10 23:48:43.205987 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Sep 10 23:48:43.214430 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 10 23:48:43.221058 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 10 23:48:43.232155 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 10 23:48:43.237474 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... 
Sep 10 23:48:43.239895 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 10 23:48:43.240133 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 10 23:48:43.247321 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Sep 10 23:48:43.255003 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 10 23:48:43.255413 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 10 23:48:43.255625 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 10 23:48:43.265364 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 10 23:48:43.285067 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 10 23:48:43.287698 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 10 23:48:43.287992 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 10 23:48:43.288431 systemd[1]: Reached target time-set.target - System Time Set. Sep 10 23:48:43.305159 systemd[1]: Finished ensure-sysext.service. Sep 10 23:48:43.352476 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Sep 10 23:48:43.380916 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 10 23:48:43.383465 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 10 23:48:43.387199 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 10 23:48:43.388803 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 10 23:48:43.391988 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 10 23:48:43.392444 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 10 23:48:43.404111 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 10 23:48:43.408568 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Sep 10 23:48:43.416215 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 10 23:48:43.417617 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 10 23:48:43.421857 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 10 23:48:43.421943 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 10 23:48:43.426020 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. 
Sep 10 23:48:43.435720 systemd[1]: Starting systemd-update-done.service - Update is Completed... Sep 10 23:48:43.458367 systemd-udevd[1763]: Using default interface naming scheme 'v255'. Sep 10 23:48:43.490485 systemd[1]: Finished systemd-update-done.service - Update is Completed. Sep 10 23:48:43.497391 augenrules[1799]: No rules Sep 10 23:48:43.499692 systemd[1]: audit-rules.service: Deactivated successfully. Sep 10 23:48:43.500216 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 10 23:48:43.528537 systemd[1]: Started systemd-userdbd.service - User Database Manager. Sep 10 23:48:43.542532 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 10 23:48:43.562908 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 10 23:48:43.840654 systemd-networkd[1819]: lo: Link UP Sep 10 23:48:43.840672 systemd-networkd[1819]: lo: Gained carrier Sep 10 23:48:43.845494 systemd-resolved[1762]: Positive Trust Anchors: Sep 10 23:48:43.845532 systemd-resolved[1762]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 10 23:48:43.845600 systemd-resolved[1762]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 10 23:48:43.856250 systemd-networkd[1819]: Enumeration completed Sep 10 23:48:43.856448 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 10 23:48:43.864066 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Sep 10 23:48:43.869752 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Sep 10 23:48:43.883273 systemd-resolved[1762]: Defaulting to hostname 'linux'. Sep 10 23:48:43.892083 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 10 23:48:43.894705 systemd[1]: Reached target network.target - Network. Sep 10 23:48:43.896702 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 10 23:48:43.899378 systemd[1]: Reached target sysinit.target - System Initialization. Sep 10 23:48:43.902600 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Sep 10 23:48:43.906027 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Sep 10 23:48:43.909697 systemd[1]: Started logrotate.timer - Daily rotation of log files. Sep 10 23:48:43.912456 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Sep 10 23:48:43.915296 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Sep 10 23:48:43.918483 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Sep 10 23:48:43.918534 systemd[1]: Reached target paths.target - Path Units. Sep 10 23:48:43.920641 systemd[1]: Reached target timers.target - Timer Units. 
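The "Positive Trust Anchors" line shows the DNSSEC trust anchor systemd-resolved ships by default: a DS record for the root zone with key tag 20326 (the 2017 root KSK), algorithm 8 (RSA/SHA-256) and digest type 2 (SHA-256). A tiny sketch that just splits the record into its named fields:

```python
# Fields of the DS record systemd-resolved lists as its positive trust anchor.
ds = ". IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d"
owner, _cls, _rtype, key_tag, algorithm, digest_type, digest = ds.split()
print({
    "owner": owner,                   # '.' = the DNS root zone
    "key_tag": int(key_tag),          # 20326: the 2017 root KSK
    "algorithm": int(algorithm),      # 8 = RSA/SHA-256
    "digest_type": int(digest_type),  # 2 = SHA-256
    "digest": digest,
})
```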
Sep 10 23:48:43.924150 (udev-worker)[1827]: Network interface NamePolicy= disabled on kernel command line. Sep 10 23:48:43.924958 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Sep 10 23:48:43.936979 systemd[1]: Starting docker.socket - Docker Socket for the API... Sep 10 23:48:43.948682 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Sep 10 23:48:43.951941 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Sep 10 23:48:43.955514 systemd[1]: Reached target ssh-access.target - SSH Access Available. Sep 10 23:48:43.969471 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Sep 10 23:48:43.972971 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Sep 10 23:48:43.979393 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Sep 10 23:48:43.983700 systemd[1]: Listening on docker.socket - Docker Socket for the API. Sep 10 23:48:43.993147 systemd[1]: Reached target sockets.target - Socket Units. Sep 10 23:48:43.995784 systemd[1]: Reached target basic.target - Basic System. Sep 10 23:48:43.998150 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Sep 10 23:48:43.998222 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Sep 10 23:48:44.001928 systemd[1]: Starting containerd.service - containerd container runtime... Sep 10 23:48:44.009674 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Sep 10 23:48:44.015590 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Sep 10 23:48:44.022917 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Sep 10 23:48:44.031346 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Sep 10 23:48:44.044680 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Sep 10 23:48:44.048439 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Sep 10 23:48:44.078933 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Sep 10 23:48:44.084289 systemd[1]: Started ntpd.service - Network Time Service. Sep 10 23:48:44.090216 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Sep 10 23:48:44.098209 systemd[1]: Starting setup-oem.service - Setup OEM... Sep 10 23:48:44.104744 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Sep 10 23:48:44.112753 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Sep 10 23:48:44.125598 jq[1852]: false Sep 10 23:48:44.131000 systemd[1]: Starting systemd-logind.service - User Login Management... Sep 10 23:48:44.136154 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Sep 10 23:48:44.138724 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Sep 10 23:48:44.147267 systemd[1]: Starting update-engine.service - Update Engine... Sep 10 23:48:44.159770 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... 
Sep 10 23:48:44.189523 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Sep 10 23:48:44.194080 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Sep 10 23:48:44.195483 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Sep 10 23:48:44.265153 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Sep 10 23:48:44.289943 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Sep 10 23:48:44.290706 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Sep 10 23:48:44.311630 jq[1868]: true Sep 10 23:48:44.336991 (ntainerd)[1899]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Sep 10 23:48:44.398230 update_engine[1867]: I20250910 23:48:44.394865 1867 main.cc:92] Flatcar Update Engine starting Sep 10 23:48:44.404415 dbus-daemon[1850]: [system] SELinux support is enabled Sep 10 23:48:44.404708 systemd[1]: Started dbus.service - D-Bus System Message Bus. Sep 10 23:48:44.427368 tar[1883]: linux-arm64/LICENSE Sep 10 23:48:44.417674 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Sep 10 23:48:44.417738 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Sep 10 23:48:44.420831 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Sep 10 23:48:44.420879 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Sep 10 23:48:44.433575 jq[1897]: true Sep 10 23:48:44.439386 tar[1883]: linux-arm64/helm Sep 10 23:48:44.449908 systemd[1]: Finished setup-oem.service - Setup OEM. Sep 10 23:48:44.473212 systemd[1]: Started update-engine.service - Update Engine. Sep 10 23:48:44.479389 update_engine[1867]: I20250910 23:48:44.477036 1867 update_check_scheduler.cc:74] Next update check in 2m51s Sep 10 23:48:44.486421 extend-filesystems[1853]: Found /dev/nvme0n1p6 Sep 10 23:48:44.511911 extend-filesystems[1853]: Found /dev/nvme0n1p9 Sep 10 23:48:44.533383 extend-filesystems[1853]: Checking size of /dev/nvme0n1p9 Sep 10 23:48:44.571886 systemd[1]: Started locksmithd.service - Cluster reboot manager. Sep 10 23:48:44.575265 systemd[1]: motdgen.service: Deactivated successfully. Sep 10 23:48:44.575812 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Sep 10 23:48:44.607517 systemd-networkd[1819]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 10 23:48:44.607542 systemd-networkd[1819]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 10 23:48:44.625829 coreos-metadata[1849]: Sep 10 23:48:44.622 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Sep 10 23:48:44.637782 systemd-networkd[1819]: eth0: Link UP Sep 10 23:48:44.638276 systemd-networkd[1819]: eth0: Gained carrier Sep 10 23:48:44.638586 bash[1929]: Updated "/home/core/.ssh/authorized_keys" Sep 10 23:48:44.638313 systemd-networkd[1819]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
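prepare-helm.service, the unit Ignition wrote and preset-enabled earlier, is described as "Unpack helm to /opt/bin", and the tar output interleaved above (linux-arm64/LICENSE, linux-arm64/helm) shows the staged helm-v3.17.3-linux-arm64.tar.gz archive being extracted. The unit's actual ExecStart is not visible in this log, so the following is only an illustrative Python equivalent of that step; the paths mirror the log, everything else is assumption.

```python
import stat
import tarfile
from pathlib import Path

# Paths as they appear in the log; the extraction logic itself is illustrative,
# not the actual contents of prepare-helm.service. Writing to /opt/bin needs root.
archive = Path("/opt/helm-v3.17.3-linux-arm64.tar.gz")
dest_dir = Path("/opt/bin")
member_name = "linux-arm64/helm"  # the tarball nests its binaries under linux-arm64/

dest_dir.mkdir(parents=True, exist_ok=True)
with tarfile.open(archive, "r:gz") as tar:
    member = tar.getmember(member_name)
    with tar.extractfile(member) as src, open(dest_dir / "helm", "wb") as dst:
        dst.write(src.read())

# Make the extracted binary executable.
target = dest_dir / "helm"
target.chmod(target.stat().st_mode | stat.S_IXUSR | stat.S_IXGRP | stat.S_IXOTH)
print("installed", target)
```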
Sep 10 23:48:44.644290 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Sep 10 23:48:44.653709 systemd[1]: Starting sshkeys.service... Sep 10 23:48:44.656006 extend-filesystems[1853]: Resized partition /dev/nvme0n1p9 Sep 10 23:48:44.670776 extend-filesystems[1934]: resize2fs 1.47.2 (1-Jan-2025) Sep 10 23:48:44.699351 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks Sep 10 23:48:44.701689 dbus-daemon[1850]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1819 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Sep 10 23:48:44.701900 systemd-networkd[1819]: eth0: DHCPv4 address 172.31.30.159/20, gateway 172.31.16.1 acquired from 172.31.16.1 Sep 10 23:48:44.719451 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Sep 10 23:48:44.754607 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Sep 10 23:48:44.765905 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Sep 10 23:48:44.810283 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915 Sep 10 23:48:44.847365 extend-filesystems[1934]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Sep 10 23:48:44.847365 extend-filesystems[1934]: old_desc_blocks = 1, new_desc_blocks = 1 Sep 10 23:48:44.847365 extend-filesystems[1934]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long. Sep 10 23:48:44.859593 extend-filesystems[1853]: Resized filesystem in /dev/nvme0n1p9 Sep 10 23:48:44.855642 systemd[1]: extend-filesystems.service: Deactivated successfully. Sep 10 23:48:44.866681 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Sep 10 23:48:44.887527 ntpd[1857]: ntpd 4.2.8p17@1.4004-o Wed Sep 10 21:39:18 UTC 2025 (1): Starting Sep 10 23:48:44.889835 ntpd[1857]: 10 Sep 23:48:44 ntpd[1857]: ntpd 4.2.8p17@1.4004-o Wed Sep 10 21:39:18 UTC 2025 (1): Starting Sep 10 23:48:44.889835 ntpd[1857]: 10 Sep 23:48:44 ntpd[1857]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Sep 10 23:48:44.889835 ntpd[1857]: 10 Sep 23:48:44 ntpd[1857]: ---------------------------------------------------- Sep 10 23:48:44.889835 ntpd[1857]: 10 Sep 23:48:44 ntpd[1857]: ntp-4 is maintained by Network Time Foundation, Sep 10 23:48:44.889835 ntpd[1857]: 10 Sep 23:48:44 ntpd[1857]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Sep 10 23:48:44.889835 ntpd[1857]: 10 Sep 23:48:44 ntpd[1857]: corporation. Support and training for ntp-4 are Sep 10 23:48:44.889835 ntpd[1857]: 10 Sep 23:48:44 ntpd[1857]: available at https://www.nwtime.org/support Sep 10 23:48:44.889835 ntpd[1857]: 10 Sep 23:48:44 ntpd[1857]: ---------------------------------------------------- Sep 10 23:48:44.887578 ntpd[1857]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Sep 10 23:48:44.887595 ntpd[1857]: ---------------------------------------------------- Sep 10 23:48:44.887612 ntpd[1857]: ntp-4 is maintained by Network Time Foundation, Sep 10 23:48:44.887627 ntpd[1857]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Sep 10 23:48:44.887642 ntpd[1857]: corporation. 
Support and training for ntp-4 are Sep 10 23:48:44.887659 ntpd[1857]: available at https://www.nwtime.org/support Sep 10 23:48:44.887674 ntpd[1857]: ---------------------------------------------------- Sep 10 23:48:44.895792 ntpd[1857]: proto: precision = 0.096 usec (-23) Sep 10 23:48:44.896856 ntpd[1857]: 10 Sep 23:48:44 ntpd[1857]: proto: precision = 0.096 usec (-23) Sep 10 23:48:44.899236 ntpd[1857]: basedate set to 2025-08-29 Sep 10 23:48:44.899298 ntpd[1857]: gps base set to 2025-08-31 (week 2382) Sep 10 23:48:44.899526 ntpd[1857]: 10 Sep 23:48:44 ntpd[1857]: basedate set to 2025-08-29 Sep 10 23:48:44.899526 ntpd[1857]: 10 Sep 23:48:44 ntpd[1857]: gps base set to 2025-08-31 (week 2382) Sep 10 23:48:44.918930 ntpd[1857]: Listen and drop on 0 v6wildcard [::]:123 Sep 10 23:48:44.922713 ntpd[1857]: 10 Sep 23:48:44 ntpd[1857]: Listen and drop on 0 v6wildcard [::]:123 Sep 10 23:48:44.922713 ntpd[1857]: 10 Sep 23:48:44 ntpd[1857]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Sep 10 23:48:44.922713 ntpd[1857]: 10 Sep 23:48:44 ntpd[1857]: Listen normally on 2 lo 127.0.0.1:123 Sep 10 23:48:44.922713 ntpd[1857]: 10 Sep 23:48:44 ntpd[1857]: Listen normally on 3 eth0 172.31.30.159:123 Sep 10 23:48:44.922713 ntpd[1857]: 10 Sep 23:48:44 ntpd[1857]: Listen normally on 4 lo [::1]:123 Sep 10 23:48:44.922713 ntpd[1857]: 10 Sep 23:48:44 ntpd[1857]: bind(21) AF_INET6 fe80::4bb:3ff:fe8c:abc3%2#123 flags 0x11 failed: Cannot assign requested address Sep 10 23:48:44.922713 ntpd[1857]: 10 Sep 23:48:44 ntpd[1857]: unable to create socket on eth0 (5) for fe80::4bb:3ff:fe8c:abc3%2#123 Sep 10 23:48:44.922713 ntpd[1857]: 10 Sep 23:48:44 ntpd[1857]: failed to init interface for address fe80::4bb:3ff:fe8c:abc3%2 Sep 10 23:48:44.922713 ntpd[1857]: 10 Sep 23:48:44 ntpd[1857]: Listening on routing socket on fd #21 for interface updates Sep 10 23:48:44.919034 ntpd[1857]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Sep 10 23:48:44.919339 ntpd[1857]: Listen normally on 2 lo 127.0.0.1:123 Sep 10 23:48:44.919406 ntpd[1857]: Listen normally on 3 eth0 172.31.30.159:123 Sep 10 23:48:44.919468 ntpd[1857]: Listen normally on 4 lo [::1]:123 Sep 10 23:48:44.919538 ntpd[1857]: bind(21) AF_INET6 fe80::4bb:3ff:fe8c:abc3%2#123 flags 0x11 failed: Cannot assign requested address Sep 10 23:48:44.919575 ntpd[1857]: unable to create socket on eth0 (5) for fe80::4bb:3ff:fe8c:abc3%2#123 Sep 10 23:48:44.919600 ntpd[1857]: failed to init interface for address fe80::4bb:3ff:fe8c:abc3%2 Sep 10 23:48:44.919649 ntpd[1857]: Listening on routing socket on fd #21 for interface updates Sep 10 23:48:44.933545 systemd-logind[1863]: New seat seat0. Sep 10 23:48:44.936481 systemd[1]: Started systemd-logind.service - User Login Management. 
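Two quick back-of-the-envelope checks on figures logged just above: the online ext4 resize grew /dev/nvme0n1p9 from 553472 to 1489915 4 KiB blocks (roughly 2.1 GiB to 5.7 GiB), and the DHCPv4 lease 172.31.30.159/20 with gateway 172.31.16.1 lands in the 172.31.16.0/20 network (4096 addresses). A short stdlib-only verification:

```python
import ipaddress

BLOCK = 4096  # ext4 block size reported by the kernel ("(4k) blocks")

for label, blocks in (("before", 553472), ("after", 1489915)):
    print(f"{label}: {blocks * BLOCK / 2**30:.2f} GiB")
# before: 2.11 GiB
# after: 5.68 GiB

iface = ipaddress.ip_interface("172.31.30.159/20")
print(iface.network)                # 172.31.16.0/20
print(iface.network.num_addresses)  # 4096
print(ipaddress.ip_address("172.31.16.1") in iface.network)  # True
```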
Sep 10 23:48:44.977370 ntpd[1857]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Sep 10 23:48:44.982505 ntpd[1857]: 10 Sep 23:48:44 ntpd[1857]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Sep 10 23:48:44.982505 ntpd[1857]: 10 Sep 23:48:44 ntpd[1857]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Sep 10 23:48:44.977446 ntpd[1857]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Sep 10 23:48:45.232390 containerd[1899]: time="2025-09-10T23:48:45Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Sep 10 23:48:45.242372 containerd[1899]: time="2025-09-10T23:48:45.241144165Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4 Sep 10 23:48:45.289792 coreos-metadata[1936]: Sep 10 23:48:45.289 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Sep 10 23:48:45.290073 locksmithd[1910]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 10 23:48:45.295677 coreos-metadata[1936]: Sep 10 23:48:45.294 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Sep 10 23:48:45.295677 coreos-metadata[1936]: Sep 10 23:48:45.295 INFO Fetch successful Sep 10 23:48:45.295677 coreos-metadata[1936]: Sep 10 23:48:45.295 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Sep 10 23:48:45.300413 coreos-metadata[1936]: Sep 10 23:48:45.298 INFO Fetch successful Sep 10 23:48:45.304979 unknown[1936]: wrote ssh authorized keys file for user: core Sep 10 23:48:45.348629 containerd[1899]: time="2025-09-10T23:48:45.348571165Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="16.188µs" Sep 10 23:48:45.348785 containerd[1899]: time="2025-09-10T23:48:45.348755845Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Sep 10 23:48:45.348892 containerd[1899]: time="2025-09-10T23:48:45.348864937Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Sep 10 23:48:45.349297 containerd[1899]: time="2025-09-10T23:48:45.349261429Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Sep 10 23:48:45.354371 containerd[1899]: time="2025-09-10T23:48:45.353427745Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Sep 10 23:48:45.354371 containerd[1899]: time="2025-09-10T23:48:45.353523097Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Sep 10 23:48:45.354371 containerd[1899]: time="2025-09-10T23:48:45.353691649Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Sep 10 23:48:45.354371 containerd[1899]: time="2025-09-10T23:48:45.353718733Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Sep 10 23:48:45.354371 containerd[1899]: time="2025-09-10T23:48:45.354118081Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Sep 
10 23:48:45.354371 containerd[1899]: time="2025-09-10T23:48:45.354157993Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Sep 10 23:48:45.354371 containerd[1899]: time="2025-09-10T23:48:45.354185641Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Sep 10 23:48:45.354371 containerd[1899]: time="2025-09-10T23:48:45.354209377Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Sep 10 23:48:45.354874 containerd[1899]: time="2025-09-10T23:48:45.354841969Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Sep 10 23:48:45.358690 containerd[1899]: time="2025-09-10T23:48:45.358212697Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Sep 10 23:48:45.358690 containerd[1899]: time="2025-09-10T23:48:45.358310989Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Sep 10 23:48:45.358690 containerd[1899]: time="2025-09-10T23:48:45.358391101Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Sep 10 23:48:45.358690 containerd[1899]: time="2025-09-10T23:48:45.358470469Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Sep 10 23:48:45.360876 containerd[1899]: time="2025-09-10T23:48:45.360742417Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Sep 10 23:48:45.361073 containerd[1899]: time="2025-09-10T23:48:45.360953029Z" level=info msg="metadata content store policy set" policy=shared Sep 10 23:48:45.380478 containerd[1899]: time="2025-09-10T23:48:45.379898041Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Sep 10 23:48:45.380478 containerd[1899]: time="2025-09-10T23:48:45.380017429Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Sep 10 23:48:45.380478 containerd[1899]: time="2025-09-10T23:48:45.380052397Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Sep 10 23:48:45.380478 containerd[1899]: time="2025-09-10T23:48:45.380308393Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Sep 10 23:48:45.380478 containerd[1899]: time="2025-09-10T23:48:45.380390005Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Sep 10 23:48:45.380478 containerd[1899]: time="2025-09-10T23:48:45.380419261Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Sep 10 23:48:45.380478 containerd[1899]: time="2025-09-10T23:48:45.380446813Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Sep 10 23:48:45.380478 containerd[1899]: time="2025-09-10T23:48:45.380475613Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Sep 10 23:48:45.382099 containerd[1899]: time="2025-09-10T23:48:45.380505589Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service 
type=io.containerd.service.v1 Sep 10 23:48:45.382099 containerd[1899]: time="2025-09-10T23:48:45.380534269Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Sep 10 23:48:45.382099 containerd[1899]: time="2025-09-10T23:48:45.380560369Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Sep 10 23:48:45.382099 containerd[1899]: time="2025-09-10T23:48:45.380589385Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Sep 10 23:48:45.382099 containerd[1899]: time="2025-09-10T23:48:45.380817229Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Sep 10 23:48:45.382099 containerd[1899]: time="2025-09-10T23:48:45.380858509Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Sep 10 23:48:45.382099 containerd[1899]: time="2025-09-10T23:48:45.380910061Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Sep 10 23:48:45.382099 containerd[1899]: time="2025-09-10T23:48:45.380941849Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Sep 10 23:48:45.382099 containerd[1899]: time="2025-09-10T23:48:45.380968045Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Sep 10 23:48:45.382099 containerd[1899]: time="2025-09-10T23:48:45.380994133Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Sep 10 23:48:45.382099 containerd[1899]: time="2025-09-10T23:48:45.381023857Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Sep 10 23:48:45.382099 containerd[1899]: time="2025-09-10T23:48:45.381051073Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Sep 10 23:48:45.382099 containerd[1899]: time="2025-09-10T23:48:45.381079441Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Sep 10 23:48:45.382099 containerd[1899]: time="2025-09-10T23:48:45.381113905Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Sep 10 23:48:45.382099 containerd[1899]: time="2025-09-10T23:48:45.381140725Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Sep 10 23:48:45.388952 containerd[1899]: time="2025-09-10T23:48:45.388783549Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Sep 10 23:48:45.388952 containerd[1899]: time="2025-09-10T23:48:45.388858633Z" level=info msg="Start snapshots syncer" Sep 10 23:48:45.388952 containerd[1899]: time="2025-09-10T23:48:45.388926829Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Sep 10 23:48:45.392033 update-ssh-keys[2008]: Updated "/home/core/.ssh/authorized_keys" Sep 10 23:48:45.396386 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). 
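The coreos-metadata fetches above follow the IMDSv2 flow: a PUT to /latest/api/token obtains a session token, and subsequent GETs against the 2021-01-03 metadata tree present that token. A minimal sketch of the same flow (it only works from inside an EC2 instance; the endpoints are taken from the log, the TTL and timeout values are arbitrary):

```python
import urllib.request

IMDS = "http://169.254.169.254"

# Step 1: obtain an IMDSv2 session token via PUT.
token_req = urllib.request.Request(
    f"{IMDS}/latest/api/token",
    method="PUT",
    headers={"X-aws-ec2-metadata-token-ttl-seconds": "60"},
)
token = urllib.request.urlopen(token_req, timeout=2).read().decode()

def fetch(path: str) -> str:
    # Step 2: present the token on every metadata read.
    req = urllib.request.Request(
        f"{IMDS}{path}", headers={"X-aws-ec2-metadata-token": token}
    )
    return urllib.request.urlopen(req, timeout=2).read().decode()

# Same endpoints the agent logs: list key indexes, then fetch the key itself.
print(fetch("/2021-01-03/meta-data/public-keys"))
print(fetch("/2021-01-03/meta-data/public-keys/0/openssh-key"))
```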
Sep 10 23:48:45.404361 containerd[1899]: time="2025-09-10T23:48:45.401372773Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Sep 10 23:48:45.408078 containerd[1899]: time="2025-09-10T23:48:45.406068097Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Sep 10 23:48:45.408078 containerd[1899]: time="2025-09-10T23:48:45.406373305Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Sep 10 23:48:45.406398 systemd[1]: Finished sshkeys.service. 
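The "starting cri plugin" entry above dumps the plugin's effective configuration as a single escaped JSON string, which is hard to scan for settings such as SystemdCgroup or the CNI directories. A quick way to read it is to paste the blob into a small pretty-printer; the string below is a trimmed stand-in for the real one:

```python
import json

# Trimmed stand-in for the JSON blob logged by the cri plugin above.
blob = (
    '{"containerd":{"defaultRuntimeName":"runc"},'
    '"cni":{"binDir":"/opt/cni/bin","confDir":"/etc/cni/net.d"},'
    '"enableSelinux":true,"enableCDI":true}'
)
print(json.dumps(json.loads(blob), indent=2, sort_keys=True))
```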
Sep 10 23:48:45.414259 containerd[1899]: time="2025-09-10T23:48:45.413124961Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Sep 10 23:48:45.414259 containerd[1899]: time="2025-09-10T23:48:45.413232769Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Sep 10 23:48:45.414259 containerd[1899]: time="2025-09-10T23:48:45.413300317Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Sep 10 23:48:45.414259 containerd[1899]: time="2025-09-10T23:48:45.413364589Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Sep 10 23:48:45.414259 containerd[1899]: time="2025-09-10T23:48:45.413408269Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Sep 10 23:48:45.414259 containerd[1899]: time="2025-09-10T23:48:45.413474269Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Sep 10 23:48:45.414259 containerd[1899]: time="2025-09-10T23:48:45.413540437Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Sep 10 23:48:45.414259 containerd[1899]: time="2025-09-10T23:48:45.413653141Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Sep 10 23:48:45.414259 containerd[1899]: time="2025-09-10T23:48:45.413719837Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Sep 10 23:48:45.414259 containerd[1899]: time="2025-09-10T23:48:45.413757877Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Sep 10 23:48:45.414953 containerd[1899]: time="2025-09-10T23:48:45.414835573Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Sep 10 23:48:45.415040 containerd[1899]: time="2025-09-10T23:48:45.414919645Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Sep 10 23:48:45.418373 containerd[1899]: time="2025-09-10T23:48:45.417472621Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Sep 10 23:48:45.418373 containerd[1899]: time="2025-09-10T23:48:45.417540985Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Sep 10 23:48:45.418373 containerd[1899]: time="2025-09-10T23:48:45.417570061Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Sep 10 23:48:45.418373 containerd[1899]: time="2025-09-10T23:48:45.417609745Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Sep 10 23:48:45.418373 containerd[1899]: time="2025-09-10T23:48:45.417654241Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Sep 10 23:48:45.418373 containerd[1899]: time="2025-09-10T23:48:45.417825985Z" level=info msg="runtime interface created" Sep 10 23:48:45.418373 containerd[1899]: time="2025-09-10T23:48:45.417852337Z" level=info msg="created NRI interface" Sep 10 23:48:45.418373 containerd[1899]: time="2025-09-10T23:48:45.417875809Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 
Sep 10 23:48:45.418373 containerd[1899]: time="2025-09-10T23:48:45.417916417Z" level=info msg="Connect containerd service" Sep 10 23:48:45.418373 containerd[1899]: time="2025-09-10T23:48:45.417995173Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Sep 10 23:48:45.428879 containerd[1899]: time="2025-09-10T23:48:45.428407646Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 10 23:48:45.453689 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Sep 10 23:48:45.463355 dbus-daemon[1850]: [system] Successfully activated service 'org.freedesktop.hostname1' Sep 10 23:48:45.468661 dbus-daemon[1850]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.7' (uid=0 pid=1935 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Sep 10 23:48:45.479270 systemd[1]: Starting polkit.service - Authorization Manager... Sep 10 23:48:45.587696 sshd_keygen[1902]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 10 23:48:45.693033 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Sep 10 23:48:45.732380 coreos-metadata[1849]: Sep 10 23:48:45.730 INFO Putting http://169.254.169.254/latest/api/token: Attempt #2 Sep 10 23:48:45.732380 coreos-metadata[1849]: Sep 10 23:48:45.731 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Sep 10 23:48:45.733068 coreos-metadata[1849]: Sep 10 23:48:45.732 INFO Fetch successful Sep 10 23:48:45.733068 coreos-metadata[1849]: Sep 10 23:48:45.732 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Sep 10 23:48:45.737382 coreos-metadata[1849]: Sep 10 23:48:45.734 INFO Fetch successful Sep 10 23:48:45.737382 coreos-metadata[1849]: Sep 10 23:48:45.734 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Sep 10 23:48:45.737382 coreos-metadata[1849]: Sep 10 23:48:45.735 INFO Fetch successful Sep 10 23:48:45.737382 coreos-metadata[1849]: Sep 10 23:48:45.735 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Sep 10 23:48:45.737382 coreos-metadata[1849]: Sep 10 23:48:45.736 INFO Fetch successful Sep 10 23:48:45.737382 coreos-metadata[1849]: Sep 10 23:48:45.736 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Sep 10 23:48:45.738665 coreos-metadata[1849]: Sep 10 23:48:45.738 INFO Fetch failed with 404: resource not found Sep 10 23:48:45.738665 coreos-metadata[1849]: Sep 10 23:48:45.738 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Sep 10 23:48:45.741791 coreos-metadata[1849]: Sep 10 23:48:45.741 INFO Fetch successful Sep 10 23:48:45.744139 coreos-metadata[1849]: Sep 10 23:48:45.741 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Sep 10 23:48:45.744139 coreos-metadata[1849]: Sep 10 23:48:45.743 INFO Fetch successful Sep 10 23:48:45.744139 coreos-metadata[1849]: Sep 10 23:48:45.743 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Sep 10 23:48:45.744481 coreos-metadata[1849]: Sep 10 23:48:45.744 INFO Fetch successful Sep 10 23:48:45.746367 coreos-metadata[1849]: Sep 10 23:48:45.745 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: 
Attempt #1 Sep 10 23:48:45.748210 coreos-metadata[1849]: Sep 10 23:48:45.747 INFO Fetch successful Sep 10 23:48:45.748210 coreos-metadata[1849]: Sep 10 23:48:45.748 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Sep 10 23:48:45.755357 coreos-metadata[1849]: Sep 10 23:48:45.752 INFO Fetch successful Sep 10 23:48:45.853855 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Sep 10 23:48:45.864047 systemd[1]: Starting issuegen.service - Generate /run/issue... Sep 10 23:48:45.872661 systemd[1]: Started sshd@0-172.31.30.159:22-139.178.68.195:59188.service - OpenSSH per-connection server daemon (139.178.68.195:59188). Sep 10 23:48:45.888750 ntpd[1857]: bind(24) AF_INET6 fe80::4bb:3ff:fe8c:abc3%2#123 flags 0x11 failed: Cannot assign requested address Sep 10 23:48:45.889453 ntpd[1857]: 10 Sep 23:48:45 ntpd[1857]: bind(24) AF_INET6 fe80::4bb:3ff:fe8c:abc3%2#123 flags 0x11 failed: Cannot assign requested address Sep 10 23:48:45.889453 ntpd[1857]: 10 Sep 23:48:45 ntpd[1857]: unable to create socket on eth0 (6) for fe80::4bb:3ff:fe8c:abc3%2#123 Sep 10 23:48:45.889453 ntpd[1857]: 10 Sep 23:48:45 ntpd[1857]: failed to init interface for address fe80::4bb:3ff:fe8c:abc3%2 Sep 10 23:48:45.888805 ntpd[1857]: unable to create socket on eth0 (6) for fe80::4bb:3ff:fe8c:abc3%2#123 Sep 10 23:48:45.888830 ntpd[1857]: failed to init interface for address fe80::4bb:3ff:fe8c:abc3%2 Sep 10 23:48:45.920807 containerd[1899]: time="2025-09-10T23:48:45.919935328Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 10 23:48:45.920807 containerd[1899]: time="2025-09-10T23:48:45.920395372Z" level=info msg="Start subscribing containerd event" Sep 10 23:48:45.920807 containerd[1899]: time="2025-09-10T23:48:45.920575696Z" level=info msg=serving... address=/run/containerd/containerd.sock Sep 10 23:48:45.920807 containerd[1899]: time="2025-09-10T23:48:45.920600680Z" level=info msg="Start recovering state" Sep 10 23:48:45.920807 containerd[1899]: time="2025-09-10T23:48:45.920732992Z" level=info msg="Start event monitor" Sep 10 23:48:45.920807 containerd[1899]: time="2025-09-10T23:48:45.920759104Z" level=info msg="Start cni network conf syncer for default" Sep 10 23:48:45.920807 containerd[1899]: time="2025-09-10T23:48:45.920777404Z" level=info msg="Start streaming server" Sep 10 23:48:45.920807 containerd[1899]: time="2025-09-10T23:48:45.920797384Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Sep 10 23:48:45.920807 containerd[1899]: time="2025-09-10T23:48:45.920815324Z" level=info msg="runtime interface starting up..." Sep 10 23:48:45.921308 containerd[1899]: time="2025-09-10T23:48:45.920830072Z" level=info msg="starting plugins..." Sep 10 23:48:45.921308 containerd[1899]: time="2025-09-10T23:48:45.920858008Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Sep 10 23:48:45.921308 containerd[1899]: time="2025-09-10T23:48:45.921100120Z" level=info msg="containerd successfully booted in 0.689436s" Sep 10 23:48:45.921465 systemd[1]: Started containerd.service - containerd container runtime. Sep 10 23:48:45.983855 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Sep 10 23:48:46.022740 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Sep 10 23:48:46.048091 systemd[1]: issuegen.service: Deactivated successfully. Sep 10 23:48:46.048971 systemd[1]: Finished issuegen.service - Generate /run/issue. 
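The earlier "no network config found in /etc/cni/net.d" error is expected at this stage: the CNI conf syncer has just started and no pod network has been installed yet, so the CRI plugin keeps retrying until a conflist appears. For illustration only, a sketch of the kind of file it is waiting for, with an invented name and subnet (on a real node the cluster's network addon writes its own configuration here):

```python
import json
import pathlib

# Illustrative only: file name, network name and subnet are invented for the
# sketch; a real node gets its conflist from the cluster's network addon.
conf_dir = pathlib.Path("/etc/cni/net.d")
conflist = {
    "cniVersion": "0.4.0",
    "name": "example-bridge",
    "plugins": [
        {
            "type": "bridge",
            "bridge": "cni0",
            "isGateway": True,
            "ipMasq": True,
            "ipam": {"type": "host-local", "subnet": "10.88.0.0/16"},
        }
    ],
}
conf_dir.mkdir(parents=True, exist_ok=True)
(conf_dir / "10-example.conflist").write_text(json.dumps(conflist, indent=2) + "\n")
```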
Sep 10 23:48:46.059499 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Sep 10 23:48:46.093837 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 10 23:48:46.109371 systemd-logind[1863]: Watching system buttons on /dev/input/event0 (Power Button) Sep 10 23:48:46.157721 systemd-networkd[1819]: eth0: Gained IPv6LL Sep 10 23:48:46.162439 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Sep 10 23:48:46.179782 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Sep 10 23:48:46.207489 systemd-logind[1863]: Watching system buttons on /dev/input/event1 (Sleep Button) Sep 10 23:48:46.208056 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Sep 10 23:48:46.225725 systemd[1]: Reached target network-online.target - Network is Online. Sep 10 23:48:46.235860 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. Sep 10 23:48:46.244652 systemd[1]: Started getty@tty1.service - Getty on tty1. Sep 10 23:48:46.252827 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 10 23:48:46.261856 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Sep 10 23:48:46.274966 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Sep 10 23:48:46.281759 systemd[1]: Reached target getty.target - Login Prompts. Sep 10 23:48:46.290604 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Sep 10 23:48:46.436751 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Sep 10 23:48:46.449461 sshd[2077]: Accepted publickey for core from 139.178.68.195 port 59188 ssh2: RSA SHA256:ja8Z659dnX0Tz1pZfaOwRz2q/KALpEA2JWSy/+nC98s Sep 10 23:48:46.460663 sshd-session[2077]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 23:48:46.479125 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Sep 10 23:48:46.508701 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 10 23:48:46.543696 systemd-logind[1863]: New session 1 of user core. Sep 10 23:48:46.546771 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Sep 10 23:48:46.553915 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Sep 10 23:48:46.607822 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Sep 10 23:48:46.617964 systemd[1]: Starting user@500.service - User Manager for UID 500... Sep 10 23:48:46.651030 (systemd)[2130]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 10 23:48:46.660493 systemd-logind[1863]: New session c1 of user core. Sep 10 23:48:46.676820 amazon-ssm-agent[2103]: Initializing new seelog logger Sep 10 23:48:46.678365 amazon-ssm-agent[2103]: New Seelog Logger Creation Complete Sep 10 23:48:46.678365 amazon-ssm-agent[2103]: 2025/09/10 23:48:46 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Sep 10 23:48:46.678365 amazon-ssm-agent[2103]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Sep 10 23:48:46.678365 amazon-ssm-agent[2103]: 2025/09/10 23:48:46 processing appconfig overrides Sep 10 23:48:46.681365 amazon-ssm-agent[2103]: 2025/09/10 23:48:46 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. 
Sep 10 23:48:46.681365 amazon-ssm-agent[2103]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Sep 10 23:48:46.681365 amazon-ssm-agent[2103]: 2025/09/10 23:48:46 processing appconfig overrides Sep 10 23:48:46.681365 amazon-ssm-agent[2103]: 2025/09/10 23:48:46 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Sep 10 23:48:46.681365 amazon-ssm-agent[2103]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Sep 10 23:48:46.681365 amazon-ssm-agent[2103]: 2025/09/10 23:48:46 processing appconfig overrides Sep 10 23:48:46.682875 amazon-ssm-agent[2103]: 2025-09-10 23:48:46.6802 INFO Proxy environment variables: Sep 10 23:48:46.686619 amazon-ssm-agent[2103]: 2025/09/10 23:48:46 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Sep 10 23:48:46.686762 amazon-ssm-agent[2103]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Sep 10 23:48:46.686991 amazon-ssm-agent[2103]: 2025/09/10 23:48:46 processing appconfig overrides Sep 10 23:48:46.785956 amazon-ssm-agent[2103]: 2025-09-10 23:48:46.6803 INFO https_proxy: Sep 10 23:48:46.884723 amazon-ssm-agent[2103]: 2025-09-10 23:48:46.6803 INFO http_proxy: Sep 10 23:48:46.986144 amazon-ssm-agent[2103]: 2025-09-10 23:48:46.6803 INFO no_proxy: Sep 10 23:48:47.025549 polkitd[2026]: Started polkitd version 126 Sep 10 23:48:47.028534 systemd[2130]: Queued start job for default target default.target. Sep 10 23:48:47.034557 systemd[2130]: Created slice app.slice - User Application Slice. Sep 10 23:48:47.034629 systemd[2130]: Reached target paths.target - Paths. Sep 10 23:48:47.034734 systemd[2130]: Reached target timers.target - Timers. Sep 10 23:48:47.037179 systemd[2130]: Starting dbus.socket - D-Bus User Message Bus Socket... Sep 10 23:48:47.055078 polkitd[2026]: Loading rules from directory /etc/polkit-1/rules.d Sep 10 23:48:47.055734 polkitd[2026]: Loading rules from directory /run/polkit-1/rules.d Sep 10 23:48:47.055810 polkitd[2026]: Error opening rules directory: Error opening directory “/run/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Sep 10 23:48:47.058071 polkitd[2026]: Loading rules from directory /usr/local/share/polkit-1/rules.d Sep 10 23:48:47.059486 polkitd[2026]: Error opening rules directory: Error opening directory “/usr/local/share/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Sep 10 23:48:47.059578 polkitd[2026]: Loading rules from directory /usr/share/polkit-1/rules.d Sep 10 23:48:47.060786 polkitd[2026]: Finished loading, compiling and executing 2 rules Sep 10 23:48:47.061889 systemd[1]: Started polkit.service - Authorization Manager. Sep 10 23:48:47.069824 dbus-daemon[1850]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Sep 10 23:48:47.073808 systemd[2130]: Listening on dbus.socket - D-Bus User Message Bus Socket. Sep 10 23:48:47.075463 polkitd[2026]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Sep 10 23:48:47.076868 systemd[2130]: Reached target sockets.target - Sockets. Sep 10 23:48:47.076990 systemd[2130]: Reached target basic.target - Basic System. Sep 10 23:48:47.077075 systemd[2130]: Reached target default.target - Main User Target. Sep 10 23:48:47.077136 systemd[2130]: Startup finished in 396ms. Sep 10 23:48:47.077140 systemd[1]: Started user@500.service - User Manager for UID 500. Sep 10 23:48:47.086920 systemd[1]: Started session-1.scope - Session 1 of User core. 
Sep 10 23:48:47.092038 amazon-ssm-agent[2103]: 2025-09-10 23:48:46.6805 INFO Checking if agent identity type OnPrem can be assumed Sep 10 23:48:47.135227 systemd-hostnamed[1935]: Hostname set to (transient) Sep 10 23:48:47.135259 systemd-resolved[1762]: System hostname changed to 'ip-172-31-30-159'. Sep 10 23:48:47.191423 amazon-ssm-agent[2103]: 2025-09-10 23:48:46.6806 INFO Checking if agent identity type EC2 can be assumed Sep 10 23:48:47.263204 systemd[1]: Started sshd@1-172.31.30.159:22-139.178.68.195:59196.service - OpenSSH per-connection server daemon (139.178.68.195:59196). Sep 10 23:48:47.291390 amazon-ssm-agent[2103]: 2025-09-10 23:48:47.0177 INFO Agent will take identity from EC2 Sep 10 23:48:47.388236 tar[1883]: linux-arm64/README.md Sep 10 23:48:47.391172 amazon-ssm-agent[2103]: 2025-09-10 23:48:47.0279 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.3.0.0 Sep 10 23:48:47.421437 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Sep 10 23:48:47.490354 amazon-ssm-agent[2103]: 2025-09-10 23:48:47.0280 INFO [amazon-ssm-agent] OS: linux, Arch: arm64 Sep 10 23:48:47.535788 sshd[2152]: Accepted publickey for core from 139.178.68.195 port 59196 ssh2: RSA SHA256:ja8Z659dnX0Tz1pZfaOwRz2q/KALpEA2JWSy/+nC98s Sep 10 23:48:47.539553 sshd-session[2152]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 23:48:47.558413 systemd-logind[1863]: New session 2 of user core. Sep 10 23:48:47.565357 systemd[1]: Started session-2.scope - Session 2 of User core. Sep 10 23:48:47.589758 amazon-ssm-agent[2103]: 2025-09-10 23:48:47.0280 INFO [amazon-ssm-agent] Starting Core Agent Sep 10 23:48:47.690062 amazon-ssm-agent[2103]: 2025-09-10 23:48:47.0280 INFO [amazon-ssm-agent] Registrar detected. Attempting registration Sep 10 23:48:47.705078 sshd[2157]: Connection closed by 139.178.68.195 port 59196 Sep 10 23:48:47.704878 sshd-session[2152]: pam_unix(sshd:session): session closed for user core Sep 10 23:48:47.717175 systemd[1]: sshd@1-172.31.30.159:22-139.178.68.195:59196.service: Deactivated successfully. Sep 10 23:48:47.724141 systemd[1]: session-2.scope: Deactivated successfully. Sep 10 23:48:47.727779 systemd-logind[1863]: Session 2 logged out. Waiting for processes to exit. Sep 10 23:48:47.749798 systemd[1]: Started sshd@2-172.31.30.159:22-139.178.68.195:59200.service - OpenSSH per-connection server daemon (139.178.68.195:59200). Sep 10 23:48:47.763925 systemd-logind[1863]: Removed session 2. Sep 10 23:48:47.791216 amazon-ssm-agent[2103]: 2025-09-10 23:48:47.0280 INFO [Registrar] Starting registrar module Sep 10 23:48:47.891514 amazon-ssm-agent[2103]: 2025-09-10 23:48:47.0367 INFO [EC2Identity] Checking disk for registration info Sep 10 23:48:47.959876 sshd[2163]: Accepted publickey for core from 139.178.68.195 port 59200 ssh2: RSA SHA256:ja8Z659dnX0Tz1pZfaOwRz2q/KALpEA2JWSy/+nC98s Sep 10 23:48:47.964225 sshd-session[2163]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 23:48:47.979418 systemd-logind[1863]: New session 3 of user core. Sep 10 23:48:47.984606 systemd[1]: Started session-3.scope - Session 3 of User core. 
Sep 10 23:48:47.993551 amazon-ssm-agent[2103]: 2025-09-10 23:48:47.0367 INFO [EC2Identity] No registration info found for ec2 instance, attempting registration Sep 10 23:48:48.093351 amazon-ssm-agent[2103]: 2025-09-10 23:48:47.0367 INFO [EC2Identity] Generating registration keypair Sep 10 23:48:48.122525 sshd[2165]: Connection closed by 139.178.68.195 port 59200 Sep 10 23:48:48.123021 sshd-session[2163]: pam_unix(sshd:session): session closed for user core Sep 10 23:48:48.134709 systemd-logind[1863]: Session 3 logged out. Waiting for processes to exit. Sep 10 23:48:48.135859 systemd[1]: sshd@2-172.31.30.159:22-139.178.68.195:59200.service: Deactivated successfully. Sep 10 23:48:48.141886 systemd[1]: session-3.scope: Deactivated successfully. Sep 10 23:48:48.148940 systemd-logind[1863]: Removed session 3. Sep 10 23:48:48.434603 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 10 23:48:48.438456 systemd[1]: Reached target multi-user.target - Multi-User System. Sep 10 23:48:48.443592 systemd[1]: Startup finished in 3.808s (kernel) + 12.215s (initrd) + 9.592s (userspace) = 25.616s. Sep 10 23:48:48.451146 (kubelet)[2176]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 10 23:48:48.888681 ntpd[1857]: Listen normally on 7 eth0 [fe80::4bb:3ff:fe8c:abc3%2]:123 Sep 10 23:48:48.890624 ntpd[1857]: 10 Sep 23:48:48 ntpd[1857]: Listen normally on 7 eth0 [fe80::4bb:3ff:fe8c:abc3%2]:123 Sep 10 23:48:49.061939 amazon-ssm-agent[2103]: 2025-09-10 23:48:49.0616 INFO [EC2Identity] Checking write access before registering Sep 10 23:48:49.106906 amazon-ssm-agent[2103]: 2025/09/10 23:48:49 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Sep 10 23:48:49.106906 amazon-ssm-agent[2103]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Sep 10 23:48:49.107566 amazon-ssm-agent[2103]: 2025/09/10 23:48:49 processing appconfig overrides Sep 10 23:48:49.147698 amazon-ssm-agent[2103]: 2025-09-10 23:48:49.0643 INFO [EC2Identity] Registering EC2 instance with Systems Manager Sep 10 23:48:49.147698 amazon-ssm-agent[2103]: 2025-09-10 23:48:49.1065 INFO [EC2Identity] EC2 registration was successful. Sep 10 23:48:49.148435 amazon-ssm-agent[2103]: 2025-09-10 23:48:49.1066 INFO [amazon-ssm-agent] Registration attempted. Resuming core agent startup. 
Sep 10 23:48:49.148435 amazon-ssm-agent[2103]: 2025-09-10 23:48:49.1067 INFO [CredentialRefresher] credentialRefresher has started Sep 10 23:48:49.148435 amazon-ssm-agent[2103]: 2025-09-10 23:48:49.1067 INFO [CredentialRefresher] Starting credentials refresher loop Sep 10 23:48:49.148435 amazon-ssm-agent[2103]: 2025-09-10 23:48:49.1470 INFO EC2RoleProvider Successfully connected with instance profile role credentials Sep 10 23:48:49.148435 amazon-ssm-agent[2103]: 2025-09-10 23:48:49.1475 INFO [CredentialRefresher] Credentials ready Sep 10 23:48:49.163146 amazon-ssm-agent[2103]: 2025-09-10 23:48:49.1483 INFO [CredentialRefresher] Next credential rotation will be in 29.9999783133 minutes Sep 10 23:48:49.387053 kubelet[2176]: E0910 23:48:49.386968 2176 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 10 23:48:49.391677 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 10 23:48:49.392412 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 10 23:48:49.393283 systemd[1]: kubelet.service: Consumed 1.445s CPU time, 257M memory peak. Sep 10 23:48:50.174061 amazon-ssm-agent[2103]: 2025-09-10 23:48:50.1736 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Sep 10 23:48:50.275453 amazon-ssm-agent[2103]: 2025-09-10 23:48:50.1766 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2189) started Sep 10 23:48:50.376641 amazon-ssm-agent[2103]: 2025-09-10 23:48:50.1766 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Sep 10 23:48:52.094160 systemd-resolved[1762]: Clock change detected. Flushing caches. Sep 10 23:48:58.368365 systemd[1]: Started sshd@3-172.31.30.159:22-139.178.68.195:46978.service - OpenSSH per-connection server daemon (139.178.68.195:46978). Sep 10 23:48:58.580846 sshd[2201]: Accepted publickey for core from 139.178.68.195 port 46978 ssh2: RSA SHA256:ja8Z659dnX0Tz1pZfaOwRz2q/KALpEA2JWSy/+nC98s Sep 10 23:48:58.583213 sshd-session[2201]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 23:48:58.591114 systemd-logind[1863]: New session 4 of user core. Sep 10 23:48:58.604921 systemd[1]: Started session-4.scope - Session 4 of User core. Sep 10 23:48:58.731827 sshd[2203]: Connection closed by 139.178.68.195 port 46978 Sep 10 23:48:58.731570 sshd-session[2201]: pam_unix(sshd:session): session closed for user core Sep 10 23:48:58.738461 systemd[1]: sshd@3-172.31.30.159:22-139.178.68.195:46978.service: Deactivated successfully. Sep 10 23:48:58.741477 systemd[1]: session-4.scope: Deactivated successfully. Sep 10 23:48:58.743330 systemd-logind[1863]: Session 4 logged out. Waiting for processes to exit. Sep 10 23:48:58.747401 systemd-logind[1863]: Removed session 4. Sep 10 23:48:58.767801 systemd[1]: Started sshd@4-172.31.30.159:22-139.178.68.195:46990.service - OpenSSH per-connection server daemon (139.178.68.195:46990). 
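The kubelet failure above ("open /var/lib/kubelet/config.yaml: no such file or directory") is also expected on a freshly provisioned node: the unit is enabled before the node has joined a cluster, and that file is typically generated by kubeadm during init/join (the unset KUBELET_KUBEADM_ARGS variable in the log points the same way), so systemd keeps restarting kubelet until it exists. Purely as an illustration of the file's shape, a sketch of a minimal KubeletConfiguration using the systemd cgroup driver that matches the SystemdCgroup setting in the containerd config earlier; the real file comes from kubeadm, not from hand-written code:

```python
import pathlib
import textwrap

# Illustrative only: kubeadm normally writes /var/lib/kubelet/config.yaml;
# this minimal stanza just shows the shape of the missing file.
config = textwrap.dedent(
    """\
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd
    """
)
path = pathlib.Path("/var/lib/kubelet/config.yaml")
path.parent.mkdir(parents=True, exist_ok=True)
path.write_text(config)
```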
Sep 10 23:48:58.969980 sshd[2209]: Accepted publickey for core from 139.178.68.195 port 46990 ssh2: RSA SHA256:ja8Z659dnX0Tz1pZfaOwRz2q/KALpEA2JWSy/+nC98s Sep 10 23:48:58.972423 sshd-session[2209]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 23:48:58.981780 systemd-logind[1863]: New session 5 of user core. Sep 10 23:48:58.988974 systemd[1]: Started session-5.scope - Session 5 of User core. Sep 10 23:48:59.107009 sshd[2211]: Connection closed by 139.178.68.195 port 46990 Sep 10 23:48:59.107884 sshd-session[2209]: pam_unix(sshd:session): session closed for user core Sep 10 23:48:59.115170 systemd-logind[1863]: Session 5 logged out. Waiting for processes to exit. Sep 10 23:48:59.116434 systemd[1]: sshd@4-172.31.30.159:22-139.178.68.195:46990.service: Deactivated successfully. Sep 10 23:48:59.119602 systemd[1]: session-5.scope: Deactivated successfully. Sep 10 23:48:59.123265 systemd-logind[1863]: Removed session 5. Sep 10 23:48:59.144650 systemd[1]: Started sshd@5-172.31.30.159:22-139.178.68.195:47002.service - OpenSSH per-connection server daemon (139.178.68.195:47002). Sep 10 23:48:59.352450 sshd[2217]: Accepted publickey for core from 139.178.68.195 port 47002 ssh2: RSA SHA256:ja8Z659dnX0Tz1pZfaOwRz2q/KALpEA2JWSy/+nC98s Sep 10 23:48:59.354554 sshd-session[2217]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 23:48:59.363781 systemd-logind[1863]: New session 6 of user core. Sep 10 23:48:59.372936 systemd[1]: Started session-6.scope - Session 6 of User core. Sep 10 23:48:59.499040 sshd[2219]: Connection closed by 139.178.68.195 port 47002 Sep 10 23:48:59.498919 sshd-session[2217]: pam_unix(sshd:session): session closed for user core Sep 10 23:48:59.505261 systemd[1]: sshd@5-172.31.30.159:22-139.178.68.195:47002.service: Deactivated successfully. Sep 10 23:48:59.508220 systemd[1]: session-6.scope: Deactivated successfully. Sep 10 23:48:59.510559 systemd-logind[1863]: Session 6 logged out. Waiting for processes to exit. Sep 10 23:48:59.513428 systemd-logind[1863]: Removed session 6. Sep 10 23:48:59.534255 systemd[1]: Started sshd@6-172.31.30.159:22-139.178.68.195:47014.service - OpenSSH per-connection server daemon (139.178.68.195:47014). Sep 10 23:48:59.695546 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Sep 10 23:48:59.699284 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 10 23:48:59.730541 sshd[2225]: Accepted publickey for core from 139.178.68.195 port 47014 ssh2: RSA SHA256:ja8Z659dnX0Tz1pZfaOwRz2q/KALpEA2JWSy/+nC98s Sep 10 23:48:59.733465 sshd-session[2225]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 23:48:59.745800 systemd-logind[1863]: New session 7 of user core. Sep 10 23:48:59.751963 systemd[1]: Started session-7.scope - Session 7 of User core. Sep 10 23:48:59.896722 sudo[2231]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Sep 10 23:48:59.897336 sudo[2231]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 10 23:48:59.915881 sudo[2231]: pam_unix(sudo:session): session closed for user root Sep 10 23:48:59.941746 sshd[2230]: Connection closed by 139.178.68.195 port 47014 Sep 10 23:48:59.941536 sshd-session[2225]: pam_unix(sshd:session): session closed for user core Sep 10 23:48:59.951956 systemd[1]: sshd@6-172.31.30.159:22-139.178.68.195:47014.service: Deactivated successfully. 
Sep 10 23:48:59.955646 systemd[1]: session-7.scope: Deactivated successfully. Sep 10 23:48:59.960961 systemd-logind[1863]: Session 7 logged out. Waiting for processes to exit. Sep 10 23:48:59.987040 systemd[1]: Started sshd@7-172.31.30.159:22-139.178.68.195:56398.service - OpenSSH per-connection server daemon (139.178.68.195:56398). Sep 10 23:48:59.987854 systemd-logind[1863]: Removed session 7. Sep 10 23:49:00.116920 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 10 23:49:00.127189 (kubelet)[2244]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 10 23:49:00.190252 sshd[2237]: Accepted publickey for core from 139.178.68.195 port 56398 ssh2: RSA SHA256:ja8Z659dnX0Tz1pZfaOwRz2q/KALpEA2JWSy/+nC98s Sep 10 23:49:00.194453 sshd-session[2237]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 23:49:00.199749 kubelet[2244]: E0910 23:49:00.198889 2244 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 10 23:49:00.205251 systemd-logind[1863]: New session 8 of user core. Sep 10 23:49:00.208286 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 10 23:49:00.208994 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 10 23:49:00.209648 systemd[1]: kubelet.service: Consumed 316ms CPU time, 104.3M memory peak. Sep 10 23:49:00.222195 systemd[1]: Started session-8.scope - Session 8 of User core. Sep 10 23:49:00.326775 sudo[2254]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Sep 10 23:49:00.327379 sudo[2254]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 10 23:49:00.336801 sudo[2254]: pam_unix(sudo:session): session closed for user root Sep 10 23:49:00.346204 sudo[2253]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Sep 10 23:49:00.346932 sudo[2253]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 10 23:49:00.364266 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 10 23:49:00.422215 augenrules[2276]: No rules Sep 10 23:49:00.424801 systemd[1]: audit-rules.service: Deactivated successfully. Sep 10 23:49:00.425329 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 10 23:49:00.427201 sudo[2253]: pam_unix(sudo:session): session closed for user root Sep 10 23:49:00.450869 sshd[2252]: Connection closed by 139.178.68.195 port 56398 Sep 10 23:49:00.451604 sshd-session[2237]: pam_unix(sshd:session): session closed for user core Sep 10 23:49:00.458934 systemd[1]: sshd@7-172.31.30.159:22-139.178.68.195:56398.service: Deactivated successfully. Sep 10 23:49:00.462452 systemd[1]: session-8.scope: Deactivated successfully. Sep 10 23:49:00.465015 systemd-logind[1863]: Session 8 logged out. Waiting for processes to exit. Sep 10 23:49:00.467973 systemd-logind[1863]: Removed session 8. Sep 10 23:49:00.496399 systemd[1]: Started sshd@8-172.31.30.159:22-139.178.68.195:56412.service - OpenSSH per-connection server daemon (139.178.68.195:56412). 
Sep 10 23:49:00.693149 sshd[2285]: Accepted publickey for core from 139.178.68.195 port 56412 ssh2: RSA SHA256:ja8Z659dnX0Tz1pZfaOwRz2q/KALpEA2JWSy/+nC98s Sep 10 23:49:00.695534 sshd-session[2285]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 23:49:00.706770 systemd-logind[1863]: New session 9 of user core. Sep 10 23:49:00.713002 systemd[1]: Started session-9.scope - Session 9 of User core. Sep 10 23:49:00.815221 sudo[2288]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 10 23:49:00.815903 sudo[2288]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 10 23:49:01.493300 systemd[1]: Starting docker.service - Docker Application Container Engine... Sep 10 23:49:01.510214 (dockerd)[2306]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Sep 10 23:49:02.052090 dockerd[2306]: time="2025-09-10T23:49:02.051995161Z" level=info msg="Starting up" Sep 10 23:49:02.055012 dockerd[2306]: time="2025-09-10T23:49:02.054953581Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Sep 10 23:49:02.113121 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport2249540173-merged.mount: Deactivated successfully. Sep 10 23:49:02.161133 dockerd[2306]: time="2025-09-10T23:49:02.160943102Z" level=info msg="Loading containers: start." Sep 10 23:49:02.191715 kernel: Initializing XFRM netlink socket Sep 10 23:49:02.524246 (udev-worker)[2328]: Network interface NamePolicy= disabled on kernel command line. Sep 10 23:49:02.596568 systemd-networkd[1819]: docker0: Link UP Sep 10 23:49:02.606803 dockerd[2306]: time="2025-09-10T23:49:02.606735100Z" level=info msg="Loading containers: done." Sep 10 23:49:02.639357 dockerd[2306]: time="2025-09-10T23:49:02.639297316Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Sep 10 23:49:02.639555 dockerd[2306]: time="2025-09-10T23:49:02.639419512Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1 Sep 10 23:49:02.639613 dockerd[2306]: time="2025-09-10T23:49:02.639599212Z" level=info msg="Initializing buildkit" Sep 10 23:49:02.690136 dockerd[2306]: time="2025-09-10T23:49:02.690064960Z" level=info msg="Completed buildkit initialization" Sep 10 23:49:02.706783 dockerd[2306]: time="2025-09-10T23:49:02.706573240Z" level=info msg="Daemon has completed initialization" Sep 10 23:49:02.707335 dockerd[2306]: time="2025-09-10T23:49:02.706982092Z" level=info msg="API listen on /run/docker.sock" Sep 10 23:49:02.707081 systemd[1]: Started docker.service - Docker Application Container Engine. Sep 10 23:49:03.107377 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1307070315-merged.mount: Deactivated successfully. Sep 10 23:49:03.807799 containerd[1899]: time="2025-09-10T23:49:03.807654882Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.5\"" Sep 10 23:49:04.468827 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount441514241.mount: Deactivated successfully. 
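Once dockerd logs "API listen on /run/docker.sock", the daemon can be probed directly over that unix socket; the Engine API's /_ping endpoint answers with OK. A small sketch, assuming the caller has permission to open the socket:

```python
import socket

# Talk HTTP over the unix socket the daemon advertises in the log above.
with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
    s.connect("/run/docker.sock")
    s.sendall(b"GET /_ping HTTP/1.0\r\nHost: docker\r\n\r\n")
    # Read until the daemon closes the connection (HTTP/1.0 => no keep-alive).
    reply = b""
    while chunk := s.recv(4096):
        reply += chunk
print(reply.decode())  # expect a 200 status with body 'OK'
```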
Sep 10 23:49:05.923724 containerd[1899]: time="2025-09-10T23:49:05.923200640Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 23:49:05.925865 containerd[1899]: time="2025-09-10T23:49:05.925822316Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.5: active requests=0, bytes read=27390228" Sep 10 23:49:05.928411 containerd[1899]: time="2025-09-10T23:49:05.928369976Z" level=info msg="ImageCreate event name:\"sha256:6a7fd297b49102b08dc3d8d4fd7f1538bcf21d3131eae8bf62ba26ce3283237f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 23:49:05.933843 containerd[1899]: time="2025-09-10T23:49:05.933776120Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:1b9c6c00bc1fe86860e72efb8e4148f9e436a132eba4ca636ca4f48d61d6dfb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 23:49:05.935986 containerd[1899]: time="2025-09-10T23:49:05.935772392Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.5\" with image id \"sha256:6a7fd297b49102b08dc3d8d4fd7f1538bcf21d3131eae8bf62ba26ce3283237f\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.5\", repo digest \"registry.k8s.io/kube-apiserver@sha256:1b9c6c00bc1fe86860e72efb8e4148f9e436a132eba4ca636ca4f48d61d6dfb4\", size \"27386827\" in 2.128035778s" Sep 10 23:49:05.935986 containerd[1899]: time="2025-09-10T23:49:05.935825792Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.5\" returns image reference \"sha256:6a7fd297b49102b08dc3d8d4fd7f1538bcf21d3131eae8bf62ba26ce3283237f\"" Sep 10 23:49:05.938238 containerd[1899]: time="2025-09-10T23:49:05.938180096Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.5\"" Sep 10 23:49:07.437052 containerd[1899]: time="2025-09-10T23:49:07.436988612Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 23:49:07.438728 containerd[1899]: time="2025-09-10T23:49:07.438634808Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.5: active requests=0, bytes read=23547917" Sep 10 23:49:07.439637 containerd[1899]: time="2025-09-10T23:49:07.439584188Z" level=info msg="ImageCreate event name:\"sha256:2dd4c25a937008b7b8a6cdca70d816403b5078b51550926721b7a7762139cd23\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 23:49:07.445716 containerd[1899]: time="2025-09-10T23:49:07.444276332Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:1082a6ab67fb46397314dd36b36cb197ba4a4c5365033e9ad22bc7edaaaabd5c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 23:49:07.446536 containerd[1899]: time="2025-09-10T23:49:07.446494088Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.5\" with image id \"sha256:2dd4c25a937008b7b8a6cdca70d816403b5078b51550926721b7a7762139cd23\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.5\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:1082a6ab67fb46397314dd36b36cb197ba4a4c5365033e9ad22bc7edaaaabd5c\", size \"25135832\" in 1.508254052s" Sep 10 23:49:07.446702 containerd[1899]: time="2025-09-10T23:49:07.446654516Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.5\" returns image reference \"sha256:2dd4c25a937008b7b8a6cdca70d816403b5078b51550926721b7a7762139cd23\"" Sep 10 23:49:07.450894 
containerd[1899]: time="2025-09-10T23:49:07.450826268Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.5\"" Sep 10 23:49:08.643720 containerd[1899]: time="2025-09-10T23:49:08.642490162Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 23:49:08.644640 containerd[1899]: time="2025-09-10T23:49:08.644579506Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.5: active requests=0, bytes read=18295977" Sep 10 23:49:08.645041 containerd[1899]: time="2025-09-10T23:49:08.645005398Z" level=info msg="ImageCreate event name:\"sha256:5e600beaed8620718e0650dd2721266869ce1d737488c004a869333273e6ec15\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 23:49:08.649798 containerd[1899]: time="2025-09-10T23:49:08.649736170Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:3e7b57c9d9f06b77f0064e5be7f3df61e0151101160acd5fdecce911df28a189\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 23:49:08.652149 containerd[1899]: time="2025-09-10T23:49:08.652103326Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.5\" with image id \"sha256:5e600beaed8620718e0650dd2721266869ce1d737488c004a869333273e6ec15\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.5\", repo digest \"registry.k8s.io/kube-scheduler@sha256:3e7b57c9d9f06b77f0064e5be7f3df61e0151101160acd5fdecce911df28a189\", size \"19883910\" in 1.201016334s" Sep 10 23:49:08.652314 containerd[1899]: time="2025-09-10T23:49:08.652287154Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.5\" returns image reference \"sha256:5e600beaed8620718e0650dd2721266869ce1d737488c004a869333273e6ec15\"" Sep 10 23:49:08.653054 containerd[1899]: time="2025-09-10T23:49:08.653015602Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.5\"" Sep 10 23:49:09.871197 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2416069667.mount: Deactivated successfully. Sep 10 23:49:10.459383 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Sep 10 23:49:10.464136 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Sep 10 23:49:10.513624 containerd[1899]: time="2025-09-10T23:49:10.513545387Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 23:49:10.515753 containerd[1899]: time="2025-09-10T23:49:10.515306339Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.5: active requests=0, bytes read=28240106" Sep 10 23:49:10.522049 containerd[1899]: time="2025-09-10T23:49:10.521962391Z" level=info msg="ImageCreate event name:\"sha256:021a8d45ab0c346664e47d95595ff5180ce90a22a681ea27904c65ae90788e70\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 23:49:10.530486 containerd[1899]: time="2025-09-10T23:49:10.530399303Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:71445ec84ad98bd52a7784865a9d31b1b50b56092d3f7699edc39eefd71befe1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 23:49:10.532755 containerd[1899]: time="2025-09-10T23:49:10.531671459Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.5\" with image id \"sha256:021a8d45ab0c346664e47d95595ff5180ce90a22a681ea27904c65ae90788e70\", repo tag \"registry.k8s.io/kube-proxy:v1.33.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:71445ec84ad98bd52a7784865a9d31b1b50b56092d3f7699edc39eefd71befe1\", size \"28239125\" in 1.878463941s" Sep 10 23:49:10.533422 containerd[1899]: time="2025-09-10T23:49:10.532956911Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.5\" returns image reference \"sha256:021a8d45ab0c346664e47d95595ff5180ce90a22a681ea27904c65ae90788e70\"" Sep 10 23:49:10.533946 containerd[1899]: time="2025-09-10T23:49:10.533905451Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Sep 10 23:49:10.817563 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 10 23:49:10.840416 (kubelet)[2591]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 10 23:49:10.910427 kubelet[2591]: E0910 23:49:10.910369 2591 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 10 23:49:10.915198 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 10 23:49:10.915505 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 10 23:49:10.916931 systemd[1]: kubelet.service: Consumed 306ms CPU time, 105.1M memory peak. Sep 10 23:49:11.187268 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3488476958.mount: Deactivated successfully. 
Sep 10 23:49:12.484719 containerd[1899]: time="2025-09-10T23:49:12.483005425Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 23:49:12.485440 containerd[1899]: time="2025-09-10T23:49:12.485397829Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=19152117" Sep 10 23:49:12.487600 containerd[1899]: time="2025-09-10T23:49:12.487557085Z" level=info msg="ImageCreate event name:\"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 23:49:12.494884 containerd[1899]: time="2025-09-10T23:49:12.494833777Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 23:49:12.496041 containerd[1899]: time="2025-09-10T23:49:12.495978925Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"19148915\" in 1.96175113s" Sep 10 23:49:12.496041 containerd[1899]: time="2025-09-10T23:49:12.496038529Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\"" Sep 10 23:49:12.497911 containerd[1899]: time="2025-09-10T23:49:12.497873797Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Sep 10 23:49:13.019849 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3239250877.mount: Deactivated successfully. 
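The coredns pull reports 19152117 bytes read in roughly 1.96 s, which puts the effective pull rate just over 9 MiB/s; a quick check of that arithmetic:

  $ awk 'BEGIN { printf "%.1f MiB/s\n", 19152117 / 1.96175113 / (1024 * 1024) }'
  9.3 MiB/s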
Sep 10 23:49:13.033726 containerd[1899]: time="2025-09-10T23:49:13.033325188Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 10 23:49:13.036201 containerd[1899]: time="2025-09-10T23:49:13.036162396Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268703" Sep 10 23:49:13.038393 containerd[1899]: time="2025-09-10T23:49:13.038357100Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 10 23:49:13.044718 containerd[1899]: time="2025-09-10T23:49:13.043902612Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 10 23:49:13.044920 containerd[1899]: time="2025-09-10T23:49:13.044884392Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 546.811035ms" Sep 10 23:49:13.045033 containerd[1899]: time="2025-09-10T23:49:13.045006228Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Sep 10 23:49:13.046307 containerd[1899]: time="2025-09-10T23:49:13.046222428Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Sep 10 23:49:13.591515 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1435509128.mount: Deactivated successfully. 
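The pause image is pulled with an extra io.cri-containerd.pinned label, which keeps the sandbox image from being garbage-collected. A sketch of inspecting it on the node, assuming crictl is pointed at the same containerd socket:

  $ crictl inspecti registry.k8s.io/pause:3.10 | head   # repo digest and size as recorded by containerd
  $ crictl images | grep pause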
Sep 10 23:49:15.684753 containerd[1899]: time="2025-09-10T23:49:15.684210593Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 23:49:15.686436 containerd[1899]: time="2025-09-10T23:49:15.686354585Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=69465857" Sep 10 23:49:15.688983 containerd[1899]: time="2025-09-10T23:49:15.688928969Z" level=info msg="ImageCreate event name:\"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 23:49:15.694611 containerd[1899]: time="2025-09-10T23:49:15.694535333Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 23:49:15.697124 containerd[1899]: time="2025-09-10T23:49:15.696586301Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"70026017\" in 2.650089481s" Sep 10 23:49:15.697124 containerd[1899]: time="2025-09-10T23:49:15.696643925Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\"" Sep 10 23:49:17.378021 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Sep 10 23:49:21.096278 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Sep 10 23:49:21.101844 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 10 23:49:21.443932 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 10 23:49:21.456171 (kubelet)[2742]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 10 23:49:21.528821 kubelet[2742]: E0910 23:49:21.528749 2742 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 10 23:49:21.533205 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 10 23:49:21.533529 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 10 23:49:21.536803 systemd[1]: kubelet.service: Consumed 289ms CPU time, 104.8M memory peak. Sep 10 23:49:24.036940 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 10 23:49:24.037290 systemd[1]: kubelet.service: Consumed 289ms CPU time, 104.8M memory peak. Sep 10 23:49:24.043170 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 10 23:49:24.095169 systemd[1]: Reload requested from client PID 2756 ('systemctl') (unit session-9.scope)... Sep 10 23:49:24.095403 systemd[1]: Reloading... Sep 10 23:49:24.344739 zram_generator::config[2806]: No configuration found. 
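This is the third failed start on the same missing config file; systemd tracks that per unit, so the restart counter seen in the log can be read back directly:

  $ systemctl show kubelet -p NRestarts -p ExecMainStatus -p Result   # restart count and last exit status
  $ journalctl -u kubelet -b --no-pager | tail -n 20                  # the run.go error above, verbatim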
Sep 10 23:49:24.540364 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 10 23:49:24.801587 systemd[1]: Reloading finished in 705 ms. Sep 10 23:49:24.919646 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 10 23:49:24.928150 systemd[1]: kubelet.service: Deactivated successfully. Sep 10 23:49:24.928837 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 10 23:49:24.928999 systemd[1]: kubelet.service: Consumed 237ms CPU time, 95M memory peak. Sep 10 23:49:24.932304 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 10 23:49:25.267492 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 10 23:49:25.286264 (kubelet)[2865]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 10 23:49:25.355771 kubelet[2865]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 10 23:49:25.355771 kubelet[2865]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 10 23:49:25.355771 kubelet[2865]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 10 23:49:25.355771 kubelet[2865]: I0910 23:49:25.355183 2865 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 10 23:49:26.179131 kubelet[2865]: I0910 23:49:26.179056 2865 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Sep 10 23:49:26.179131 kubelet[2865]: I0910 23:49:26.179109 2865 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 10 23:49:26.179716 kubelet[2865]: I0910 23:49:26.179548 2865 server.go:956] "Client rotation is on, will bootstrap in background" Sep 10 23:49:26.232122 kubelet[2865]: E0910 23:49:26.232039 2865 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://172.31.30.159:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.30.159:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Sep 10 23:49:26.241548 kubelet[2865]: I0910 23:49:26.240271 2865 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 10 23:49:26.256824 kubelet[2865]: I0910 23:49:26.256665 2865 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Sep 10 23:49:26.263565 kubelet[2865]: I0910 23:49:26.263511 2865 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 10 23:49:26.266155 kubelet[2865]: I0910 23:49:26.266060 2865 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 10 23:49:26.266431 kubelet[2865]: I0910 23:49:26.266143 2865 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-30-159","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 10 23:49:26.266611 kubelet[2865]: I0910 23:49:26.266561 2865 topology_manager.go:138] "Creating topology manager with none policy" Sep 10 23:49:26.266611 kubelet[2865]: I0910 23:49:26.266584 2865 container_manager_linux.go:303] "Creating device plugin manager" Sep 10 23:49:26.268447 kubelet[2865]: I0910 23:49:26.268369 2865 state_mem.go:36] "Initialized new in-memory state store" Sep 10 23:49:26.275564 kubelet[2865]: I0910 23:49:26.275513 2865 kubelet.go:480] "Attempting to sync node with API server" Sep 10 23:49:26.275564 kubelet[2865]: I0910 23:49:26.275568 2865 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 10 23:49:26.277818 kubelet[2865]: I0910 23:49:26.275614 2865 kubelet.go:386] "Adding apiserver pod source" Sep 10 23:49:26.277818 kubelet[2865]: I0910 23:49:26.275641 2865 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 10 23:49:26.284837 kubelet[2865]: E0910 23:49:26.284776 2865 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.31.30.159:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.30.159:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Sep 10 23:49:26.285022 kubelet[2865]: I0910 23:49:26.284954 2865 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Sep 10 23:49:26.286731 kubelet[2865]: I0910 23:49:26.286242 2865 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is 
disabled" Sep 10 23:49:26.286731 kubelet[2865]: W0910 23:49:26.286506 2865 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Sep 10 23:49:26.292029 kubelet[2865]: I0910 23:49:26.291966 2865 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 10 23:49:26.292185 kubelet[2865]: I0910 23:49:26.292049 2865 server.go:1289] "Started kubelet" Sep 10 23:49:26.294470 kubelet[2865]: E0910 23:49:26.294429 2865 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.31.30.159:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-30-159&limit=500&resourceVersion=0\": dial tcp 172.31.30.159:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Sep 10 23:49:26.294802 kubelet[2865]: I0910 23:49:26.294723 2865 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Sep 10 23:49:26.299111 kubelet[2865]: I0910 23:49:26.297862 2865 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 10 23:49:26.299111 kubelet[2865]: I0910 23:49:26.298853 2865 server.go:317] "Adding debug handlers to kubelet server" Sep 10 23:49:26.299111 kubelet[2865]: I0910 23:49:26.298885 2865 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 10 23:49:26.302199 kubelet[2865]: I0910 23:49:26.302152 2865 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 10 23:49:26.307299 kubelet[2865]: E0910 23:49:26.305027 2865 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.30.159:6443/api/v1/namespaces/default/events\": dial tcp 172.31.30.159:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-30-159.186410becf796749 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-30-159,UID:ip-172-31-30-159,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-30-159,},FirstTimestamp:2025-09-10 23:49:26.292006729 +0000 UTC m=+0.997901610,LastTimestamp:2025-09-10 23:49:26.292006729 +0000 UTC m=+0.997901610,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-30-159,}" Sep 10 23:49:26.307597 kubelet[2865]: I0910 23:49:26.307554 2865 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 10 23:49:26.309271 kubelet[2865]: I0910 23:49:26.309231 2865 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 10 23:49:26.310231 kubelet[2865]: E0910 23:49:26.310188 2865 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-30-159\" not found" Sep 10 23:49:26.311387 kubelet[2865]: I0910 23:49:26.311356 2865 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 10 23:49:26.311825 kubelet[2865]: I0910 23:49:26.311803 2865 reconciler.go:26] "Reconciler: start to sync state" Sep 10 23:49:26.319966 kubelet[2865]: E0910 23:49:26.319886 2865 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.30.159:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-30-159?timeout=10s\": dial tcp 172.31.30.159:6443: connect: 
connection refused" interval="200ms" Sep 10 23:49:26.325366 kubelet[2865]: E0910 23:49:26.325283 2865 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.31.30.159:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.30.159:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Sep 10 23:49:26.325705 kubelet[2865]: I0910 23:49:26.325637 2865 factory.go:223] Registration of the systemd container factory successfully Sep 10 23:49:26.325924 kubelet[2865]: I0910 23:49:26.325862 2865 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 10 23:49:26.338749 kubelet[2865]: I0910 23:49:26.338640 2865 factory.go:223] Registration of the containerd container factory successfully Sep 10 23:49:26.340583 kubelet[2865]: E0910 23:49:26.340518 2865 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 10 23:49:26.376275 kubelet[2865]: I0910 23:49:26.376202 2865 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 10 23:49:26.376275 kubelet[2865]: I0910 23:49:26.376233 2865 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 10 23:49:26.377403 kubelet[2865]: I0910 23:49:26.376749 2865 state_mem.go:36] "Initialized new in-memory state store" Sep 10 23:49:26.384336 kubelet[2865]: I0910 23:49:26.384251 2865 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Sep 10 23:49:26.387931 kubelet[2865]: I0910 23:49:26.387856 2865 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Sep 10 23:49:26.387931 kubelet[2865]: I0910 23:49:26.387904 2865 status_manager.go:230] "Starting to sync pod status with apiserver" Sep 10 23:49:26.387931 kubelet[2865]: I0910 23:49:26.387938 2865 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Sep 10 23:49:26.388165 kubelet[2865]: I0910 23:49:26.387952 2865 kubelet.go:2436] "Starting kubelet main sync loop" Sep 10 23:49:26.388165 kubelet[2865]: E0910 23:49:26.388020 2865 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 10 23:49:26.393435 kubelet[2865]: I0910 23:49:26.392991 2865 policy_none.go:49] "None policy: Start" Sep 10 23:49:26.393435 kubelet[2865]: I0910 23:49:26.393033 2865 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 10 23:49:26.393435 kubelet[2865]: I0910 23:49:26.393057 2865 state_mem.go:35] "Initializing new in-memory state store" Sep 10 23:49:26.397165 kubelet[2865]: E0910 23:49:26.396810 2865 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.31.30.159:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.30.159:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Sep 10 23:49:26.409874 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. 
Sep 10 23:49:26.412943 kubelet[2865]: E0910 23:49:26.411658 2865 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-30-159\" not found" Sep 10 23:49:26.433311 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Sep 10 23:49:26.442091 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Sep 10 23:49:26.453944 kubelet[2865]: E0910 23:49:26.453885 2865 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Sep 10 23:49:26.454366 kubelet[2865]: I0910 23:49:26.454228 2865 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 10 23:49:26.454366 kubelet[2865]: I0910 23:49:26.454261 2865 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 10 23:49:26.457006 kubelet[2865]: I0910 23:49:26.456827 2865 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 10 23:49:26.461502 kubelet[2865]: E0910 23:49:26.461450 2865 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Sep 10 23:49:26.461676 kubelet[2865]: E0910 23:49:26.461527 2865 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-30-159\" not found" Sep 10 23:49:26.513154 systemd[1]: Created slice kubepods-burstable-podcacc31db06083ea8204d4054b18f0a22.slice - libcontainer container kubepods-burstable-podcacc31db06083ea8204d4054b18f0a22.slice. Sep 10 23:49:26.513544 kubelet[2865]: I0910 23:49:26.513493 2865 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2109b52f0cee454c01a7e1681ee83d9c-ca-certs\") pod \"kube-apiserver-ip-172-31-30-159\" (UID: \"2109b52f0cee454c01a7e1681ee83d9c\") " pod="kube-system/kube-apiserver-ip-172-31-30-159" Sep 10 23:49:26.513637 kubelet[2865]: I0910 23:49:26.513560 2865 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2109b52f0cee454c01a7e1681ee83d9c-k8s-certs\") pod \"kube-apiserver-ip-172-31-30-159\" (UID: \"2109b52f0cee454c01a7e1681ee83d9c\") " pod="kube-system/kube-apiserver-ip-172-31-30-159" Sep 10 23:49:26.513637 kubelet[2865]: I0910 23:49:26.513603 2865 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2109b52f0cee454c01a7e1681ee83d9c-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-30-159\" (UID: \"2109b52f0cee454c01a7e1681ee83d9c\") " pod="kube-system/kube-apiserver-ip-172-31-30-159" Sep 10 23:49:26.513813 kubelet[2865]: I0910 23:49:26.513644 2865 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/cacc31db06083ea8204d4054b18f0a22-ca-certs\") pod \"kube-controller-manager-ip-172-31-30-159\" (UID: \"cacc31db06083ea8204d4054b18f0a22\") " pod="kube-system/kube-controller-manager-ip-172-31-30-159" Sep 10 23:49:26.514232 kubelet[2865]: I0910 23:49:26.513980 2865 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/cacc31db06083ea8204d4054b18f0a22-kubeconfig\") pod \"kube-controller-manager-ip-172-31-30-159\" (UID: \"cacc31db06083ea8204d4054b18f0a22\") " pod="kube-system/kube-controller-manager-ip-172-31-30-159" Sep 10 23:49:26.514232 kubelet[2865]: I0910 23:49:26.514046 2865 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/cacc31db06083ea8204d4054b18f0a22-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-30-159\" (UID: \"cacc31db06083ea8204d4054b18f0a22\") " pod="kube-system/kube-controller-manager-ip-172-31-30-159" Sep 10 23:49:26.514232 kubelet[2865]: I0910 23:49:26.514089 2865 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/cacc31db06083ea8204d4054b18f0a22-k8s-certs\") pod \"kube-controller-manager-ip-172-31-30-159\" (UID: \"cacc31db06083ea8204d4054b18f0a22\") " pod="kube-system/kube-controller-manager-ip-172-31-30-159" Sep 10 23:49:26.514232 kubelet[2865]: I0910 23:49:26.514126 2865 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/cacc31db06083ea8204d4054b18f0a22-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-30-159\" (UID: \"cacc31db06083ea8204d4054b18f0a22\") " pod="kube-system/kube-controller-manager-ip-172-31-30-159" Sep 10 23:49:26.514232 kubelet[2865]: I0910 23:49:26.514168 2865 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/95bdcf84545ec6820f2aa7dd1f16ce95-kubeconfig\") pod \"kube-scheduler-ip-172-31-30-159\" (UID: \"95bdcf84545ec6820f2aa7dd1f16ce95\") " pod="kube-system/kube-scheduler-ip-172-31-30-159" Sep 10 23:49:26.521354 kubelet[2865]: E0910 23:49:26.521273 2865 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.30.159:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-30-159?timeout=10s\": dial tcp 172.31.30.159:6443: connect: connection refused" interval="400ms" Sep 10 23:49:26.532609 kubelet[2865]: E0910 23:49:26.532549 2865 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-30-159\" not found" node="ip-172-31-30-159" Sep 10 23:49:26.539630 systemd[1]: Created slice kubepods-burstable-pod95bdcf84545ec6820f2aa7dd1f16ce95.slice - libcontainer container kubepods-burstable-pod95bdcf84545ec6820f2aa7dd1f16ce95.slice. Sep 10 23:49:26.561910 kubelet[2865]: I0910 23:49:26.561832 2865 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-30-159" Sep 10 23:49:26.562736 kubelet[2865]: E0910 23:49:26.562558 2865 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-30-159\" not found" node="ip-172-31-30-159" Sep 10 23:49:26.563869 kubelet[2865]: E0910 23:49:26.563814 2865 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.30.159:6443/api/v1/nodes\": dial tcp 172.31.30.159:6443: connect: connection refused" node="ip-172-31-30-159" Sep 10 23:49:26.568558 systemd[1]: Created slice kubepods-burstable-pod2109b52f0cee454c01a7e1681ee83d9c.slice - libcontainer container kubepods-burstable-pod2109b52f0cee454c01a7e1681ee83d9c.slice. 
Sep 10 23:49:26.573987 kubelet[2865]: E0910 23:49:26.573938 2865 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-30-159\" not found" node="ip-172-31-30-159" Sep 10 23:49:26.767969 kubelet[2865]: I0910 23:49:26.767386 2865 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-30-159" Sep 10 23:49:26.767969 kubelet[2865]: E0910 23:49:26.767864 2865 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.30.159:6443/api/v1/nodes\": dial tcp 172.31.30.159:6443: connect: connection refused" node="ip-172-31-30-159" Sep 10 23:49:26.835025 containerd[1899]: time="2025-09-10T23:49:26.834964288Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-30-159,Uid:cacc31db06083ea8204d4054b18f0a22,Namespace:kube-system,Attempt:0,}" Sep 10 23:49:26.865984 containerd[1899]: time="2025-09-10T23:49:26.865571200Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-30-159,Uid:95bdcf84545ec6820f2aa7dd1f16ce95,Namespace:kube-system,Attempt:0,}" Sep 10 23:49:26.877070 containerd[1899]: time="2025-09-10T23:49:26.876998632Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-30-159,Uid:2109b52f0cee454c01a7e1681ee83d9c,Namespace:kube-system,Attempt:0,}" Sep 10 23:49:26.883933 containerd[1899]: time="2025-09-10T23:49:26.883859776Z" level=info msg="connecting to shim d99ed44062db5d9339bee86523c9f70826933fd6fe67de174766a53644fb6f7b" address="unix:///run/containerd/s/139459ee8625e52d3dbe1b1abca9663e1679ef2090f1a19883bbd38eaf0aaf54" namespace=k8s.io protocol=ttrpc version=3 Sep 10 23:49:26.924705 kubelet[2865]: E0910 23:49:26.923568 2865 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.30.159:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-30-159?timeout=10s\": dial tcp 172.31.30.159:6443: connect: connection refused" interval="800ms" Sep 10 23:49:26.963008 containerd[1899]: time="2025-09-10T23:49:26.962935397Z" level=info msg="connecting to shim c76d3a98021acc977b314391fe1f16dac5da4d279aa18f74e1d439aec1f6e328" address="unix:///run/containerd/s/5b05ec4aee15e5416bd5bd1c94c88c0ba506d2aa7c9e8bb07f17c66cca3e5419" namespace=k8s.io protocol=ttrpc version=3 Sep 10 23:49:26.966029 systemd[1]: Started cri-containerd-d99ed44062db5d9339bee86523c9f70826933fd6fe67de174766a53644fb6f7b.scope - libcontainer container d99ed44062db5d9339bee86523c9f70826933fd6fe67de174766a53644fb6f7b. Sep 10 23:49:26.984785 containerd[1899]: time="2025-09-10T23:49:26.984535289Z" level=info msg="connecting to shim c54a96023b898ae7450080b8085693e2347c42c281feae8aba25906d707cd417" address="unix:///run/containerd/s/ed253399b6acaf9540f4b792339f01d5ae65c063a0017188c4981ad39b6acc8c" namespace=k8s.io protocol=ttrpc version=3 Sep 10 23:49:27.063250 systemd[1]: Started cri-containerd-c54a96023b898ae7450080b8085693e2347c42c281feae8aba25906d707cd417.scope - libcontainer container c54a96023b898ae7450080b8085693e2347c42c281feae8aba25906d707cd417. Sep 10 23:49:27.070746 systemd[1]: Started cri-containerd-c76d3a98021acc977b314391fe1f16dac5da4d279aa18f74e1d439aec1f6e328.scope - libcontainer container c76d3a98021acc977b314391fe1f16dac5da4d279aa18f74e1d439aec1f6e328. 
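Each RunPodSandbox call above ends with containerd connecting to a shim over a ttrpc socket under /run/containerd/s/ and systemd starting a matching cri-containerd-<id>.scope. A sketch of listing the resulting sandboxes and shim processes:

  $ crictl pods --namespace kube-system           # the three control-plane sandboxes just created
  $ ps -eo pid,args | grep '[c]ontainerd-shim'    # one shim process behind each scope started above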
Sep 10 23:49:27.113362 containerd[1899]: time="2025-09-10T23:49:27.113288269Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-30-159,Uid:cacc31db06083ea8204d4054b18f0a22,Namespace:kube-system,Attempt:0,} returns sandbox id \"d99ed44062db5d9339bee86523c9f70826933fd6fe67de174766a53644fb6f7b\"" Sep 10 23:49:27.130158 containerd[1899]: time="2025-09-10T23:49:27.130037294Z" level=info msg="CreateContainer within sandbox \"d99ed44062db5d9339bee86523c9f70826933fd6fe67de174766a53644fb6f7b\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 10 23:49:27.149214 containerd[1899]: time="2025-09-10T23:49:27.149162342Z" level=info msg="Container 5b8754b67a65bb2fc3033f113c04a48ef8a49bd16c63f1174e0f70bbc0cd673f: CDI devices from CRI Config.CDIDevices: []" Sep 10 23:49:27.171601 kubelet[2865]: I0910 23:49:27.171568 2865 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-30-159" Sep 10 23:49:27.173004 kubelet[2865]: E0910 23:49:27.172937 2865 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.30.159:6443/api/v1/nodes\": dial tcp 172.31.30.159:6443: connect: connection refused" node="ip-172-31-30-159" Sep 10 23:49:27.177331 containerd[1899]: time="2025-09-10T23:49:27.177265970Z" level=info msg="CreateContainer within sandbox \"d99ed44062db5d9339bee86523c9f70826933fd6fe67de174766a53644fb6f7b\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"5b8754b67a65bb2fc3033f113c04a48ef8a49bd16c63f1174e0f70bbc0cd673f\"" Sep 10 23:49:27.182646 containerd[1899]: time="2025-09-10T23:49:27.182484302Z" level=info msg="StartContainer for \"5b8754b67a65bb2fc3033f113c04a48ef8a49bd16c63f1174e0f70bbc0cd673f\"" Sep 10 23:49:27.190649 containerd[1899]: time="2025-09-10T23:49:27.190484522Z" level=info msg="connecting to shim 5b8754b67a65bb2fc3033f113c04a48ef8a49bd16c63f1174e0f70bbc0cd673f" address="unix:///run/containerd/s/139459ee8625e52d3dbe1b1abca9663e1679ef2090f1a19883bbd38eaf0aaf54" protocol=ttrpc version=3 Sep 10 23:49:27.209156 kubelet[2865]: E0910 23:49:27.209087 2865 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.31.30.159:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.30.159:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Sep 10 23:49:27.211192 containerd[1899]: time="2025-09-10T23:49:27.211124078Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-30-159,Uid:2109b52f0cee454c01a7e1681ee83d9c,Namespace:kube-system,Attempt:0,} returns sandbox id \"c54a96023b898ae7450080b8085693e2347c42c281feae8aba25906d707cd417\"" Sep 10 23:49:27.227476 containerd[1899]: time="2025-09-10T23:49:27.227411510Z" level=info msg="CreateContainer within sandbox \"c54a96023b898ae7450080b8085693e2347c42c281feae8aba25906d707cd417\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 10 23:49:27.231013 containerd[1899]: time="2025-09-10T23:49:27.230939546Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-30-159,Uid:95bdcf84545ec6820f2aa7dd1f16ce95,Namespace:kube-system,Attempt:0,} returns sandbox id \"c76d3a98021acc977b314391fe1f16dac5da4d279aa18f74e1d439aec1f6e328\"" Sep 10 23:49:27.241583 containerd[1899]: time="2025-09-10T23:49:27.241204442Z" level=info msg="CreateContainer within sandbox 
\"c76d3a98021acc977b314391fe1f16dac5da4d279aa18f74e1d439aec1f6e328\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 10 23:49:27.252977 containerd[1899]: time="2025-09-10T23:49:27.252887954Z" level=info msg="Container 568b972ca78c3f3686722ac5fa46ee9b9685ca47a88fbcd95dbdd344d01706d9: CDI devices from CRI Config.CDIDevices: []" Sep 10 23:49:27.254049 systemd[1]: Started cri-containerd-5b8754b67a65bb2fc3033f113c04a48ef8a49bd16c63f1174e0f70bbc0cd673f.scope - libcontainer container 5b8754b67a65bb2fc3033f113c04a48ef8a49bd16c63f1174e0f70bbc0cd673f. Sep 10 23:49:27.271996 containerd[1899]: time="2025-09-10T23:49:27.271821806Z" level=info msg="CreateContainer within sandbox \"c54a96023b898ae7450080b8085693e2347c42c281feae8aba25906d707cd417\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"568b972ca78c3f3686722ac5fa46ee9b9685ca47a88fbcd95dbdd344d01706d9\"" Sep 10 23:49:27.273642 containerd[1899]: time="2025-09-10T23:49:27.272855330Z" level=info msg="StartContainer for \"568b972ca78c3f3686722ac5fa46ee9b9685ca47a88fbcd95dbdd344d01706d9\"" Sep 10 23:49:27.275092 containerd[1899]: time="2025-09-10T23:49:27.275007254Z" level=info msg="Container c84028be752530f783f4ea7e7cad561960dfa8e1522a9b08f0c2573fee80027e: CDI devices from CRI Config.CDIDevices: []" Sep 10 23:49:27.276258 containerd[1899]: time="2025-09-10T23:49:27.276080258Z" level=info msg="connecting to shim 568b972ca78c3f3686722ac5fa46ee9b9685ca47a88fbcd95dbdd344d01706d9" address="unix:///run/containerd/s/ed253399b6acaf9540f4b792339f01d5ae65c063a0017188c4981ad39b6acc8c" protocol=ttrpc version=3 Sep 10 23:49:27.297520 containerd[1899]: time="2025-09-10T23:49:27.297459062Z" level=info msg="CreateContainer within sandbox \"c76d3a98021acc977b314391fe1f16dac5da4d279aa18f74e1d439aec1f6e328\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"c84028be752530f783f4ea7e7cad561960dfa8e1522a9b08f0c2573fee80027e\"" Sep 10 23:49:27.299733 containerd[1899]: time="2025-09-10T23:49:27.298712186Z" level=info msg="StartContainer for \"c84028be752530f783f4ea7e7cad561960dfa8e1522a9b08f0c2573fee80027e\"" Sep 10 23:49:27.301855 containerd[1899]: time="2025-09-10T23:49:27.301803314Z" level=info msg="connecting to shim c84028be752530f783f4ea7e7cad561960dfa8e1522a9b08f0c2573fee80027e" address="unix:///run/containerd/s/5b05ec4aee15e5416bd5bd1c94c88c0ba506d2aa7c9e8bb07f17c66cca3e5419" protocol=ttrpc version=3 Sep 10 23:49:27.332987 systemd[1]: Started cri-containerd-568b972ca78c3f3686722ac5fa46ee9b9685ca47a88fbcd95dbdd344d01706d9.scope - libcontainer container 568b972ca78c3f3686722ac5fa46ee9b9685ca47a88fbcd95dbdd344d01706d9. Sep 10 23:49:27.374953 systemd[1]: Started cri-containerd-c84028be752530f783f4ea7e7cad561960dfa8e1522a9b08f0c2573fee80027e.scope - libcontainer container c84028be752530f783f4ea7e7cad561960dfa8e1522a9b08f0c2573fee80027e. 
Sep 10 23:49:27.428656 kubelet[2865]: E0910 23:49:27.428594 2865 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.31.30.159:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-30-159&limit=500&resourceVersion=0\": dial tcp 172.31.30.159:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Sep 10 23:49:27.452449 containerd[1899]: time="2025-09-10T23:49:27.452083491Z" level=info msg="StartContainer for \"5b8754b67a65bb2fc3033f113c04a48ef8a49bd16c63f1174e0f70bbc0cd673f\" returns successfully" Sep 10 23:49:27.511325 containerd[1899]: time="2025-09-10T23:49:27.511276611Z" level=info msg="StartContainer for \"568b972ca78c3f3686722ac5fa46ee9b9685ca47a88fbcd95dbdd344d01706d9\" returns successfully" Sep 10 23:49:27.515662 kubelet[2865]: E0910 23:49:27.515587 2865 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.31.30.159:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.30.159:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Sep 10 23:49:27.517361 kubelet[2865]: E0910 23:49:27.517295 2865 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.31.30.159:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.30.159:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Sep 10 23:49:27.604135 containerd[1899]: time="2025-09-10T23:49:27.603977776Z" level=info msg="StartContainer for \"c84028be752530f783f4ea7e7cad561960dfa8e1522a9b08f0c2573fee80027e\" returns successfully" Sep 10 23:49:27.975880 kubelet[2865]: I0910 23:49:27.975748 2865 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-30-159" Sep 10 23:49:28.448677 kubelet[2865]: E0910 23:49:28.448627 2865 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-30-159\" not found" node="ip-172-31-30-159" Sep 10 23:49:28.462910 kubelet[2865]: E0910 23:49:28.462861 2865 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-30-159\" not found" node="ip-172-31-30-159" Sep 10 23:49:28.467958 kubelet[2865]: E0910 23:49:28.467913 2865 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-30-159\" not found" node="ip-172-31-30-159" Sep 10 23:49:29.467466 kubelet[2865]: E0910 23:49:29.467396 2865 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-30-159\" not found" node="ip-172-31-30-159" Sep 10 23:49:29.469092 kubelet[2865]: E0910 23:49:29.469046 2865 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-30-159\" not found" node="ip-172-31-30-159" Sep 10 23:49:29.471240 kubelet[2865]: E0910 23:49:29.471189 2865 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-30-159\" not found" node="ip-172-31-30-159" Sep 10 23:49:30.034827 update_engine[1867]: I20250910 23:49:30.034730 1867 update_attempter.cc:509] Updating boot flags... 
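With all three StartContainer calls returning successfully, the remaining connection-refused errors clear as soon as the apiserver inside the new container starts listening; node registration then goes through (it succeeds at 23:49:32 further down). One way to watch for that from the node, assuming the usual kubeadm admin kubeconfig path:

  $ kubectl --kubeconfig /etc/kubernetes/admin.conf get nodes -w   # ip-172-31-30-159 appears once registration succeeds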
Sep 10 23:49:30.488300 kubelet[2865]: E0910 23:49:30.488264 2865 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-30-159\" not found" node="ip-172-31-30-159" Sep 10 23:49:30.491886 kubelet[2865]: E0910 23:49:30.490853 2865 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-30-159\" not found" node="ip-172-31-30-159" Sep 10 23:49:32.461402 kubelet[2865]: I0910 23:49:32.461350 2865 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-30-159" Sep 10 23:49:32.462784 kubelet[2865]: E0910 23:49:32.462756 2865 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ip-172-31-30-159\": node \"ip-172-31-30-159\" not found" Sep 10 23:49:32.494735 kubelet[2865]: I0910 23:49:32.493532 2865 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-30-159" Sep 10 23:49:32.511195 kubelet[2865]: I0910 23:49:32.511155 2865 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-30-159" Sep 10 23:49:32.641508 kubelet[2865]: E0910 23:49:32.641004 2865 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-30-159\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ip-172-31-30-159" Sep 10 23:49:32.641508 kubelet[2865]: E0910 23:49:32.641377 2865 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ip-172-31-30-159\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ip-172-31-30-159" Sep 10 23:49:32.641508 kubelet[2865]: I0910 23:49:32.641408 2865 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-30-159" Sep 10 23:49:32.664073 kubelet[2865]: E0910 23:49:32.664024 2865 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-30-159\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ip-172-31-30-159" Sep 10 23:49:32.664571 kubelet[2865]: I0910 23:49:32.664279 2865 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-30-159" Sep 10 23:49:32.671075 kubelet[2865]: E0910 23:49:32.671010 2865 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-30-159\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ip-172-31-30-159" Sep 10 23:49:32.678762 kubelet[2865]: E0910 23:49:32.678711 2865 controller.go:145] "Failed to ensure lease exists, will retry" err="namespaces \"kube-node-lease\" not found" interval="1.6s" Sep 10 23:49:33.281249 kubelet[2865]: I0910 23:49:33.281192 2865 apiserver.go:52] "Watching apiserver" Sep 10 23:49:33.311991 kubelet[2865]: I0910 23:49:33.311931 2865 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 10 23:49:34.555917 kubelet[2865]: I0910 23:49:34.555855 2865 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-30-159" Sep 10 23:49:34.773274 systemd[1]: Reload requested from client PID 3330 ('systemctl') (unit session-9.scope)... Sep 10 23:49:34.773305 systemd[1]: Reloading... Sep 10 23:49:34.972730 zram_generator::config[3374]: No configuration found. 
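The "Failed creating a mirror pod" errors are transient: system-node-critical is one of the built-in PriorityClasses the apiserver creates shortly after it starts, and the kube-node-lease namespace appears around the same time, so both the mirror pods and the lease succeed on retry. Checks for both, assuming the same admin kubeconfig as above:

  $ kubectl --kubeconfig /etc/kubernetes/admin.conf get priorityclass system-node-critical system-cluster-critical
  $ kubectl --kubeconfig /etc/kubernetes/admin.conf -n kube-node-lease get lease ip-172-31-30-159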
Sep 10 23:49:35.175322 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 10 23:49:35.503659 systemd[1]: Reloading finished in 729 ms. Sep 10 23:49:35.560249 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 10 23:49:35.575294 systemd[1]: kubelet.service: Deactivated successfully. Sep 10 23:49:35.576460 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 10 23:49:35.576772 systemd[1]: kubelet.service: Consumed 1.814s CPU time, 130.1M memory peak. Sep 10 23:49:35.584245 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 10 23:49:35.973263 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 10 23:49:35.988383 (kubelet)[3434]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 10 23:49:36.076747 kubelet[3434]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 10 23:49:36.077726 kubelet[3434]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 10 23:49:36.077726 kubelet[3434]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 10 23:49:36.077726 kubelet[3434]: I0910 23:49:36.077352 3434 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 10 23:49:36.090580 kubelet[3434]: I0910 23:49:36.090517 3434 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Sep 10 23:49:36.091851 kubelet[3434]: I0910 23:49:36.090738 3434 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 10 23:49:36.091851 kubelet[3434]: I0910 23:49:36.091298 3434 server.go:956] "Client rotation is on, will bootstrap in background" Sep 10 23:49:36.094287 kubelet[3434]: I0910 23:49:36.094249 3434 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Sep 10 23:49:36.109973 kubelet[3434]: I0910 23:49:36.109929 3434 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 10 23:49:36.123617 kubelet[3434]: I0910 23:49:36.123584 3434 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Sep 10 23:49:36.133702 kubelet[3434]: I0910 23:49:36.133622 3434 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 10 23:49:36.134386 kubelet[3434]: I0910 23:49:36.134338 3434 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 10 23:49:36.134802 kubelet[3434]: I0910 23:49:36.134501 3434 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-30-159","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 10 23:49:36.135463 kubelet[3434]: I0910 23:49:36.135085 3434 topology_manager.go:138] "Creating topology manager with none policy" Sep 10 23:49:36.135463 kubelet[3434]: I0910 23:49:36.135118 3434 container_manager_linux.go:303] "Creating device plugin manager" Sep 10 23:49:36.135463 kubelet[3434]: I0910 23:49:36.135205 3434 state_mem.go:36] "Initialized new in-memory state store" Sep 10 23:49:36.136375 kubelet[3434]: I0910 23:49:36.136346 3434 kubelet.go:480] "Attempting to sync node with API server" Sep 10 23:49:36.136554 kubelet[3434]: I0910 23:49:36.136534 3434 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 10 23:49:36.136750 kubelet[3434]: I0910 23:49:36.136665 3434 kubelet.go:386] "Adding apiserver pod source" Sep 10 23:49:36.137747 kubelet[3434]: I0910 23:49:36.136864 3434 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 10 23:49:36.143952 kubelet[3434]: I0910 23:49:36.143914 3434 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Sep 10 23:49:36.147833 kubelet[3434]: I0910 23:49:36.147784 3434 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Sep 10 23:49:36.159882 kubelet[3434]: I0910 23:49:36.159113 3434 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 10 23:49:36.160126 kubelet[3434]: I0910 23:49:36.160103 3434 server.go:1289] "Started kubelet" Sep 10 23:49:36.161514 sudo[3448]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Sep 10 23:49:36.162377 sudo[3448]: 
pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Sep 10 23:49:36.167797 kubelet[3434]: I0910 23:49:36.167529 3434 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 10 23:49:36.175084 kubelet[3434]: I0910 23:49:36.175004 3434 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Sep 10 23:49:36.182241 kubelet[3434]: I0910 23:49:36.182188 3434 server.go:317] "Adding debug handlers to kubelet server" Sep 10 23:49:36.187354 kubelet[3434]: I0910 23:49:36.187259 3434 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 10 23:49:36.192795 kubelet[3434]: I0910 23:49:36.190674 3434 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 10 23:49:36.194669 kubelet[3434]: I0910 23:49:36.194616 3434 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 10 23:49:36.222244 kubelet[3434]: I0910 23:49:36.222196 3434 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 10 23:49:36.222624 kubelet[3434]: E0910 23:49:36.222581 3434 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-30-159\" not found" Sep 10 23:49:36.249229 kubelet[3434]: I0910 23:49:36.248007 3434 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 10 23:49:36.250562 kubelet[3434]: I0910 23:49:36.250106 3434 reconciler.go:26] "Reconciler: start to sync state" Sep 10 23:49:36.256931 kubelet[3434]: I0910 23:49:36.256065 3434 factory.go:223] Registration of the systemd container factory successfully Sep 10 23:49:36.256931 kubelet[3434]: I0910 23:49:36.256219 3434 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 10 23:49:36.268271 kubelet[3434]: I0910 23:49:36.267206 3434 factory.go:223] Registration of the containerd container factory successfully Sep 10 23:49:36.281738 kubelet[3434]: E0910 23:49:36.280371 3434 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 10 23:49:36.292223 kubelet[3434]: I0910 23:49:36.291562 3434 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Sep 10 23:49:36.304697 kubelet[3434]: I0910 23:49:36.304626 3434 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Sep 10 23:49:36.304845 kubelet[3434]: I0910 23:49:36.304791 3434 status_manager.go:230] "Starting to sync pod status with apiserver" Sep 10 23:49:36.304845 kubelet[3434]: I0910 23:49:36.304831 3434 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
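This second kubelet start, after the kubeadm-driven systemd reload, differs from the earlier attempts in that client rotation finds an existing key pair at /var/lib/kubelet/pki/kubelet-client-current.pem, and the sudo entry shows cilium being unpacked into /opt/bin. Two quick inspections of that state; the certificate path is taken from the log, the grep pattern is illustrative:

  $ openssl x509 -in /var/lib/kubelet/pki/kubelet-client-current.pem -noout -subject -dates
  $ systemctl cat kubelet | grep -E 'ExecStart|Environment'   # shows how the deprecated flags warned about above reach the kubelet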
Sep 10 23:49:36.304949 kubelet[3434]: I0910 23:49:36.304868 3434 kubelet.go:2436] "Starting kubelet main sync loop" Sep 10 23:49:36.305006 kubelet[3434]: E0910 23:49:36.304965 3434 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 10 23:49:36.405298 kubelet[3434]: E0910 23:49:36.405244 3434 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 10 23:49:36.458280 kubelet[3434]: I0910 23:49:36.458221 3434 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 10 23:49:36.458280 kubelet[3434]: I0910 23:49:36.458269 3434 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 10 23:49:36.458473 kubelet[3434]: I0910 23:49:36.458308 3434 state_mem.go:36] "Initialized new in-memory state store" Sep 10 23:49:36.459704 kubelet[3434]: I0910 23:49:36.458526 3434 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 10 23:49:36.459704 kubelet[3434]: I0910 23:49:36.458557 3434 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 10 23:49:36.459704 kubelet[3434]: I0910 23:49:36.458592 3434 policy_none.go:49] "None policy: Start" Sep 10 23:49:36.459704 kubelet[3434]: I0910 23:49:36.458610 3434 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 10 23:49:36.459704 kubelet[3434]: I0910 23:49:36.458630 3434 state_mem.go:35] "Initializing new in-memory state store" Sep 10 23:49:36.459704 kubelet[3434]: I0910 23:49:36.459123 3434 state_mem.go:75] "Updated machine memory state" Sep 10 23:49:36.472550 kubelet[3434]: E0910 23:49:36.472508 3434 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Sep 10 23:49:36.479378 kubelet[3434]: I0910 23:49:36.478805 3434 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 10 23:49:36.479378 kubelet[3434]: I0910 23:49:36.478852 3434 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 10 23:49:36.479876 kubelet[3434]: I0910 23:49:36.479855 3434 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 10 23:49:36.487729 kubelet[3434]: E0910 23:49:36.487207 3434 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Sep 10 23:49:36.603470 kubelet[3434]: I0910 23:49:36.602235 3434 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-30-159" Sep 10 23:49:36.607117 kubelet[3434]: I0910 23:49:36.607052 3434 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-30-159" Sep 10 23:49:36.609347 kubelet[3434]: I0910 23:49:36.609289 3434 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-30-159" Sep 10 23:49:36.612413 kubelet[3434]: I0910 23:49:36.611108 3434 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-30-159" Sep 10 23:49:36.631855 kubelet[3434]: E0910 23:49:36.631799 3434 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-30-159\" already exists" pod="kube-system/kube-apiserver-ip-172-31-30-159" Sep 10 23:49:36.635168 kubelet[3434]: I0910 23:49:36.634714 3434 kubelet_node_status.go:124] "Node was previously registered" node="ip-172-31-30-159" Sep 10 23:49:36.635168 kubelet[3434]: I0910 23:49:36.634818 3434 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-30-159" Sep 10 23:49:36.653040 kubelet[3434]: I0910 23:49:36.652996 3434 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2109b52f0cee454c01a7e1681ee83d9c-k8s-certs\") pod \"kube-apiserver-ip-172-31-30-159\" (UID: \"2109b52f0cee454c01a7e1681ee83d9c\") " pod="kube-system/kube-apiserver-ip-172-31-30-159" Sep 10 23:49:36.654846 kubelet[3434]: I0910 23:49:36.654809 3434 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/cacc31db06083ea8204d4054b18f0a22-ca-certs\") pod \"kube-controller-manager-ip-172-31-30-159\" (UID: \"cacc31db06083ea8204d4054b18f0a22\") " pod="kube-system/kube-controller-manager-ip-172-31-30-159" Sep 10 23:49:36.655042 kubelet[3434]: I0910 23:49:36.655013 3434 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/cacc31db06083ea8204d4054b18f0a22-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-30-159\" (UID: \"cacc31db06083ea8204d4054b18f0a22\") " pod="kube-system/kube-controller-manager-ip-172-31-30-159" Sep 10 23:49:36.655211 kubelet[3434]: I0910 23:49:36.655184 3434 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/cacc31db06083ea8204d4054b18f0a22-kubeconfig\") pod \"kube-controller-manager-ip-172-31-30-159\" (UID: \"cacc31db06083ea8204d4054b18f0a22\") " pod="kube-system/kube-controller-manager-ip-172-31-30-159" Sep 10 23:49:36.655411 kubelet[3434]: I0910 23:49:36.655383 3434 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/95bdcf84545ec6820f2aa7dd1f16ce95-kubeconfig\") pod \"kube-scheduler-ip-172-31-30-159\" (UID: \"95bdcf84545ec6820f2aa7dd1f16ce95\") " pod="kube-system/kube-scheduler-ip-172-31-30-159" Sep 10 23:49:36.655572 kubelet[3434]: I0910 23:49:36.655547 3434 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2109b52f0cee454c01a7e1681ee83d9c-ca-certs\") pod 
\"kube-apiserver-ip-172-31-30-159\" (UID: \"2109b52f0cee454c01a7e1681ee83d9c\") " pod="kube-system/kube-apiserver-ip-172-31-30-159" Sep 10 23:49:36.655733 kubelet[3434]: I0910 23:49:36.655701 3434 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2109b52f0cee454c01a7e1681ee83d9c-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-30-159\" (UID: \"2109b52f0cee454c01a7e1681ee83d9c\") " pod="kube-system/kube-apiserver-ip-172-31-30-159" Sep 10 23:49:36.657710 kubelet[3434]: I0910 23:49:36.655866 3434 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/cacc31db06083ea8204d4054b18f0a22-k8s-certs\") pod \"kube-controller-manager-ip-172-31-30-159\" (UID: \"cacc31db06083ea8204d4054b18f0a22\") " pod="kube-system/kube-controller-manager-ip-172-31-30-159" Sep 10 23:49:36.657982 kubelet[3434]: I0910 23:49:36.657945 3434 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/cacc31db06083ea8204d4054b18f0a22-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-30-159\" (UID: \"cacc31db06083ea8204d4054b18f0a22\") " pod="kube-system/kube-controller-manager-ip-172-31-30-159" Sep 10 23:49:37.103348 sudo[3448]: pam_unix(sudo:session): session closed for user root Sep 10 23:49:37.138136 kubelet[3434]: I0910 23:49:37.138070 3434 apiserver.go:52] "Watching apiserver" Sep 10 23:49:37.148510 kubelet[3434]: I0910 23:49:37.148385 3434 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 10 23:49:37.368383 kubelet[3434]: I0910 23:49:37.368240 3434 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-30-159" Sep 10 23:49:37.378134 kubelet[3434]: E0910 23:49:37.378064 3434 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-30-159\" already exists" pod="kube-system/kube-scheduler-ip-172-31-30-159" Sep 10 23:49:37.457462 kubelet[3434]: I0910 23:49:37.455903 3434 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-30-159" podStartSLOduration=1.455883193 podStartE2EDuration="1.455883193s" podCreationTimestamp="2025-09-10 23:49:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-10 23:49:37.455839141 +0000 UTC m=+1.454440172" watchObservedRunningTime="2025-09-10 23:49:37.455883193 +0000 UTC m=+1.454484200" Sep 10 23:49:37.457462 kubelet[3434]: I0910 23:49:37.456061 3434 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-30-159" podStartSLOduration=1.456051661 podStartE2EDuration="1.456051661s" podCreationTimestamp="2025-09-10 23:49:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-10 23:49:37.438845389 +0000 UTC m=+1.437446408" watchObservedRunningTime="2025-09-10 23:49:37.456051661 +0000 UTC m=+1.454652668" Sep 10 23:49:37.497717 kubelet[3434]: I0910 23:49:37.496419 3434 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-30-159" podStartSLOduration=3.496396921 podStartE2EDuration="3.496396921s" 
podCreationTimestamp="2025-09-10 23:49:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-10 23:49:37.475710709 +0000 UTC m=+1.474311740" watchObservedRunningTime="2025-09-10 23:49:37.496396921 +0000 UTC m=+1.494997928" Sep 10 23:49:40.157666 kubelet[3434]: I0910 23:49:40.157583 3434 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 10 23:49:40.161306 containerd[1899]: time="2025-09-10T23:49:40.161231450Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Sep 10 23:49:40.165239 kubelet[3434]: I0910 23:49:40.161639 3434 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 10 23:49:40.951792 sudo[2288]: pam_unix(sudo:session): session closed for user root Sep 10 23:49:40.975766 sshd[2287]: Connection closed by 139.178.68.195 port 56412 Sep 10 23:49:40.976529 sshd-session[2285]: pam_unix(sshd:session): session closed for user core Sep 10 23:49:40.986296 systemd[1]: sshd@8-172.31.30.159:22-139.178.68.195:56412.service: Deactivated successfully. Sep 10 23:49:40.996172 systemd[1]: session-9.scope: Deactivated successfully. Sep 10 23:49:40.998805 systemd[1]: session-9.scope: Consumed 13.076s CPU time, 274.2M memory peak. Sep 10 23:49:41.002138 systemd-logind[1863]: Session 9 logged out. Waiting for processes to exit. Sep 10 23:49:41.008136 systemd-logind[1863]: Removed session 9. Sep 10 23:49:41.209126 systemd[1]: Created slice kubepods-besteffort-pod438f04cb_6d76_4edd_b685_70d4458bfffe.slice - libcontainer container kubepods-besteffort-pod438f04cb_6d76_4edd_b685_70d4458bfffe.slice. Sep 10 23:49:41.248755 systemd[1]: Created slice kubepods-burstable-pod4ceb51d9_ff1e_4c52_ae2e_5bad8350c826.slice - libcontainer container kubepods-burstable-pod4ceb51d9_ff1e_4c52_ae2e_5bad8350c826.slice. 
Sep 10 23:49:41.287630 kubelet[3434]: I0910 23:49:41.287560 3434 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/438f04cb-6d76-4edd-b685-70d4458bfffe-xtables-lock\") pod \"kube-proxy-g7kkk\" (UID: \"438f04cb-6d76-4edd-b685-70d4458bfffe\") " pod="kube-system/kube-proxy-g7kkk" Sep 10 23:49:41.287630 kubelet[3434]: I0910 23:49:41.287633 3434 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/4ceb51d9-ff1e-4c52-ae2e-5bad8350c826-hostproc\") pod \"cilium-lmhqr\" (UID: \"4ceb51d9-ff1e-4c52-ae2e-5bad8350c826\") " pod="kube-system/cilium-lmhqr" Sep 10 23:49:41.289274 kubelet[3434]: I0910 23:49:41.287679 3434 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4ceb51d9-ff1e-4c52-ae2e-5bad8350c826-lib-modules\") pod \"cilium-lmhqr\" (UID: \"4ceb51d9-ff1e-4c52-ae2e-5bad8350c826\") " pod="kube-system/cilium-lmhqr" Sep 10 23:49:41.289274 kubelet[3434]: I0910 23:49:41.288559 3434 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/4ceb51d9-ff1e-4c52-ae2e-5bad8350c826-clustermesh-secrets\") pod \"cilium-lmhqr\" (UID: \"4ceb51d9-ff1e-4c52-ae2e-5bad8350c826\") " pod="kube-system/cilium-lmhqr" Sep 10 23:49:41.289274 kubelet[3434]: I0910 23:49:41.288638 3434 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/438f04cb-6d76-4edd-b685-70d4458bfffe-kube-proxy\") pod \"kube-proxy-g7kkk\" (UID: \"438f04cb-6d76-4edd-b685-70d4458bfffe\") " pod="kube-system/kube-proxy-g7kkk" Sep 10 23:49:41.289274 kubelet[3434]: I0910 23:49:41.288710 3434 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tf5m6\" (UniqueName: \"kubernetes.io/projected/438f04cb-6d76-4edd-b685-70d4458bfffe-kube-api-access-tf5m6\") pod \"kube-proxy-g7kkk\" (UID: \"438f04cb-6d76-4edd-b685-70d4458bfffe\") " pod="kube-system/kube-proxy-g7kkk" Sep 10 23:49:41.289274 kubelet[3434]: I0910 23:49:41.288752 3434 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/4ceb51d9-ff1e-4c52-ae2e-5bad8350c826-cilium-run\") pod \"cilium-lmhqr\" (UID: \"4ceb51d9-ff1e-4c52-ae2e-5bad8350c826\") " pod="kube-system/cilium-lmhqr" Sep 10 23:49:41.289274 kubelet[3434]: I0910 23:49:41.289114 3434 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/4ceb51d9-ff1e-4c52-ae2e-5bad8350c826-bpf-maps\") pod \"cilium-lmhqr\" (UID: \"4ceb51d9-ff1e-4c52-ae2e-5bad8350c826\") " pod="kube-system/cilium-lmhqr" Sep 10 23:49:41.289769 kubelet[3434]: I0910 23:49:41.289723 3434 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/4ceb51d9-ff1e-4c52-ae2e-5bad8350c826-cilium-cgroup\") pod \"cilium-lmhqr\" (UID: \"4ceb51d9-ff1e-4c52-ae2e-5bad8350c826\") " pod="kube-system/cilium-lmhqr" Sep 10 23:49:41.289863 kubelet[3434]: I0910 23:49:41.289826 3434 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: 
\"kubernetes.io/host-path/4ceb51d9-ff1e-4c52-ae2e-5bad8350c826-cni-path\") pod \"cilium-lmhqr\" (UID: \"4ceb51d9-ff1e-4c52-ae2e-5bad8350c826\") " pod="kube-system/cilium-lmhqr" Sep 10 23:49:41.290034 kubelet[3434]: I0910 23:49:41.289938 3434 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4ceb51d9-ff1e-4c52-ae2e-5bad8350c826-etc-cni-netd\") pod \"cilium-lmhqr\" (UID: \"4ceb51d9-ff1e-4c52-ae2e-5bad8350c826\") " pod="kube-system/cilium-lmhqr" Sep 10 23:49:41.290101 kubelet[3434]: I0910 23:49:41.290080 3434 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4ceb51d9-ff1e-4c52-ae2e-5bad8350c826-xtables-lock\") pod \"cilium-lmhqr\" (UID: \"4ceb51d9-ff1e-4c52-ae2e-5bad8350c826\") " pod="kube-system/cilium-lmhqr" Sep 10 23:49:41.290246 kubelet[3434]: I0910 23:49:41.290205 3434 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4ceb51d9-ff1e-4c52-ae2e-5bad8350c826-cilium-config-path\") pod \"cilium-lmhqr\" (UID: \"4ceb51d9-ff1e-4c52-ae2e-5bad8350c826\") " pod="kube-system/cilium-lmhqr" Sep 10 23:49:41.290959 kubelet[3434]: I0910 23:49:41.290905 3434 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/4ceb51d9-ff1e-4c52-ae2e-5bad8350c826-host-proc-sys-kernel\") pod \"cilium-lmhqr\" (UID: \"4ceb51d9-ff1e-4c52-ae2e-5bad8350c826\") " pod="kube-system/cilium-lmhqr" Sep 10 23:49:41.291134 kubelet[3434]: I0910 23:49:41.291095 3434 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/4ceb51d9-ff1e-4c52-ae2e-5bad8350c826-hubble-tls\") pod \"cilium-lmhqr\" (UID: \"4ceb51d9-ff1e-4c52-ae2e-5bad8350c826\") " pod="kube-system/cilium-lmhqr" Sep 10 23:49:41.292241 kubelet[3434]: I0910 23:49:41.291183 3434 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dt6pp\" (UniqueName: \"kubernetes.io/projected/4ceb51d9-ff1e-4c52-ae2e-5bad8350c826-kube-api-access-dt6pp\") pod \"cilium-lmhqr\" (UID: \"4ceb51d9-ff1e-4c52-ae2e-5bad8350c826\") " pod="kube-system/cilium-lmhqr" Sep 10 23:49:41.292241 kubelet[3434]: I0910 23:49:41.291364 3434 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/438f04cb-6d76-4edd-b685-70d4458bfffe-lib-modules\") pod \"kube-proxy-g7kkk\" (UID: \"438f04cb-6d76-4edd-b685-70d4458bfffe\") " pod="kube-system/kube-proxy-g7kkk" Sep 10 23:49:41.292241 kubelet[3434]: I0910 23:49:41.291896 3434 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/4ceb51d9-ff1e-4c52-ae2e-5bad8350c826-host-proc-sys-net\") pod \"cilium-lmhqr\" (UID: \"4ceb51d9-ff1e-4c52-ae2e-5bad8350c826\") " pod="kube-system/cilium-lmhqr" Sep 10 23:49:41.348279 systemd[1]: Created slice kubepods-besteffort-pode8084be0_04a9_4c71_a470_d071b9464654.slice - libcontainer container kubepods-besteffort-pode8084be0_04a9_4c71_a470_d071b9464654.slice. 
Sep 10 23:49:41.394776 kubelet[3434]: I0910 23:49:41.392315 3434 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4j2w7\" (UniqueName: \"kubernetes.io/projected/e8084be0-04a9-4c71-a470-d071b9464654-kube-api-access-4j2w7\") pod \"cilium-operator-6c4d7847fc-jgqbj\" (UID: \"e8084be0-04a9-4c71-a470-d071b9464654\") " pod="kube-system/cilium-operator-6c4d7847fc-jgqbj" Sep 10 23:49:41.394961 kubelet[3434]: I0910 23:49:41.394920 3434 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e8084be0-04a9-4c71-a470-d071b9464654-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-jgqbj\" (UID: \"e8084be0-04a9-4c71-a470-d071b9464654\") " pod="kube-system/cilium-operator-6c4d7847fc-jgqbj" Sep 10 23:49:41.528037 containerd[1899]: time="2025-09-10T23:49:41.527423249Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-g7kkk,Uid:438f04cb-6d76-4edd-b685-70d4458bfffe,Namespace:kube-system,Attempt:0,}" Sep 10 23:49:41.558130 containerd[1899]: time="2025-09-10T23:49:41.557719697Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-lmhqr,Uid:4ceb51d9-ff1e-4c52-ae2e-5bad8350c826,Namespace:kube-system,Attempt:0,}" Sep 10 23:49:41.569121 containerd[1899]: time="2025-09-10T23:49:41.568993241Z" level=info msg="connecting to shim a79089aae4dd3534626d97ac3f922e61c80715697cbec660bccd8ff7623acb55" address="unix:///run/containerd/s/449e0bbe33a1ff79d7d453e3fad43bf54aeee5090d739064c86545e5ea997297" namespace=k8s.io protocol=ttrpc version=3 Sep 10 23:49:41.611859 containerd[1899]: time="2025-09-10T23:49:41.609508217Z" level=info msg="connecting to shim e573c23623bbcc1d9b5b4bfbb38e33ec47e4191c8cbdc3749c91390df3c47dd8" address="unix:///run/containerd/s/d8cc30af26d44dc8c26208ee05361faff3705a96375ea9b8ae8622534b153e16" namespace=k8s.io protocol=ttrpc version=3 Sep 10 23:49:41.617037 systemd[1]: Started cri-containerd-a79089aae4dd3534626d97ac3f922e61c80715697cbec660bccd8ff7623acb55.scope - libcontainer container a79089aae4dd3534626d97ac3f922e61c80715697cbec660bccd8ff7623acb55. Sep 10 23:49:41.662122 systemd[1]: Started cri-containerd-e573c23623bbcc1d9b5b4bfbb38e33ec47e4191c8cbdc3749c91390df3c47dd8.scope - libcontainer container e573c23623bbcc1d9b5b4bfbb38e33ec47e4191c8cbdc3749c91390df3c47dd8. 
Sep 10 23:49:41.672606 containerd[1899]: time="2025-09-10T23:49:41.672197670Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-jgqbj,Uid:e8084be0-04a9-4c71-a470-d071b9464654,Namespace:kube-system,Attempt:0,}" Sep 10 23:49:41.700183 containerd[1899]: time="2025-09-10T23:49:41.700125810Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-g7kkk,Uid:438f04cb-6d76-4edd-b685-70d4458bfffe,Namespace:kube-system,Attempt:0,} returns sandbox id \"a79089aae4dd3534626d97ac3f922e61c80715697cbec660bccd8ff7623acb55\"" Sep 10 23:49:41.712346 containerd[1899]: time="2025-09-10T23:49:41.712246422Z" level=info msg="CreateContainer within sandbox \"a79089aae4dd3534626d97ac3f922e61c80715697cbec660bccd8ff7623acb55\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 10 23:49:41.741875 containerd[1899]: time="2025-09-10T23:49:41.741815682Z" level=info msg="connecting to shim 46beed377addb1dee1712bba9763950142e72951eee1204a68240a308ae77ea8" address="unix:///run/containerd/s/fcdce20e5d5f6fdd9354ad3ee83389d7d20404fe965108f2b4d0318ae2e9f08a" namespace=k8s.io protocol=ttrpc version=3 Sep 10 23:49:41.753242 containerd[1899]: time="2025-09-10T23:49:41.753173610Z" level=info msg="Container f196aa4188cf2529e079801369fb85ee2ca762007b6e927ac7ebdc3cde4d7a34: CDI devices from CRI Config.CDIDevices: []" Sep 10 23:49:41.773073 containerd[1899]: time="2025-09-10T23:49:41.773024694Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-lmhqr,Uid:4ceb51d9-ff1e-4c52-ae2e-5bad8350c826,Namespace:kube-system,Attempt:0,} returns sandbox id \"e573c23623bbcc1d9b5b4bfbb38e33ec47e4191c8cbdc3749c91390df3c47dd8\"" Sep 10 23:49:41.782747 containerd[1899]: time="2025-09-10T23:49:41.781914846Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Sep 10 23:49:41.786537 containerd[1899]: time="2025-09-10T23:49:41.786284094Z" level=info msg="CreateContainer within sandbox \"a79089aae4dd3534626d97ac3f922e61c80715697cbec660bccd8ff7623acb55\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"f196aa4188cf2529e079801369fb85ee2ca762007b6e927ac7ebdc3cde4d7a34\"" Sep 10 23:49:41.789530 containerd[1899]: time="2025-09-10T23:49:41.789471162Z" level=info msg="StartContainer for \"f196aa4188cf2529e079801369fb85ee2ca762007b6e927ac7ebdc3cde4d7a34\"" Sep 10 23:49:41.799288 containerd[1899]: time="2025-09-10T23:49:41.798644802Z" level=info msg="connecting to shim f196aa4188cf2529e079801369fb85ee2ca762007b6e927ac7ebdc3cde4d7a34" address="unix:///run/containerd/s/449e0bbe33a1ff79d7d453e3fad43bf54aeee5090d739064c86545e5ea997297" protocol=ttrpc version=3 Sep 10 23:49:41.813113 systemd[1]: Started cri-containerd-46beed377addb1dee1712bba9763950142e72951eee1204a68240a308ae77ea8.scope - libcontainer container 46beed377addb1dee1712bba9763950142e72951eee1204a68240a308ae77ea8. Sep 10 23:49:41.850976 systemd[1]: Started cri-containerd-f196aa4188cf2529e079801369fb85ee2ca762007b6e927ac7ebdc3cde4d7a34.scope - libcontainer container f196aa4188cf2529e079801369fb85ee2ca762007b6e927ac7ebdc3cde4d7a34. 
Sep 10 23:49:41.947799 containerd[1899]: time="2025-09-10T23:49:41.947672971Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-jgqbj,Uid:e8084be0-04a9-4c71-a470-d071b9464654,Namespace:kube-system,Attempt:0,} returns sandbox id \"46beed377addb1dee1712bba9763950142e72951eee1204a68240a308ae77ea8\"" Sep 10 23:49:41.999871 containerd[1899]: time="2025-09-10T23:49:41.999784123Z" level=info msg="StartContainer for \"f196aa4188cf2529e079801369fb85ee2ca762007b6e927ac7ebdc3cde4d7a34\" returns successfully" Sep 10 23:49:42.453132 kubelet[3434]: I0910 23:49:42.452505 3434 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-g7kkk" podStartSLOduration=1.452483658 podStartE2EDuration="1.452483658s" podCreationTimestamp="2025-09-10 23:49:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-10 23:49:42.452325666 +0000 UTC m=+6.450926709" watchObservedRunningTime="2025-09-10 23:49:42.452483658 +0000 UTC m=+6.451084677" Sep 10 23:49:53.082853 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1436488250.mount: Deactivated successfully. Sep 10 23:49:55.700719 containerd[1899]: time="2025-09-10T23:49:55.699928867Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 23:49:55.702852 containerd[1899]: time="2025-09-10T23:49:55.702810031Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710" Sep 10 23:49:55.705286 containerd[1899]: time="2025-09-10T23:49:55.705248023Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 23:49:55.708057 containerd[1899]: time="2025-09-10T23:49:55.707992999Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 13.923761517s" Sep 10 23:49:55.708201 containerd[1899]: time="2025-09-10T23:49:55.708054739Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Sep 10 23:49:55.711386 containerd[1899]: time="2025-09-10T23:49:55.711061075Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Sep 10 23:49:55.716974 containerd[1899]: time="2025-09-10T23:49:55.716914772Z" level=info msg="CreateContainer within sandbox \"e573c23623bbcc1d9b5b4bfbb38e33ec47e4191c8cbdc3749c91390df3c47dd8\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 10 23:49:55.753716 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1478336973.mount: Deactivated successfully. 
Sep 10 23:49:55.763896 containerd[1899]: time="2025-09-10T23:49:55.763824572Z" level=info msg="Container 164f0521e7a0edb7ed917ef58c26e5ae3f74e678ee950e2841aa3252050ae941: CDI devices from CRI Config.CDIDevices: []" Sep 10 23:49:55.768056 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1342109219.mount: Deactivated successfully. Sep 10 23:49:55.779224 containerd[1899]: time="2025-09-10T23:49:55.779155916Z" level=info msg="CreateContainer within sandbox \"e573c23623bbcc1d9b5b4bfbb38e33ec47e4191c8cbdc3749c91390df3c47dd8\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"164f0521e7a0edb7ed917ef58c26e5ae3f74e678ee950e2841aa3252050ae941\"" Sep 10 23:49:55.780515 containerd[1899]: time="2025-09-10T23:49:55.780206624Z" level=info msg="StartContainer for \"164f0521e7a0edb7ed917ef58c26e5ae3f74e678ee950e2841aa3252050ae941\"" Sep 10 23:49:55.782964 containerd[1899]: time="2025-09-10T23:49:55.782913020Z" level=info msg="connecting to shim 164f0521e7a0edb7ed917ef58c26e5ae3f74e678ee950e2841aa3252050ae941" address="unix:///run/containerd/s/d8cc30af26d44dc8c26208ee05361faff3705a96375ea9b8ae8622534b153e16" protocol=ttrpc version=3 Sep 10 23:49:55.825007 systemd[1]: Started cri-containerd-164f0521e7a0edb7ed917ef58c26e5ae3f74e678ee950e2841aa3252050ae941.scope - libcontainer container 164f0521e7a0edb7ed917ef58c26e5ae3f74e678ee950e2841aa3252050ae941. Sep 10 23:49:55.887933 containerd[1899]: time="2025-09-10T23:49:55.887880284Z" level=info msg="StartContainer for \"164f0521e7a0edb7ed917ef58c26e5ae3f74e678ee950e2841aa3252050ae941\" returns successfully" Sep 10 23:49:55.914431 systemd[1]: cri-containerd-164f0521e7a0edb7ed917ef58c26e5ae3f74e678ee950e2841aa3252050ae941.scope: Deactivated successfully. Sep 10 23:49:55.915031 systemd[1]: cri-containerd-164f0521e7a0edb7ed917ef58c26e5ae3f74e678ee950e2841aa3252050ae941.scope: Consumed 45ms CPU time, 6.5M memory peak, 2.1M written to disk. Sep 10 23:49:55.922469 containerd[1899]: time="2025-09-10T23:49:55.922410549Z" level=info msg="received exit event container_id:\"164f0521e7a0edb7ed917ef58c26e5ae3f74e678ee950e2841aa3252050ae941\" id:\"164f0521e7a0edb7ed917ef58c26e5ae3f74e678ee950e2841aa3252050ae941\" pid:3857 exited_at:{seconds:1757548195 nanos:921536997}" Sep 10 23:49:55.923581 containerd[1899]: time="2025-09-10T23:49:55.923404521Z" level=info msg="TaskExit event in podsandbox handler container_id:\"164f0521e7a0edb7ed917ef58c26e5ae3f74e678ee950e2841aa3252050ae941\" id:\"164f0521e7a0edb7ed917ef58c26e5ae3f74e678ee950e2841aa3252050ae941\" pid:3857 exited_at:{seconds:1757548195 nanos:921536997}" Sep 10 23:49:56.741912 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-164f0521e7a0edb7ed917ef58c26e5ae3f74e678ee950e2841aa3252050ae941-rootfs.mount: Deactivated successfully. 
Sep 10 23:49:57.480756 containerd[1899]: time="2025-09-10T23:49:57.479817092Z" level=info msg="CreateContainer within sandbox \"e573c23623bbcc1d9b5b4bfbb38e33ec47e4191c8cbdc3749c91390df3c47dd8\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Sep 10 23:49:57.500504 containerd[1899]: time="2025-09-10T23:49:57.499621988Z" level=info msg="Container 97ffc4a81cab2055b9f4d70947ef5a02bfa5eec8a7a8a230cc2e3cbb4d5c069e: CDI devices from CRI Config.CDIDevices: []" Sep 10 23:49:57.524582 containerd[1899]: time="2025-09-10T23:49:57.523332836Z" level=info msg="CreateContainer within sandbox \"e573c23623bbcc1d9b5b4bfbb38e33ec47e4191c8cbdc3749c91390df3c47dd8\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"97ffc4a81cab2055b9f4d70947ef5a02bfa5eec8a7a8a230cc2e3cbb4d5c069e\"" Sep 10 23:49:57.530247 containerd[1899]: time="2025-09-10T23:49:57.530005941Z" level=info msg="StartContainer for \"97ffc4a81cab2055b9f4d70947ef5a02bfa5eec8a7a8a230cc2e3cbb4d5c069e\"" Sep 10 23:49:57.535269 containerd[1899]: time="2025-09-10T23:49:57.535214613Z" level=info msg="connecting to shim 97ffc4a81cab2055b9f4d70947ef5a02bfa5eec8a7a8a230cc2e3cbb4d5c069e" address="unix:///run/containerd/s/d8cc30af26d44dc8c26208ee05361faff3705a96375ea9b8ae8622534b153e16" protocol=ttrpc version=3 Sep 10 23:49:57.573998 systemd[1]: Started cri-containerd-97ffc4a81cab2055b9f4d70947ef5a02bfa5eec8a7a8a230cc2e3cbb4d5c069e.scope - libcontainer container 97ffc4a81cab2055b9f4d70947ef5a02bfa5eec8a7a8a230cc2e3cbb4d5c069e. Sep 10 23:49:57.636032 containerd[1899]: time="2025-09-10T23:49:57.635977677Z" level=info msg="StartContainer for \"97ffc4a81cab2055b9f4d70947ef5a02bfa5eec8a7a8a230cc2e3cbb4d5c069e\" returns successfully" Sep 10 23:49:57.663453 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 10 23:49:57.664468 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 10 23:49:57.665161 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Sep 10 23:49:57.669568 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 10 23:49:57.674352 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Sep 10 23:49:57.680332 systemd[1]: cri-containerd-97ffc4a81cab2055b9f4d70947ef5a02bfa5eec8a7a8a230cc2e3cbb4d5c069e.scope: Deactivated successfully. Sep 10 23:49:57.681235 containerd[1899]: time="2025-09-10T23:49:57.680860569Z" level=info msg="received exit event container_id:\"97ffc4a81cab2055b9f4d70947ef5a02bfa5eec8a7a8a230cc2e3cbb4d5c069e\" id:\"97ffc4a81cab2055b9f4d70947ef5a02bfa5eec8a7a8a230cc2e3cbb4d5c069e\" pid:3904 exited_at:{seconds:1757548197 nanos:679573257}" Sep 10 23:49:57.681310 containerd[1899]: time="2025-09-10T23:49:57.681283737Z" level=info msg="TaskExit event in podsandbox handler container_id:\"97ffc4a81cab2055b9f4d70947ef5a02bfa5eec8a7a8a230cc2e3cbb4d5c069e\" id:\"97ffc4a81cab2055b9f4d70947ef5a02bfa5eec8a7a8a230cc2e3cbb4d5c069e\" pid:3904 exited_at:{seconds:1757548197 nanos:679573257}" Sep 10 23:49:57.718791 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 10 23:49:57.743146 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-97ffc4a81cab2055b9f4d70947ef5a02bfa5eec8a7a8a230cc2e3cbb4d5c069e-rootfs.mount: Deactivated successfully. 
Sep 10 23:49:58.482231 containerd[1899]: time="2025-09-10T23:49:58.482093709Z" level=info msg="CreateContainer within sandbox \"e573c23623bbcc1d9b5b4bfbb38e33ec47e4191c8cbdc3749c91390df3c47dd8\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Sep 10 23:49:58.519615 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount484365293.mount: Deactivated successfully. Sep 10 23:49:58.522776 containerd[1899]: time="2025-09-10T23:49:58.520986657Z" level=info msg="Container b1a05c09d42ef63eea3c347799c59806363729446862a130e4dbff3468007911: CDI devices from CRI Config.CDIDevices: []" Sep 10 23:49:58.548269 containerd[1899]: time="2025-09-10T23:49:58.548168854Z" level=info msg="CreateContainer within sandbox \"e573c23623bbcc1d9b5b4bfbb38e33ec47e4191c8cbdc3749c91390df3c47dd8\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"b1a05c09d42ef63eea3c347799c59806363729446862a130e4dbff3468007911\"" Sep 10 23:49:58.549481 containerd[1899]: time="2025-09-10T23:49:58.549309298Z" level=info msg="StartContainer for \"b1a05c09d42ef63eea3c347799c59806363729446862a130e4dbff3468007911\"" Sep 10 23:49:58.555648 containerd[1899]: time="2025-09-10T23:49:58.555465286Z" level=info msg="connecting to shim b1a05c09d42ef63eea3c347799c59806363729446862a130e4dbff3468007911" address="unix:///run/containerd/s/d8cc30af26d44dc8c26208ee05361faff3705a96375ea9b8ae8622534b153e16" protocol=ttrpc version=3 Sep 10 23:49:58.592013 systemd[1]: Started cri-containerd-b1a05c09d42ef63eea3c347799c59806363729446862a130e4dbff3468007911.scope - libcontainer container b1a05c09d42ef63eea3c347799c59806363729446862a130e4dbff3468007911. Sep 10 23:49:58.680188 containerd[1899]: time="2025-09-10T23:49:58.680103094Z" level=info msg="StartContainer for \"b1a05c09d42ef63eea3c347799c59806363729446862a130e4dbff3468007911\" returns successfully" Sep 10 23:49:58.686870 systemd[1]: cri-containerd-b1a05c09d42ef63eea3c347799c59806363729446862a130e4dbff3468007911.scope: Deactivated successfully. Sep 10 23:49:58.693952 containerd[1899]: time="2025-09-10T23:49:58.693767002Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b1a05c09d42ef63eea3c347799c59806363729446862a130e4dbff3468007911\" id:\"b1a05c09d42ef63eea3c347799c59806363729446862a130e4dbff3468007911\" pid:3954 exited_at:{seconds:1757548198 nanos:692169262}" Sep 10 23:49:58.694394 containerd[1899]: time="2025-09-10T23:49:58.694295578Z" level=info msg="received exit event container_id:\"b1a05c09d42ef63eea3c347799c59806363729446862a130e4dbff3468007911\" id:\"b1a05c09d42ef63eea3c347799c59806363729446862a130e4dbff3468007911\" pid:3954 exited_at:{seconds:1757548198 nanos:692169262}" Sep 10 23:49:58.738617 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b1a05c09d42ef63eea3c347799c59806363729446862a130e4dbff3468007911-rootfs.mount: Deactivated successfully. Sep 10 23:49:58.768987 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2930664383.mount: Deactivated successfully. 
Sep 10 23:49:59.493572 containerd[1899]: time="2025-09-10T23:49:59.493435066Z" level=info msg="CreateContainer within sandbox \"e573c23623bbcc1d9b5b4bfbb38e33ec47e4191c8cbdc3749c91390df3c47dd8\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Sep 10 23:49:59.520655 containerd[1899]: time="2025-09-10T23:49:59.520529434Z" level=info msg="Container c0c32e0a83009ff9ccbe5fd4c0a454dabb3e06f4232ba0f6dfb77121e4b7e2d4: CDI devices from CRI Config.CDIDevices: []" Sep 10 23:49:59.527402 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2354893420.mount: Deactivated successfully. Sep 10 23:49:59.559349 containerd[1899]: time="2025-09-10T23:49:59.559270415Z" level=info msg="CreateContainer within sandbox \"e573c23623bbcc1d9b5b4bfbb38e33ec47e4191c8cbdc3749c91390df3c47dd8\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"c0c32e0a83009ff9ccbe5fd4c0a454dabb3e06f4232ba0f6dfb77121e4b7e2d4\"" Sep 10 23:49:59.560591 containerd[1899]: time="2025-09-10T23:49:59.560375387Z" level=info msg="StartContainer for \"c0c32e0a83009ff9ccbe5fd4c0a454dabb3e06f4232ba0f6dfb77121e4b7e2d4\"" Sep 10 23:49:59.563546 containerd[1899]: time="2025-09-10T23:49:59.563431235Z" level=info msg="connecting to shim c0c32e0a83009ff9ccbe5fd4c0a454dabb3e06f4232ba0f6dfb77121e4b7e2d4" address="unix:///run/containerd/s/d8cc30af26d44dc8c26208ee05361faff3705a96375ea9b8ae8622534b153e16" protocol=ttrpc version=3 Sep 10 23:49:59.605048 systemd[1]: Started cri-containerd-c0c32e0a83009ff9ccbe5fd4c0a454dabb3e06f4232ba0f6dfb77121e4b7e2d4.scope - libcontainer container c0c32e0a83009ff9ccbe5fd4c0a454dabb3e06f4232ba0f6dfb77121e4b7e2d4. Sep 10 23:49:59.662999 systemd[1]: cri-containerd-c0c32e0a83009ff9ccbe5fd4c0a454dabb3e06f4232ba0f6dfb77121e4b7e2d4.scope: Deactivated successfully. Sep 10 23:49:59.671086 containerd[1899]: time="2025-09-10T23:49:59.671028467Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c0c32e0a83009ff9ccbe5fd4c0a454dabb3e06f4232ba0f6dfb77121e4b7e2d4\" id:\"c0c32e0a83009ff9ccbe5fd4c0a454dabb3e06f4232ba0f6dfb77121e4b7e2d4\" pid:4001 exited_at:{seconds:1757548199 nanos:669008183}" Sep 10 23:49:59.672496 containerd[1899]: time="2025-09-10T23:49:59.672437615Z" level=info msg="received exit event container_id:\"c0c32e0a83009ff9ccbe5fd4c0a454dabb3e06f4232ba0f6dfb77121e4b7e2d4\" id:\"c0c32e0a83009ff9ccbe5fd4c0a454dabb3e06f4232ba0f6dfb77121e4b7e2d4\" pid:4001 exited_at:{seconds:1757548199 nanos:669008183}" Sep 10 23:49:59.689586 containerd[1899]: time="2025-09-10T23:49:59.689533463Z" level=info msg="StartContainer for \"c0c32e0a83009ff9ccbe5fd4c0a454dabb3e06f4232ba0f6dfb77121e4b7e2d4\" returns successfully" Sep 10 23:49:59.752354 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c0c32e0a83009ff9ccbe5fd4c0a454dabb3e06f4232ba0f6dfb77121e4b7e2d4-rootfs.mount: Deactivated successfully. Sep 10 23:50:00.509934 containerd[1899]: time="2025-09-10T23:50:00.508539911Z" level=info msg="CreateContainer within sandbox \"e573c23623bbcc1d9b5b4bfbb38e33ec47e4191c8cbdc3749c91390df3c47dd8\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Sep 10 23:50:00.556564 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1420395859.mount: Deactivated successfully. 
Sep 10 23:50:00.557119 containerd[1899]: time="2025-09-10T23:50:00.556839972Z" level=info msg="Container 2d72281b6321114d9a2af10a952a12b5c2e3aa4fa0c9c64e83fda2d078e2efff: CDI devices from CRI Config.CDIDevices: []" Sep 10 23:50:00.584511 containerd[1899]: time="2025-09-10T23:50:00.584443896Z" level=info msg="CreateContainer within sandbox \"e573c23623bbcc1d9b5b4bfbb38e33ec47e4191c8cbdc3749c91390df3c47dd8\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"2d72281b6321114d9a2af10a952a12b5c2e3aa4fa0c9c64e83fda2d078e2efff\"" Sep 10 23:50:00.587008 containerd[1899]: time="2025-09-10T23:50:00.586873368Z" level=info msg="StartContainer for \"2d72281b6321114d9a2af10a952a12b5c2e3aa4fa0c9c64e83fda2d078e2efff\"" Sep 10 23:50:00.592581 containerd[1899]: time="2025-09-10T23:50:00.592509084Z" level=info msg="connecting to shim 2d72281b6321114d9a2af10a952a12b5c2e3aa4fa0c9c64e83fda2d078e2efff" address="unix:///run/containerd/s/d8cc30af26d44dc8c26208ee05361faff3705a96375ea9b8ae8622534b153e16" protocol=ttrpc version=3 Sep 10 23:50:00.651191 systemd[1]: Started cri-containerd-2d72281b6321114d9a2af10a952a12b5c2e3aa4fa0c9c64e83fda2d078e2efff.scope - libcontainer container 2d72281b6321114d9a2af10a952a12b5c2e3aa4fa0c9c64e83fda2d078e2efff. Sep 10 23:50:00.770253 containerd[1899]: time="2025-09-10T23:50:00.770099677Z" level=info msg="StartContainer for \"2d72281b6321114d9a2af10a952a12b5c2e3aa4fa0c9c64e83fda2d078e2efff\" returns successfully" Sep 10 23:50:01.055815 containerd[1899]: time="2025-09-10T23:50:01.054503674Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2d72281b6321114d9a2af10a952a12b5c2e3aa4fa0c9c64e83fda2d078e2efff\" id:\"3a95f3b74c1cd50afde41597fdaf9d1c576816b133917795a1b0a1f3bd2540ed\" pid:4075 exited_at:{seconds:1757548201 nanos:53223118}" Sep 10 23:50:01.104340 kubelet[3434]: I0910 23:50:01.104011 3434 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Sep 10 23:50:01.230338 systemd[1]: Created slice kubepods-burstable-pod0d4988bd_a56b_4621_bed8_049888250a59.slice - libcontainer container kubepods-burstable-pod0d4988bd_a56b_4621_bed8_049888250a59.slice. Sep 10 23:50:01.248958 systemd[1]: Created slice kubepods-burstable-podf814a6c1_6b42_42c8_89c5_17011dd0ee67.slice - libcontainer container kubepods-burstable-podf814a6c1_6b42_42c8_89c5_17011dd0ee67.slice. 
Sep 10 23:50:01.249596 kubelet[3434]: I0910 23:50:01.249402 3434 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0d4988bd-a56b-4621-bed8-049888250a59-config-volume\") pod \"coredns-674b8bbfcf-jmdhh\" (UID: \"0d4988bd-a56b-4621-bed8-049888250a59\") " pod="kube-system/coredns-674b8bbfcf-jmdhh" Sep 10 23:50:01.249596 kubelet[3434]: I0910 23:50:01.249454 3434 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f814a6c1-6b42-42c8-89c5-17011dd0ee67-config-volume\") pod \"coredns-674b8bbfcf-blcwt\" (UID: \"f814a6c1-6b42-42c8-89c5-17011dd0ee67\") " pod="kube-system/coredns-674b8bbfcf-blcwt" Sep 10 23:50:01.249596 kubelet[3434]: I0910 23:50:01.249508 3434 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r8rsh\" (UniqueName: \"kubernetes.io/projected/0d4988bd-a56b-4621-bed8-049888250a59-kube-api-access-r8rsh\") pod \"coredns-674b8bbfcf-jmdhh\" (UID: \"0d4988bd-a56b-4621-bed8-049888250a59\") " pod="kube-system/coredns-674b8bbfcf-jmdhh" Sep 10 23:50:01.249596 kubelet[3434]: I0910 23:50:01.249547 3434 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7x5vd\" (UniqueName: \"kubernetes.io/projected/f814a6c1-6b42-42c8-89c5-17011dd0ee67-kube-api-access-7x5vd\") pod \"coredns-674b8bbfcf-blcwt\" (UID: \"f814a6c1-6b42-42c8-89c5-17011dd0ee67\") " pod="kube-system/coredns-674b8bbfcf-blcwt" Sep 10 23:50:01.572244 containerd[1899]: time="2025-09-10T23:50:01.569255593Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-jmdhh,Uid:0d4988bd-a56b-4621-bed8-049888250a59,Namespace:kube-system,Attempt:0,}" Sep 10 23:50:01.595151 containerd[1899]: time="2025-09-10T23:50:01.593980981Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-blcwt,Uid:f814a6c1-6b42-42c8-89c5-17011dd0ee67,Namespace:kube-system,Attempt:0,}" Sep 10 23:50:01.724540 containerd[1899]: time="2025-09-10T23:50:01.724455793Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 23:50:01.727197 containerd[1899]: time="2025-09-10T23:50:01.727153609Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306" Sep 10 23:50:01.732024 containerd[1899]: time="2025-09-10T23:50:01.731971105Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 23:50:01.739742 containerd[1899]: time="2025-09-10T23:50:01.739637317Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 6.02852283s" Sep 10 23:50:01.739947 containerd[1899]: time="2025-09-10T23:50:01.739917121Z" level=info msg="PullImage 
\"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Sep 10 23:50:01.751651 containerd[1899]: time="2025-09-10T23:50:01.751574557Z" level=info msg="CreateContainer within sandbox \"46beed377addb1dee1712bba9763950142e72951eee1204a68240a308ae77ea8\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Sep 10 23:50:01.774714 containerd[1899]: time="2025-09-10T23:50:01.772846322Z" level=info msg="Container ab491065e7c5ed9e3b4c6732045edbdbd3d26bbec6eca1830b5c161b648437c2: CDI devices from CRI Config.CDIDevices: []" Sep 10 23:50:01.790221 containerd[1899]: time="2025-09-10T23:50:01.790165046Z" level=info msg="CreateContainer within sandbox \"46beed377addb1dee1712bba9763950142e72951eee1204a68240a308ae77ea8\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"ab491065e7c5ed9e3b4c6732045edbdbd3d26bbec6eca1830b5c161b648437c2\"" Sep 10 23:50:01.792454 containerd[1899]: time="2025-09-10T23:50:01.792293042Z" level=info msg="StartContainer for \"ab491065e7c5ed9e3b4c6732045edbdbd3d26bbec6eca1830b5c161b648437c2\"" Sep 10 23:50:01.796904 containerd[1899]: time="2025-09-10T23:50:01.796597982Z" level=info msg="connecting to shim ab491065e7c5ed9e3b4c6732045edbdbd3d26bbec6eca1830b5c161b648437c2" address="unix:///run/containerd/s/fcdce20e5d5f6fdd9354ad3ee83389d7d20404fe965108f2b4d0318ae2e9f08a" protocol=ttrpc version=3 Sep 10 23:50:01.853183 systemd[1]: Started cri-containerd-ab491065e7c5ed9e3b4c6732045edbdbd3d26bbec6eca1830b5c161b648437c2.scope - libcontainer container ab491065e7c5ed9e3b4c6732045edbdbd3d26bbec6eca1830b5c161b648437c2. Sep 10 23:50:01.947090 containerd[1899]: time="2025-09-10T23:50:01.947026154Z" level=info msg="StartContainer for \"ab491065e7c5ed9e3b4c6732045edbdbd3d26bbec6eca1830b5c161b648437c2\" returns successfully" Sep 10 23:50:02.391143 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1020703743.mount: Deactivated successfully. Sep 10 23:50:02.589945 kubelet[3434]: I0910 23:50:02.589848 3434 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-lmhqr" podStartSLOduration=7.658881877 podStartE2EDuration="21.589821074s" podCreationTimestamp="2025-09-10 23:49:41 +0000 UTC" firstStartedPulling="2025-09-10 23:49:41.779063898 +0000 UTC m=+5.777664917" lastFinishedPulling="2025-09-10 23:49:55.710003107 +0000 UTC m=+19.708604114" observedRunningTime="2025-09-10 23:50:01.634410349 +0000 UTC m=+25.633011368" watchObservedRunningTime="2025-09-10 23:50:02.589821074 +0000 UTC m=+26.588422081" Sep 10 23:50:02.590527 kubelet[3434]: I0910 23:50:02.590128 3434 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-jgqbj" podStartSLOduration=1.799707884 podStartE2EDuration="21.590116694s" podCreationTimestamp="2025-09-10 23:49:41 +0000 UTC" firstStartedPulling="2025-09-10 23:49:41.950958703 +0000 UTC m=+5.949559710" lastFinishedPulling="2025-09-10 23:50:01.741367525 +0000 UTC m=+25.739968520" observedRunningTime="2025-09-10 23:50:02.588212642 +0000 UTC m=+26.586813661" watchObservedRunningTime="2025-09-10 23:50:02.590116694 +0000 UTC m=+26.588717725" Sep 10 23:50:06.437065 (udev-worker)[4210]: Network interface NamePolicy= disabled on kernel command line. 
Sep 10 23:50:06.443648 systemd-networkd[1819]: cilium_host: Link UP Sep 10 23:50:06.447666 (udev-worker)[4211]: Network interface NamePolicy= disabled on kernel command line. Sep 10 23:50:06.447989 systemd-networkd[1819]: cilium_net: Link UP Sep 10 23:50:06.448443 systemd-networkd[1819]: cilium_net: Gained carrier Sep 10 23:50:06.455562 systemd-networkd[1819]: cilium_host: Gained carrier Sep 10 23:50:06.536019 systemd-networkd[1819]: cilium_net: Gained IPv6LL Sep 10 23:50:06.644800 systemd-networkd[1819]: cilium_vxlan: Link UP Sep 10 23:50:06.644814 systemd-networkd[1819]: cilium_vxlan: Gained carrier Sep 10 23:50:06.896219 systemd-networkd[1819]: cilium_host: Gained IPv6LL Sep 10 23:50:07.210750 kernel: NET: Registered PF_ALG protocol family Sep 10 23:50:08.556870 systemd-networkd[1819]: lxc_health: Link UP Sep 10 23:50:08.563074 (udev-worker)[4218]: Network interface NamePolicy= disabled on kernel command line. Sep 10 23:50:08.566325 systemd-networkd[1819]: lxc_health: Gained carrier Sep 10 23:50:08.601358 systemd-networkd[1819]: cilium_vxlan: Gained IPv6LL Sep 10 23:50:09.215857 kernel: eth0: renamed from tmp14e07 Sep 10 23:50:09.214811 (udev-worker)[4538]: Network interface NamePolicy= disabled on kernel command line. Sep 10 23:50:09.218845 systemd-networkd[1819]: lxc598fbace9484: Link UP Sep 10 23:50:09.229392 systemd-networkd[1819]: lxc598fbace9484: Gained carrier Sep 10 23:50:09.251189 kernel: eth0: renamed from tmp0d2fd Sep 10 23:50:09.252592 systemd-networkd[1819]: lxc4b84fe0bd637: Link UP Sep 10 23:50:09.259876 systemd-networkd[1819]: lxc4b84fe0bd637: Gained carrier Sep 10 23:50:09.944000 systemd-networkd[1819]: lxc_health: Gained IPv6LL Sep 10 23:50:10.904064 systemd-networkd[1819]: lxc4b84fe0bd637: Gained IPv6LL Sep 10 23:50:11.288005 systemd-networkd[1819]: lxc598fbace9484: Gained IPv6LL Sep 10 23:50:14.094123 ntpd[1857]: Listen normally on 8 cilium_host 192.168.0.6:123 Sep 10 23:50:14.095129 ntpd[1857]: 10 Sep 23:50:14 ntpd[1857]: Listen normally on 8 cilium_host 192.168.0.6:123 Sep 10 23:50:14.095129 ntpd[1857]: 10 Sep 23:50:14 ntpd[1857]: Listen normally on 9 cilium_net [fe80::1431:53ff:fe57:d396%4]:123 Sep 10 23:50:14.095129 ntpd[1857]: 10 Sep 23:50:14 ntpd[1857]: Listen normally on 10 cilium_host [fe80::85a:a5ff:fe84:b286%5]:123 Sep 10 23:50:14.095129 ntpd[1857]: 10 Sep 23:50:14 ntpd[1857]: Listen normally on 11 cilium_vxlan [fe80::5865:5fff:fe29:d750%6]:123 Sep 10 23:50:14.095129 ntpd[1857]: 10 Sep 23:50:14 ntpd[1857]: Listen normally on 12 lxc_health [fe80::b83a:90ff:febb:cbbf%8]:123 Sep 10 23:50:14.095129 ntpd[1857]: 10 Sep 23:50:14 ntpd[1857]: Listen normally on 13 lxc598fbace9484 [fe80::c55:1bff:fe27:e52d%10]:123 Sep 10 23:50:14.095129 ntpd[1857]: 10 Sep 23:50:14 ntpd[1857]: Listen normally on 14 lxc4b84fe0bd637 [fe80::9c58:aff:fe6c:ed23%12]:123 Sep 10 23:50:14.094243 ntpd[1857]: Listen normally on 9 cilium_net [fe80::1431:53ff:fe57:d396%4]:123 Sep 10 23:50:14.094319 ntpd[1857]: Listen normally on 10 cilium_host [fe80::85a:a5ff:fe84:b286%5]:123 Sep 10 23:50:14.094383 ntpd[1857]: Listen normally on 11 cilium_vxlan [fe80::5865:5fff:fe29:d750%6]:123 Sep 10 23:50:14.094445 ntpd[1857]: Listen normally on 12 lxc_health [fe80::b83a:90ff:febb:cbbf%8]:123 Sep 10 23:50:14.094507 ntpd[1857]: Listen normally on 13 lxc598fbace9484 [fe80::c55:1bff:fe27:e52d%10]:123 Sep 10 23:50:14.094573 ntpd[1857]: Listen normally on 14 lxc4b84fe0bd637 [fe80::9c58:aff:fe6c:ed23%12]:123 Sep 10 23:50:17.479611 containerd[1899]: time="2025-09-10T23:50:17.477925384Z" level=info msg="connecting to 
shim 0d2fde2794037c20e53d39609e661c078f3b6f9007b6756915c8709df8d05cdb" address="unix:///run/containerd/s/beaf6deb248f7e6ff1c4ea77a3cf9bd8f7f93d830ad36baf15db2788ddba99c1" namespace=k8s.io protocol=ttrpc version=3 Sep 10 23:50:17.521024 containerd[1899]: time="2025-09-10T23:50:17.520894072Z" level=info msg="connecting to shim 14e07b414e1d84da6b99db37e60573db9960b8a8ab3e613efdf73400e86a2e25" address="unix:///run/containerd/s/5d816741d6ebb0648c408adf08647a5c229b48704be070335974b6f2f58a80d3" namespace=k8s.io protocol=ttrpc version=3 Sep 10 23:50:17.581139 systemd[1]: Started cri-containerd-0d2fde2794037c20e53d39609e661c078f3b6f9007b6756915c8709df8d05cdb.scope - libcontainer container 0d2fde2794037c20e53d39609e661c078f3b6f9007b6756915c8709df8d05cdb. Sep 10 23:50:17.599121 systemd[1]: Started cri-containerd-14e07b414e1d84da6b99db37e60573db9960b8a8ab3e613efdf73400e86a2e25.scope - libcontainer container 14e07b414e1d84da6b99db37e60573db9960b8a8ab3e613efdf73400e86a2e25. Sep 10 23:50:17.735661 containerd[1899]: time="2025-09-10T23:50:17.735394049Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-jmdhh,Uid:0d4988bd-a56b-4621-bed8-049888250a59,Namespace:kube-system,Attempt:0,} returns sandbox id \"0d2fde2794037c20e53d39609e661c078f3b6f9007b6756915c8709df8d05cdb\"" Sep 10 23:50:17.756380 containerd[1899]: time="2025-09-10T23:50:17.756255401Z" level=info msg="CreateContainer within sandbox \"0d2fde2794037c20e53d39609e661c078f3b6f9007b6756915c8709df8d05cdb\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 10 23:50:17.756656 containerd[1899]: time="2025-09-10T23:50:17.756605777Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-blcwt,Uid:f814a6c1-6b42-42c8-89c5-17011dd0ee67,Namespace:kube-system,Attempt:0,} returns sandbox id \"14e07b414e1d84da6b99db37e60573db9960b8a8ab3e613efdf73400e86a2e25\"" Sep 10 23:50:17.769019 containerd[1899]: time="2025-09-10T23:50:17.768051125Z" level=info msg="CreateContainer within sandbox \"14e07b414e1d84da6b99db37e60573db9960b8a8ab3e613efdf73400e86a2e25\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 10 23:50:17.789729 containerd[1899]: time="2025-09-10T23:50:17.789637637Z" level=info msg="Container d7279e3e4f8c45da90ff2f46efe14fe75b9d1153674a4b53e3dcd5097e124db3: CDI devices from CRI Config.CDIDevices: []" Sep 10 23:50:17.797717 containerd[1899]: time="2025-09-10T23:50:17.797639621Z" level=info msg="Container ea5ae49257ca2496004c5ae195c7114348748c6c57a6c43e81038aca8e0cc47e: CDI devices from CRI Config.CDIDevices: []" Sep 10 23:50:17.806381 containerd[1899]: time="2025-09-10T23:50:17.806167889Z" level=info msg="CreateContainer within sandbox \"0d2fde2794037c20e53d39609e661c078f3b6f9007b6756915c8709df8d05cdb\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"d7279e3e4f8c45da90ff2f46efe14fe75b9d1153674a4b53e3dcd5097e124db3\"" Sep 10 23:50:17.809289 containerd[1899]: time="2025-09-10T23:50:17.808944149Z" level=info msg="StartContainer for \"d7279e3e4f8c45da90ff2f46efe14fe75b9d1153674a4b53e3dcd5097e124db3\"" Sep 10 23:50:17.812926 containerd[1899]: time="2025-09-10T23:50:17.812781221Z" level=info msg="connecting to shim d7279e3e4f8c45da90ff2f46efe14fe75b9d1153674a4b53e3dcd5097e124db3" address="unix:///run/containerd/s/beaf6deb248f7e6ff1c4ea77a3cf9bd8f7f93d830ad36baf15db2788ddba99c1" protocol=ttrpc version=3 Sep 10 23:50:17.820352 containerd[1899]: time="2025-09-10T23:50:17.820227665Z" level=info msg="CreateContainer within sandbox 
\"14e07b414e1d84da6b99db37e60573db9960b8a8ab3e613efdf73400e86a2e25\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"ea5ae49257ca2496004c5ae195c7114348748c6c57a6c43e81038aca8e0cc47e\"" Sep 10 23:50:17.822167 containerd[1899]: time="2025-09-10T23:50:17.821994281Z" level=info msg="StartContainer for \"ea5ae49257ca2496004c5ae195c7114348748c6c57a6c43e81038aca8e0cc47e\"" Sep 10 23:50:17.827763 containerd[1899]: time="2025-09-10T23:50:17.827679785Z" level=info msg="connecting to shim ea5ae49257ca2496004c5ae195c7114348748c6c57a6c43e81038aca8e0cc47e" address="unix:///run/containerd/s/5d816741d6ebb0648c408adf08647a5c229b48704be070335974b6f2f58a80d3" protocol=ttrpc version=3 Sep 10 23:50:17.859006 systemd[1]: Started cri-containerd-d7279e3e4f8c45da90ff2f46efe14fe75b9d1153674a4b53e3dcd5097e124db3.scope - libcontainer container d7279e3e4f8c45da90ff2f46efe14fe75b9d1153674a4b53e3dcd5097e124db3. Sep 10 23:50:17.875005 systemd[1]: Started cri-containerd-ea5ae49257ca2496004c5ae195c7114348748c6c57a6c43e81038aca8e0cc47e.scope - libcontainer container ea5ae49257ca2496004c5ae195c7114348748c6c57a6c43e81038aca8e0cc47e. Sep 10 23:50:18.024893 containerd[1899]: time="2025-09-10T23:50:18.023734454Z" level=info msg="StartContainer for \"ea5ae49257ca2496004c5ae195c7114348748c6c57a6c43e81038aca8e0cc47e\" returns successfully" Sep 10 23:50:18.039301 containerd[1899]: time="2025-09-10T23:50:18.038479190Z" level=info msg="StartContainer for \"d7279e3e4f8c45da90ff2f46efe14fe75b9d1153674a4b53e3dcd5097e124db3\" returns successfully" Sep 10 23:50:18.287104 systemd[1]: Started sshd@9-172.31.30.159:22-139.178.68.195:33878.service - OpenSSH per-connection server daemon (139.178.68.195:33878). Sep 10 23:50:18.485840 sshd[4740]: Accepted publickey for core from 139.178.68.195 port 33878 ssh2: RSA SHA256:ja8Z659dnX0Tz1pZfaOwRz2q/KALpEA2JWSy/+nC98s Sep 10 23:50:18.488839 sshd-session[4740]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 23:50:18.497077 systemd-logind[1863]: New session 10 of user core. Sep 10 23:50:18.506972 systemd[1]: Started session-10.scope - Session 10 of User core. 
Sep 10 23:50:18.630848 kubelet[3434]: I0910 23:50:18.630058 3434 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-blcwt" podStartSLOduration=37.630032285 podStartE2EDuration="37.630032285s" podCreationTimestamp="2025-09-10 23:49:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-10 23:50:18.623931557 +0000 UTC m=+42.622532576" watchObservedRunningTime="2025-09-10 23:50:18.630032285 +0000 UTC m=+42.628633304" Sep 10 23:50:18.674703 kubelet[3434]: I0910 23:50:18.674380 3434 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-jmdhh" podStartSLOduration=37.674326122 podStartE2EDuration="37.674326122s" podCreationTimestamp="2025-09-10 23:49:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-10 23:50:18.669233106 +0000 UTC m=+42.667834149" watchObservedRunningTime="2025-09-10 23:50:18.674326122 +0000 UTC m=+42.672927141" Sep 10 23:50:18.869281 sshd[4742]: Connection closed by 139.178.68.195 port 33878 Sep 10 23:50:18.870308 sshd-session[4740]: pam_unix(sshd:session): session closed for user core Sep 10 23:50:18.878217 systemd[1]: sshd@9-172.31.30.159:22-139.178.68.195:33878.service: Deactivated successfully. Sep 10 23:50:18.882561 systemd[1]: session-10.scope: Deactivated successfully. Sep 10 23:50:18.886837 systemd-logind[1863]: Session 10 logged out. Waiting for processes to exit. Sep 10 23:50:18.889975 systemd-logind[1863]: Removed session 10. Sep 10 23:50:23.911373 systemd[1]: Started sshd@10-172.31.30.159:22-139.178.68.195:46056.service - OpenSSH per-connection server daemon (139.178.68.195:46056). Sep 10 23:50:24.119145 sshd[4763]: Accepted publickey for core from 139.178.68.195 port 46056 ssh2: RSA SHA256:ja8Z659dnX0Tz1pZfaOwRz2q/KALpEA2JWSy/+nC98s Sep 10 23:50:24.121641 sshd-session[4763]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 23:50:24.130932 systemd-logind[1863]: New session 11 of user core. Sep 10 23:50:24.139996 systemd[1]: Started session-11.scope - Session 11 of User core. Sep 10 23:50:24.379263 sshd[4765]: Connection closed by 139.178.68.195 port 46056 Sep 10 23:50:24.380137 sshd-session[4763]: pam_unix(sshd:session): session closed for user core Sep 10 23:50:24.387260 systemd[1]: sshd@10-172.31.30.159:22-139.178.68.195:46056.service: Deactivated successfully. Sep 10 23:50:24.392726 systemd[1]: session-11.scope: Deactivated successfully. Sep 10 23:50:24.395985 systemd-logind[1863]: Session 11 logged out. Waiting for processes to exit. Sep 10 23:50:24.400056 systemd-logind[1863]: Removed session 11. Sep 10 23:50:29.426407 systemd[1]: Started sshd@11-172.31.30.159:22-139.178.68.195:46064.service - OpenSSH per-connection server daemon (139.178.68.195:46064). Sep 10 23:50:29.631160 sshd[4780]: Accepted publickey for core from 139.178.68.195 port 46064 ssh2: RSA SHA256:ja8Z659dnX0Tz1pZfaOwRz2q/KALpEA2JWSy/+nC98s Sep 10 23:50:29.636216 sshd-session[4780]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 23:50:29.646946 systemd-logind[1863]: New session 12 of user core. Sep 10 23:50:29.654957 systemd[1]: Started session-12.scope - Session 12 of User core. 
Sep 10 23:50:29.899712 sshd[4782]: Connection closed by 139.178.68.195 port 46064 Sep 10 23:50:29.898642 sshd-session[4780]: pam_unix(sshd:session): session closed for user core Sep 10 23:50:29.905143 systemd[1]: sshd@11-172.31.30.159:22-139.178.68.195:46064.service: Deactivated successfully. Sep 10 23:50:29.911025 systemd[1]: session-12.scope: Deactivated successfully. Sep 10 23:50:29.912716 systemd-logind[1863]: Session 12 logged out. Waiting for processes to exit. Sep 10 23:50:29.916605 systemd-logind[1863]: Removed session 12. Sep 10 23:50:34.937274 systemd[1]: Started sshd@12-172.31.30.159:22-139.178.68.195:53996.service - OpenSSH per-connection server daemon (139.178.68.195:53996). Sep 10 23:50:35.146397 sshd[4794]: Accepted publickey for core from 139.178.68.195 port 53996 ssh2: RSA SHA256:ja8Z659dnX0Tz1pZfaOwRz2q/KALpEA2JWSy/+nC98s Sep 10 23:50:35.148966 sshd-session[4794]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 23:50:35.157258 systemd-logind[1863]: New session 13 of user core. Sep 10 23:50:35.166973 systemd[1]: Started session-13.scope - Session 13 of User core. Sep 10 23:50:35.418734 sshd[4796]: Connection closed by 139.178.68.195 port 53996 Sep 10 23:50:35.419358 sshd-session[4794]: pam_unix(sshd:session): session closed for user core Sep 10 23:50:35.428231 systemd[1]: sshd@12-172.31.30.159:22-139.178.68.195:53996.service: Deactivated successfully. Sep 10 23:50:35.434800 systemd[1]: session-13.scope: Deactivated successfully. Sep 10 23:50:35.437306 systemd-logind[1863]: Session 13 logged out. Waiting for processes to exit. Sep 10 23:50:35.442062 systemd-logind[1863]: Removed session 13. Sep 10 23:50:40.467392 systemd[1]: Started sshd@13-172.31.30.159:22-139.178.68.195:48110.service - OpenSSH per-connection server daemon (139.178.68.195:48110). Sep 10 23:50:40.665012 sshd[4811]: Accepted publickey for core from 139.178.68.195 port 48110 ssh2: RSA SHA256:ja8Z659dnX0Tz1pZfaOwRz2q/KALpEA2JWSy/+nC98s Sep 10 23:50:40.668060 sshd-session[4811]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 23:50:40.677678 systemd-logind[1863]: New session 14 of user core. Sep 10 23:50:40.685983 systemd[1]: Started session-14.scope - Session 14 of User core. Sep 10 23:50:40.935383 sshd[4813]: Connection closed by 139.178.68.195 port 48110 Sep 10 23:50:40.935260 sshd-session[4811]: pam_unix(sshd:session): session closed for user core Sep 10 23:50:40.942367 systemd-logind[1863]: Session 14 logged out. Waiting for processes to exit. Sep 10 23:50:40.943547 systemd[1]: sshd@13-172.31.30.159:22-139.178.68.195:48110.service: Deactivated successfully. Sep 10 23:50:40.948920 systemd[1]: session-14.scope: Deactivated successfully. Sep 10 23:50:40.952620 systemd-logind[1863]: Removed session 14. Sep 10 23:50:40.971588 systemd[1]: Started sshd@14-172.31.30.159:22-139.178.68.195:48112.service - OpenSSH per-connection server daemon (139.178.68.195:48112). Sep 10 23:50:41.177863 sshd[4826]: Accepted publickey for core from 139.178.68.195 port 48112 ssh2: RSA SHA256:ja8Z659dnX0Tz1pZfaOwRz2q/KALpEA2JWSy/+nC98s Sep 10 23:50:41.180412 sshd-session[4826]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 23:50:41.188797 systemd-logind[1863]: New session 15 of user core. Sep 10 23:50:41.197962 systemd[1]: Started session-15.scope - Session 15 of User core. 
Sep 10 23:50:41.539802 sshd[4828]: Connection closed by 139.178.68.195 port 48112 Sep 10 23:50:41.538326 sshd-session[4826]: pam_unix(sshd:session): session closed for user core Sep 10 23:50:41.548339 systemd[1]: sshd@14-172.31.30.159:22-139.178.68.195:48112.service: Deactivated successfully. Sep 10 23:50:41.548656 systemd-logind[1863]: Session 15 logged out. Waiting for processes to exit. Sep 10 23:50:41.557258 systemd[1]: session-15.scope: Deactivated successfully. Sep 10 23:50:41.589678 systemd-logind[1863]: Removed session 15. Sep 10 23:50:41.597769 systemd[1]: Started sshd@15-172.31.30.159:22-139.178.68.195:48118.service - OpenSSH per-connection server daemon (139.178.68.195:48118). Sep 10 23:50:41.821752 sshd[4838]: Accepted publickey for core from 139.178.68.195 port 48118 ssh2: RSA SHA256:ja8Z659dnX0Tz1pZfaOwRz2q/KALpEA2JWSy/+nC98s Sep 10 23:50:41.823923 sshd-session[4838]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 23:50:41.832227 systemd-logind[1863]: New session 16 of user core. Sep 10 23:50:41.841002 systemd[1]: Started session-16.scope - Session 16 of User core. Sep 10 23:50:42.089232 sshd[4840]: Connection closed by 139.178.68.195 port 48118 Sep 10 23:50:42.090034 sshd-session[4838]: pam_unix(sshd:session): session closed for user core Sep 10 23:50:42.098256 systemd[1]: sshd@15-172.31.30.159:22-139.178.68.195:48118.service: Deactivated successfully. Sep 10 23:50:42.104137 systemd[1]: session-16.scope: Deactivated successfully. Sep 10 23:50:42.106421 systemd-logind[1863]: Session 16 logged out. Waiting for processes to exit. Sep 10 23:50:42.110179 systemd-logind[1863]: Removed session 16. Sep 10 23:50:47.137276 systemd[1]: Started sshd@16-172.31.30.159:22-139.178.68.195:48122.service - OpenSSH per-connection server daemon (139.178.68.195:48122). Sep 10 23:50:47.347895 sshd[4856]: Accepted publickey for core from 139.178.68.195 port 48122 ssh2: RSA SHA256:ja8Z659dnX0Tz1pZfaOwRz2q/KALpEA2JWSy/+nC98s Sep 10 23:50:47.350574 sshd-session[4856]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 23:50:47.358832 systemd-logind[1863]: New session 17 of user core. Sep 10 23:50:47.365948 systemd[1]: Started session-17.scope - Session 17 of User core. Sep 10 23:50:47.613562 sshd[4858]: Connection closed by 139.178.68.195 port 48122 Sep 10 23:50:47.614409 sshd-session[4856]: pam_unix(sshd:session): session closed for user core Sep 10 23:50:47.621798 systemd-logind[1863]: Session 17 logged out. Waiting for processes to exit. Sep 10 23:50:47.622569 systemd[1]: sshd@16-172.31.30.159:22-139.178.68.195:48122.service: Deactivated successfully. Sep 10 23:50:47.626320 systemd[1]: session-17.scope: Deactivated successfully. Sep 10 23:50:47.630971 systemd-logind[1863]: Removed session 17. Sep 10 23:50:52.655150 systemd[1]: Started sshd@17-172.31.30.159:22-139.178.68.195:54066.service - OpenSSH per-connection server daemon (139.178.68.195:54066). Sep 10 23:50:52.854610 sshd[4870]: Accepted publickey for core from 139.178.68.195 port 54066 ssh2: RSA SHA256:ja8Z659dnX0Tz1pZfaOwRz2q/KALpEA2JWSy/+nC98s Sep 10 23:50:52.857117 sshd-session[4870]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 23:50:52.865570 systemd-logind[1863]: New session 18 of user core. Sep 10 23:50:52.878975 systemd[1]: Started session-18.scope - Session 18 of User core. 
Sep 10 23:50:53.126051 sshd[4872]: Connection closed by 139.178.68.195 port 54066 Sep 10 23:50:53.126520 sshd-session[4870]: pam_unix(sshd:session): session closed for user core Sep 10 23:50:53.136238 systemd[1]: sshd@17-172.31.30.159:22-139.178.68.195:54066.service: Deactivated successfully. Sep 10 23:50:53.140996 systemd[1]: session-18.scope: Deactivated successfully. Sep 10 23:50:53.143255 systemd-logind[1863]: Session 18 logged out. Waiting for processes to exit. Sep 10 23:50:53.147016 systemd-logind[1863]: Removed session 18. Sep 10 23:50:58.163462 systemd[1]: Started sshd@18-172.31.30.159:22-139.178.68.195:54080.service - OpenSSH per-connection server daemon (139.178.68.195:54080). Sep 10 23:50:58.360297 sshd[4884]: Accepted publickey for core from 139.178.68.195 port 54080 ssh2: RSA SHA256:ja8Z659dnX0Tz1pZfaOwRz2q/KALpEA2JWSy/+nC98s Sep 10 23:50:58.361899 sshd-session[4884]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 23:50:58.370102 systemd-logind[1863]: New session 19 of user core. Sep 10 23:50:58.377942 systemd[1]: Started session-19.scope - Session 19 of User core. Sep 10 23:50:58.617241 sshd[4886]: Connection closed by 139.178.68.195 port 54080 Sep 10 23:50:58.618118 sshd-session[4884]: pam_unix(sshd:session): session closed for user core Sep 10 23:50:58.625125 systemd[1]: sshd@18-172.31.30.159:22-139.178.68.195:54080.service: Deactivated successfully. Sep 10 23:50:58.630588 systemd[1]: session-19.scope: Deactivated successfully. Sep 10 23:50:58.632780 systemd-logind[1863]: Session 19 logged out. Waiting for processes to exit. Sep 10 23:50:58.635714 systemd-logind[1863]: Removed session 19. Sep 10 23:50:58.652674 systemd[1]: Started sshd@19-172.31.30.159:22-139.178.68.195:54086.service - OpenSSH per-connection server daemon (139.178.68.195:54086). Sep 10 23:50:58.858860 sshd[4898]: Accepted publickey for core from 139.178.68.195 port 54086 ssh2: RSA SHA256:ja8Z659dnX0Tz1pZfaOwRz2q/KALpEA2JWSy/+nC98s Sep 10 23:50:58.861915 sshd-session[4898]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 23:50:58.869674 systemd-logind[1863]: New session 20 of user core. Sep 10 23:50:58.878991 systemd[1]: Started session-20.scope - Session 20 of User core. Sep 10 23:50:59.205128 sshd[4900]: Connection closed by 139.178.68.195 port 54086 Sep 10 23:50:59.205812 sshd-session[4898]: pam_unix(sshd:session): session closed for user core Sep 10 23:50:59.211371 systemd[1]: sshd@19-172.31.30.159:22-139.178.68.195:54086.service: Deactivated successfully. Sep 10 23:50:59.215806 systemd[1]: session-20.scope: Deactivated successfully. Sep 10 23:50:59.221480 systemd-logind[1863]: Session 20 logged out. Waiting for processes to exit. Sep 10 23:50:59.223647 systemd-logind[1863]: Removed session 20. Sep 10 23:50:59.245794 systemd[1]: Started sshd@20-172.31.30.159:22-139.178.68.195:54102.service - OpenSSH per-connection server daemon (139.178.68.195:54102). Sep 10 23:50:59.455027 sshd[4910]: Accepted publickey for core from 139.178.68.195 port 54102 ssh2: RSA SHA256:ja8Z659dnX0Tz1pZfaOwRz2q/KALpEA2JWSy/+nC98s Sep 10 23:50:59.457527 sshd-session[4910]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 23:50:59.466156 systemd-logind[1863]: New session 21 of user core. Sep 10 23:50:59.487000 systemd[1]: Started session-21.scope - Session 21 of User core. 
Sep 10 23:51:00.415086 sshd[4912]: Connection closed by 139.178.68.195 port 54102 Sep 10 23:51:00.417973 sshd-session[4910]: pam_unix(sshd:session): session closed for user core Sep 10 23:51:00.429551 systemd-logind[1863]: Session 21 logged out. Waiting for processes to exit. Sep 10 23:51:00.430883 systemd[1]: sshd@20-172.31.30.159:22-139.178.68.195:54102.service: Deactivated successfully. Sep 10 23:51:00.440299 systemd[1]: session-21.scope: Deactivated successfully. Sep 10 23:51:00.465961 systemd[1]: Started sshd@21-172.31.30.159:22-139.178.68.195:59550.service - OpenSSH per-connection server daemon (139.178.68.195:59550). Sep 10 23:51:00.466008 systemd-logind[1863]: Removed session 21. Sep 10 23:51:00.667491 sshd[4929]: Accepted publickey for core from 139.178.68.195 port 59550 ssh2: RSA SHA256:ja8Z659dnX0Tz1pZfaOwRz2q/KALpEA2JWSy/+nC98s Sep 10 23:51:00.670467 sshd-session[4929]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 23:51:00.679841 systemd-logind[1863]: New session 22 of user core. Sep 10 23:51:00.684932 systemd[1]: Started session-22.scope - Session 22 of User core. Sep 10 23:51:01.181515 sshd[4933]: Connection closed by 139.178.68.195 port 59550 Sep 10 23:51:01.182312 sshd-session[4929]: pam_unix(sshd:session): session closed for user core Sep 10 23:51:01.191520 systemd-logind[1863]: Session 22 logged out. Waiting for processes to exit. Sep 10 23:51:01.192957 systemd[1]: sshd@21-172.31.30.159:22-139.178.68.195:59550.service: Deactivated successfully. Sep 10 23:51:01.197327 systemd[1]: session-22.scope: Deactivated successfully. Sep 10 23:51:01.202048 systemd-logind[1863]: Removed session 22. Sep 10 23:51:01.223107 systemd[1]: Started sshd@22-172.31.30.159:22-139.178.68.195:59560.service - OpenSSH per-connection server daemon (139.178.68.195:59560). Sep 10 23:51:01.429049 sshd[4943]: Accepted publickey for core from 139.178.68.195 port 59560 ssh2: RSA SHA256:ja8Z659dnX0Tz1pZfaOwRz2q/KALpEA2JWSy/+nC98s Sep 10 23:51:01.431846 sshd-session[4943]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 23:51:01.440088 systemd-logind[1863]: New session 23 of user core. Sep 10 23:51:01.461956 systemd[1]: Started session-23.scope - Session 23 of User core. Sep 10 23:51:01.698910 sshd[4945]: Connection closed by 139.178.68.195 port 59560 Sep 10 23:51:01.699834 sshd-session[4943]: pam_unix(sshd:session): session closed for user core Sep 10 23:51:01.706187 systemd-logind[1863]: Session 23 logged out. Waiting for processes to exit. Sep 10 23:51:01.706368 systemd[1]: sshd@22-172.31.30.159:22-139.178.68.195:59560.service: Deactivated successfully. Sep 10 23:51:01.711172 systemd[1]: session-23.scope: Deactivated successfully. Sep 10 23:51:01.716994 systemd-logind[1863]: Removed session 23. Sep 10 23:51:06.742030 systemd[1]: Started sshd@23-172.31.30.159:22-139.178.68.195:59570.service - OpenSSH per-connection server daemon (139.178.68.195:59570). Sep 10 23:51:06.937081 sshd[4957]: Accepted publickey for core from 139.178.68.195 port 59570 ssh2: RSA SHA256:ja8Z659dnX0Tz1pZfaOwRz2q/KALpEA2JWSy/+nC98s Sep 10 23:51:06.939596 sshd-session[4957]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 23:51:06.949155 systemd-logind[1863]: New session 24 of user core. Sep 10 23:51:06.955997 systemd[1]: Started session-24.scope - Session 24 of User core. 
Sep 10 23:51:07.192903 sshd[4959]: Connection closed by 139.178.68.195 port 59570 Sep 10 23:51:07.193783 sshd-session[4957]: pam_unix(sshd:session): session closed for user core Sep 10 23:51:07.201202 systemd-logind[1863]: Session 24 logged out. Waiting for processes to exit. Sep 10 23:51:07.201936 systemd[1]: sshd@23-172.31.30.159:22-139.178.68.195:59570.service: Deactivated successfully. Sep 10 23:51:07.206623 systemd[1]: session-24.scope: Deactivated successfully. Sep 10 23:51:07.210201 systemd-logind[1863]: Removed session 24. Sep 10 23:51:12.241875 systemd[1]: Started sshd@24-172.31.30.159:22-139.178.68.195:44670.service - OpenSSH per-connection server daemon (139.178.68.195:44670). Sep 10 23:51:12.444997 sshd[4974]: Accepted publickey for core from 139.178.68.195 port 44670 ssh2: RSA SHA256:ja8Z659dnX0Tz1pZfaOwRz2q/KALpEA2JWSy/+nC98s Sep 10 23:51:12.447570 sshd-session[4974]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 23:51:12.456798 systemd-logind[1863]: New session 25 of user core. Sep 10 23:51:12.459977 systemd[1]: Started session-25.scope - Session 25 of User core. Sep 10 23:51:12.699091 sshd[4978]: Connection closed by 139.178.68.195 port 44670 Sep 10 23:51:12.699540 sshd-session[4974]: pam_unix(sshd:session): session closed for user core Sep 10 23:51:12.705886 systemd-logind[1863]: Session 25 logged out. Waiting for processes to exit. Sep 10 23:51:12.706047 systemd[1]: sshd@24-172.31.30.159:22-139.178.68.195:44670.service: Deactivated successfully. Sep 10 23:51:12.710927 systemd[1]: session-25.scope: Deactivated successfully. Sep 10 23:51:12.715944 systemd-logind[1863]: Removed session 25. Sep 10 23:51:17.741117 systemd[1]: Started sshd@25-172.31.30.159:22-139.178.68.195:44678.service - OpenSSH per-connection server daemon (139.178.68.195:44678). Sep 10 23:51:17.938074 sshd[4990]: Accepted publickey for core from 139.178.68.195 port 44678 ssh2: RSA SHA256:ja8Z659dnX0Tz1pZfaOwRz2q/KALpEA2JWSy/+nC98s Sep 10 23:51:17.940595 sshd-session[4990]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 23:51:17.949826 systemd-logind[1863]: New session 26 of user core. Sep 10 23:51:17.960194 systemd[1]: Started session-26.scope - Session 26 of User core. Sep 10 23:51:18.203255 sshd[4992]: Connection closed by 139.178.68.195 port 44678 Sep 10 23:51:18.202934 sshd-session[4990]: pam_unix(sshd:session): session closed for user core Sep 10 23:51:18.209471 systemd[1]: sshd@25-172.31.30.159:22-139.178.68.195:44678.service: Deactivated successfully. Sep 10 23:51:18.213059 systemd[1]: session-26.scope: Deactivated successfully. Sep 10 23:51:18.216622 systemd-logind[1863]: Session 26 logged out. Waiting for processes to exit. Sep 10 23:51:18.219277 systemd-logind[1863]: Removed session 26. Sep 10 23:51:18.240968 systemd[1]: Started sshd@26-172.31.30.159:22-139.178.68.195:44692.service - OpenSSH per-connection server daemon (139.178.68.195:44692). Sep 10 23:51:18.445850 sshd[5004]: Accepted publickey for core from 139.178.68.195 port 44692 ssh2: RSA SHA256:ja8Z659dnX0Tz1pZfaOwRz2q/KALpEA2JWSy/+nC98s Sep 10 23:51:18.447482 sshd-session[5004]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 23:51:18.455785 systemd-logind[1863]: New session 27 of user core. Sep 10 23:51:18.466948 systemd[1]: Started session-27.scope - Session 27 of User core. 
Sep 10 23:51:20.471146 containerd[1899]: time="2025-09-10T23:51:20.470973544Z" level=info msg="StopContainer for \"ab491065e7c5ed9e3b4c6732045edbdbd3d26bbec6eca1830b5c161b648437c2\" with timeout 30 (s)" Sep 10 23:51:20.472428 containerd[1899]: time="2025-09-10T23:51:20.472386569Z" level=info msg="Stop container \"ab491065e7c5ed9e3b4c6732045edbdbd3d26bbec6eca1830b5c161b648437c2\" with signal terminated" Sep 10 23:51:20.503377 systemd[1]: cri-containerd-ab491065e7c5ed9e3b4c6732045edbdbd3d26bbec6eca1830b5c161b648437c2.scope: Deactivated successfully. Sep 10 23:51:20.506632 containerd[1899]: time="2025-09-10T23:51:20.506525501Z" level=info msg="received exit event container_id:\"ab491065e7c5ed9e3b4c6732045edbdbd3d26bbec6eca1830b5c161b648437c2\" id:\"ab491065e7c5ed9e3b4c6732045edbdbd3d26bbec6eca1830b5c161b648437c2\" pid:4153 exited_at:{seconds:1757548280 nanos:506091341}" Sep 10 23:51:20.507954 containerd[1899]: time="2025-09-10T23:51:20.507196313Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ab491065e7c5ed9e3b4c6732045edbdbd3d26bbec6eca1830b5c161b648437c2\" id:\"ab491065e7c5ed9e3b4c6732045edbdbd3d26bbec6eca1830b5c161b648437c2\" pid:4153 exited_at:{seconds:1757548280 nanos:506091341}" Sep 10 23:51:20.507954 containerd[1899]: time="2025-09-10T23:51:20.507824861Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 10 23:51:20.522803 containerd[1899]: time="2025-09-10T23:51:20.522671273Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2d72281b6321114d9a2af10a952a12b5c2e3aa4fa0c9c64e83fda2d078e2efff\" id:\"13aa2b85ca4957f49a328fe2aab65731fdc00aca2e536baa0b123b55239480ff\" pid:5028 exited_at:{seconds:1757548280 nanos:522012125}" Sep 10 23:51:20.528784 containerd[1899]: time="2025-09-10T23:51:20.528706529Z" level=info msg="StopContainer for \"2d72281b6321114d9a2af10a952a12b5c2e3aa4fa0c9c64e83fda2d078e2efff\" with timeout 2 (s)" Sep 10 23:51:20.530116 containerd[1899]: time="2025-09-10T23:51:20.530066813Z" level=info msg="Stop container \"2d72281b6321114d9a2af10a952a12b5c2e3aa4fa0c9c64e83fda2d078e2efff\" with signal terminated" Sep 10 23:51:20.552841 systemd-networkd[1819]: lxc_health: Link DOWN Sep 10 23:51:20.554015 systemd-networkd[1819]: lxc_health: Lost carrier Sep 10 23:51:20.586195 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ab491065e7c5ed9e3b4c6732045edbdbd3d26bbec6eca1830b5c161b648437c2-rootfs.mount: Deactivated successfully. Sep 10 23:51:20.598310 systemd[1]: cri-containerd-2d72281b6321114d9a2af10a952a12b5c2e3aa4fa0c9c64e83fda2d078e2efff.scope: Deactivated successfully. Sep 10 23:51:20.598920 systemd[1]: cri-containerd-2d72281b6321114d9a2af10a952a12b5c2e3aa4fa0c9c64e83fda2d078e2efff.scope: Consumed 14.181s CPU time, 125.5M memory peak, 120K read from disk, 12.9M written to disk. 
Sep 10 23:51:20.602259 containerd[1899]: time="2025-09-10T23:51:20.602185637Z" level=info msg="received exit event container_id:\"2d72281b6321114d9a2af10a952a12b5c2e3aa4fa0c9c64e83fda2d078e2efff\" id:\"2d72281b6321114d9a2af10a952a12b5c2e3aa4fa0c9c64e83fda2d078e2efff\" pid:4046 exited_at:{seconds:1757548280 nanos:601437293}" Sep 10 23:51:20.603739 containerd[1899]: time="2025-09-10T23:51:20.603563285Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2d72281b6321114d9a2af10a952a12b5c2e3aa4fa0c9c64e83fda2d078e2efff\" id:\"2d72281b6321114d9a2af10a952a12b5c2e3aa4fa0c9c64e83fda2d078e2efff\" pid:4046 exited_at:{seconds:1757548280 nanos:601437293}" Sep 10 23:51:20.625617 containerd[1899]: time="2025-09-10T23:51:20.625549685Z" level=info msg="StopContainer for \"ab491065e7c5ed9e3b4c6732045edbdbd3d26bbec6eca1830b5c161b648437c2\" returns successfully" Sep 10 23:51:20.627882 containerd[1899]: time="2025-09-10T23:51:20.627789065Z" level=info msg="StopPodSandbox for \"46beed377addb1dee1712bba9763950142e72951eee1204a68240a308ae77ea8\"" Sep 10 23:51:20.628025 containerd[1899]: time="2025-09-10T23:51:20.627963905Z" level=info msg="Container to stop \"ab491065e7c5ed9e3b4c6732045edbdbd3d26bbec6eca1830b5c161b648437c2\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 10 23:51:20.650110 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2d72281b6321114d9a2af10a952a12b5c2e3aa4fa0c9c64e83fda2d078e2efff-rootfs.mount: Deactivated successfully. Sep 10 23:51:20.654440 systemd[1]: cri-containerd-46beed377addb1dee1712bba9763950142e72951eee1204a68240a308ae77ea8.scope: Deactivated successfully. Sep 10 23:51:20.659656 containerd[1899]: time="2025-09-10T23:51:20.659533169Z" level=info msg="TaskExit event in podsandbox handler container_id:\"46beed377addb1dee1712bba9763950142e72951eee1204a68240a308ae77ea8\" id:\"46beed377addb1dee1712bba9763950142e72951eee1204a68240a308ae77ea8\" pid:3643 exit_status:137 exited_at:{seconds:1757548280 nanos:657965333}" Sep 10 23:51:20.671417 containerd[1899]: time="2025-09-10T23:51:20.671280521Z" level=info msg="StopContainer for \"2d72281b6321114d9a2af10a952a12b5c2e3aa4fa0c9c64e83fda2d078e2efff\" returns successfully" Sep 10 23:51:20.673468 containerd[1899]: time="2025-09-10T23:51:20.672947249Z" level=info msg="StopPodSandbox for \"e573c23623bbcc1d9b5b4bfbb38e33ec47e4191c8cbdc3749c91390df3c47dd8\"" Sep 10 23:51:20.673468 containerd[1899]: time="2025-09-10T23:51:20.673043609Z" level=info msg="Container to stop \"97ffc4a81cab2055b9f4d70947ef5a02bfa5eec8a7a8a230cc2e3cbb4d5c069e\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 10 23:51:20.673468 containerd[1899]: time="2025-09-10T23:51:20.673069289Z" level=info msg="Container to stop \"b1a05c09d42ef63eea3c347799c59806363729446862a130e4dbff3468007911\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 10 23:51:20.673468 containerd[1899]: time="2025-09-10T23:51:20.673090265Z" level=info msg="Container to stop \"c0c32e0a83009ff9ccbe5fd4c0a454dabb3e06f4232ba0f6dfb77121e4b7e2d4\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 10 23:51:20.673468 containerd[1899]: time="2025-09-10T23:51:20.673112958Z" level=info msg="Container to stop \"164f0521e7a0edb7ed917ef58c26e5ae3f74e678ee950e2841aa3252050ae941\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 10 23:51:20.673468 containerd[1899]: time="2025-09-10T23:51:20.673133490Z" level=info msg="Container to stop 
\"2d72281b6321114d9a2af10a952a12b5c2e3aa4fa0c9c64e83fda2d078e2efff\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 10 23:51:20.692284 systemd[1]: cri-containerd-e573c23623bbcc1d9b5b4bfbb38e33ec47e4191c8cbdc3749c91390df3c47dd8.scope: Deactivated successfully. Sep 10 23:51:20.732914 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-46beed377addb1dee1712bba9763950142e72951eee1204a68240a308ae77ea8-rootfs.mount: Deactivated successfully. Sep 10 23:51:20.743289 containerd[1899]: time="2025-09-10T23:51:20.743056734Z" level=info msg="shim disconnected" id=46beed377addb1dee1712bba9763950142e72951eee1204a68240a308ae77ea8 namespace=k8s.io Sep 10 23:51:20.745797 containerd[1899]: time="2025-09-10T23:51:20.744938214Z" level=warning msg="cleaning up after shim disconnected" id=46beed377addb1dee1712bba9763950142e72951eee1204a68240a308ae77ea8 namespace=k8s.io Sep 10 23:51:20.746322 containerd[1899]: time="2025-09-10T23:51:20.746280090Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 10 23:51:20.746954 containerd[1899]: time="2025-09-10T23:51:20.746908062Z" level=info msg="received exit event sandbox_id:\"46beed377addb1dee1712bba9763950142e72951eee1204a68240a308ae77ea8\" exit_status:137 exited_at:{seconds:1757548280 nanos:657965333}" Sep 10 23:51:20.748474 containerd[1899]: time="2025-09-10T23:51:20.745940274Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e573c23623bbcc1d9b5b4bfbb38e33ec47e4191c8cbdc3749c91390df3c47dd8\" id:\"e573c23623bbcc1d9b5b4bfbb38e33ec47e4191c8cbdc3749c91390df3c47dd8\" pid:3592 exit_status:137 exited_at:{seconds:1757548280 nanos:695423274}" Sep 10 23:51:20.753991 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-46beed377addb1dee1712bba9763950142e72951eee1204a68240a308ae77ea8-shm.mount: Deactivated successfully. Sep 10 23:51:20.759330 containerd[1899]: time="2025-09-10T23:51:20.750037206Z" level=info msg="TearDown network for sandbox \"46beed377addb1dee1712bba9763950142e72951eee1204a68240a308ae77ea8\" successfully" Sep 10 23:51:20.759876 containerd[1899]: time="2025-09-10T23:51:20.759833514Z" level=info msg="StopPodSandbox for \"46beed377addb1dee1712bba9763950142e72951eee1204a68240a308ae77ea8\" returns successfully" Sep 10 23:51:20.779837 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e573c23623bbcc1d9b5b4bfbb38e33ec47e4191c8cbdc3749c91390df3c47dd8-rootfs.mount: Deactivated successfully. 
Sep 10 23:51:20.791075 containerd[1899]: time="2025-09-10T23:51:20.789066318Z" level=error msg="Failed to handle event container_id:\"e573c23623bbcc1d9b5b4bfbb38e33ec47e4191c8cbdc3749c91390df3c47dd8\" id:\"e573c23623bbcc1d9b5b4bfbb38e33ec47e4191c8cbdc3749c91390df3c47dd8\" pid:3592 exit_status:137 exited_at:{seconds:1757548280 nanos:695423274} for e573c23623bbcc1d9b5b4bfbb38e33ec47e4191c8cbdc3749c91390df3c47dd8" error="failed to handle container TaskExit event: failed to stop sandbox: ttrpc: closed" Sep 10 23:51:20.791075 containerd[1899]: time="2025-09-10T23:51:20.789295722Z" level=info msg="shim disconnected" id=e573c23623bbcc1d9b5b4bfbb38e33ec47e4191c8cbdc3749c91390df3c47dd8 namespace=k8s.io Sep 10 23:51:20.791075 containerd[1899]: time="2025-09-10T23:51:20.790519422Z" level=warning msg="cleaning up after shim disconnected" id=e573c23623bbcc1d9b5b4bfbb38e33ec47e4191c8cbdc3749c91390df3c47dd8 namespace=k8s.io Sep 10 23:51:20.791075 containerd[1899]: time="2025-09-10T23:51:20.790571742Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 10 23:51:20.793849 kubelet[3434]: I0910 23:51:20.793808 3434 scope.go:117] "RemoveContainer" containerID="ab491065e7c5ed9e3b4c6732045edbdbd3d26bbec6eca1830b5c161b648437c2" Sep 10 23:51:20.804399 containerd[1899]: time="2025-09-10T23:51:20.804330858Z" level=info msg="RemoveContainer for \"ab491065e7c5ed9e3b4c6732045edbdbd3d26bbec6eca1830b5c161b648437c2\"" Sep 10 23:51:20.832707 containerd[1899]: time="2025-09-10T23:51:20.832473834Z" level=info msg="RemoveContainer for \"ab491065e7c5ed9e3b4c6732045edbdbd3d26bbec6eca1830b5c161b648437c2\" returns successfully" Sep 10 23:51:20.836878 kubelet[3434]: I0910 23:51:20.836680 3434 scope.go:117] "RemoveContainer" containerID="ab491065e7c5ed9e3b4c6732045edbdbd3d26bbec6eca1830b5c161b648437c2" Sep 10 23:51:20.837741 containerd[1899]: time="2025-09-10T23:51:20.837547278Z" level=error msg="ContainerStatus for \"ab491065e7c5ed9e3b4c6732045edbdbd3d26bbec6eca1830b5c161b648437c2\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ab491065e7c5ed9e3b4c6732045edbdbd3d26bbec6eca1830b5c161b648437c2\": not found" Sep 10 23:51:20.838512 kubelet[3434]: E0910 23:51:20.838434 3434 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ab491065e7c5ed9e3b4c6732045edbdbd3d26bbec6eca1830b5c161b648437c2\": not found" containerID="ab491065e7c5ed9e3b4c6732045edbdbd3d26bbec6eca1830b5c161b648437c2" Sep 10 23:51:20.839077 kubelet[3434]: I0910 23:51:20.838500 3434 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ab491065e7c5ed9e3b4c6732045edbdbd3d26bbec6eca1830b5c161b648437c2"} err="failed to get container status \"ab491065e7c5ed9e3b4c6732045edbdbd3d26bbec6eca1830b5c161b648437c2\": rpc error: code = NotFound desc = an error occurred when try to find container \"ab491065e7c5ed9e3b4c6732045edbdbd3d26bbec6eca1830b5c161b648437c2\": not found" Sep 10 23:51:20.848833 containerd[1899]: time="2025-09-10T23:51:20.848672310Z" level=info msg="TearDown network for sandbox \"e573c23623bbcc1d9b5b4bfbb38e33ec47e4191c8cbdc3749c91390df3c47dd8\" successfully" Sep 10 23:51:20.848833 containerd[1899]: time="2025-09-10T23:51:20.848816118Z" level=info msg="StopPodSandbox for \"e573c23623bbcc1d9b5b4bfbb38e33ec47e4191c8cbdc3749c91390df3c47dd8\" returns successfully" Sep 10 23:51:20.849350 containerd[1899]: time="2025-09-10T23:51:20.849304554Z" level=info msg="received exit event 
sandbox_id:\"e573c23623bbcc1d9b5b4bfbb38e33ec47e4191c8cbdc3749c91390df3c47dd8\" exit_status:137 exited_at:{seconds:1757548280 nanos:695423274}" Sep 10 23:51:20.899463 kubelet[3434]: I0910 23:51:20.899340 3434 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4j2w7\" (UniqueName: \"kubernetes.io/projected/e8084be0-04a9-4c71-a470-d071b9464654-kube-api-access-4j2w7\") pod \"e8084be0-04a9-4c71-a470-d071b9464654\" (UID: \"e8084be0-04a9-4c71-a470-d071b9464654\") " Sep 10 23:51:20.899850 kubelet[3434]: I0910 23:51:20.899719 3434 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e8084be0-04a9-4c71-a470-d071b9464654-cilium-config-path\") pod \"e8084be0-04a9-4c71-a470-d071b9464654\" (UID: \"e8084be0-04a9-4c71-a470-d071b9464654\") " Sep 10 23:51:20.905027 kubelet[3434]: I0910 23:51:20.904942 3434 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e8084be0-04a9-4c71-a470-d071b9464654-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "e8084be0-04a9-4c71-a470-d071b9464654" (UID: "e8084be0-04a9-4c71-a470-d071b9464654"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 10 23:51:20.906706 kubelet[3434]: I0910 23:51:20.906627 3434 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e8084be0-04a9-4c71-a470-d071b9464654-kube-api-access-4j2w7" (OuterVolumeSpecName: "kube-api-access-4j2w7") pod "e8084be0-04a9-4c71-a470-d071b9464654" (UID: "e8084be0-04a9-4c71-a470-d071b9464654"). InnerVolumeSpecName "kube-api-access-4j2w7". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 10 23:51:21.001850 kubelet[3434]: I0910 23:51:21.000973 3434 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4ceb51d9-ff1e-4c52-ae2e-5bad8350c826-lib-modules\") pod \"4ceb51d9-ff1e-4c52-ae2e-5bad8350c826\" (UID: \"4ceb51d9-ff1e-4c52-ae2e-5bad8350c826\") " Sep 10 23:51:21.001850 kubelet[3434]: I0910 23:51:21.001037 3434 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/4ceb51d9-ff1e-4c52-ae2e-5bad8350c826-bpf-maps\") pod \"4ceb51d9-ff1e-4c52-ae2e-5bad8350c826\" (UID: \"4ceb51d9-ff1e-4c52-ae2e-5bad8350c826\") " Sep 10 23:51:21.001850 kubelet[3434]: I0910 23:51:21.001088 3434 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4ceb51d9-ff1e-4c52-ae2e-5bad8350c826-cilium-config-path\") pod \"4ceb51d9-ff1e-4c52-ae2e-5bad8350c826\" (UID: \"4ceb51d9-ff1e-4c52-ae2e-5bad8350c826\") " Sep 10 23:51:21.001850 kubelet[3434]: I0910 23:51:21.001128 3434 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4ceb51d9-ff1e-4c52-ae2e-5bad8350c826-xtables-lock\") pod \"4ceb51d9-ff1e-4c52-ae2e-5bad8350c826\" (UID: \"4ceb51d9-ff1e-4c52-ae2e-5bad8350c826\") " Sep 10 23:51:21.001850 kubelet[3434]: I0910 23:51:21.001114 3434 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4ceb51d9-ff1e-4c52-ae2e-5bad8350c826-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "4ceb51d9-ff1e-4c52-ae2e-5bad8350c826" (UID: "4ceb51d9-ff1e-4c52-ae2e-5bad8350c826"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 10 23:51:21.001850 kubelet[3434]: I0910 23:51:21.001170 3434 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/4ceb51d9-ff1e-4c52-ae2e-5bad8350c826-clustermesh-secrets\") pod \"4ceb51d9-ff1e-4c52-ae2e-5bad8350c826\" (UID: \"4ceb51d9-ff1e-4c52-ae2e-5bad8350c826\") " Sep 10 23:51:21.002237 kubelet[3434]: I0910 23:51:21.001205 3434 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/4ceb51d9-ff1e-4c52-ae2e-5bad8350c826-hostproc\") pod \"4ceb51d9-ff1e-4c52-ae2e-5bad8350c826\" (UID: \"4ceb51d9-ff1e-4c52-ae2e-5bad8350c826\") " Sep 10 23:51:21.002237 kubelet[3434]: I0910 23:51:21.001237 3434 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/4ceb51d9-ff1e-4c52-ae2e-5bad8350c826-host-proc-sys-kernel\") pod \"4ceb51d9-ff1e-4c52-ae2e-5bad8350c826\" (UID: \"4ceb51d9-ff1e-4c52-ae2e-5bad8350c826\") " Sep 10 23:51:21.002237 kubelet[3434]: I0910 23:51:21.001278 3434 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dt6pp\" (UniqueName: \"kubernetes.io/projected/4ceb51d9-ff1e-4c52-ae2e-5bad8350c826-kube-api-access-dt6pp\") pod \"4ceb51d9-ff1e-4c52-ae2e-5bad8350c826\" (UID: \"4ceb51d9-ff1e-4c52-ae2e-5bad8350c826\") " Sep 10 23:51:21.002237 kubelet[3434]: I0910 23:51:21.001310 3434 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/4ceb51d9-ff1e-4c52-ae2e-5bad8350c826-host-proc-sys-net\") pod \"4ceb51d9-ff1e-4c52-ae2e-5bad8350c826\" (UID: \"4ceb51d9-ff1e-4c52-ae2e-5bad8350c826\") " Sep 10 23:51:21.002237 kubelet[3434]: I0910 23:51:21.001349 3434 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/4ceb51d9-ff1e-4c52-ae2e-5bad8350c826-hubble-tls\") pod \"4ceb51d9-ff1e-4c52-ae2e-5bad8350c826\" (UID: \"4ceb51d9-ff1e-4c52-ae2e-5bad8350c826\") " Sep 10 23:51:21.002237 kubelet[3434]: I0910 23:51:21.001386 3434 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/4ceb51d9-ff1e-4c52-ae2e-5bad8350c826-cilium-run\") pod \"4ceb51d9-ff1e-4c52-ae2e-5bad8350c826\" (UID: \"4ceb51d9-ff1e-4c52-ae2e-5bad8350c826\") " Sep 10 23:51:21.002518 kubelet[3434]: I0910 23:51:21.001438 3434 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/4ceb51d9-ff1e-4c52-ae2e-5bad8350c826-cilium-cgroup\") pod \"4ceb51d9-ff1e-4c52-ae2e-5bad8350c826\" (UID: \"4ceb51d9-ff1e-4c52-ae2e-5bad8350c826\") " Sep 10 23:51:21.002518 kubelet[3434]: I0910 23:51:21.001477 3434 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/4ceb51d9-ff1e-4c52-ae2e-5bad8350c826-cni-path\") pod \"4ceb51d9-ff1e-4c52-ae2e-5bad8350c826\" (UID: \"4ceb51d9-ff1e-4c52-ae2e-5bad8350c826\") " Sep 10 23:51:21.002518 kubelet[3434]: I0910 23:51:21.001509 3434 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4ceb51d9-ff1e-4c52-ae2e-5bad8350c826-etc-cni-netd\") pod \"4ceb51d9-ff1e-4c52-ae2e-5bad8350c826\" (UID: 
\"4ceb51d9-ff1e-4c52-ae2e-5bad8350c826\") " Sep 10 23:51:21.002518 kubelet[3434]: I0910 23:51:21.001577 3434 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e8084be0-04a9-4c71-a470-d071b9464654-cilium-config-path\") on node \"ip-172-31-30-159\" DevicePath \"\"" Sep 10 23:51:21.002518 kubelet[3434]: I0910 23:51:21.001601 3434 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4ceb51d9-ff1e-4c52-ae2e-5bad8350c826-lib-modules\") on node \"ip-172-31-30-159\" DevicePath \"\"" Sep 10 23:51:21.002518 kubelet[3434]: I0910 23:51:21.001626 3434 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-4j2w7\" (UniqueName: \"kubernetes.io/projected/e8084be0-04a9-4c71-a470-d071b9464654-kube-api-access-4j2w7\") on node \"ip-172-31-30-159\" DevicePath \"\"" Sep 10 23:51:21.004427 kubelet[3434]: I0910 23:51:21.001666 3434 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4ceb51d9-ff1e-4c52-ae2e-5bad8350c826-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "4ceb51d9-ff1e-4c52-ae2e-5bad8350c826" (UID: "4ceb51d9-ff1e-4c52-ae2e-5bad8350c826"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 10 23:51:21.005012 kubelet[3434]: I0910 23:51:21.003717 3434 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4ceb51d9-ff1e-4c52-ae2e-5bad8350c826-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "4ceb51d9-ff1e-4c52-ae2e-5bad8350c826" (UID: "4ceb51d9-ff1e-4c52-ae2e-5bad8350c826"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 10 23:51:21.005012 kubelet[3434]: I0910 23:51:21.004476 3434 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4ceb51d9-ff1e-4c52-ae2e-5bad8350c826-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "4ceb51d9-ff1e-4c52-ae2e-5bad8350c826" (UID: "4ceb51d9-ff1e-4c52-ae2e-5bad8350c826"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 10 23:51:21.005012 kubelet[3434]: I0910 23:51:21.004624 3434 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4ceb51d9-ff1e-4c52-ae2e-5bad8350c826-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "4ceb51d9-ff1e-4c52-ae2e-5bad8350c826" (UID: "4ceb51d9-ff1e-4c52-ae2e-5bad8350c826"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 10 23:51:21.005012 kubelet[3434]: I0910 23:51:21.004665 3434 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4ceb51d9-ff1e-4c52-ae2e-5bad8350c826-hostproc" (OuterVolumeSpecName: "hostproc") pod "4ceb51d9-ff1e-4c52-ae2e-5bad8350c826" (UID: "4ceb51d9-ff1e-4c52-ae2e-5bad8350c826"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 10 23:51:21.005012 kubelet[3434]: I0910 23:51:21.004769 3434 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4ceb51d9-ff1e-4c52-ae2e-5bad8350c826-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "4ceb51d9-ff1e-4c52-ae2e-5bad8350c826" (UID: "4ceb51d9-ff1e-4c52-ae2e-5bad8350c826"). InnerVolumeSpecName "host-proc-sys-kernel". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 10 23:51:21.008330 kubelet[3434]: I0910 23:51:21.007944 3434 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4ceb51d9-ff1e-4c52-ae2e-5bad8350c826-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "4ceb51d9-ff1e-4c52-ae2e-5bad8350c826" (UID: "4ceb51d9-ff1e-4c52-ae2e-5bad8350c826"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 10 23:51:21.008972 kubelet[3434]: I0910 23:51:21.008895 3434 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4ceb51d9-ff1e-4c52-ae2e-5bad8350c826-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "4ceb51d9-ff1e-4c52-ae2e-5bad8350c826" (UID: "4ceb51d9-ff1e-4c52-ae2e-5bad8350c826"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 10 23:51:21.009657 kubelet[3434]: I0910 23:51:21.009444 3434 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4ceb51d9-ff1e-4c52-ae2e-5bad8350c826-cni-path" (OuterVolumeSpecName: "cni-path") pod "4ceb51d9-ff1e-4c52-ae2e-5bad8350c826" (UID: "4ceb51d9-ff1e-4c52-ae2e-5bad8350c826"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 10 23:51:21.018957 kubelet[3434]: I0910 23:51:21.018871 3434 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4ceb51d9-ff1e-4c52-ae2e-5bad8350c826-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "4ceb51d9-ff1e-4c52-ae2e-5bad8350c826" (UID: "4ceb51d9-ff1e-4c52-ae2e-5bad8350c826"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 10 23:51:21.022060 kubelet[3434]: I0910 23:51:21.022007 3434 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4ceb51d9-ff1e-4c52-ae2e-5bad8350c826-kube-api-access-dt6pp" (OuterVolumeSpecName: "kube-api-access-dt6pp") pod "4ceb51d9-ff1e-4c52-ae2e-5bad8350c826" (UID: "4ceb51d9-ff1e-4c52-ae2e-5bad8350c826"). InnerVolumeSpecName "kube-api-access-dt6pp". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 10 23:51:21.023910 kubelet[3434]: I0910 23:51:21.023825 3434 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4ceb51d9-ff1e-4c52-ae2e-5bad8350c826-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "4ceb51d9-ff1e-4c52-ae2e-5bad8350c826" (UID: "4ceb51d9-ff1e-4c52-ae2e-5bad8350c826"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Sep 10 23:51:21.024371 kubelet[3434]: I0910 23:51:21.024308 3434 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4ceb51d9-ff1e-4c52-ae2e-5bad8350c826-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "4ceb51d9-ff1e-4c52-ae2e-5bad8350c826" (UID: "4ceb51d9-ff1e-4c52-ae2e-5bad8350c826"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 10 23:51:21.101984 kubelet[3434]: I0910 23:51:21.101855 3434 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/4ceb51d9-ff1e-4c52-ae2e-5bad8350c826-bpf-maps\") on node \"ip-172-31-30-159\" DevicePath \"\"" Sep 10 23:51:21.101984 kubelet[3434]: I0910 23:51:21.101900 3434 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4ceb51d9-ff1e-4c52-ae2e-5bad8350c826-cilium-config-path\") on node \"ip-172-31-30-159\" DevicePath \"\"" Sep 10 23:51:21.101984 kubelet[3434]: I0910 23:51:21.101924 3434 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4ceb51d9-ff1e-4c52-ae2e-5bad8350c826-xtables-lock\") on node \"ip-172-31-30-159\" DevicePath \"\"" Sep 10 23:51:21.101984 kubelet[3434]: I0910 23:51:21.101945 3434 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/4ceb51d9-ff1e-4c52-ae2e-5bad8350c826-clustermesh-secrets\") on node \"ip-172-31-30-159\" DevicePath \"\"" Sep 10 23:51:21.101984 kubelet[3434]: I0910 23:51:21.101979 3434 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/4ceb51d9-ff1e-4c52-ae2e-5bad8350c826-hostproc\") on node \"ip-172-31-30-159\" DevicePath \"\"" Sep 10 23:51:21.102357 kubelet[3434]: I0910 23:51:21.102006 3434 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/4ceb51d9-ff1e-4c52-ae2e-5bad8350c826-host-proc-sys-kernel\") on node \"ip-172-31-30-159\" DevicePath \"\"" Sep 10 23:51:21.102357 kubelet[3434]: I0910 23:51:21.102030 3434 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-dt6pp\" (UniqueName: \"kubernetes.io/projected/4ceb51d9-ff1e-4c52-ae2e-5bad8350c826-kube-api-access-dt6pp\") on node \"ip-172-31-30-159\" DevicePath \"\"" Sep 10 23:51:21.102357 kubelet[3434]: I0910 23:51:21.102051 3434 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/4ceb51d9-ff1e-4c52-ae2e-5bad8350c826-host-proc-sys-net\") on node \"ip-172-31-30-159\" DevicePath \"\"" Sep 10 23:51:21.102357 kubelet[3434]: I0910 23:51:21.102080 3434 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/4ceb51d9-ff1e-4c52-ae2e-5bad8350c826-hubble-tls\") on node \"ip-172-31-30-159\" DevicePath \"\"" Sep 10 23:51:21.102357 kubelet[3434]: I0910 23:51:21.102102 3434 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/4ceb51d9-ff1e-4c52-ae2e-5bad8350c826-cilium-run\") on node \"ip-172-31-30-159\" DevicePath \"\"" Sep 10 23:51:21.102357 kubelet[3434]: I0910 23:51:21.102123 3434 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/4ceb51d9-ff1e-4c52-ae2e-5bad8350c826-cilium-cgroup\") on node \"ip-172-31-30-159\" DevicePath \"\"" Sep 10 23:51:21.102357 kubelet[3434]: I0910 23:51:21.102143 3434 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/4ceb51d9-ff1e-4c52-ae2e-5bad8350c826-cni-path\") on node \"ip-172-31-30-159\" DevicePath \"\"" Sep 10 23:51:21.102357 kubelet[3434]: I0910 23:51:21.102166 3434 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: 
\"kubernetes.io/host-path/4ceb51d9-ff1e-4c52-ae2e-5bad8350c826-etc-cni-netd\") on node \"ip-172-31-30-159\" DevicePath \"\"" Sep 10 23:51:21.515954 kubelet[3434]: E0910 23:51:21.515895 3434 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Sep 10 23:51:21.583631 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e573c23623bbcc1d9b5b4bfbb38e33ec47e4191c8cbdc3749c91390df3c47dd8-shm.mount: Deactivated successfully. Sep 10 23:51:21.583859 systemd[1]: var-lib-kubelet-pods-e8084be0\x2d04a9\x2d4c71\x2da470\x2dd071b9464654-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d4j2w7.mount: Deactivated successfully. Sep 10 23:51:21.583992 systemd[1]: var-lib-kubelet-pods-4ceb51d9\x2dff1e\x2d4c52\x2dae2e\x2d5bad8350c826-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2ddt6pp.mount: Deactivated successfully. Sep 10 23:51:21.584125 systemd[1]: var-lib-kubelet-pods-4ceb51d9\x2dff1e\x2d4c52\x2dae2e\x2d5bad8350c826-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Sep 10 23:51:21.584254 systemd[1]: var-lib-kubelet-pods-4ceb51d9\x2dff1e\x2d4c52\x2dae2e\x2d5bad8350c826-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Sep 10 23:51:21.843929 systemd[1]: Removed slice kubepods-besteffort-pode8084be0_04a9_4c71_a470_d071b9464654.slice - libcontainer container kubepods-besteffort-pode8084be0_04a9_4c71_a470_d071b9464654.slice. Sep 10 23:51:21.849725 kubelet[3434]: I0910 23:51:21.849191 3434 scope.go:117] "RemoveContainer" containerID="2d72281b6321114d9a2af10a952a12b5c2e3aa4fa0c9c64e83fda2d078e2efff" Sep 10 23:51:21.860080 containerd[1899]: time="2025-09-10T23:51:21.859918603Z" level=info msg="RemoveContainer for \"2d72281b6321114d9a2af10a952a12b5c2e3aa4fa0c9c64e83fda2d078e2efff\"" Sep 10 23:51:21.877609 systemd[1]: Removed slice kubepods-burstable-pod4ceb51d9_ff1e_4c52_ae2e_5bad8350c826.slice - libcontainer container kubepods-burstable-pod4ceb51d9_ff1e_4c52_ae2e_5bad8350c826.slice. Sep 10 23:51:21.878401 systemd[1]: kubepods-burstable-pod4ceb51d9_ff1e_4c52_ae2e_5bad8350c826.slice: Consumed 14.357s CPU time, 126M memory peak, 120K read from disk, 15M written to disk. 
Sep 10 23:51:21.881505 containerd[1899]: time="2025-09-10T23:51:21.881437136Z" level=info msg="RemoveContainer for \"2d72281b6321114d9a2af10a952a12b5c2e3aa4fa0c9c64e83fda2d078e2efff\" returns successfully" Sep 10 23:51:21.882107 kubelet[3434]: I0910 23:51:21.882006 3434 scope.go:117] "RemoveContainer" containerID="c0c32e0a83009ff9ccbe5fd4c0a454dabb3e06f4232ba0f6dfb77121e4b7e2d4" Sep 10 23:51:21.888507 containerd[1899]: time="2025-09-10T23:51:21.887858132Z" level=info msg="RemoveContainer for \"c0c32e0a83009ff9ccbe5fd4c0a454dabb3e06f4232ba0f6dfb77121e4b7e2d4\"" Sep 10 23:51:21.902283 containerd[1899]: time="2025-09-10T23:51:21.902207384Z" level=info msg="RemoveContainer for \"c0c32e0a83009ff9ccbe5fd4c0a454dabb3e06f4232ba0f6dfb77121e4b7e2d4\" returns successfully" Sep 10 23:51:21.903953 kubelet[3434]: I0910 23:51:21.903900 3434 scope.go:117] "RemoveContainer" containerID="b1a05c09d42ef63eea3c347799c59806363729446862a130e4dbff3468007911" Sep 10 23:51:21.917717 containerd[1899]: time="2025-09-10T23:51:21.916102928Z" level=info msg="RemoveContainer for \"b1a05c09d42ef63eea3c347799c59806363729446862a130e4dbff3468007911\"" Sep 10 23:51:21.926917 containerd[1899]: time="2025-09-10T23:51:21.926869076Z" level=info msg="RemoveContainer for \"b1a05c09d42ef63eea3c347799c59806363729446862a130e4dbff3468007911\" returns successfully" Sep 10 23:51:21.927671 kubelet[3434]: I0910 23:51:21.927630 3434 scope.go:117] "RemoveContainer" containerID="97ffc4a81cab2055b9f4d70947ef5a02bfa5eec8a7a8a230cc2e3cbb4d5c069e" Sep 10 23:51:21.931001 containerd[1899]: time="2025-09-10T23:51:21.930957692Z" level=info msg="RemoveContainer for \"97ffc4a81cab2055b9f4d70947ef5a02bfa5eec8a7a8a230cc2e3cbb4d5c069e\"" Sep 10 23:51:21.938089 containerd[1899]: time="2025-09-10T23:51:21.938039228Z" level=info msg="RemoveContainer for \"97ffc4a81cab2055b9f4d70947ef5a02bfa5eec8a7a8a230cc2e3cbb4d5c069e\" returns successfully" Sep 10 23:51:21.938583 kubelet[3434]: I0910 23:51:21.938553 3434 scope.go:117] "RemoveContainer" containerID="164f0521e7a0edb7ed917ef58c26e5ae3f74e678ee950e2841aa3252050ae941" Sep 10 23:51:21.941741 containerd[1899]: time="2025-09-10T23:51:21.941573180Z" level=info msg="RemoveContainer for \"164f0521e7a0edb7ed917ef58c26e5ae3f74e678ee950e2841aa3252050ae941\"" Sep 10 23:51:21.948207 containerd[1899]: time="2025-09-10T23:51:21.948141632Z" level=info msg="RemoveContainer for \"164f0521e7a0edb7ed917ef58c26e5ae3f74e678ee950e2841aa3252050ae941\" returns successfully" Sep 10 23:51:22.309654 kubelet[3434]: I0910 23:51:22.309562 3434 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4ceb51d9-ff1e-4c52-ae2e-5bad8350c826" path="/var/lib/kubelet/pods/4ceb51d9-ff1e-4c52-ae2e-5bad8350c826/volumes" Sep 10 23:51:22.311583 kubelet[3434]: I0910 23:51:22.311523 3434 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e8084be0-04a9-4c71-a470-d071b9464654" path="/var/lib/kubelet/pods/e8084be0-04a9-4c71-a470-d071b9464654/volumes" Sep 10 23:51:22.383229 sshd[5006]: Connection closed by 139.178.68.195 port 44692 Sep 10 23:51:22.382126 sshd-session[5004]: pam_unix(sshd:session): session closed for user core Sep 10 23:51:22.391062 systemd[1]: sshd@26-172.31.30.159:22-139.178.68.195:44692.service: Deactivated successfully. Sep 10 23:51:22.396338 systemd[1]: session-27.scope: Deactivated successfully. Sep 10 23:51:22.397340 systemd[1]: session-27.scope: Consumed 1.234s CPU time, 23.5M memory peak. Sep 10 23:51:22.402261 systemd-logind[1863]: Session 27 logged out. Waiting for processes to exit. 
Sep 10 23:51:22.424426 systemd[1]: Started sshd@27-172.31.30.159:22-139.178.68.195:59804.service - OpenSSH per-connection server daemon (139.178.68.195:59804). Sep 10 23:51:22.429319 systemd-logind[1863]: Removed session 27. Sep 10 23:51:22.612533 containerd[1899]: time="2025-09-10T23:51:22.612413215Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e573c23623bbcc1d9b5b4bfbb38e33ec47e4191c8cbdc3749c91390df3c47dd8\" id:\"e573c23623bbcc1d9b5b4bfbb38e33ec47e4191c8cbdc3749c91390df3c47dd8\" pid:3592 exit_status:137 exited_at:{seconds:1757548280 nanos:695423274}" Sep 10 23:51:22.620590 sshd[5161]: Accepted publickey for core from 139.178.68.195 port 59804 ssh2: RSA SHA256:ja8Z659dnX0Tz1pZfaOwRz2q/KALpEA2JWSy/+nC98s Sep 10 23:51:22.623215 sshd-session[5161]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 23:51:22.632242 systemd-logind[1863]: New session 28 of user core. Sep 10 23:51:22.649954 systemd[1]: Started session-28.scope - Session 28 of User core. Sep 10 23:51:23.094080 ntpd[1857]: Deleting interface #12 lxc_health, fe80::b83a:90ff:febb:cbbf%8#123, interface stats: received=0, sent=0, dropped=0, active_time=69 secs Sep 10 23:51:23.094608 ntpd[1857]: 10 Sep 23:51:23 ntpd[1857]: Deleting interface #12 lxc_health, fe80::b83a:90ff:febb:cbbf%8#123, interface stats: received=0, sent=0, dropped=0, active_time=69 secs Sep 10 23:51:24.395031 sshd[5164]: Connection closed by 139.178.68.195 port 59804 Sep 10 23:51:24.397486 sshd-session[5161]: pam_unix(sshd:session): session closed for user core Sep 10 23:51:24.411466 systemd[1]: sshd@27-172.31.30.159:22-139.178.68.195:59804.service: Deactivated successfully. Sep 10 23:51:24.426274 systemd[1]: session-28.scope: Deactivated successfully. Sep 10 23:51:24.429933 systemd[1]: session-28.scope: Consumed 1.461s CPU time, 23.7M memory peak. Sep 10 23:51:24.435036 systemd-logind[1863]: Session 28 logged out. Waiting for processes to exit. Sep 10 23:51:24.472138 systemd[1]: Started sshd@28-172.31.30.159:22-139.178.68.195:59806.service - OpenSSH per-connection server daemon (139.178.68.195:59806). Sep 10 23:51:24.475750 systemd-logind[1863]: Removed session 28. Sep 10 23:51:24.502189 systemd[1]: Created slice kubepods-burstable-pod53b1fbfd_ebd2_43f3_8676_10c29d5150fc.slice - libcontainer container kubepods-burstable-pod53b1fbfd_ebd2_43f3_8676_10c29d5150fc.slice. 
Sep 10 23:51:24.522953 kubelet[3434]: I0910 23:51:24.522901 3434 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/53b1fbfd-ebd2-43f3-8676-10c29d5150fc-cilium-config-path\") pod \"cilium-r2npt\" (UID: \"53b1fbfd-ebd2-43f3-8676-10c29d5150fc\") " pod="kube-system/cilium-r2npt" Sep 10 23:51:24.525578 kubelet[3434]: I0910 23:51:24.523835 3434 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/53b1fbfd-ebd2-43f3-8676-10c29d5150fc-etc-cni-netd\") pod \"cilium-r2npt\" (UID: \"53b1fbfd-ebd2-43f3-8676-10c29d5150fc\") " pod="kube-system/cilium-r2npt" Sep 10 23:51:24.525578 kubelet[3434]: I0910 23:51:24.523900 3434 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/53b1fbfd-ebd2-43f3-8676-10c29d5150fc-cilium-ipsec-secrets\") pod \"cilium-r2npt\" (UID: \"53b1fbfd-ebd2-43f3-8676-10c29d5150fc\") " pod="kube-system/cilium-r2npt" Sep 10 23:51:24.525578 kubelet[3434]: I0910 23:51:24.523942 3434 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/53b1fbfd-ebd2-43f3-8676-10c29d5150fc-hostproc\") pod \"cilium-r2npt\" (UID: \"53b1fbfd-ebd2-43f3-8676-10c29d5150fc\") " pod="kube-system/cilium-r2npt" Sep 10 23:51:24.525578 kubelet[3434]: I0910 23:51:24.523979 3434 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/53b1fbfd-ebd2-43f3-8676-10c29d5150fc-cni-path\") pod \"cilium-r2npt\" (UID: \"53b1fbfd-ebd2-43f3-8676-10c29d5150fc\") " pod="kube-system/cilium-r2npt" Sep 10 23:51:24.525578 kubelet[3434]: I0910 23:51:24.524017 3434 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/53b1fbfd-ebd2-43f3-8676-10c29d5150fc-host-proc-sys-net\") pod \"cilium-r2npt\" (UID: \"53b1fbfd-ebd2-43f3-8676-10c29d5150fc\") " pod="kube-system/cilium-r2npt" Sep 10 23:51:24.525578 kubelet[3434]: I0910 23:51:24.524050 3434 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/53b1fbfd-ebd2-43f3-8676-10c29d5150fc-lib-modules\") pod \"cilium-r2npt\" (UID: \"53b1fbfd-ebd2-43f3-8676-10c29d5150fc\") " pod="kube-system/cilium-r2npt" Sep 10 23:51:24.526087 kubelet[3434]: I0910 23:51:24.524083 3434 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/53b1fbfd-ebd2-43f3-8676-10c29d5150fc-xtables-lock\") pod \"cilium-r2npt\" (UID: \"53b1fbfd-ebd2-43f3-8676-10c29d5150fc\") " pod="kube-system/cilium-r2npt" Sep 10 23:51:24.526087 kubelet[3434]: I0910 23:51:24.524124 3434 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/53b1fbfd-ebd2-43f3-8676-10c29d5150fc-bpf-maps\") pod \"cilium-r2npt\" (UID: \"53b1fbfd-ebd2-43f3-8676-10c29d5150fc\") " pod="kube-system/cilium-r2npt" Sep 10 23:51:24.526087 kubelet[3434]: I0910 23:51:24.524157 3434 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: 
\"kubernetes.io/host-path/53b1fbfd-ebd2-43f3-8676-10c29d5150fc-cilium-cgroup\") pod \"cilium-r2npt\" (UID: \"53b1fbfd-ebd2-43f3-8676-10c29d5150fc\") " pod="kube-system/cilium-r2npt" Sep 10 23:51:24.526087 kubelet[3434]: I0910 23:51:24.524198 3434 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/53b1fbfd-ebd2-43f3-8676-10c29d5150fc-clustermesh-secrets\") pod \"cilium-r2npt\" (UID: \"53b1fbfd-ebd2-43f3-8676-10c29d5150fc\") " pod="kube-system/cilium-r2npt" Sep 10 23:51:24.526087 kubelet[3434]: I0910 23:51:24.524235 3434 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2wlx9\" (UniqueName: \"kubernetes.io/projected/53b1fbfd-ebd2-43f3-8676-10c29d5150fc-kube-api-access-2wlx9\") pod \"cilium-r2npt\" (UID: \"53b1fbfd-ebd2-43f3-8676-10c29d5150fc\") " pod="kube-system/cilium-r2npt" Sep 10 23:51:24.526087 kubelet[3434]: I0910 23:51:24.524275 3434 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/53b1fbfd-ebd2-43f3-8676-10c29d5150fc-cilium-run\") pod \"cilium-r2npt\" (UID: \"53b1fbfd-ebd2-43f3-8676-10c29d5150fc\") " pod="kube-system/cilium-r2npt" Sep 10 23:51:24.526460 kubelet[3434]: I0910 23:51:24.524309 3434 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/53b1fbfd-ebd2-43f3-8676-10c29d5150fc-host-proc-sys-kernel\") pod \"cilium-r2npt\" (UID: \"53b1fbfd-ebd2-43f3-8676-10c29d5150fc\") " pod="kube-system/cilium-r2npt" Sep 10 23:51:24.526460 kubelet[3434]: I0910 23:51:24.524342 3434 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/53b1fbfd-ebd2-43f3-8676-10c29d5150fc-hubble-tls\") pod \"cilium-r2npt\" (UID: \"53b1fbfd-ebd2-43f3-8676-10c29d5150fc\") " pod="kube-system/cilium-r2npt" Sep 10 23:51:24.727290 sshd[5175]: Accepted publickey for core from 139.178.68.195 port 59806 ssh2: RSA SHA256:ja8Z659dnX0Tz1pZfaOwRz2q/KALpEA2JWSy/+nC98s Sep 10 23:51:24.728993 sshd-session[5175]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 23:51:24.737787 systemd-logind[1863]: New session 29 of user core. Sep 10 23:51:24.749978 systemd[1]: Started session-29.scope - Session 29 of User core. Sep 10 23:51:24.834616 containerd[1899]: time="2025-09-10T23:51:24.834498274Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-r2npt,Uid:53b1fbfd-ebd2-43f3-8676-10c29d5150fc,Namespace:kube-system,Attempt:0,}" Sep 10 23:51:24.869792 sshd[5181]: Connection closed by 139.178.68.195 port 59806 Sep 10 23:51:24.869673 sshd-session[5175]: pam_unix(sshd:session): session closed for user core Sep 10 23:51:24.882612 containerd[1899]: time="2025-09-10T23:51:24.882559630Z" level=info msg="connecting to shim 4da7ba0a4779b7192d5d8fac9054b2c344e89003597096ec68367825e3b2ba0e" address="unix:///run/containerd/s/837efcf98967c1cc12bf70c123ce147753b3cef9b2ded14f82e1bb85e2e3580d" namespace=k8s.io protocol=ttrpc version=3 Sep 10 23:51:24.884139 systemd[1]: sshd@28-172.31.30.159:22-139.178.68.195:59806.service: Deactivated successfully. Sep 10 23:51:24.893444 systemd[1]: session-29.scope: Deactivated successfully. Sep 10 23:51:24.898882 systemd-logind[1863]: Session 29 logged out. Waiting for processes to exit. 
Sep 10 23:51:24.922123 systemd[1]: Started sshd@29-172.31.30.159:22-139.178.68.195:59816.service - OpenSSH per-connection server daemon (139.178.68.195:59816). Sep 10 23:51:24.927260 systemd-logind[1863]: Removed session 29. Sep 10 23:51:24.939168 systemd[1]: Started cri-containerd-4da7ba0a4779b7192d5d8fac9054b2c344e89003597096ec68367825e3b2ba0e.scope - libcontainer container 4da7ba0a4779b7192d5d8fac9054b2c344e89003597096ec68367825e3b2ba0e. Sep 10 23:51:25.003325 containerd[1899]: time="2025-09-10T23:51:25.003187279Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-r2npt,Uid:53b1fbfd-ebd2-43f3-8676-10c29d5150fc,Namespace:kube-system,Attempt:0,} returns sandbox id \"4da7ba0a4779b7192d5d8fac9054b2c344e89003597096ec68367825e3b2ba0e\"" Sep 10 23:51:25.016721 containerd[1899]: time="2025-09-10T23:51:25.016605679Z" level=info msg="CreateContainer within sandbox \"4da7ba0a4779b7192d5d8fac9054b2c344e89003597096ec68367825e3b2ba0e\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 10 23:51:25.038717 containerd[1899]: time="2025-09-10T23:51:25.035963995Z" level=info msg="Container 3e96ca348542f9ad2ccb8a4aae59fe7d518f507246ac640953d78309310d98c3: CDI devices from CRI Config.CDIDevices: []" Sep 10 23:51:25.049458 containerd[1899]: time="2025-09-10T23:51:25.049409371Z" level=info msg="CreateContainer within sandbox \"4da7ba0a4779b7192d5d8fac9054b2c344e89003597096ec68367825e3b2ba0e\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"3e96ca348542f9ad2ccb8a4aae59fe7d518f507246ac640953d78309310d98c3\"" Sep 10 23:51:25.052531 containerd[1899]: time="2025-09-10T23:51:25.052462015Z" level=info msg="StartContainer for \"3e96ca348542f9ad2ccb8a4aae59fe7d518f507246ac640953d78309310d98c3\"" Sep 10 23:51:25.054387 containerd[1899]: time="2025-09-10T23:51:25.054320251Z" level=info msg="connecting to shim 3e96ca348542f9ad2ccb8a4aae59fe7d518f507246ac640953d78309310d98c3" address="unix:///run/containerd/s/837efcf98967c1cc12bf70c123ce147753b3cef9b2ded14f82e1bb85e2e3580d" protocol=ttrpc version=3 Sep 10 23:51:25.089962 systemd[1]: Started cri-containerd-3e96ca348542f9ad2ccb8a4aae59fe7d518f507246ac640953d78309310d98c3.scope - libcontainer container 3e96ca348542f9ad2ccb8a4aae59fe7d518f507246ac640953d78309310d98c3. Sep 10 23:51:25.143750 sshd[5219]: Accepted publickey for core from 139.178.68.195 port 59816 ssh2: RSA SHA256:ja8Z659dnX0Tz1pZfaOwRz2q/KALpEA2JWSy/+nC98s Sep 10 23:51:25.147107 sshd-session[5219]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 23:51:25.163170 systemd-logind[1863]: New session 30 of user core. Sep 10 23:51:25.170471 containerd[1899]: time="2025-09-10T23:51:25.170422088Z" level=info msg="StartContainer for \"3e96ca348542f9ad2ccb8a4aae59fe7d518f507246ac640953d78309310d98c3\" returns successfully" Sep 10 23:51:25.171244 systemd[1]: Started session-30.scope - Session 30 of User core. Sep 10 23:51:25.195363 systemd[1]: cri-containerd-3e96ca348542f9ad2ccb8a4aae59fe7d518f507246ac640953d78309310d98c3.scope: Deactivated successfully. 
Sep 10 23:51:25.200312 containerd[1899]: time="2025-09-10T23:51:25.200164568Z" level=info msg="received exit event container_id:\"3e96ca348542f9ad2ccb8a4aae59fe7d518f507246ac640953d78309310d98c3\" id:\"3e96ca348542f9ad2ccb8a4aae59fe7d518f507246ac640953d78309310d98c3\" pid:5248 exited_at:{seconds:1757548285 nanos:199435112}" Sep 10 23:51:25.200312 containerd[1899]: time="2025-09-10T23:51:25.200300708Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3e96ca348542f9ad2ccb8a4aae59fe7d518f507246ac640953d78309310d98c3\" id:\"3e96ca348542f9ad2ccb8a4aae59fe7d518f507246ac640953d78309310d98c3\" pid:5248 exited_at:{seconds:1757548285 nanos:199435112}" Sep 10 23:51:25.886677 containerd[1899]: time="2025-09-10T23:51:25.885792107Z" level=info msg="CreateContainer within sandbox \"4da7ba0a4779b7192d5d8fac9054b2c344e89003597096ec68367825e3b2ba0e\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Sep 10 23:51:25.904935 containerd[1899]: time="2025-09-10T23:51:25.904862855Z" level=info msg="Container 3c66408c2ddfce455179bba0b8af681308d7afc5abec6aeec71cc5638bcd8964: CDI devices from CRI Config.CDIDevices: []" Sep 10 23:51:25.925118 containerd[1899]: time="2025-09-10T23:51:25.925070256Z" level=info msg="CreateContainer within sandbox \"4da7ba0a4779b7192d5d8fac9054b2c344e89003597096ec68367825e3b2ba0e\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"3c66408c2ddfce455179bba0b8af681308d7afc5abec6aeec71cc5638bcd8964\"" Sep 10 23:51:25.927173 containerd[1899]: time="2025-09-10T23:51:25.927026880Z" level=info msg="StartContainer for \"3c66408c2ddfce455179bba0b8af681308d7afc5abec6aeec71cc5638bcd8964\"" Sep 10 23:51:25.930256 containerd[1899]: time="2025-09-10T23:51:25.930159360Z" level=info msg="connecting to shim 3c66408c2ddfce455179bba0b8af681308d7afc5abec6aeec71cc5638bcd8964" address="unix:///run/containerd/s/837efcf98967c1cc12bf70c123ce147753b3cef9b2ded14f82e1bb85e2e3580d" protocol=ttrpc version=3 Sep 10 23:51:25.982006 systemd[1]: Started cri-containerd-3c66408c2ddfce455179bba0b8af681308d7afc5abec6aeec71cc5638bcd8964.scope - libcontainer container 3c66408c2ddfce455179bba0b8af681308d7afc5abec6aeec71cc5638bcd8964. Sep 10 23:51:26.053327 containerd[1899]: time="2025-09-10T23:51:26.053066996Z" level=info msg="StartContainer for \"3c66408c2ddfce455179bba0b8af681308d7afc5abec6aeec71cc5638bcd8964\" returns successfully" Sep 10 23:51:26.060103 systemd[1]: cri-containerd-3c66408c2ddfce455179bba0b8af681308d7afc5abec6aeec71cc5638bcd8964.scope: Deactivated successfully. Sep 10 23:51:26.065431 containerd[1899]: time="2025-09-10T23:51:26.065228528Z" level=info msg="received exit event container_id:\"3c66408c2ddfce455179bba0b8af681308d7afc5abec6aeec71cc5638bcd8964\" id:\"3c66408c2ddfce455179bba0b8af681308d7afc5abec6aeec71cc5638bcd8964\" pid:5298 exited_at:{seconds:1757548286 nanos:62381156}" Sep 10 23:51:26.068396 containerd[1899]: time="2025-09-10T23:51:26.068316920Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3c66408c2ddfce455179bba0b8af681308d7afc5abec6aeec71cc5638bcd8964\" id:\"3c66408c2ddfce455179bba0b8af681308d7afc5abec6aeec71cc5638bcd8964\" pid:5298 exited_at:{seconds:1757548286 nanos:62381156}" Sep 10 23:51:26.106850 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3c66408c2ddfce455179bba0b8af681308d7afc5abec6aeec71cc5638bcd8964-rootfs.mount: Deactivated successfully. 
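The exited_at fields containerd attaches to these exit events are plain Unix epoch seconds plus nanoseconds; seconds:1757548285 nanos:199435112, for instance, is 2025-09-10T23:51:25.199Z, which lines up with the journal timestamp on the same record. A throwaway helper for doing that conversion while reading the log (our own sketch, not a containerd API):

from datetime import datetime, timezone

def exited_at_to_utc(seconds: int, nanos: int) -> str:
    """Render a containerd exited_at {seconds, nanos} pair as an RFC 3339 UTC timestamp."""
    stamp = datetime.fromtimestamp(seconds, tz=timezone.utc).replace(microsecond=nanos // 1000)
    return stamp.isoformat().replace("+00:00", "Z")

# The mount-cgroup exit event above:
print(exited_at_to_utc(1757548285, 199435112))  # -> 2025-09-10T23:51:25.199435Z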
Sep 10 23:51:26.517268 kubelet[3434]: E0910 23:51:26.517188 3434 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Sep 10 23:51:26.898978 containerd[1899]: time="2025-09-10T23:51:26.898902588Z" level=info msg="CreateContainer within sandbox \"4da7ba0a4779b7192d5d8fac9054b2c344e89003597096ec68367825e3b2ba0e\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Sep 10 23:51:26.935202 containerd[1899]: time="2025-09-10T23:51:26.933004249Z" level=info msg="Container 2678412e30d5bd3b5dc922b9ec1f852d5fc5b262037cbf7b4782473812db8932: CDI devices from CRI Config.CDIDevices: []" Sep 10 23:51:26.936233 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount624006361.mount: Deactivated successfully. Sep 10 23:51:26.954740 containerd[1899]: time="2025-09-10T23:51:26.954651109Z" level=info msg="CreateContainer within sandbox \"4da7ba0a4779b7192d5d8fac9054b2c344e89003597096ec68367825e3b2ba0e\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"2678412e30d5bd3b5dc922b9ec1f852d5fc5b262037cbf7b4782473812db8932\"" Sep 10 23:51:26.957515 containerd[1899]: time="2025-09-10T23:51:26.957452869Z" level=info msg="StartContainer for \"2678412e30d5bd3b5dc922b9ec1f852d5fc5b262037cbf7b4782473812db8932\"" Sep 10 23:51:26.964196 containerd[1899]: time="2025-09-10T23:51:26.964079869Z" level=info msg="connecting to shim 2678412e30d5bd3b5dc922b9ec1f852d5fc5b262037cbf7b4782473812db8932" address="unix:///run/containerd/s/837efcf98967c1cc12bf70c123ce147753b3cef9b2ded14f82e1bb85e2e3580d" protocol=ttrpc version=3 Sep 10 23:51:27.020028 systemd[1]: Started cri-containerd-2678412e30d5bd3b5dc922b9ec1f852d5fc5b262037cbf7b4782473812db8932.scope - libcontainer container 2678412e30d5bd3b5dc922b9ec1f852d5fc5b262037cbf7b4782473812db8932. Sep 10 23:51:27.109427 systemd[1]: cri-containerd-2678412e30d5bd3b5dc922b9ec1f852d5fc5b262037cbf7b4782473812db8932.scope: Deactivated successfully. Sep 10 23:51:27.116362 containerd[1899]: time="2025-09-10T23:51:27.116172250Z" level=info msg="received exit event container_id:\"2678412e30d5bd3b5dc922b9ec1f852d5fc5b262037cbf7b4782473812db8932\" id:\"2678412e30d5bd3b5dc922b9ec1f852d5fc5b262037cbf7b4782473812db8932\" pid:5343 exited_at:{seconds:1757548287 nanos:110900037}" Sep 10 23:51:27.117345 containerd[1899]: time="2025-09-10T23:51:27.117290998Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2678412e30d5bd3b5dc922b9ec1f852d5fc5b262037cbf7b4782473812db8932\" id:\"2678412e30d5bd3b5dc922b9ec1f852d5fc5b262037cbf7b4782473812db8932\" pid:5343 exited_at:{seconds:1757548287 nanos:110900037}" Sep 10 23:51:27.120929 containerd[1899]: time="2025-09-10T23:51:27.120660934Z" level=info msg="StartContainer for \"2678412e30d5bd3b5dc922b9ec1f852d5fc5b262037cbf7b4782473812db8932\" returns successfully" Sep 10 23:51:27.161977 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2678412e30d5bd3b5dc922b9ec1f852d5fc5b262037cbf7b4782473812db8932-rootfs.mount: Deactivated successfully. 
Sep 10 23:51:27.901139 containerd[1899]: time="2025-09-10T23:51:27.900931849Z" level=info msg="CreateContainer within sandbox \"4da7ba0a4779b7192d5d8fac9054b2c344e89003597096ec68367825e3b2ba0e\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Sep 10 23:51:27.929133 containerd[1899]: time="2025-09-10T23:51:27.928106030Z" level=info msg="Container d7610305d0801ae9d2822337ff1c0796d60bfd2b25837fcfda9efcb947afa460: CDI devices from CRI Config.CDIDevices: []" Sep 10 23:51:27.952363 containerd[1899]: time="2025-09-10T23:51:27.952306394Z" level=info msg="CreateContainer within sandbox \"4da7ba0a4779b7192d5d8fac9054b2c344e89003597096ec68367825e3b2ba0e\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"d7610305d0801ae9d2822337ff1c0796d60bfd2b25837fcfda9efcb947afa460\"" Sep 10 23:51:27.954960 containerd[1899]: time="2025-09-10T23:51:27.954905174Z" level=info msg="StartContainer for \"d7610305d0801ae9d2822337ff1c0796d60bfd2b25837fcfda9efcb947afa460\"" Sep 10 23:51:27.957713 containerd[1899]: time="2025-09-10T23:51:27.957086978Z" level=info msg="connecting to shim d7610305d0801ae9d2822337ff1c0796d60bfd2b25837fcfda9efcb947afa460" address="unix:///run/containerd/s/837efcf98967c1cc12bf70c123ce147753b3cef9b2ded14f82e1bb85e2e3580d" protocol=ttrpc version=3 Sep 10 23:51:28.010061 systemd[1]: Started cri-containerd-d7610305d0801ae9d2822337ff1c0796d60bfd2b25837fcfda9efcb947afa460.scope - libcontainer container d7610305d0801ae9d2822337ff1c0796d60bfd2b25837fcfda9efcb947afa460. Sep 10 23:51:28.064971 systemd[1]: cri-containerd-d7610305d0801ae9d2822337ff1c0796d60bfd2b25837fcfda9efcb947afa460.scope: Deactivated successfully. Sep 10 23:51:28.070763 containerd[1899]: time="2025-09-10T23:51:28.070174726Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d7610305d0801ae9d2822337ff1c0796d60bfd2b25837fcfda9efcb947afa460\" id:\"d7610305d0801ae9d2822337ff1c0796d60bfd2b25837fcfda9efcb947afa460\" pid:5382 exited_at:{seconds:1757548288 nanos:69595210}" Sep 10 23:51:28.070763 containerd[1899]: time="2025-09-10T23:51:28.070358290Z" level=info msg="received exit event container_id:\"d7610305d0801ae9d2822337ff1c0796d60bfd2b25837fcfda9efcb947afa460\" id:\"d7610305d0801ae9d2822337ff1c0796d60bfd2b25837fcfda9efcb947afa460\" pid:5382 exited_at:{seconds:1757548288 nanos:69595210}" Sep 10 23:51:28.085047 containerd[1899]: time="2025-09-10T23:51:28.085003294Z" level=info msg="StartContainer for \"d7610305d0801ae9d2822337ff1c0796d60bfd2b25837fcfda9efcb947afa460\" returns successfully" Sep 10 23:51:28.114357 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d7610305d0801ae9d2822337ff1c0796d60bfd2b25837fcfda9efcb947afa460-rootfs.mount: Deactivated successfully. 
Sep 10 23:51:28.680825 kubelet[3434]: I0910 23:51:28.680741 3434 setters.go:618] "Node became not ready" node="ip-172-31-30-159" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-10T23:51:28Z","lastTransitionTime":"2025-09-10T23:51:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Sep 10 23:51:28.927669 containerd[1899]: time="2025-09-10T23:51:28.927599055Z" level=info msg="CreateContainer within sandbox \"4da7ba0a4779b7192d5d8fac9054b2c344e89003597096ec68367825e3b2ba0e\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Sep 10 23:51:28.956814 containerd[1899]: time="2025-09-10T23:51:28.954076143Z" level=info msg="Container 0fadacf8a633a896a5bb3839f2c1834280047858d86078ecae2aa1775add4798: CDI devices from CRI Config.CDIDevices: []" Sep 10 23:51:28.976075 containerd[1899]: time="2025-09-10T23:51:28.975994335Z" level=info msg="CreateContainer within sandbox \"4da7ba0a4779b7192d5d8fac9054b2c344e89003597096ec68367825e3b2ba0e\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"0fadacf8a633a896a5bb3839f2c1834280047858d86078ecae2aa1775add4798\"" Sep 10 23:51:28.977900 containerd[1899]: time="2025-09-10T23:51:28.977856099Z" level=info msg="StartContainer for \"0fadacf8a633a896a5bb3839f2c1834280047858d86078ecae2aa1775add4798\"" Sep 10 23:51:28.982661 containerd[1899]: time="2025-09-10T23:51:28.982545591Z" level=info msg="connecting to shim 0fadacf8a633a896a5bb3839f2c1834280047858d86078ecae2aa1775add4798" address="unix:///run/containerd/s/837efcf98967c1cc12bf70c123ce147753b3cef9b2ded14f82e1bb85e2e3580d" protocol=ttrpc version=3 Sep 10 23:51:29.023999 systemd[1]: Started cri-containerd-0fadacf8a633a896a5bb3839f2c1834280047858d86078ecae2aa1775add4798.scope - libcontainer container 0fadacf8a633a896a5bb3839f2c1834280047858d86078ecae2aa1775add4798. Sep 10 23:51:29.098782 containerd[1899]: time="2025-09-10T23:51:29.098721563Z" level=info msg="StartContainer for \"0fadacf8a633a896a5bb3839f2c1834280047858d86078ecae2aa1775add4798\" returns successfully" Sep 10 23:51:29.226644 containerd[1899]: time="2025-09-10T23:51:29.226484640Z" level=info msg="TaskExit event in podsandbox handler container_id:\"0fadacf8a633a896a5bb3839f2c1834280047858d86078ecae2aa1775add4798\" id:\"3aa3af1d9fd9ac8fc0218a8298228312a77693abde0a99489b643752e9392b4c\" pid:5452 exited_at:{seconds:1757548289 nanos:225415392}" Sep 10 23:51:29.987768 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce)) Sep 10 23:51:31.756182 containerd[1899]: time="2025-09-10T23:51:31.756102533Z" level=info msg="TaskExit event in podsandbox handler container_id:\"0fadacf8a633a896a5bb3839f2c1834280047858d86078ecae2aa1775add4798\" id:\"627bccd2fe790a266a9658e2945577b3977ef70af8a795e9df4b9f1ff28ca006\" pid:5545 exit_status:1 exited_at:{seconds:1757548291 nanos:754263593}" Sep 10 23:51:33.970504 containerd[1899]: time="2025-09-10T23:51:33.970443032Z" level=info msg="TaskExit event in podsandbox handler container_id:\"0fadacf8a633a896a5bb3839f2c1834280047858d86078ecae2aa1775add4798\" id:\"b626b575fa65d10a3ecc01e9415a3b38b40af58e29c113b70776322a1936c18d\" pid:5890 exit_status:1 exited_at:{seconds:1757548293 nanos:969943340}" Sep 10 23:51:34.223590 (udev-worker)[5957]: Network interface NamePolicy= disabled on kernel command line. Sep 10 23:51:34.230591 (udev-worker)[5958]: Network interface NamePolicy= disabled on kernel command line. 
Sep 10 23:51:34.231635 systemd-networkd[1819]: lxc_health: Link UP Sep 10 23:51:34.242252 systemd-networkd[1819]: lxc_health: Gained carrier Sep 10 23:51:34.869949 kubelet[3434]: I0910 23:51:34.869805 3434 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-r2npt" podStartSLOduration=10.86977652 podStartE2EDuration="10.86977652s" podCreationTimestamp="2025-09-10 23:51:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-10 23:51:29.962056384 +0000 UTC m=+113.960657415" watchObservedRunningTime="2025-09-10 23:51:34.86977652 +0000 UTC m=+118.868377635" Sep 10 23:51:35.511921 systemd-networkd[1819]: lxc_health: Gained IPv6LL Sep 10 23:51:36.039096 update_engine[1867]: I20250910 23:51:36.039000 1867 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Sep 10 23:51:36.039096 update_engine[1867]: I20250910 23:51:36.039078 1867 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Sep 10 23:51:36.039731 update_engine[1867]: I20250910 23:51:36.039489 1867 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Sep 10 23:51:36.041940 update_engine[1867]: I20250910 23:51:36.041865 1867 omaha_request_params.cc:62] Current group set to beta Sep 10 23:51:36.042088 update_engine[1867]: I20250910 23:51:36.042040 1867 update_attempter.cc:499] Already updated boot flags. Skipping. Sep 10 23:51:36.042088 update_engine[1867]: I20250910 23:51:36.042063 1867 update_attempter.cc:643] Scheduling an action processor start. Sep 10 23:51:36.042181 update_engine[1867]: I20250910 23:51:36.042095 1867 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Sep 10 23:51:36.042181 update_engine[1867]: I20250910 23:51:36.042155 1867 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Sep 10 23:51:36.042291 update_engine[1867]: I20250910 23:51:36.042263 1867 omaha_request_action.cc:271] Posting an Omaha request to disabled Sep 10 23:51:36.042291 update_engine[1867]: I20250910 23:51:36.042280 1867 omaha_request_action.cc:272] Request: Sep 10 23:51:36.042291 update_engine[1867]: Sep 10 23:51:36.042291 update_engine[1867]: Sep 10 23:51:36.042291 update_engine[1867]: Sep 10 23:51:36.042291 update_engine[1867]: Sep 10 23:51:36.042291 update_engine[1867]: Sep 10 23:51:36.042291 update_engine[1867]: Sep 10 23:51:36.042291 update_engine[1867]: Sep 10 23:51:36.042291 update_engine[1867]: Sep 10 23:51:36.042745 update_engine[1867]: I20250910 23:51:36.042296 1867 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Sep 10 23:51:36.043426 locksmithd[1910]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Sep 10 23:51:36.045645 update_engine[1867]: I20250910 23:51:36.045567 1867 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Sep 10 23:51:36.046448 update_engine[1867]: I20250910 23:51:36.046370 1867 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
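A quick arithmetic cross-check on the pod_startup_latency_tracker entry above: the reported podStartSLOduration of 10.86977652s is exactly watchObservedRunningTime (23:51:34.86977652) minus podCreationTimestamp (23:51:24), consistent with both pulling timestamps being the zero value, i.e. no image pull was recorded. In Python, with the timestamps copied from the log and truncated to microseconds:

from datetime import datetime, timezone

created  = datetime(2025, 9, 10, 23, 51, 24, tzinfo=timezone.utc)          # podCreationTimestamp
observed = datetime(2025, 9, 10, 23, 51, 34, 869776, tzinfo=timezone.utc)  # watchObservedRunningTime

print((observed - created).total_seconds())  # -> 10.869776, i.e. the podStartSLOduration above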
Sep 10 23:51:36.079642 update_engine[1867]: E20250910 23:51:36.079553 1867 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Sep 10 23:51:36.080350 update_engine[1867]: I20250910 23:51:36.080258 1867 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Sep 10 23:51:36.274459 containerd[1899]: time="2025-09-10T23:51:36.274374415Z" level=info msg="StopPodSandbox for \"e573c23623bbcc1d9b5b4bfbb38e33ec47e4191c8cbdc3749c91390df3c47dd8\"" Sep 10 23:51:36.276095 containerd[1899]: time="2025-09-10T23:51:36.275969023Z" level=info msg="TearDown network for sandbox \"e573c23623bbcc1d9b5b4bfbb38e33ec47e4191c8cbdc3749c91390df3c47dd8\" successfully" Sep 10 23:51:36.276095 containerd[1899]: time="2025-09-10T23:51:36.276052471Z" level=info msg="StopPodSandbox for \"e573c23623bbcc1d9b5b4bfbb38e33ec47e4191c8cbdc3749c91390df3c47dd8\" returns successfully" Sep 10 23:51:36.277855 containerd[1899]: time="2025-09-10T23:51:36.277504939Z" level=info msg="RemovePodSandbox for \"e573c23623bbcc1d9b5b4bfbb38e33ec47e4191c8cbdc3749c91390df3c47dd8\"" Sep 10 23:51:36.278176 containerd[1899]: time="2025-09-10T23:51:36.278100091Z" level=info msg="Forcibly stopping sandbox \"e573c23623bbcc1d9b5b4bfbb38e33ec47e4191c8cbdc3749c91390df3c47dd8\"" Sep 10 23:51:36.278661 containerd[1899]: time="2025-09-10T23:51:36.278587303Z" level=info msg="TearDown network for sandbox \"e573c23623bbcc1d9b5b4bfbb38e33ec47e4191c8cbdc3749c91390df3c47dd8\" successfully" Sep 10 23:51:36.283157 containerd[1899]: time="2025-09-10T23:51:36.283078087Z" level=info msg="Ensure that sandbox e573c23623bbcc1d9b5b4bfbb38e33ec47e4191c8cbdc3749c91390df3c47dd8 in task-service has been cleanup successfully" Sep 10 23:51:36.297867 containerd[1899]: time="2025-09-10T23:51:36.297208891Z" level=info msg="RemovePodSandbox \"e573c23623bbcc1d9b5b4bfbb38e33ec47e4191c8cbdc3749c91390df3c47dd8\" returns successfully" Sep 10 23:51:36.300738 containerd[1899]: time="2025-09-10T23:51:36.300632503Z" level=info msg="StopPodSandbox for \"46beed377addb1dee1712bba9763950142e72951eee1204a68240a308ae77ea8\"" Sep 10 23:51:36.301280 containerd[1899]: time="2025-09-10T23:51:36.301122175Z" level=info msg="TearDown network for sandbox \"46beed377addb1dee1712bba9763950142e72951eee1204a68240a308ae77ea8\" successfully" Sep 10 23:51:36.301280 containerd[1899]: time="2025-09-10T23:51:36.301160959Z" level=info msg="StopPodSandbox for \"46beed377addb1dee1712bba9763950142e72951eee1204a68240a308ae77ea8\" returns successfully" Sep 10 23:51:36.304162 containerd[1899]: time="2025-09-10T23:51:36.303889579Z" level=info msg="RemovePodSandbox for \"46beed377addb1dee1712bba9763950142e72951eee1204a68240a308ae77ea8\"" Sep 10 23:51:36.304162 containerd[1899]: time="2025-09-10T23:51:36.303942883Z" level=info msg="Forcibly stopping sandbox \"46beed377addb1dee1712bba9763950142e72951eee1204a68240a308ae77ea8\"" Sep 10 23:51:36.304506 containerd[1899]: time="2025-09-10T23:51:36.304471135Z" level=info msg="TearDown network for sandbox \"46beed377addb1dee1712bba9763950142e72951eee1204a68240a308ae77ea8\" successfully" Sep 10 23:51:36.314967 containerd[1899]: time="2025-09-10T23:51:36.314255635Z" level=info msg="Ensure that sandbox 46beed377addb1dee1712bba9763950142e72951eee1204a68240a308ae77ea8 in task-service has been cleanup successfully" Sep 10 23:51:36.324943 containerd[1899]: time="2025-09-10T23:51:36.324887275Z" level=info msg="RemovePodSandbox \"46beed377addb1dee1712bba9763950142e72951eee1204a68240a308ae77ea8\" returns successfully" Sep 10 23:51:36.434211 
containerd[1899]: time="2025-09-10T23:51:36.434160668Z" level=info msg="TaskExit event in podsandbox handler container_id:\"0fadacf8a633a896a5bb3839f2c1834280047858d86078ecae2aa1775add4798\" id:\"522c37ee682885b40e4eb17e1b9974ad0a41f421fe9091ce85908cbbad38a0ac\" pid:5987 exited_at:{seconds:1757548296 nanos:432409904}" Sep 10 23:51:36.448938 kubelet[3434]: E0910 23:51:36.448742 3434 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:44416->127.0.0.1:42851: write tcp 127.0.0.1:44416->127.0.0.1:42851: write: broken pipe Sep 10 23:51:38.094151 ntpd[1857]: Listen normally on 15 lxc_health [fe80::d4f2:b0ff:fe64:a5cd%14]:123 Sep 10 23:51:38.094666 ntpd[1857]: 10 Sep 23:51:38 ntpd[1857]: Listen normally on 15 lxc_health [fe80::d4f2:b0ff:fe64:a5cd%14]:123 Sep 10 23:51:38.688891 containerd[1899]: time="2025-09-10T23:51:38.688820051Z" level=info msg="TaskExit event in podsandbox handler container_id:\"0fadacf8a633a896a5bb3839f2c1834280047858d86078ecae2aa1775add4798\" id:\"c2d564896cded83df9e7a40f027aa3fbadf046af34b2ab4dbc5fd088e056bce3\" pid:6016 exited_at:{seconds:1757548298 nanos:687060371}" Sep 10 23:51:40.947249 containerd[1899]: time="2025-09-10T23:51:40.946967114Z" level=info msg="TaskExit event in podsandbox handler container_id:\"0fadacf8a633a896a5bb3839f2c1834280047858d86078ecae2aa1775add4798\" id:\"5a44e01a3b65e3f9a02c1f8d2880619e501e22fe8f3e6319d58f08395f36fe7a\" pid:6042 exited_at:{seconds:1757548300 nanos:942894038}" Sep 10 23:51:40.953913 kubelet[3434]: E0910 23:51:40.953839 3434 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:44430->127.0.0.1:42851: write tcp 127.0.0.1:44430->127.0.0.1:42851: write: broken pipe Sep 10 23:51:40.991319 sshd[5262]: Connection closed by 139.178.68.195 port 59816 Sep 10 23:51:40.991976 sshd-session[5219]: pam_unix(sshd:session): session closed for user core Sep 10 23:51:41.000343 systemd[1]: sshd@29-172.31.30.159:22-139.178.68.195:59816.service: Deactivated successfully. Sep 10 23:51:41.006464 systemd[1]: session-30.scope: Deactivated successfully. Sep 10 23:51:41.013711 systemd-logind[1863]: Session 30 logged out. Waiting for processes to exit. Sep 10 23:51:41.019094 systemd-logind[1863]: Removed session 30. Sep 10 23:51:46.037604 update_engine[1867]: I20250910 23:51:46.036706 1867 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Sep 10 23:51:46.037604 update_engine[1867]: I20250910 23:51:46.037053 1867 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Sep 10 23:51:46.037604 update_engine[1867]: I20250910 23:51:46.037446 1867 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Sep 10 23:51:46.038541 update_engine[1867]: E20250910 23:51:46.038476 1867 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Sep 10 23:51:46.038616 update_engine[1867]: I20250910 23:51:46.038571 1867 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Sep 10 23:51:55.906783 systemd[1]: cri-containerd-5b8754b67a65bb2fc3033f113c04a48ef8a49bd16c63f1174e0f70bbc0cd673f.scope: Deactivated successfully. Sep 10 23:51:55.907387 systemd[1]: cri-containerd-5b8754b67a65bb2fc3033f113c04a48ef8a49bd16c63f1174e0f70bbc0cd673f.scope: Consumed 5.862s CPU time, 56.2M memory peak. 
Sep 10 23:51:55.915258 containerd[1899]: time="2025-09-10T23:51:55.915190253Z" level=info msg="received exit event container_id:\"5b8754b67a65bb2fc3033f113c04a48ef8a49bd16c63f1174e0f70bbc0cd673f\" id:\"5b8754b67a65bb2fc3033f113c04a48ef8a49bd16c63f1174e0f70bbc0cd673f\" pid:3049 exit_status:1 exited_at:{seconds:1757548315 nanos:912893369}" Sep 10 23:51:55.917060 containerd[1899]: time="2025-09-10T23:51:55.916928369Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5b8754b67a65bb2fc3033f113c04a48ef8a49bd16c63f1174e0f70bbc0cd673f\" id:\"5b8754b67a65bb2fc3033f113c04a48ef8a49bd16c63f1174e0f70bbc0cd673f\" pid:3049 exit_status:1 exited_at:{seconds:1757548315 nanos:912893369}" Sep 10 23:51:55.957742 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5b8754b67a65bb2fc3033f113c04a48ef8a49bd16c63f1174e0f70bbc0cd673f-rootfs.mount: Deactivated successfully. Sep 10 23:51:56.008360 kubelet[3434]: I0910 23:51:56.008309 3434 scope.go:117] "RemoveContainer" containerID="5b8754b67a65bb2fc3033f113c04a48ef8a49bd16c63f1174e0f70bbc0cd673f" Sep 10 23:51:56.015760 containerd[1899]: time="2025-09-10T23:51:56.013523533Z" level=info msg="CreateContainer within sandbox \"d99ed44062db5d9339bee86523c9f70826933fd6fe67de174766a53644fb6f7b\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Sep 10 23:51:56.033729 containerd[1899]: time="2025-09-10T23:51:56.032050993Z" level=info msg="Container 3cbcd065992260dbd25312d73b51e5cdd2710729a44bf50f0956c0764f5bdfc8: CDI devices from CRI Config.CDIDevices: []" Sep 10 23:51:56.033874 update_engine[1867]: I20250910 23:51:56.032737 1867 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Sep 10 23:51:56.033874 update_engine[1867]: I20250910 23:51:56.033039 1867 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Sep 10 23:51:56.033874 update_engine[1867]: I20250910 23:51:56.033438 1867 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Sep 10 23:51:56.035898 update_engine[1867]: E20250910 23:51:56.035828 1867 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Sep 10 23:51:56.036017 update_engine[1867]: I20250910 23:51:56.035933 1867 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Sep 10 23:51:56.041339 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3035932040.mount: Deactivated successfully. Sep 10 23:51:56.053363 containerd[1899]: time="2025-09-10T23:51:56.053286889Z" level=info msg="CreateContainer within sandbox \"d99ed44062db5d9339bee86523c9f70826933fd6fe67de174766a53644fb6f7b\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"3cbcd065992260dbd25312d73b51e5cdd2710729a44bf50f0956c0764f5bdfc8\"" Sep 10 23:51:56.054194 containerd[1899]: time="2025-09-10T23:51:56.054148741Z" level=info msg="StartContainer for \"3cbcd065992260dbd25312d73b51e5cdd2710729a44bf50f0956c0764f5bdfc8\"" Sep 10 23:51:56.056343 containerd[1899]: time="2025-09-10T23:51:56.056287141Z" level=info msg="connecting to shim 3cbcd065992260dbd25312d73b51e5cdd2710729a44bf50f0956c0764f5bdfc8" address="unix:///run/containerd/s/139459ee8625e52d3dbe1b1abca9663e1679ef2090f1a19883bbd38eaf0aaf54" protocol=ttrpc version=3 Sep 10 23:51:56.098007 systemd[1]: Started cri-containerd-3cbcd065992260dbd25312d73b51e5cdd2710729a44bf50f0956c0764f5bdfc8.scope - libcontainer container 3cbcd065992260dbd25312d73b51e5cdd2710729a44bf50f0956c0764f5bdfc8. 
Sep 10 23:51:56.189747 containerd[1899]: time="2025-09-10T23:51:56.189585866Z" level=info msg="StartContainer for \"3cbcd065992260dbd25312d73b51e5cdd2710729a44bf50f0956c0764f5bdfc8\" returns successfully" Sep 10 23:51:58.590052 kubelet[3434]: E0910 23:51:58.589510 3434 controller.go:195] "Failed to update lease" err="Put \"https://172.31.30.159:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-30-159?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Sep 10 23:51:59.278849 systemd[1]: cri-containerd-c84028be752530f783f4ea7e7cad561960dfa8e1522a9b08f0c2573fee80027e.scope: Deactivated successfully. Sep 10 23:51:59.279396 systemd[1]: cri-containerd-c84028be752530f783f4ea7e7cad561960dfa8e1522a9b08f0c2573fee80027e.scope: Consumed 4.080s CPU time, 20.8M memory peak. Sep 10 23:51:59.282317 containerd[1899]: time="2025-09-10T23:51:59.282240989Z" level=info msg="received exit event container_id:\"c84028be752530f783f4ea7e7cad561960dfa8e1522a9b08f0c2573fee80027e\" id:\"c84028be752530f783f4ea7e7cad561960dfa8e1522a9b08f0c2573fee80027e\" pid:3088 exit_status:1 exited_at:{seconds:1757548319 nanos:281785733}" Sep 10 23:51:59.284453 containerd[1899]: time="2025-09-10T23:51:59.284347769Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c84028be752530f783f4ea7e7cad561960dfa8e1522a9b08f0c2573fee80027e\" id:\"c84028be752530f783f4ea7e7cad561960dfa8e1522a9b08f0c2573fee80027e\" pid:3088 exit_status:1 exited_at:{seconds:1757548319 nanos:281785733}" Sep 10 23:51:59.323967 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c84028be752530f783f4ea7e7cad561960dfa8e1522a9b08f0c2573fee80027e-rootfs.mount: Deactivated successfully. Sep 10 23:52:00.028719 kubelet[3434]: I0910 23:52:00.027539 3434 scope.go:117] "RemoveContainer" containerID="c84028be752530f783f4ea7e7cad561960dfa8e1522a9b08f0c2573fee80027e" Sep 10 23:52:00.035087 containerd[1899]: time="2025-09-10T23:52:00.035015765Z" level=info msg="CreateContainer within sandbox \"c76d3a98021acc977b314391fe1f16dac5da4d279aa18f74e1d439aec1f6e328\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}" Sep 10 23:52:00.054036 containerd[1899]: time="2025-09-10T23:52:00.053968517Z" level=info msg="Container ab179efbc051f060bb8adc11272f51d87868a3a84b42fdafa8667427d8764423: CDI devices from CRI Config.CDIDevices: []" Sep 10 23:52:00.073990 containerd[1899]: time="2025-09-10T23:52:00.073912373Z" level=info msg="CreateContainer within sandbox \"c76d3a98021acc977b314391fe1f16dac5da4d279aa18f74e1d439aec1f6e328\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"ab179efbc051f060bb8adc11272f51d87868a3a84b42fdafa8667427d8764423\"" Sep 10 23:52:00.074863 containerd[1899]: time="2025-09-10T23:52:00.074811929Z" level=info msg="StartContainer for \"ab179efbc051f060bb8adc11272f51d87868a3a84b42fdafa8667427d8764423\"" Sep 10 23:52:00.079895 containerd[1899]: time="2025-09-10T23:52:00.079742057Z" level=info msg="connecting to shim ab179efbc051f060bb8adc11272f51d87868a3a84b42fdafa8667427d8764423" address="unix:///run/containerd/s/5b05ec4aee15e5416bd5bd1c94c88c0ba506d2aa7c9e8bb07f17c66cca3e5419" protocol=ttrpc version=3 Sep 10 23:52:00.128982 systemd[1]: Started cri-containerd-ab179efbc051f060bb8adc11272f51d87868a3a84b42fdafa8667427d8764423.scope - libcontainer container ab179efbc051f060bb8adc11272f51d87868a3a84b42fdafa8667427d8764423. 
Sep 10 23:52:00.212041 containerd[1899]: time="2025-09-10T23:52:00.211994490Z" level=info msg="StartContainer for \"ab179efbc051f060bb8adc11272f51d87868a3a84b42fdafa8667427d8764423\" returns successfully" Sep 10 23:52:06.038764 update_engine[1867]: I20250910 23:52:06.038254 1867 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Sep 10 23:52:06.038764 update_engine[1867]: I20250910 23:52:06.038599 1867 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Sep 10 23:52:06.039359 update_engine[1867]: I20250910 23:52:06.039053 1867 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Sep 10 23:52:06.060518 update_engine[1867]: E20250910 23:52:06.060433 1867 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Sep 10 23:52:06.060679 update_engine[1867]: I20250910 23:52:06.060539 1867 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Sep 10 23:52:06.060679 update_engine[1867]: I20250910 23:52:06.060559 1867 omaha_request_action.cc:617] Omaha request response: Sep 10 23:52:06.060822 update_engine[1867]: E20250910 23:52:06.060671 1867 omaha_request_action.cc:636] Omaha request network transfer failed. Sep 10 23:52:06.060822 update_engine[1867]: I20250910 23:52:06.060742 1867 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Sep 10 23:52:06.060822 update_engine[1867]: I20250910 23:52:06.060760 1867 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Sep 10 23:52:06.060822 update_engine[1867]: I20250910 23:52:06.060773 1867 update_attempter.cc:306] Processing Done. Sep 10 23:52:06.060822 update_engine[1867]: E20250910 23:52:06.060798 1867 update_attempter.cc:619] Update failed. Sep 10 23:52:06.060822 update_engine[1867]: I20250910 23:52:06.060813 1867 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Sep 10 23:52:06.061085 update_engine[1867]: I20250910 23:52:06.060827 1867 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Sep 10 23:52:06.061085 update_engine[1867]: I20250910 23:52:06.060842 1867 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. Sep 10 23:52:06.061210 update_engine[1867]: I20250910 23:52:06.061184 1867 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Sep 10 23:52:06.061266 update_engine[1867]: I20250910 23:52:06.061231 1867 omaha_request_action.cc:271] Posting an Omaha request to disabled Sep 10 23:52:06.061266 update_engine[1867]: I20250910 23:52:06.061249 1867 omaha_request_action.cc:272] Request: Sep 10 23:52:06.061266 update_engine[1867]: Sep 10 23:52:06.061266 update_engine[1867]: Sep 10 23:52:06.061266 update_engine[1867]: Sep 10 23:52:06.061266 update_engine[1867]: Sep 10 23:52:06.061266 update_engine[1867]: Sep 10 23:52:06.061266 update_engine[1867]: Sep 10 23:52:06.061586 update_engine[1867]: I20250910 23:52:06.061264 1867 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Sep 10 23:52:06.061586 update_engine[1867]: I20250910 23:52:06.061539 1867 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Sep 10 23:52:06.061897 locksmithd[1910]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Sep 10 23:52:06.062359 update_engine[1867]: I20250910 23:52:06.062175 1867 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Sep 10 23:52:06.063339 update_engine[1867]: E20250910 23:52:06.063279 1867 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Sep 10 23:52:06.063433 update_engine[1867]: I20250910 23:52:06.063361 1867 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Sep 10 23:52:06.063433 update_engine[1867]: I20250910 23:52:06.063381 1867 omaha_request_action.cc:617] Omaha request response: Sep 10 23:52:06.063433 update_engine[1867]: I20250910 23:52:06.063397 1867 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Sep 10 23:52:06.063433 update_engine[1867]: I20250910 23:52:06.063411 1867 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Sep 10 23:52:06.063433 update_engine[1867]: I20250910 23:52:06.063424 1867 update_attempter.cc:306] Processing Done. Sep 10 23:52:06.063653 update_engine[1867]: I20250910 23:52:06.063439 1867 update_attempter.cc:310] Error event sent. Sep 10 23:52:06.063653 update_engine[1867]: I20250910 23:52:06.063458 1867 update_check_scheduler.cc:74] Next update check in 45m45s Sep 10 23:52:06.064016 locksmithd[1910]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Sep 10 23:52:08.591043 kubelet[3434]: E0910 23:52:08.590532 3434 controller.go:195] "Failed to update lease" err="Put \"https://172.31.30.159:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-30-159?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
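The Omaha check fails here because the endpoint update_engine is posting to is literally the string "disabled", which curl can never resolve; after three libcurl retries the attempt is written off (error code 2000, kActionCodeOmahaErrorInHTTPResponse, payload error 37) and rescheduled. The announced "45m45s" puts the next attempt at roughly 00:37:51 UTC on Sep 11, assuming it counts from the 23:52:06 failure above:

from datetime import datetime, timedelta, timezone

failed_check = datetime(2025, 9, 10, 23, 52, 6, tzinfo=timezone.utc)   # last libcurl attempt above
next_check   = failed_check + timedelta(minutes=45, seconds=45)        # "Next update check in 45m45s"
print(next_check.isoformat())  # -> 2025-09-11T00:37:51+00:00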