Jan 23 17:56:18.174746 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083]
Jan 23 17:56:18.174790 kernel: Linux version 6.12.66-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT Fri Jan 23 16:10:02 -00 2026
Jan 23 17:56:18.174817 kernel: KASLR disabled due to lack of seed
Jan 23 17:56:18.174834 kernel: efi: EFI v2.7 by EDK II
Jan 23 17:56:18.174850 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7a734a98 MEMRESERVE=0x78551598
Jan 23 17:56:18.174866 kernel: secureboot: Secure boot disabled
Jan 23 17:56:18.174884 kernel: ACPI: Early table checksum verification disabled
Jan 23 17:56:18.174900 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON)
Jan 23 17:56:18.174916 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001 01000013)
Jan 23 17:56:18.174932 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001)
Jan 23 17:56:18.174947 kernel: ACPI: DSDT 0x0000000078640000 0013D2 (v02 AMAZON AMZNDSDT 00000001 AMZN 00000001)
Jan 23 17:56:18.174968 kernel: ACPI: FACS 0x0000000078630000 000040
Jan 23 17:56:18.174984 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Jan 23 17:56:18.175000 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001)
Jan 23 17:56:18.175018 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001)
Jan 23 17:56:18.175034 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001)
Jan 23 17:56:18.175055 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Jan 23 17:56:18.175072 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001)
Jan 23 17:56:18.175112 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001)
Jan 23 17:56:18.175130 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200
Jan 23 17:56:18.175147 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200')
Jan 23 17:56:18.175164 kernel: printk: legacy bootconsole [uart0] enabled
Jan 23 17:56:18.175181 kernel: ACPI: Use ACPI SPCR as default console: Yes
Jan 23 17:56:18.175198 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff]
Jan 23 17:56:18.175215 kernel: NODE_DATA(0) allocated [mem 0x4b584da00-0x4b5854fff]
Jan 23 17:56:18.175232 kernel: Zone ranges:
Jan 23 17:56:18.175249 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff]
Jan 23 17:56:18.175271 kernel: DMA32 empty
Jan 23 17:56:18.175288 kernel: Normal [mem 0x0000000100000000-0x00000004b5ffffff]
Jan 23 17:56:18.175321 kernel: Device empty
Jan 23 17:56:18.175340 kernel: Movable zone start for each node
Jan 23 17:56:18.175357 kernel: Early memory node ranges
Jan 23 17:56:18.175385 kernel: node 0: [mem 0x0000000040000000-0x000000007862ffff]
Jan 23 17:56:18.175403 kernel: node 0: [mem 0x0000000078630000-0x000000007863ffff]
Jan 23 17:56:18.175428 kernel: node 0: [mem 0x0000000078640000-0x00000000786effff]
Jan 23 17:56:18.175455 kernel: node 0: [mem 0x00000000786f0000-0x000000007872ffff]
Jan 23 17:56:18.175491 kernel: node 0: [mem 0x0000000078730000-0x000000007bbfffff]
Jan 23 17:56:18.175531 kernel: node 0: [mem 0x000000007bc00000-0x000000007bfdffff]
Jan 23 17:56:18.175561 kernel: node 0: [mem 0x000000007bfe0000-0x000000007fffffff]
Jan 23 17:56:18.175603 kernel: node 0: [mem 0x0000000400000000-0x00000004b5ffffff]
Jan 23 17:56:18.175647 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000004b5ffffff]
Jan 23 17:56:18.175684 kernel: On node 0, zone Normal: 8192 pages in unavailable ranges
Jan 23 17:56:18.175713 kernel: cma: Reserved 16 MiB at 0x000000007f000000 on node -1
Jan 23 17:56:18.175761 kernel: psci: probing for conduit method from ACPI.
Jan 23 17:56:18.175798 kernel: psci: PSCIv1.0 detected in firmware.
Jan 23 17:56:18.175817 kernel: psci: Using standard PSCI v0.2 function IDs
Jan 23 17:56:18.175845 kernel: psci: Trusted OS migration not required
Jan 23 17:56:18.175873 kernel: psci: SMC Calling Convention v1.1
Jan 23 17:56:18.175910 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000001)
Jan 23 17:56:18.175947 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168
Jan 23 17:56:18.175977 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096
Jan 23 17:56:18.176014 kernel: pcpu-alloc: [0] 0 [0] 1
Jan 23 17:56:18.176044 kernel: Detected PIPT I-cache on CPU0
Jan 23 17:56:18.180666 kernel: CPU features: detected: GIC system register CPU interface
Jan 23 17:56:18.180716 kernel: CPU features: detected: Spectre-v2
Jan 23 17:56:18.180745 kernel: CPU features: detected: Spectre-v3a
Jan 23 17:56:18.180763 kernel: CPU features: detected: Spectre-BHB
Jan 23 17:56:18.180780 kernel: CPU features: detected: ARM erratum 1742098
Jan 23 17:56:18.180798 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923
Jan 23 17:56:18.180815 kernel: alternatives: applying boot alternatives
Jan 23 17:56:18.180835 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=5fc6d8e43735a6d26d13c2f5b234025ac82c601a45144671feeb457ddade8f9d
Jan 23 17:56:18.180855 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 23 17:56:18.180873 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 23 17:56:18.180890 kernel: Fallback order for Node 0: 0
Jan 23 17:56:18.180907 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1007616
Jan 23 17:56:18.180924 kernel: Policy zone: Normal
Jan 23 17:56:18.180946 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 23 17:56:18.180963 kernel: software IO TLB: area num 2.
Jan 23 17:56:18.180981 kernel: software IO TLB: mapped [mem 0x0000000074551000-0x0000000078551000] (64MB)
Jan 23 17:56:18.180998 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jan 23 17:56:18.181016 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 23 17:56:18.181034 kernel: rcu: RCU event tracing is enabled.
Jan 23 17:56:18.181052 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jan 23 17:56:18.181069 kernel: Trampoline variant of Tasks RCU enabled.
Jan 23 17:56:18.181107 kernel: Tracing variant of Tasks RCU enabled.
Jan 23 17:56:18.181127 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 23 17:56:18.181145 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jan 23 17:56:18.181168 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 23 17:56:18.181186 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 23 17:56:18.181204 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Jan 23 17:56:18.181221 kernel: GICv3: 96 SPIs implemented
Jan 23 17:56:18.181239 kernel: GICv3: 0 Extended SPIs implemented
Jan 23 17:56:18.181257 kernel: Root IRQ handler: gic_handle_irq
Jan 23 17:56:18.181274 kernel: GICv3: GICv3 features: 16 PPIs
Jan 23 17:56:18.181291 kernel: GICv3: GICD_CTRL.DS=1, SCR_EL3.FIQ=0
Jan 23 17:56:18.181309 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000
Jan 23 17:56:18.181326 kernel: ITS [mem 0x10080000-0x1009ffff]
Jan 23 17:56:18.181344 kernel: ITS@0x0000000010080000: allocated 8192 Devices @4000f0000 (indirect, esz 8, psz 64K, shr 1)
Jan 23 17:56:18.181362 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @400100000 (flat, esz 8, psz 64K, shr 1)
Jan 23 17:56:18.181384 kernel: GICv3: using LPI property table @0x0000000400110000
Jan 23 17:56:18.181401 kernel: ITS: Using hypervisor restricted LPI range [128]
Jan 23 17:56:18.181419 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000400120000
Jan 23 17:56:18.181437 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 23 17:56:18.181454 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt).
Jan 23 17:56:18.181471 kernel: clocksource: arch_sys_counter: mask: 0x1ffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns
Jan 23 17:56:18.181489 kernel: sched_clock: 57 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns
Jan 23 17:56:18.181506 kernel: Console: colour dummy device 80x25
Jan 23 17:56:18.181524 kernel: printk: legacy console [tty1] enabled
Jan 23 17:56:18.181542 kernel: ACPI: Core revision 20240827
Jan 23 17:56:18.181560 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 166.66 BogoMIPS (lpj=83333)
Jan 23 17:56:18.181583 kernel: pid_max: default: 32768 minimum: 301
Jan 23 17:56:18.181601 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Jan 23 17:56:18.181618 kernel: landlock: Up and running.
Jan 23 17:56:18.181636 kernel: SELinux: Initializing.
Jan 23 17:56:18.181653 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 23 17:56:18.181671 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 23 17:56:18.181689 kernel: rcu: Hierarchical SRCU implementation.
Jan 23 17:56:18.181745 kernel: rcu: Max phase no-delay instances is 400.
Jan 23 17:56:18.181770 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Jan 23 17:56:18.181788 kernel: Remapping and enabling EFI services.
Jan 23 17:56:18.181806 kernel: smp: Bringing up secondary CPUs ...
Jan 23 17:56:18.181823 kernel: Detected PIPT I-cache on CPU1
Jan 23 17:56:18.181841 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000
Jan 23 17:56:18.181858 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000400130000
Jan 23 17:56:18.181876 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083]
Jan 23 17:56:18.181893 kernel: smp: Brought up 1 node, 2 CPUs
Jan 23 17:56:18.181911 kernel: SMP: Total of 2 processors activated.
Jan 23 17:56:18.181932 kernel: CPU: All CPU(s) started at EL1
Jan 23 17:56:18.181961 kernel: CPU features: detected: 32-bit EL0 Support
Jan 23 17:56:18.181979 kernel: CPU features: detected: 32-bit EL1 Support
Jan 23 17:56:18.182001 kernel: CPU features: detected: CRC32 instructions
Jan 23 17:56:18.182019 kernel: alternatives: applying system-wide alternatives
Jan 23 17:56:18.182038 kernel: Memory: 3796332K/4030464K available (11200K kernel code, 2458K rwdata, 9088K rodata, 39552K init, 1038K bss, 212788K reserved, 16384K cma-reserved)
Jan 23 17:56:18.182057 kernel: devtmpfs: initialized
Jan 23 17:56:18.182076 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 23 17:56:18.182125 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jan 23 17:56:18.182144 kernel: 16880 pages in range for non-PLT usage
Jan 23 17:56:18.182162 kernel: 508400 pages in range for PLT usage
Jan 23 17:56:18.182180 kernel: pinctrl core: initialized pinctrl subsystem
Jan 23 17:56:18.182198 kernel: SMBIOS 3.0.0 present.
Jan 23 17:56:18.182218 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018
Jan 23 17:56:18.182236 kernel: DMI: Memory slots populated: 0/0
Jan 23 17:56:18.182254 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 23 17:56:18.182273 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Jan 23 17:56:18.182296 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jan 23 17:56:18.182315 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jan 23 17:56:18.182333 kernel: audit: initializing netlink subsys (disabled)
Jan 23 17:56:18.182351 kernel: audit: type=2000 audit(0.230:1): state=initialized audit_enabled=0 res=1
Jan 23 17:56:18.182369 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 23 17:56:18.182387 kernel: cpuidle: using governor menu
Jan 23 17:56:18.182406 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Jan 23 17:56:18.182424 kernel: ASID allocator initialised with 65536 entries
Jan 23 17:56:18.182442 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 23 17:56:18.182465 kernel: Serial: AMBA PL011 UART driver
Jan 23 17:56:18.182483 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 23 17:56:18.182502 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Jan 23 17:56:18.182520 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Jan 23 17:56:18.182538 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Jan 23 17:56:18.182557 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 23 17:56:18.182575 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Jan 23 17:56:18.182594 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Jan 23 17:56:18.182612 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Jan 23 17:56:18.182634 kernel: ACPI: Added _OSI(Module Device)
Jan 23 17:56:18.182653 kernel: ACPI: Added _OSI(Processor Device)
Jan 23 17:56:18.182671 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 23 17:56:18.182689 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 23 17:56:18.182708 kernel: ACPI: Interpreter enabled
Jan 23 17:56:18.182726 kernel: ACPI: Using GIC for interrupt routing
Jan 23 17:56:18.182744 kernel: ACPI: MCFG table detected, 1 entries
Jan 23 17:56:18.182762 kernel: ACPI: CPU0 has been hot-added
Jan 23 17:56:18.182781 kernel: ACPI: CPU1 has been hot-added
Jan 23 17:56:18.182803 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00])
Jan 23 17:56:18.184141 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jan 23 17:56:18.184398 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Jan 23 17:56:18.184585 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Jan 23 17:56:18.184770 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x200fffff] reserved by PNP0C02:00
Jan 23 17:56:18.184958 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x200fffff] for [bus 00]
Jan 23 17:56:18.184985 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io 0x0000-0xffff window]
Jan 23 17:56:18.185031 kernel: acpiphp: Slot [1] registered
Jan 23 17:56:18.185052 kernel: acpiphp: Slot [2] registered
Jan 23 17:56:18.185071 kernel: acpiphp: Slot [3] registered
Jan 23 17:56:18.187169 kernel: acpiphp: Slot [4] registered
Jan 23 17:56:18.187199 kernel: acpiphp: Slot [5] registered
Jan 23 17:56:18.187219 kernel: acpiphp: Slot [6] registered
Jan 23 17:56:18.187238 kernel: acpiphp: Slot [7] registered
Jan 23 17:56:18.187259 kernel: acpiphp: Slot [8] registered
Jan 23 17:56:18.187278 kernel: acpiphp: Slot [9] registered
Jan 23 17:56:18.187297 kernel: acpiphp: Slot [10] registered
Jan 23 17:56:18.187327 kernel: acpiphp: Slot [11] registered
Jan 23 17:56:18.187347 kernel: acpiphp: Slot [12] registered
Jan 23 17:56:18.187366 kernel: acpiphp: Slot [13] registered
Jan 23 17:56:18.187385 kernel: acpiphp: Slot [14] registered
Jan 23 17:56:18.187404 kernel: acpiphp: Slot [15] registered
Jan 23 17:56:18.187423 kernel: acpiphp: Slot [16] registered
Jan 23 17:56:18.187441 kernel: acpiphp: Slot [17] registered
Jan 23 17:56:18.187460 kernel: acpiphp: Slot [18] registered
Jan 23 17:56:18.187478 kernel: acpiphp: Slot [19] registered
Jan 23 17:56:18.187500 kernel: acpiphp: Slot [20] registered
Jan 23 17:56:18.187518 kernel: acpiphp: Slot [21] registered
Jan 23 17:56:18.187536 kernel: acpiphp: Slot [22] registered
Jan 23 17:56:18.187555 kernel: acpiphp: Slot [23] registered
Jan 23 17:56:18.187573 kernel: acpiphp: Slot [24] registered
Jan 23 17:56:18.187591 kernel: acpiphp: Slot [25] registered
Jan 23 17:56:18.187610 kernel: acpiphp: Slot [26] registered
Jan 23 17:56:18.187628 kernel: acpiphp: Slot [27] registered
Jan 23 17:56:18.187646 kernel: acpiphp: Slot [28] registered
Jan 23 17:56:18.187664 kernel: acpiphp: Slot [29] registered
Jan 23 17:56:18.187686 kernel: acpiphp: Slot [30] registered
Jan 23 17:56:18.187705 kernel: acpiphp: Slot [31] registered
Jan 23 17:56:18.187742 kernel: PCI host bridge to bus 0000:00
Jan 23 17:56:18.188037 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window]
Jan 23 17:56:18.188280 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Jan 23 17:56:18.196210 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window]
Jan 23 17:56:18.196526 kernel: pci_bus 0000:00: root bus resource [bus 00]
Jan 23 17:56:18.196954 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000 conventional PCI endpoint
Jan 23 17:56:18.197365 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003 conventional PCI endpoint
Jan 23 17:56:18.197580 kernel: pci 0000:00:01.0: BAR 0 [mem 0x80118000-0x80118fff]
Jan 23 17:56:18.197827 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802 PCIe Root Complex Integrated Endpoint
Jan 23 17:56:18.198052 kernel: pci 0000:00:04.0: BAR 0 [mem 0x80114000-0x80117fff]
Jan 23 17:56:18.199341 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold
Jan 23 17:56:18.199565 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000 PCIe Root Complex Integrated Endpoint
Jan 23 17:56:18.199778 kernel: pci 0000:00:05.0: BAR 0 [mem 0x80110000-0x80113fff]
Jan 23 17:56:18.199973 kernel: pci 0000:00:05.0: BAR 2 [mem 0x80000000-0x800fffff pref]
Jan 23 17:56:18.200193 kernel: pci 0000:00:05.0: BAR 4 [mem 0x80100000-0x8010ffff]
Jan 23 17:56:18.200391 kernel: pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold
Jan 23 17:56:18.200566 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window]
Jan 23 17:56:18.200742 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Jan 23 17:56:18.200919 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window]
Jan 23 17:56:18.200944 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Jan 23 17:56:18.200963 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Jan 23 17:56:18.200982 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Jan 23 17:56:18.201000 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Jan 23 17:56:18.201019 kernel: iommu: Default domain type: Translated
Jan 23 17:56:18.201038 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Jan 23 17:56:18.201056 kernel: efivars: Registered efivars operations
Jan 23 17:56:18.201074 kernel: vgaarb: loaded
Jan 23 17:56:18.201134 kernel: clocksource: Switched to clocksource arch_sys_counter
Jan 23 17:56:18.201154 kernel: VFS: Disk quotas dquot_6.6.0
Jan 23 17:56:18.201173 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 23 17:56:18.201192 kernel: pnp: PnP ACPI init
Jan 23 17:56:18.201405 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved
Jan 23 17:56:18.201433 kernel: pnp: PnP ACPI: found 1 devices
Jan 23 17:56:18.201451 kernel: NET: Registered PF_INET protocol family
Jan 23 17:56:18.201470 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 23 17:56:18.201494 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jan 23 17:56:18.201512 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 23 17:56:18.201531 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 23 17:56:18.201550 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jan 23 17:56:18.201568 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jan 23 17:56:18.201586 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 23 17:56:18.201604 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 23 17:56:18.201623 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 23 17:56:18.201641 kernel: PCI: CLS 0 bytes, default 64
Jan 23 17:56:18.201662 kernel: kvm [1]: HYP mode not available
Jan 23 17:56:18.201681 kernel: Initialise system trusted keyrings
Jan 23 17:56:18.201699 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jan 23 17:56:18.201717 kernel: Key type asymmetric registered
Jan 23 17:56:18.201735 kernel: Asymmetric key parser 'x509' registered
Jan 23 17:56:18.201754 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Jan 23 17:56:18.201772 kernel: io scheduler mq-deadline registered
Jan 23 17:56:18.201790 kernel: io scheduler kyber registered
Jan 23 17:56:18.201808 kernel: io scheduler bfq registered
Jan 23 17:56:18.202017 kernel: pl061_gpio ARMH0061:00: PL061 GPIO chip registered
Jan 23 17:56:18.202043 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Jan 23 17:56:18.202062 kernel: ACPI: button: Power Button [PWRB]
Jan 23 17:56:18.202612 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input1
Jan 23 17:56:18.202638 kernel: ACPI: button: Sleep Button [SLPB]
Jan 23 17:56:18.202657 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 23 17:56:18.202677 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37
Jan 23 17:56:18.202887 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012)
Jan 23 17:56:18.202919 kernel: printk: legacy console [ttyS0] disabled
Jan 23 17:56:18.202938 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A
Jan 23 17:56:18.202957 kernel: printk: legacy console [ttyS0] enabled
Jan 23 17:56:18.202975 kernel: printk: legacy bootconsole [uart0] disabled
Jan 23 17:56:18.202993 kernel: thunder_xcv, ver 1.0
Jan 23 17:56:18.203011 kernel: thunder_bgx, ver 1.0
Jan 23 17:56:18.203029 kernel: nicpf, ver 1.0
Jan 23 17:56:18.203047 kernel: nicvf, ver 1.0
Jan 23 17:56:18.203350 kernel: rtc-efi rtc-efi.0: registered as rtc0
Jan 23 17:56:18.203539 kernel: rtc-efi rtc-efi.0: setting system clock to 2026-01-23T17:56:17 UTC (1769190977)
Jan 23 17:56:18.203564 kernel: hid: raw HID events driver (C) Jiri Kosina
Jan 23 17:56:18.203584 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 (0,80000003) counters available
Jan 23 17:56:18.203603 kernel: NET: Registered PF_INET6 protocol family
Jan 23 17:56:18.203621 kernel: watchdog: NMI not fully supported
Jan 23 17:56:18.203639 kernel: watchdog: Hard watchdog permanently disabled
Jan 23 17:56:18.203657 kernel: Segment Routing with IPv6
Jan 23 17:56:18.203675 kernel: In-situ OAM (IOAM) with IPv6
Jan 23 17:56:18.203694 kernel: NET: Registered PF_PACKET protocol family
Jan 23 17:56:18.203717 kernel: Key type dns_resolver registered
Jan 23 17:56:18.203759 kernel: registered taskstats version 1
Jan 23 17:56:18.203778 kernel: Loading compiled-in X.509 certificates
Jan 23 17:56:18.203797 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.66-flatcar: 3b281aa2bfe49764dd224485ec54e6070c82b8fb'
Jan 23 17:56:18.203815 kernel: Demotion targets for Node 0: null
Jan 23 17:56:18.203833 kernel: Key type .fscrypt registered
Jan 23 17:56:18.203851 kernel: Key type fscrypt-provisioning registered
Jan 23 17:56:18.203869 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 23 17:56:18.203887 kernel: ima: Allocated hash algorithm: sha1
Jan 23 17:56:18.203911 kernel: ima: No architecture policies found
Jan 23 17:56:18.203930 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Jan 23 17:56:18.203948 kernel: clk: Disabling unused clocks
Jan 23 17:56:18.203966 kernel: PM: genpd: Disabling unused power domains
Jan 23 17:56:18.203984 kernel: Warning: unable to open an initial console.
Jan 23 17:56:18.204002 kernel: Freeing unused kernel memory: 39552K
Jan 23 17:56:18.204021 kernel: Run /init as init process
Jan 23 17:56:18.204039 kernel: with arguments:
Jan 23 17:56:18.204057 kernel: /init
Jan 23 17:56:18.204117 kernel: with environment:
Jan 23 17:56:18.204138 kernel: HOME=/
Jan 23 17:56:18.204158 kernel: TERM=linux
Jan 23 17:56:18.204179 systemd[1]: Successfully made /usr/ read-only.
Jan 23 17:56:18.204205 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jan 23 17:56:18.204227 systemd[1]: Detected virtualization amazon.
Jan 23 17:56:18.204247 systemd[1]: Detected architecture arm64.
Jan 23 17:56:18.204274 systemd[1]: Running in initrd.
Jan 23 17:56:18.204294 systemd[1]: No hostname configured, using default hostname.
Jan 23 17:56:18.204314 systemd[1]: Hostname set to .
Jan 23 17:56:18.204333 systemd[1]: Initializing machine ID from VM UUID.
Jan 23 17:56:18.204352 systemd[1]: Queued start job for default target initrd.target.
Jan 23 17:56:18.204372 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 23 17:56:18.204391 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 23 17:56:18.204411 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 23 17:56:18.204435 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 23 17:56:18.204455 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 23 17:56:18.204476 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 23 17:56:18.204497 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 23 17:56:18.204517 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 23 17:56:18.204537 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 23 17:56:18.204556 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 23 17:56:18.204579 systemd[1]: Reached target paths.target - Path Units.
Jan 23 17:56:18.204599 systemd[1]: Reached target slices.target - Slice Units.
Jan 23 17:56:18.204618 systemd[1]: Reached target swap.target - Swaps.
Jan 23 17:56:18.204638 systemd[1]: Reached target timers.target - Timer Units.
Jan 23 17:56:18.204657 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 23 17:56:18.204676 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 23 17:56:18.204697 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 23 17:56:18.204716 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Jan 23 17:56:18.204737 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 23 17:56:18.204761 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 23 17:56:18.204781 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 23 17:56:18.204801 systemd[1]: Reached target sockets.target - Socket Units.
Jan 23 17:56:18.204822 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 23 17:56:18.204843 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 23 17:56:18.204863 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 23 17:56:18.204885 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Jan 23 17:56:18.204905 systemd[1]: Starting systemd-fsck-usr.service...
Jan 23 17:56:18.204930 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 23 17:56:18.204952 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 23 17:56:18.204972 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 23 17:56:18.204992 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 23 17:56:18.205014 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 23 17:56:18.205040 systemd[1]: Finished systemd-fsck-usr.service.
Jan 23 17:56:18.205060 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 23 17:56:18.205163 systemd-journald[260]: Collecting audit messages is disabled.
Jan 23 17:56:18.205216 kernel: Bridge firewalling registered
Jan 23 17:56:18.205238 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 23 17:56:18.205260 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 23 17:56:18.205280 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 23 17:56:18.205301 systemd-journald[260]: Journal started
Jan 23 17:56:18.205340 systemd-journald[260]: Runtime Journal (/run/log/journal/ec20923837ffe3bb90d58b634a320f12) is 8M, max 75.3M, 67.3M free.
Jan 23 17:56:18.132677 systemd-modules-load[261]: Inserted module 'overlay'
Jan 23 17:56:18.168974 systemd-modules-load[261]: Inserted module 'br_netfilter'
Jan 23 17:56:18.222265 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 23 17:56:18.232377 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 23 17:56:18.244540 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 23 17:56:18.249140 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 23 17:56:18.262300 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 23 17:56:18.270542 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 23 17:56:18.285284 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 23 17:56:18.291568 systemd-tmpfiles[280]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Jan 23 17:56:18.308250 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 23 17:56:18.320646 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 23 17:56:18.340196 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 23 17:56:18.361127 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 23 17:56:18.369924 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 23 17:56:18.416302 dracut-cmdline[303]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=5fc6d8e43735a6d26d13c2f5b234025ac82c601a45144671feeb457ddade8f9d
Jan 23 17:56:18.451834 systemd-resolved[291]: Positive Trust Anchors:
Jan 23 17:56:18.452401 systemd-resolved[291]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 23 17:56:18.452464 systemd-resolved[291]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 23 17:56:18.614122 kernel: SCSI subsystem initialized
Jan 23 17:56:18.622123 kernel: Loading iSCSI transport class v2.0-870.
Jan 23 17:56:18.634124 kernel: iscsi: registered transport (tcp)
Jan 23 17:56:18.657418 kernel: iscsi: registered transport (qla4xxx)
Jan 23 17:56:18.657503 kernel: QLogic iSCSI HBA Driver
Jan 23 17:56:18.694283 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jan 23 17:56:18.724158 kernel: random: crng init done
Jan 23 17:56:18.724485 systemd-resolved[291]: Defaulting to hostname 'linux'.
Jan 23 17:56:18.727351 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 23 17:56:18.733282 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 23 17:56:18.749596 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 23 17:56:18.754224 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 23 17:56:18.837277 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 23 17:56:18.845717 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 23 17:56:18.929131 kernel: raid6: neonx8 gen() 6558 MB/s Jan 23 17:56:18.946122 kernel: raid6: neonx4 gen() 6571 MB/s Jan 23 17:56:18.963120 kernel: raid6: neonx2 gen() 5433 MB/s Jan 23 17:56:18.980125 kernel: raid6: neonx1 gen() 3875 MB/s Jan 23 17:56:18.997122 kernel: raid6: int64x8 gen() 3626 MB/s Jan 23 17:56:19.014122 kernel: raid6: int64x4 gen() 3657 MB/s Jan 23 17:56:19.031127 kernel: raid6: int64x2 gen() 3342 MB/s Jan 23 17:56:19.049206 kernel: raid6: int64x1 gen() 2762 MB/s Jan 23 17:56:19.049245 kernel: raid6: using algorithm neonx4 gen() 6571 MB/s Jan 23 17:56:19.068247 kernel: raid6: .... xor() 4448 MB/s, rmw enabled Jan 23 17:56:19.068288 kernel: raid6: using neon recovery algorithm Jan 23 17:56:19.076127 kernel: xor: measuring software checksum speed Jan 23 17:56:19.078845 kernel: 8regs : 9601 MB/sec Jan 23 17:56:19.078881 kernel: 32regs : 10512 MB/sec Jan 23 17:56:19.081807 kernel: arm64_neon : 8633 MB/sec Jan 23 17:56:19.081844 kernel: xor: using function: 32regs (10512 MB/sec) Jan 23 17:56:19.183152 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 23 17:56:19.196037 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 23 17:56:19.208801 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 23 17:56:19.276217 systemd-udevd[510]: Using default interface naming scheme 'v255'. 
Jan 23 17:56:19.286689 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 23 17:56:19.294919 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 23 17:56:19.335780 dracut-pre-trigger[516]: rd.md=0: removing MD RAID activation Jan 23 17:56:19.381681 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 23 17:56:19.384283 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 23 17:56:19.513542 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 23 17:56:19.516220 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 23 17:56:19.696116 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35 Jan 23 17:56:19.696195 kernel: nvme nvme0: pci function 0000:00:04.0 Jan 23 17:56:19.699901 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Jan 23 17:56:19.699946 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012) Jan 23 17:56:19.711122 kernel: ena 0000:00:05.0: ENA device version: 0.10 Jan 23 17:56:19.711456 kernel: nvme nvme0: 2/0/0 default/read/poll queues Jan 23 17:56:19.711694 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1 Jan 23 17:56:19.712385 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 23 17:56:19.713493 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 23 17:56:19.725286 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 23 17:56:19.725342 kernel: GPT:9289727 != 33554431 Jan 23 17:56:19.725367 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 23 17:56:19.726205 kernel: GPT:9289727 != 33554431 Jan 23 17:56:19.727369 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 23 17:56:19.728388 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jan 23 17:56:19.728572 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... 
Jan 23 17:56:19.748246 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80110000, mac addr 06:c1:de:15:e7:51 Jan 23 17:56:19.736772 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 23 17:56:19.744521 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Jan 23 17:56:19.757497 (udev-worker)[581]: Network interface NamePolicy= disabled on kernel command line. Jan 23 17:56:19.791288 kernel: nvme nvme0: using unchecked data buffer Jan 23 17:56:19.809585 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 23 17:56:19.913306 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM. Jan 23 17:56:19.953996 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT. Jan 23 17:56:20.002871 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 23 17:56:20.046913 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Jan 23 17:56:20.065868 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A. Jan 23 17:56:20.072258 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A. Jan 23 17:56:20.075501 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 23 17:56:20.087165 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 23 17:56:20.092906 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 23 17:56:20.098970 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 23 17:56:20.104045 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 23 17:56:20.133379 disk-uuid[692]: Primary Header is updated. 
Jan 23 17:56:20.133379 disk-uuid[692]: Secondary Entries is updated. Jan 23 17:56:20.133379 disk-uuid[692]: Secondary Header is updated. Jan 23 17:56:20.146110 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jan 23 17:56:20.153625 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 23 17:56:21.175152 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jan 23 17:56:21.184719 disk-uuid[695]: The operation has completed successfully. Jan 23 17:56:21.412350 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 23 17:56:21.415886 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 23 17:56:21.523517 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 23 17:56:21.550795 sh[960]: Success Jan 23 17:56:21.580229 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jan 23 17:56:21.580335 kernel: device-mapper: uevent: version 1.0.3 Jan 23 17:56:21.580379 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Jan 23 17:56:21.600124 kernel: device-mapper: verity: sha256 using shash "sha256-ce" Jan 23 17:56:21.714354 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 23 17:56:21.718920 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 23 17:56:21.737230 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Jan 23 17:56:21.756112 kernel: BTRFS: device fsid 8784b097-3924-47e8-98b3-06e8cbe78a64 devid 1 transid 37 /dev/mapper/usr (254:0) scanned by mount (983) Jan 23 17:56:21.760700 kernel: BTRFS info (device dm-0): first mount of filesystem 8784b097-3924-47e8-98b3-06e8cbe78a64 Jan 23 17:56:21.760751 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Jan 23 17:56:21.921712 kernel: BTRFS info (device dm-0): enabling ssd optimizations Jan 23 17:56:21.921786 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 23 17:56:21.923095 kernel: BTRFS info (device dm-0): enabling free space tree Jan 23 17:56:21.950923 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 23 17:56:21.958257 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Jan 23 17:56:21.964073 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 23 17:56:21.973356 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 23 17:56:21.980254 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 23 17:56:22.028120 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (1008) Jan 23 17:56:22.032758 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem fef013c8-c90f-4bd4-8573-9f69d2a021ca Jan 23 17:56:22.033279 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Jan 23 17:56:22.042421 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Jan 23 17:56:22.042494 kernel: BTRFS info (device nvme0n1p6): enabling free space tree Jan 23 17:56:22.051155 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem fef013c8-c90f-4bd4-8573-9f69d2a021ca Jan 23 17:56:22.054246 systemd[1]: Finished ignition-setup.service - Ignition (setup). 
Jan 23 17:56:22.060168 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 23 17:56:22.178020 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 23 17:56:22.187510 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 23 17:56:22.259155 systemd-networkd[1167]: lo: Link UP Jan 23 17:56:22.259178 systemd-networkd[1167]: lo: Gained carrier Jan 23 17:56:22.265316 systemd-networkd[1167]: Enumeration completed Jan 23 17:56:22.265745 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 23 17:56:22.266570 systemd-networkd[1167]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 23 17:56:22.266577 systemd-networkd[1167]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 23 17:56:22.272746 systemd[1]: Reached target network.target - Network. Jan 23 17:56:22.273981 systemd-networkd[1167]: eth0: Link UP Jan 23 17:56:22.273989 systemd-networkd[1167]: eth0: Gained carrier Jan 23 17:56:22.274012 systemd-networkd[1167]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 23 17:56:22.315164 systemd-networkd[1167]: eth0: DHCPv4 address 172.31.24.80/20, gateway 172.31.16.1 acquired from 172.31.16.1 Jan 23 17:56:22.649775 ignition[1061]: Ignition 2.22.0 Jan 23 17:56:22.650144 ignition[1061]: Stage: fetch-offline Jan 23 17:56:22.651027 ignition[1061]: no configs at "/usr/lib/ignition/base.d" Jan 23 17:56:22.651050 ignition[1061]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 23 17:56:22.652143 ignition[1061]: Ignition finished successfully Jan 23 17:56:22.664190 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 23 17:56:22.672552 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Jan 23 17:56:22.722625 ignition[1177]: Ignition 2.22.0 Jan 23 17:56:22.722657 ignition[1177]: Stage: fetch Jan 23 17:56:22.723234 ignition[1177]: no configs at "/usr/lib/ignition/base.d" Jan 23 17:56:22.723258 ignition[1177]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 23 17:56:22.723407 ignition[1177]: PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 23 17:56:22.739301 ignition[1177]: PUT result: OK Jan 23 17:56:22.742583 ignition[1177]: parsed url from cmdline: "" Jan 23 17:56:22.742600 ignition[1177]: no config URL provided Jan 23 17:56:22.742615 ignition[1177]: reading system config file "/usr/lib/ignition/user.ign" Jan 23 17:56:22.742639 ignition[1177]: no config at "/usr/lib/ignition/user.ign" Jan 23 17:56:22.742669 ignition[1177]: PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 23 17:56:22.750327 ignition[1177]: PUT result: OK Jan 23 17:56:22.750408 ignition[1177]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1 Jan 23 17:56:22.753729 ignition[1177]: GET result: OK Jan 23 17:56:22.753884 ignition[1177]: parsing config with SHA512: bc14fe124fa278d71607ed219ef56e53ed1dbea7b2fb8ccff7cca754704fe325fdba0f6e2cc495ce10baa0899e1b32a802720aa4494fd404662e853dd34537d0 Jan 23 17:56:22.769860 unknown[1177]: fetched base config from "system" Jan 23 17:56:22.770331 unknown[1177]: fetched base config from "system" Jan 23 17:56:22.771793 ignition[1177]: fetch: fetch complete Jan 23 17:56:22.770346 unknown[1177]: fetched user config from "aws" Jan 23 17:56:22.771830 ignition[1177]: fetch: fetch passed Jan 23 17:56:22.772666 ignition[1177]: Ignition finished successfully Jan 23 17:56:22.787199 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jan 23 17:56:22.795333 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
Jan 23 17:56:22.849773 ignition[1183]: Ignition 2.22.0 Jan 23 17:56:22.850340 ignition[1183]: Stage: kargs Jan 23 17:56:22.851316 ignition[1183]: no configs at "/usr/lib/ignition/base.d" Jan 23 17:56:22.851338 ignition[1183]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 23 17:56:22.851480 ignition[1183]: PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 23 17:56:22.861982 ignition[1183]: PUT result: OK Jan 23 17:56:22.870842 ignition[1183]: kargs: kargs passed Jan 23 17:56:22.870950 ignition[1183]: Ignition finished successfully Jan 23 17:56:22.877603 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 23 17:56:22.884463 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 23 17:56:22.934137 ignition[1189]: Ignition 2.22.0 Jan 23 17:56:22.934161 ignition[1189]: Stage: disks Jan 23 17:56:22.934660 ignition[1189]: no configs at "/usr/lib/ignition/base.d" Jan 23 17:56:22.934682 ignition[1189]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 23 17:56:22.934811 ignition[1189]: PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 23 17:56:22.944660 ignition[1189]: PUT result: OK Jan 23 17:56:22.951920 ignition[1189]: disks: disks passed Jan 23 17:56:22.952301 ignition[1189]: Ignition finished successfully Jan 23 17:56:22.958404 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 23 17:56:22.965675 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 23 17:56:22.965837 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 23 17:56:22.974601 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 23 17:56:22.980236 systemd[1]: Reached target sysinit.target - System Initialization. Jan 23 17:56:22.982994 systemd[1]: Reached target basic.target - Basic System. Jan 23 17:56:22.986789 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... 
Jan 23 17:56:23.049175 systemd-fsck[1197]: ROOT: clean, 15/553520 files, 52789/553472 blocks Jan 23 17:56:23.056209 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 23 17:56:23.062204 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 23 17:56:23.207111 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 5f1f19a2-81b4-48e9-bfdb-d3843ff70e8e r/w with ordered data mode. Quota mode: none. Jan 23 17:56:23.208604 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 23 17:56:23.211523 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 23 17:56:23.221018 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 23 17:56:23.236600 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 23 17:56:23.244758 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jan 23 17:56:23.245017 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 23 17:56:23.245070 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 23 17:56:23.267181 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 23 17:56:23.273810 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Jan 23 17:56:23.286162 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (1216) Jan 23 17:56:23.292897 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem fef013c8-c90f-4bd4-8573-9f69d2a021ca Jan 23 17:56:23.292968 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Jan 23 17:56:23.299967 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Jan 23 17:56:23.300034 kernel: BTRFS info (device nvme0n1p6): enabling free space tree Jan 23 17:56:23.302389 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 23 17:56:23.649143 initrd-setup-root[1240]: cut: /sysroot/etc/passwd: No such file or directory Jan 23 17:56:23.672112 initrd-setup-root[1247]: cut: /sysroot/etc/group: No such file or directory Jan 23 17:56:23.690752 initrd-setup-root[1254]: cut: /sysroot/etc/shadow: No such file or directory Jan 23 17:56:23.712111 initrd-setup-root[1261]: cut: /sysroot/etc/gshadow: No such file or directory Jan 23 17:56:24.058396 systemd-networkd[1167]: eth0: Gained IPv6LL Jan 23 17:56:24.091356 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 23 17:56:24.096850 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 23 17:56:24.108554 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 23 17:56:24.136313 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem fef013c8-c90f-4bd4-8573-9f69d2a021ca Jan 23 17:56:24.135569 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 23 17:56:24.171180 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
Jan 23 17:56:24.196551 ignition[1329]: INFO : Ignition 2.22.0 Jan 23 17:56:24.196551 ignition[1329]: INFO : Stage: mount Jan 23 17:56:24.200707 ignition[1329]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 23 17:56:24.203955 ignition[1329]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 23 17:56:24.203955 ignition[1329]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 23 17:56:24.213115 ignition[1329]: INFO : PUT result: OK Jan 23 17:56:24.217036 ignition[1329]: INFO : mount: mount passed Jan 23 17:56:24.220977 ignition[1329]: INFO : Ignition finished successfully Jan 23 17:56:24.223442 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 23 17:56:24.231419 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 23 17:56:24.264847 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 23 17:56:24.309110 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (1340) Jan 23 17:56:24.314059 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem fef013c8-c90f-4bd4-8573-9f69d2a021ca Jan 23 17:56:24.314153 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Jan 23 17:56:24.321473 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Jan 23 17:56:24.321555 kernel: BTRFS info (device nvme0n1p6): enabling free space tree Jan 23 17:56:24.325634 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jan 23 17:56:24.387054 ignition[1357]: INFO : Ignition 2.22.0
Jan 23 17:56:24.387054 ignition[1357]: INFO : Stage: files
Jan 23 17:56:24.391862 ignition[1357]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 23 17:56:24.391862 ignition[1357]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 23 17:56:24.391862 ignition[1357]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 23 17:56:24.401058 ignition[1357]: INFO : PUT result: OK
Jan 23 17:56:24.405435 ignition[1357]: DEBUG : files: compiled without relabeling support, skipping
Jan 23 17:56:24.408894 ignition[1357]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jan 23 17:56:24.408894 ignition[1357]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 23 17:56:24.429585 ignition[1357]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 23 17:56:24.434156 ignition[1357]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jan 23 17:56:24.434156 ignition[1357]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 23 17:56:24.430387 unknown[1357]: wrote ssh authorized keys file for user: core
Jan 23 17:56:24.444671 ignition[1357]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Jan 23 17:56:24.444671 ignition[1357]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1
Jan 23 17:56:24.511608 ignition[1357]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jan 23 17:56:24.679266 ignition[1357]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Jan 23 17:56:24.684413 ignition[1357]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jan 23 17:56:24.684413 ignition[1357]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1
Jan 23 17:56:24.939884 ignition[1357]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Jan 23 17:56:25.070624 ignition[1357]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jan 23 17:56:25.070624 ignition[1357]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Jan 23 17:56:25.079552 ignition[1357]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Jan 23 17:56:25.079552 ignition[1357]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Jan 23 17:56:25.079552 ignition[1357]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jan 23 17:56:25.079552 ignition[1357]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 23 17:56:25.079552 ignition[1357]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 23 17:56:25.079552 ignition[1357]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 23 17:56:25.079552 ignition[1357]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 23 17:56:25.114508 ignition[1357]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jan 23 17:56:25.114508 ignition[1357]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 23 17:56:25.114508 ignition[1357]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-arm64.raw"
Jan 23 17:56:25.114508 ignition[1357]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-arm64.raw"
Jan 23 17:56:25.114508 ignition[1357]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-arm64.raw"
Jan 23 17:56:25.114508 ignition[1357]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.1-arm64.raw: attempt #1
Jan 23 17:56:25.542168 ignition[1357]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Jan 23 17:56:25.851659 ignition[1357]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-arm64.raw"
Jan 23 17:56:25.851659 ignition[1357]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Jan 23 17:56:25.865357 ignition[1357]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 23 17:56:25.870716 ignition[1357]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 23 17:56:25.870716 ignition[1357]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Jan 23 17:56:25.870716 ignition[1357]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service"
Jan 23 17:56:25.870716 ignition[1357]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service"
Jan 23 17:56:25.870716 ignition[1357]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json"
Jan 23 17:56:25.890952 ignition[1357]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jan 23 17:56:25.890952 ignition[1357]: INFO : files: files passed
Jan 23 17:56:25.890952 ignition[1357]: INFO : Ignition finished successfully
Jan 23 17:56:25.902874 systemd[1]: Finished ignition-files.service - Ignition (files).
Jan 23 17:56:25.909869 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jan 23 17:56:25.914784 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jan 23 17:56:25.945633 systemd[1]: ignition-quench.service: Deactivated successfully.
Jan 23 17:56:25.951859 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jan 23 17:56:25.962651 initrd-setup-root-after-ignition[1387]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 23 17:56:25.962651 initrd-setup-root-after-ignition[1387]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jan 23 17:56:25.971487 initrd-setup-root-after-ignition[1391]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 23 17:56:25.977446 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 23 17:56:25.984151 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jan 23 17:56:25.991108 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jan 23 17:56:26.068326 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 23 17:56:26.068718 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jan 23 17:56:26.080677 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jan 23 17:56:26.083532 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 23 17:56:26.091960 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 23 17:56:26.093478 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 23 17:56:26.144801 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 23 17:56:26.155520 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 23 17:56:26.193843 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 23 17:56:26.197489 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 23 17:56:26.206037 systemd[1]: Stopped target timers.target - Timer Units. Jan 23 17:56:26.210807 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 23 17:56:26.211044 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 23 17:56:26.219987 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 23 17:56:26.222761 systemd[1]: Stopped target basic.target - Basic System. Jan 23 17:56:26.230376 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 23 17:56:26.233300 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 23 17:56:26.241757 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 23 17:56:26.244703 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Jan 23 17:56:26.252943 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 23 17:56:26.255675 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 23 17:56:26.264844 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 23 17:56:26.267559 systemd[1]: Stopped target local-fs.target - Local File Systems. 
Jan 23 17:56:26.275443 systemd[1]: Stopped target swap.target - Swaps. Jan 23 17:56:26.277911 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 23 17:56:26.278186 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 23 17:56:26.287912 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 23 17:56:26.291246 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 23 17:56:26.299720 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 23 17:56:26.300070 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 23 17:56:26.309960 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 23 17:56:26.310251 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 23 17:56:26.318534 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 23 17:56:26.318897 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 23 17:56:26.325166 systemd[1]: ignition-files.service: Deactivated successfully. Jan 23 17:56:26.325453 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 23 17:56:26.331286 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 23 17:56:26.339537 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 23 17:56:26.344278 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 23 17:56:26.344667 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 23 17:56:26.350559 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 23 17:56:26.350858 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 23 17:56:26.377219 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 23 17:56:26.382263 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. 
Jan 23 17:56:26.417473 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 23 17:56:26.430951 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 23 17:56:26.431474 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 23 17:56:26.441066 ignition[1411]: INFO : Ignition 2.22.0 Jan 23 17:56:26.443318 ignition[1411]: INFO : Stage: umount Jan 23 17:56:26.443318 ignition[1411]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 23 17:56:26.443318 ignition[1411]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 23 17:56:26.443318 ignition[1411]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 23 17:56:26.455968 ignition[1411]: INFO : PUT result: OK Jan 23 17:56:26.461829 ignition[1411]: INFO : umount: umount passed Jan 23 17:56:26.464308 ignition[1411]: INFO : Ignition finished successfully Jan 23 17:56:26.469328 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 23 17:56:26.469605 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 23 17:56:26.475068 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 23 17:56:26.475181 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 23 17:56:26.480121 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 23 17:56:26.480208 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 23 17:56:26.485041 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 23 17:56:26.485156 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 23 17:56:26.488955 systemd[1]: Stopped target network.target - Network. Jan 23 17:56:26.492876 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 23 17:56:26.492976 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 23 17:56:26.496011 systemd[1]: Stopped target paths.target - Path Units. Jan 23 17:56:26.498352 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. 
Jan 23 17:56:26.505336 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 23 17:56:26.508765 systemd[1]: Stopped target slices.target - Slice Units.
Jan 23 17:56:26.511050 systemd[1]: Stopped target sockets.target - Socket Units.
Jan 23 17:56:26.518462 systemd[1]: iscsid.socket: Deactivated successfully.
Jan 23 17:56:26.518544 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jan 23 17:56:26.521911 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jan 23 17:56:26.521982 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 23 17:56:26.529385 systemd[1]: ignition-setup.service: Deactivated successfully.
Jan 23 17:56:26.529491 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jan 23 17:56:26.532296 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jan 23 17:56:26.532373 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jan 23 17:56:26.539975 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jan 23 17:56:26.540104 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jan 23 17:56:26.545676 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jan 23 17:56:26.548811 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jan 23 17:56:26.574492 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jan 23 17:56:26.574684 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jan 23 17:56:26.581059 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Jan 23 17:56:26.582022 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jan 23 17:56:26.582734 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 23 17:56:26.591149 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Jan 23 17:56:26.591674 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jan 23 17:56:26.591897 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jan 23 17:56:26.603973 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Jan 23 17:56:26.605047 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Jan 23 17:56:26.613207 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jan 23 17:56:26.613309 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jan 23 17:56:26.626206 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jan 23 17:56:26.634035 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jan 23 17:56:26.635054 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 23 17:56:26.643058 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 23 17:56:26.643206 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jan 23 17:56:26.662625 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 23 17:56:26.662727 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jan 23 17:56:26.668009 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 23 17:56:26.680153 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Jan 23 17:56:26.730377 systemd[1]: network-cleanup.service: Deactivated successfully.
Jan 23 17:56:26.730770 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jan 23 17:56:26.740817 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jan 23 17:56:26.741343 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 23 17:56:26.752256 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jan 23 17:56:26.752366 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jan 23 17:56:26.755543 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jan 23 17:56:26.755607 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 23 17:56:26.759431 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jan 23 17:56:26.759522 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jan 23 17:56:26.764911 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jan 23 17:56:26.765001 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jan 23 17:56:26.789514 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 23 17:56:26.789629 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 23 17:56:26.800864 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jan 23 17:56:26.807326 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Jan 23 17:56:26.807548 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Jan 23 17:56:26.820485 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jan 23 17:56:26.820802 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 23 17:56:26.831067 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Jan 23 17:56:26.831390 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 23 17:56:26.845328 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jan 23 17:56:26.845645 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 23 17:56:26.854831 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 23 17:56:26.854935 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 23 17:56:26.867914 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jan 23 17:56:26.868333 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jan 23 17:56:26.878303 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jan 23 17:56:26.885700 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jan 23 17:56:26.917047 systemd[1]: Switching root.
Jan 23 17:56:27.020939 systemd-journald[260]: Journal stopped
Jan 23 17:56:29.992570 systemd-journald[260]: Received SIGTERM from PID 1 (systemd).
Jan 23 17:56:29.992702 kernel: SELinux: policy capability network_peer_controls=1
Jan 23 17:56:29.992748 kernel: SELinux: policy capability open_perms=1
Jan 23 17:56:29.992779 kernel: SELinux: policy capability extended_socket_class=1
Jan 23 17:56:29.992816 kernel: SELinux: policy capability always_check_network=0
Jan 23 17:56:29.992847 kernel: SELinux: policy capability cgroup_seclabel=1
Jan 23 17:56:29.992877 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jan 23 17:56:29.992912 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jan 23 17:56:29.992948 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jan 23 17:56:29.992977 kernel: SELinux: policy capability userspace_initial_context=0
Jan 23 17:56:29.993003 kernel: audit: type=1403 audit(1769190987.630:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jan 23 17:56:29.993034 systemd[1]: Successfully loaded SELinux policy in 127.831ms.
Jan 23 17:56:29.993076 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 20.240ms.
Jan 23 17:56:29.993134 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jan 23 17:56:29.993168 systemd[1]: Detected virtualization amazon.
Jan 23 17:56:29.993199 systemd[1]: Detected architecture arm64.
Jan 23 17:56:29.993229 systemd[1]: Detected first boot.
Jan 23 17:56:29.993264 systemd[1]: Initializing machine ID from VM UUID.
Jan 23 17:56:29.993296 zram_generator::config[1454]: No configuration found.
Jan 23 17:56:29.993325 kernel: NET: Registered PF_VSOCK protocol family
Jan 23 17:56:29.993352 systemd[1]: Populated /etc with preset unit settings.
Jan 23 17:56:29.993384 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Jan 23 17:56:29.993414 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jan 23 17:56:29.993445 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jan 23 17:56:29.993475 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jan 23 17:56:29.993505 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jan 23 17:56:29.993538 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jan 23 17:56:29.993569 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jan 23 17:56:29.993599 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jan 23 17:56:29.993630 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jan 23 17:56:29.993660 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jan 23 17:56:29.993692 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jan 23 17:56:29.993722 systemd[1]: Created slice user.slice - User and Session Slice.
Jan 23 17:56:29.993753 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 23 17:56:29.993785 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 23 17:56:29.993814 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jan 23 17:56:29.993841 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jan 23 17:56:29.993869 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jan 23 17:56:29.993901 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 23 17:56:29.993930 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Jan 23 17:56:29.993960 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 23 17:56:29.993992 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 23 17:56:29.994023 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jan 23 17:56:29.994054 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jan 23 17:56:30.010925 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jan 23 17:56:30.010992 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jan 23 17:56:30.011025 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 23 17:56:30.011056 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 23 17:56:30.011109 systemd[1]: Reached target slices.target - Slice Units.
Jan 23 17:56:30.011144 systemd[1]: Reached target swap.target - Swaps.
Jan 23 17:56:30.011176 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jan 23 17:56:30.011222 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jan 23 17:56:30.011256 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Jan 23 17:56:30.011284 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 23 17:56:30.011314 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 23 17:56:30.011342 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 23 17:56:30.011369 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jan 23 17:56:30.011399 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jan 23 17:56:30.011430 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jan 23 17:56:30.011460 systemd[1]: Mounting media.mount - External Media Directory...
Jan 23 17:56:30.011492 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jan 23 17:56:30.011522 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jan 23 17:56:30.011559 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jan 23 17:56:30.011588 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jan 23 17:56:30.011616 systemd[1]: Reached target machines.target - Containers.
Jan 23 17:56:30.011644 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jan 23 17:56:30.011674 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 23 17:56:30.011761 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 23 17:56:30.011801 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jan 23 17:56:30.011832 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 23 17:56:30.011860 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 23 17:56:30.011889 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 23 17:56:30.011917 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jan 23 17:56:30.011945 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 23 17:56:30.011973 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jan 23 17:56:30.012001 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jan 23 17:56:30.012029 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jan 23 17:56:30.012069 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jan 23 17:56:30.032033 systemd[1]: Stopped systemd-fsck-usr.service.
Jan 23 17:56:30.032149 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jan 23 17:56:30.032188 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 23 17:56:30.032218 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 23 17:56:30.032247 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jan 23 17:56:30.032932 kernel: fuse: init (API version 7.41)
Jan 23 17:56:30.032970 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jan 23 17:56:30.033000 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Jan 23 17:56:30.033042 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 23 17:56:30.033074 systemd[1]: verity-setup.service: Deactivated successfully.
Jan 23 17:56:30.033583 systemd[1]: Stopped verity-setup.service.
Jan 23 17:56:30.033619 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jan 23 17:56:30.033654 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jan 23 17:56:30.033686 systemd[1]: Mounted media.mount - External Media Directory.
Jan 23 17:56:30.033720 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jan 23 17:56:30.033748 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jan 23 17:56:30.038157 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jan 23 17:56:30.038226 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 23 17:56:30.038265 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 23 17:56:30.038298 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jan 23 17:56:30.038328 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 23 17:56:30.038356 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 23 17:56:30.038385 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jan 23 17:56:30.038413 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jan 23 17:56:30.038442 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jan 23 17:56:30.038473 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jan 23 17:56:30.038502 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jan 23 17:56:30.038535 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jan 23 17:56:30.038564 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jan 23 17:56:30.038594 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jan 23 17:56:30.038623 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 23 17:56:30.038653 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Jan 23 17:56:30.038681 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jan 23 17:56:30.038710 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 23 17:56:30.038738 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jan 23 17:56:30.038770 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jan 23 17:56:30.038801 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jan 23 17:56:30.038830 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 23 17:56:30.038858 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 23 17:56:30.038887 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 23 17:56:30.038918 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 23 17:56:30.038950 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Jan 23 17:56:30.038979 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jan 23 17:56:30.039012 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jan 23 17:56:30.039041 kernel: ACPI: bus type drm_connector registered
Jan 23 17:56:30.039070 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 23 17:56:30.049668 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 23 17:56:30.049760 systemd-journald[1530]: Collecting audit messages is disabled.
Jan 23 17:56:30.049824 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 23 17:56:30.049855 kernel: loop0: detected capacity change from 0 to 119840
Jan 23 17:56:30.049884 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 23 17:56:30.049914 kernel: loop: module loaded
Jan 23 17:56:30.049945 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 23 17:56:30.049973 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 23 17:56:30.050002 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 23 17:56:30.050031 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jan 23 17:56:30.050063 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jan 23 17:56:30.050129 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Jan 23 17:56:30.050161 systemd-journald[1530]: Journal started
Jan 23 17:56:30.050207 systemd-journald[1530]: Runtime Journal (/run/log/journal/ec20923837ffe3bb90d58b634a320f12) is 8M, max 75.3M, 67.3M free.
Jan 23 17:56:29.150279 systemd[1]: Queued start job for default target multi-user.target.
Jan 23 17:56:29.165903 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6.
Jan 23 17:56:29.166747 systemd[1]: systemd-journald.service: Deactivated successfully.
Jan 23 17:56:30.071199 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 23 17:56:30.068537 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jan 23 17:56:30.135329 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jan 23 17:56:30.135737 systemd-journald[1530]: Time spent on flushing to /var/log/journal/ec20923837ffe3bb90d58b634a320f12 is 188.039ms for 931 entries.
Jan 23 17:56:30.135737 systemd-journald[1530]: System Journal (/var/log/journal/ec20923837ffe3bb90d58b634a320f12) is 8M, max 195.6M, 187.6M free.
Jan 23 17:56:30.362257 systemd-journald[1530]: Received client request to flush runtime journal.
Jan 23 17:56:30.362353 kernel: loop1: detected capacity change from 0 to 100632
Jan 23 17:56:30.198251 systemd-tmpfiles[1556]: ACLs are not supported, ignoring.
Jan 23 17:56:30.198275 systemd-tmpfiles[1556]: ACLs are not supported, ignoring.
Jan 23 17:56:30.213805 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 23 17:56:30.219320 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 23 17:56:30.248847 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jan 23 17:56:30.256929 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jan 23 17:56:30.317196 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 23 17:56:30.368204 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jan 23 17:56:30.377315 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jan 23 17:56:30.379989 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Jan 23 17:56:30.393138 kernel: loop2: detected capacity change from 0 to 61264
Jan 23 17:56:30.437883 kernel: loop3: detected capacity change from 0 to 200800
Jan 23 17:56:30.437699 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jan 23 17:56:30.447654 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 23 17:56:30.503175 kernel: loop4: detected capacity change from 0 to 119840
Jan 23 17:56:30.506887 systemd-tmpfiles[1617]: ACLs are not supported, ignoring.
Jan 23 17:56:30.507630 systemd-tmpfiles[1617]: ACLs are not supported, ignoring.
Jan 23 17:56:30.522224 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 23 17:56:30.527272 kernel: loop5: detected capacity change from 0 to 100632
Jan 23 17:56:30.543180 kernel: loop6: detected capacity change from 0 to 61264
Jan 23 17:56:30.560177 kernel: loop7: detected capacity change from 0 to 200800
Jan 23 17:56:30.588502 (sd-merge)[1620]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'.
Jan 23 17:56:30.589461 (sd-merge)[1620]: Merged extensions into '/usr'.
Jan 23 17:56:30.597824 systemd[1]: Reload requested from client PID 1555 ('systemd-sysext') (unit systemd-sysext.service)...
Jan 23 17:56:30.597851 systemd[1]: Reloading...
Jan 23 17:56:30.792127 zram_generator::config[1648]: No configuration found.
Jan 23 17:56:31.281267 systemd[1]: Reloading finished in 682 ms.
Jan 23 17:56:31.312283 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jan 23 17:56:31.320171 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jan 23 17:56:31.336283 systemd[1]: Starting ensure-sysext.service...
Jan 23 17:56:31.342335 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 23 17:56:31.353502 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 23 17:56:31.386367 systemd[1]: Reload requested from client PID 1700 ('systemctl') (unit ensure-sysext.service)...
Jan 23 17:56:31.386392 systemd[1]: Reloading...
Jan 23 17:56:31.416539 systemd-tmpfiles[1701]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Jan 23 17:56:31.416632 systemd-tmpfiles[1701]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Jan 23 17:56:31.419353 systemd-tmpfiles[1701]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jan 23 17:56:31.424368 systemd-tmpfiles[1701]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jan 23 17:56:31.434718 systemd-tmpfiles[1701]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jan 23 17:56:31.435379 systemd-tmpfiles[1701]: ACLs are not supported, ignoring.
Jan 23 17:56:31.435533 systemd-tmpfiles[1701]: ACLs are not supported, ignoring.
Jan 23 17:56:31.448917 systemd-udevd[1702]: Using default interface naming scheme 'v255'.
Jan 23 17:56:31.456129 systemd-tmpfiles[1701]: Detected autofs mount point /boot during canonicalization of boot.
Jan 23 17:56:31.456158 systemd-tmpfiles[1701]: Skipping /boot
Jan 23 17:56:31.480761 systemd-tmpfiles[1701]: Detected autofs mount point /boot during canonicalization of boot.
Jan 23 17:56:31.480793 systemd-tmpfiles[1701]: Skipping /boot
Jan 23 17:56:31.636305 zram_generator::config[1739]: No configuration found.
Jan 23 17:56:31.871006 ldconfig[1552]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jan 23 17:56:31.993357 (udev-worker)[1759]: Network interface NamePolicy= disabled on kernel command line.
Jan 23 17:56:32.229378 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Jan 23 17:56:32.230254 systemd[1]: Reloading finished in 842 ms.
Jan 23 17:56:32.283913 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 23 17:56:32.292170 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jan 23 17:56:32.298020 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 23 17:56:32.378578 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jan 23 17:56:32.387454 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jan 23 17:56:32.395456 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jan 23 17:56:32.402414 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 23 17:56:32.416418 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 23 17:56:32.427576 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jan 23 17:56:32.452154 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jan 23 17:56:32.461166 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 23 17:56:32.463841 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 23 17:56:32.470348 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 23 17:56:32.479632 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 23 17:56:32.482496 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 23 17:56:32.482742 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jan 23 17:56:32.489297 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 23 17:56:32.489625 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 23 17:56:32.489800 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jan 23 17:56:32.499304 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 23 17:56:32.518257 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 23 17:56:32.521455 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 23 17:56:32.521706 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jan 23 17:56:32.522050 systemd[1]: Reached target time-set.target - System Time Set.
Jan 23 17:56:32.537825 systemd[1]: Finished ensure-sysext.service.
Jan 23 17:56:32.562270 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jan 23 17:56:32.579723 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 23 17:56:32.645743 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jan 23 17:56:32.649942 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 23 17:56:32.655416 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 23 17:56:32.670455 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jan 23 17:56:32.703405 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 23 17:56:32.704224 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 23 17:56:32.710189 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 23 17:56:32.710576 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 23 17:56:32.714597 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 23 17:56:32.714697 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 23 17:56:32.737310 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jan 23 17:56:32.742891 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jan 23 17:56:32.761611 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 23 17:56:32.763979 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 23 17:56:32.804219 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jan 23 17:56:32.817772 augenrules[1914]: No rules
Jan 23 17:56:32.833402 systemd[1]: audit-rules.service: Deactivated successfully.
Jan 23 17:56:32.834148 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jan 23 17:56:33.022243 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jan 23 17:56:33.054476 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Jan 23 17:56:33.063503 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jan 23 17:56:33.123955 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jan 23 17:56:33.154141 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 23 17:56:33.201131 systemd-networkd[1840]: lo: Link UP
Jan 23 17:56:33.201637 systemd-networkd[1840]: lo: Gained carrier
Jan 23 17:56:33.202999 systemd-resolved[1841]: Positive Trust Anchors:
Jan 23 17:56:33.203035 systemd-resolved[1841]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 23 17:56:33.203119 systemd-resolved[1841]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 23 17:56:33.205332 systemd-networkd[1840]: Enumeration completed
Jan 23 17:56:33.205507 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 23 17:56:33.209571 systemd-networkd[1840]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 23 17:56:33.211226 systemd-networkd[1840]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 23 17:56:33.212916 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Jan 23 17:56:33.220437 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jan 23 17:56:33.226449 systemd-networkd[1840]: eth0: Link UP
Jan 23 17:56:33.226724 systemd-networkd[1840]: eth0: Gained carrier
Jan 23 17:56:33.226760 systemd-networkd[1840]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 23 17:56:33.235506 systemd-resolved[1841]: Defaulting to hostname 'linux'.
Jan 23 17:56:33.238420 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 23 17:56:33.241670 systemd[1]: Reached target network.target - Network.
Jan 23 17:56:33.244207 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 23 17:56:33.245440 systemd-networkd[1840]: eth0: DHCPv4 address 172.31.24.80/20, gateway 172.31.16.1 acquired from 172.31.16.1
Jan 23 17:56:33.249932 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 23 17:56:33.254411 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jan 23 17:56:33.259153 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jan 23 17:56:33.262524 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jan 23 17:56:33.266669 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jan 23 17:56:33.270245 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jan 23 17:56:33.273632 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jan 23 17:56:33.273820 systemd[1]: Reached target paths.target - Path Units.
Jan 23 17:56:33.276463 systemd[1]: Reached target timers.target - Timer Units.
Jan 23 17:56:33.282415 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jan 23 17:56:33.288714 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jan 23 17:56:33.296292 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Jan 23 17:56:33.300138 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Jan 23 17:56:33.303357 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Jan 23 17:56:33.310593 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jan 23 17:56:33.314171 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Jan 23 17:56:33.319001 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Jan 23 17:56:33.323197 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jan 23 17:56:33.326895 systemd[1]: Reached target sockets.target - Socket Units.
Jan 23 17:56:33.329720 systemd[1]: Reached target basic.target - Basic System.
Jan 23 17:56:33.332710 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jan 23 17:56:33.332800 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jan 23 17:56:33.338360 systemd[1]: Starting containerd.service - containerd container runtime...
Jan 23 17:56:33.345441 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Jan 23 17:56:33.357257 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jan 23 17:56:33.364145 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jan 23 17:56:33.378264 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jan 23 17:56:33.393076 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jan 23 17:56:33.396200 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jan 23 17:56:33.400477 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jan 23 17:56:33.410611 systemd[1]: Started ntpd.service - Network Time Service.
Jan 23 17:56:33.416553 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Jan 23 17:56:33.424515 systemd[1]: Starting setup-oem.service - Setup OEM...
Jan 23 17:56:33.445020 jq[1988]: false
Jan 23 17:56:33.442496 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jan 23 17:56:33.455066 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jan 23 17:56:33.477391 systemd[1]: Starting systemd-logind.service - User Login Management...
Jan 23 17:56:33.484216 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jan 23 17:56:33.485179 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jan 23 17:56:33.492521 systemd[1]: Starting update-engine.service - Update Engine...
Jan 23 17:56:33.499507 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jan 23 17:56:33.518213 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jan 23 17:56:33.533284 extend-filesystems[1989]: Found /dev/nvme0n1p6
Jan 23 17:56:33.522980 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jan 23 17:56:33.523410 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jan 23 17:56:33.562927 extend-filesystems[1989]: Found /dev/nvme0n1p9
Jan 23 17:56:33.598005 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Jan 23 17:56:33.608443 extend-filesystems[1989]: Checking size of /dev/nvme0n1p9
Jan 23 17:56:33.628467 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jan 23 17:56:33.629692 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Jan 23 17:56:33.641726 tar[2005]: linux-arm64/LICENSE
Jan 23 17:56:33.644244 tar[2005]: linux-arm64/helm
Jan 23 17:56:33.662052 jq[2003]: true
Jan 23 17:56:33.662469 extend-filesystems[1989]: Resized partition /dev/nvme0n1p9
Jan 23 17:56:33.663391 (ntainerd)[2025]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Jan 23 17:56:33.688504 extend-filesystems[2033]: resize2fs 1.47.3 (8-Jul-2025)
Jan 23 17:56:33.717647 systemd[1]: motdgen.service: Deactivated successfully.
Jan 23 17:56:33.722278 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Jan 23 17:56:33.746778 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 3587067 blocks
Jan 23 17:56:33.748704 coreos-metadata[1985]: Jan 23 17:56:33.742 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1
Jan 23 17:56:33.754443 coreos-metadata[1985]: Jan 23 17:56:33.754 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1
Jan 23 17:56:33.764004 coreos-metadata[1985]: Jan 23 17:56:33.763 INFO Fetch successful
Jan 23 17:56:33.764004 coreos-metadata[1985]: Jan 23 17:56:33.763 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1
Jan 23 17:56:33.773498 coreos-metadata[1985]: Jan 23 17:56:33.773 INFO Fetch successful
Jan 23 17:56:33.773498 coreos-metadata[1985]: Jan 23 17:56:33.773 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1
Jan 23 17:56:33.775122 coreos-metadata[1985]: Jan 23 17:56:33.774 INFO Fetch successful
Jan 23 17:56:33.775122 coreos-metadata[1985]: Jan 23 17:56:33.775 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1
Jan 23 17:56:33.787617 dbus-daemon[1986]: [system] SELinux support is enabled
Jan 23 17:56:33.790505 jq[2034]: true
Jan 23 17:56:33.800067 coreos-metadata[1985]: Jan 23 17:56:33.790 INFO Fetch successful
Jan 23 17:56:33.800067 coreos-metadata[1985]: Jan 23 17:56:33.790 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1
Jan 23 17:56:33.800067 coreos-metadata[1985]: Jan 23 17:56:33.797 INFO Fetch failed with 404: resource not found
Jan 23 17:56:33.800067 coreos-metadata[1985]: Jan 23 17:56:33.797 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1
Jan 23 17:56:33.797343 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Jan 23 17:56:33.814783 coreos-metadata[1985]: Jan 23 17:56:33.814 INFO Fetch successful
Jan 23 17:56:33.814783 coreos-metadata[1985]: Jan 23 17:56:33.814 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1
Jan 23 17:56:33.893855 coreos-metadata[1985]: Jan 23 17:56:33.815 INFO Fetch successful
Jan 23 17:56:33.893855 coreos-metadata[1985]: Jan 23 17:56:33.815 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1
Jan 23 17:56:33.893855 coreos-metadata[1985]: Jan 23 17:56:33.816 INFO Fetch successful
Jan 23 17:56:33.893855 coreos-metadata[1985]: Jan 23 17:56:33.816 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1
Jan 23 17:56:33.893855 coreos-metadata[1985]: Jan 23 17:56:33.828 INFO Fetch successful
Jan 23 17:56:33.893855 coreos-metadata[1985]: Jan 23 17:56:33.828 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1
Jan 23 17:56:33.893855 coreos-metadata[1985]: Jan 23 17:56:33.831 INFO Fetch successful
Jan 23 17:56:33.895525 ntpd[1991]: 23 Jan 17:56:33 ntpd[1991]: ntpd 4.2.8p18@1.4062-o Fri Jan 23 15:31:01 UTC 2026 (1): Starting
Jan 23 17:56:33.895525 ntpd[1991]: 23 Jan 17:56:33 ntpd[1991]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Jan 23 17:56:33.895525 ntpd[1991]: 23 Jan 17:56:33 ntpd[1991]: ----------------------------------------------------
Jan 23 17:56:33.895525 ntpd[1991]: 23 Jan 17:56:33 ntpd[1991]: ntp-4 is maintained by Network Time Foundation,
Jan 23 17:56:33.895525 ntpd[1991]: 23 Jan 17:56:33 ntpd[1991]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Jan 23 17:56:33.895525 ntpd[1991]: 23 Jan 17:56:33 ntpd[1991]: corporation. Support and training for ntp-4 are
Jan 23 17:56:33.895525 ntpd[1991]: 23 Jan 17:56:33 ntpd[1991]: available at https://www.nwtime.org/support
Jan 23 17:56:33.895525 ntpd[1991]: 23 Jan 17:56:33 ntpd[1991]: ----------------------------------------------------
Jan 23 17:56:33.821602 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jan 23 17:56:33.896375 update_engine[2002]: I20260123 17:56:33.835363 2002 main.cc:92] Flatcar Update Engine starting
Jan 23 17:56:33.896375 update_engine[2002]: I20260123 17:56:33.879005 2002 update_check_scheduler.cc:74] Next update check in 5m3s
Jan 23 17:56:33.827424 dbus-daemon[1986]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1840 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0")
Jan 23 17:56:33.821658 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Jan 23 17:56:33.855054 dbus-daemon[1986]: [system] Successfully activated service 'org.freedesktop.systemd1'
Jan 23 17:56:33.826270 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jan 23 17:56:33.878399 ntpd[1991]: ntpd 4.2.8p18@1.4062-o Fri Jan 23 15:31:01 UTC 2026 (1): Starting
Jan 23 17:56:33.826358 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Jan 23 17:56:33.881761 ntpd[1991]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Jan 23 17:56:33.898004 systemd-logind[2001]: Watching system buttons on /dev/input/event0 (Power Button)
Jan 23 17:56:33.881789 ntpd[1991]: ----------------------------------------------------
Jan 23 17:56:33.898038 systemd-logind[2001]: Watching system buttons on /dev/input/event1 (Sleep Button)
Jan 23 17:56:33.881806 ntpd[1991]: ntp-4 is maintained by Network Time Foundation,
Jan 23 17:56:33.898611 systemd[1]: Starting systemd-hostnamed.service - Hostname Service...
Jan 23 17:56:33.881845 ntpd[1991]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Jan 23 17:56:33.900994 systemd-logind[2001]: New seat seat0.
Jan 23 17:56:33.881864 ntpd[1991]: corporation. Support and training for ntp-4 are
Jan 23 17:56:33.881880 ntpd[1991]: available at https://www.nwtime.org/support
Jan 23 17:56:33.881896 ntpd[1991]: ----------------------------------------------------
Jan 23 17:56:33.908823 systemd[1]: Finished setup-oem.service - Setup OEM.
Jan 23 17:56:33.946275 ntpd[1991]: 23 Jan 17:56:33 ntpd[1991]: proto: precision = 0.096 usec (-23)
Jan 23 17:56:33.946275 ntpd[1991]: 23 Jan 17:56:33 ntpd[1991]: basedate set to 2026-01-11
Jan 23 17:56:33.946275 ntpd[1991]: 23 Jan 17:56:33 ntpd[1991]: gps base set to 2026-01-11 (week 2401)
Jan 23 17:56:33.946275 ntpd[1991]: 23 Jan 17:56:33 ntpd[1991]: Listen and drop on 0 v6wildcard [::]:123
Jan 23 17:56:33.946275 ntpd[1991]: 23 Jan 17:56:33 ntpd[1991]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Jan 23 17:56:33.946275 ntpd[1991]: 23 Jan 17:56:33 ntpd[1991]: Listen normally on 2 lo 127.0.0.1:123
Jan 23 17:56:33.946275 ntpd[1991]: 23 Jan 17:56:33 ntpd[1991]: Listen normally on 3 eth0 172.31.24.80:123
Jan 23 17:56:33.946275 ntpd[1991]: 23 Jan 17:56:33 ntpd[1991]: Listen normally on 4 lo [::1]:123
Jan 23 17:56:33.946275 ntpd[1991]: 23 Jan 17:56:33 ntpd[1991]: bind(21) AF_INET6 [fe80::4c1:deff:fe15:e751%2]:123 flags 0x811 failed: Cannot assign requested address
Jan 23 17:56:33.946275 ntpd[1991]: 23 Jan 17:56:33 ntpd[1991]: unable to create socket on eth0 (5) for [fe80::4c1:deff:fe15:e751%2]:123
Jan 23 17:56:33.911416 ntpd[1991]: proto: precision = 0.096 usec (-23)
Jan 23 17:56:33.927041 systemd[1]: Started systemd-logind.service - User Login Management.
Jan 23 17:56:33.916665 ntpd[1991]: basedate set to 2026-01-11
Jan 23 17:56:33.935778 systemd[1]: Started update-engine.service - Update Engine.
Jan 23 17:56:33.916901 ntpd[1991]: gps base set to 2026-01-11 (week 2401)
Jan 23 17:56:33.917374 ntpd[1991]: Listen and drop on 0 v6wildcard [::]:123
Jan 23 17:56:33.917944 ntpd[1991]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Jan 23 17:56:33.919196 ntpd[1991]: Listen normally on 2 lo 127.0.0.1:123
Jan 23 17:56:33.919249 ntpd[1991]: Listen normally on 3 eth0 172.31.24.80:123
Jan 23 17:56:33.919813 ntpd[1991]: Listen normally on 4 lo [::1]:123
Jan 23 17:56:33.919876 ntpd[1991]: bind(21) AF_INET6 [fe80::4c1:deff:fe15:e751%2]:123 flags 0x811 failed: Cannot assign requested address
Jan 23 17:56:33.919959 ntpd[1991]: unable to create socket on eth0 (5) for [fe80::4c1:deff:fe15:e751%2]:123
Jan 23 17:56:33.956663 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 3587067
Jan 23 17:56:33.948329 systemd-coredump[2055]: Process 1991 (ntpd) of user 0 terminated abnormally with signal 11/SEGV, processing...
Jan 23 17:56:33.984974 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Jan 23 17:56:33.991930 extend-filesystems[2033]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required
Jan 23 17:56:33.991930 extend-filesystems[2033]: old_desc_blocks = 1, new_desc_blocks = 2
Jan 23 17:56:33.991930 extend-filesystems[2033]: The filesystem on /dev/nvme0n1p9 is now 3587067 (4k) blocks long.
Jan 23 17:56:34.041496 extend-filesystems[1989]: Resized filesystem in /dev/nvme0n1p9
Jan 23 17:56:34.044057 systemd[1]: extend-filesystems.service: Deactivated successfully.
Jan 23 17:56:34.076029 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Jan 23 17:56:34.086609 systemd[1]: Created slice system-systemd\x2dcoredump.slice - Slice /system/systemd-coredump.
Jan 23 17:56:34.102199 systemd[1]: Started systemd-coredump@0-2055-0.service - Process Core Dump (PID 2055/UID 0).
Jan 23 17:56:34.128176 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Jan 23 17:56:34.134408 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Jan 23 17:56:34.146262 bash[2082]: Updated "/home/core/.ssh/authorized_keys"
Jan 23 17:56:34.147634 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Jan 23 17:56:34.159656 systemd[1]: Starting sshkeys.service...
Jan 23 17:56:34.279714 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Jan 23 17:56:34.293704 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Jan 23 17:56:34.490546 systemd-networkd[1840]: eth0: Gained IPv6LL
Jan 23 17:56:34.500831 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Jan 23 17:56:34.510558 systemd[1]: Reached target network-online.target - Network is Online.
Jan 23 17:56:34.519804 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent.
Jan 23 17:56:34.530663 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 23 17:56:34.542985 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Jan 23 17:56:34.739989 coreos-metadata[2129]: Jan 23 17:56:34.739 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1
Jan 23 17:56:34.760335 coreos-metadata[2129]: Jan 23 17:56:34.760 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1
Jan 23 17:56:34.761690 coreos-metadata[2129]: Jan 23 17:56:34.761 INFO Fetch successful
Jan 23 17:56:34.761805 coreos-metadata[2129]: Jan 23 17:56:34.761 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1
Jan 23 17:56:34.771571 coreos-metadata[2129]: Jan 23 17:56:34.771 INFO Fetch successful
Jan 23 17:56:34.777581 unknown[2129]: wrote ssh authorized keys file for user: core
Jan 23 17:56:34.818277 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Jan 23 17:56:34.852229 containerd[2025]: time="2026-01-23T17:56:34Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Jan 23 17:56:34.865395 containerd[2025]: time="2026-01-23T17:56:34.860659646Z" level=info msg="starting containerd" revision=4ac6c20c7bbf8177f29e46bbdc658fec02ffb8ad version=v2.0.7
Jan 23 17:56:34.913946 update-ssh-keys[2186]: Updated "/home/core/.ssh/authorized_keys"
Jan 23 17:56:34.916950 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Jan 23 17:56:34.936580 systemd[1]: Finished sshkeys.service.
Jan 23 17:56:35.015119 locksmithd[2059]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Jan 23 17:56:35.019555 containerd[2025]: time="2026-01-23T17:56:35.019474571Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="15.096µs"
Jan 23 17:56:35.019555 containerd[2025]: time="2026-01-23T17:56:35.019537691Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Jan 23 17:56:35.019727 containerd[2025]: time="2026-01-23T17:56:35.019576631Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Jan 23 17:56:35.019932 containerd[2025]: time="2026-01-23T17:56:35.019878875Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Jan 23 17:56:35.020026 containerd[2025]: time="2026-01-23T17:56:35.019929755Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Jan 23 17:56:35.020026 containerd[2025]: time="2026-01-23T17:56:35.019986611Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Jan 23 17:56:35.021106 containerd[2025]: time="2026-01-23T17:56:35.020128403Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Jan 23 17:56:35.021106 containerd[2025]: time="2026-01-23T17:56:35.020168567Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Jan 23 17:56:35.023652 containerd[2025]: time="2026-01-23T17:56:35.023578403Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Jan 23 17:56:35.023652 containerd[2025]: time="2026-01-23T17:56:35.023641247Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Jan 23 17:56:35.023839 containerd[2025]: time="2026-01-23T17:56:35.023675195Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Jan 23 17:56:35.023839 containerd[2025]: time="2026-01-23T17:56:35.023717831Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Jan 23 17:56:35.023982 containerd[2025]: time="2026-01-23T17:56:35.023940851Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Jan 23 17:56:35.028373 amazon-ssm-agent[2170]: Initializing new seelog logger
Jan 23 17:56:35.030394 amazon-ssm-agent[2170]: New Seelog Logger Creation Complete
Jan 23 17:56:35.030394 amazon-ssm-agent[2170]: 2026/01/23 17:56:35 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Jan 23 17:56:35.030394 amazon-ssm-agent[2170]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Jan 23 17:56:35.032890 amazon-ssm-agent[2170]: 2026/01/23 17:56:35 processing appconfig overrides
Jan 23 17:56:35.036691 containerd[2025]: time="2026-01-23T17:56:35.033636743Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Jan 23 17:56:35.036691 containerd[2025]: time="2026-01-23T17:56:35.033766535Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Jan 23 17:56:35.036691 containerd[2025]: time="2026-01-23T17:56:35.033813791Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Jan 23 17:56:35.036691 containerd[2025]: time="2026-01-23T17:56:35.033921875Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Jan 23 17:56:35.036691 containerd[2025]: time="2026-01-23T17:56:35.034571687Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Jan 23 17:56:35.036691 containerd[2025]: time="2026-01-23T17:56:35.034791179Z" level=info msg="metadata content store policy set" policy=shared
Jan 23 17:56:35.043174 amazon-ssm-agent[2170]: 2026/01/23 17:56:35 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Jan 23 17:56:35.043174 amazon-ssm-agent[2170]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Jan 23 17:56:35.043174 amazon-ssm-agent[2170]: 2026/01/23 17:56:35 processing appconfig overrides
Jan 23 17:56:35.044394 amazon-ssm-agent[2170]: 2026-01-23 17:56:35.0386 INFO Proxy environment variables:
Jan 23 17:56:35.048537 amazon-ssm-agent[2170]: 2026/01/23 17:56:35 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Jan 23 17:56:35.048537 amazon-ssm-agent[2170]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Jan 23 17:56:35.049068 amazon-ssm-agent[2170]: 2026/01/23 17:56:35 processing appconfig overrides
Jan 23 17:56:35.056853 systemd[1]: Started systemd-hostnamed.service - Hostname Service.
Jan 23 17:56:35.065350 containerd[2025]: time="2026-01-23T17:56:35.065180195Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Jan 23 17:56:35.065350 containerd[2025]: time="2026-01-23T17:56:35.065331347Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Jan 23 17:56:35.065886 containerd[2025]: time="2026-01-23T17:56:35.065369759Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Jan 23 17:56:35.065886 containerd[2025]: time="2026-01-23T17:56:35.065411567Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Jan 23 17:56:35.065886 containerd[2025]: time="2026-01-23T17:56:35.065442995Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Jan 23 17:56:35.065886 containerd[2025]: time="2026-01-23T17:56:35.065471903Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Jan 23 17:56:35.065886 containerd[2025]: time="2026-01-23T17:56:35.065502443Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Jan 23 17:56:35.065886 containerd[2025]: time="2026-01-23T17:56:35.065532227Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Jan 23 17:56:35.065886 containerd[2025]: time="2026-01-23T17:56:35.065568491Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Jan 23 17:56:35.065886 containerd[2025]: time="2026-01-23T17:56:35.065595899Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Jan 23 17:56:35.065886 containerd[2025]: time="2026-01-23T17:56:35.065620379Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Jan 23 17:56:35.065886 containerd[2025]: time="2026-01-23T17:56:35.065651123Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Jan 23 17:56:35.065886 containerd[2025]: time="2026-01-23T17:56:35.065857775Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Jan 23 17:56:35.067527 containerd[2025]: time="2026-01-23T17:56:35.065902487Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Jan 23 17:56:35.067527 containerd[2025]: time="2026-01-23T17:56:35.065935247Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Jan 23 17:56:35.067527 containerd[2025]: time="2026-01-23T17:56:35.065964251Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Jan 23 17:56:35.067527 containerd[2025]: time="2026-01-23T17:56:35.065991467Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Jan 23 17:56:35.067527 containerd[2025]: time="2026-01-23T17:56:35.066019127Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Jan 23 17:56:35.067527 containerd[2025]: time="2026-01-23T17:56:35.066046511Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Jan 23 17:56:35.067527 containerd[2025]: time="2026-01-23T17:56:35.066076475Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Jan 23 17:56:35.067527 containerd[2025]: time="2026-01-23T17:56:35.066134759Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
Jan 23 17:56:35.067527 containerd[2025]: time="2026-01-23T17:56:35.066165071Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
Jan 23 17:56:35.067527 containerd[2025]: time="2026-01-23T17:56:35.066191399Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
Jan 23 17:56:35.067527 containerd[2025]: time="2026-01-23T17:56:35.066554231Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
Jan 23 17:56:35.067527 containerd[2025]: time="2026-01-23T17:56:35.066586175Z" level=info msg="Start snapshots syncer"
Jan 23 17:56:35.067527 containerd[2025]: time="2026-01-23T17:56:35.066644723Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
Jan 23 17:56:35.074192 containerd[2025]: time="2026-01-23T17:56:35.067063475Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
Jan 23 17:56:35.074192 containerd[2025]: time="2026-01-23T17:56:35.072735239Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
Jan 23 17:56:35.074485 amazon-ssm-agent[2170]: 2026/01/23 17:56:35 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Jan 23 17:56:35.074485 amazon-ssm-agent[2170]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Jan 23 17:56:35.074485 amazon-ssm-agent[2170]: 2026/01/23 17:56:35 processing appconfig overrides Jan 23 17:56:35.076373 containerd[2025]: time="2026-01-23T17:56:35.076142459Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Jan 23 17:56:35.080329 containerd[2025]: time="2026-01-23T17:56:35.079668287Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Jan 23 17:56:35.080329 containerd[2025]: time="2026-01-23T17:56:35.079782923Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Jan 23 17:56:35.080568 containerd[2025]: time="2026-01-23T17:56:35.080530427Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Jan 23 17:56:35.085982 dbus-daemon[1986]: [system] Successfully activated service 'org.freedesktop.hostname1' Jan 23 17:56:35.089835 containerd[2025]: time="2026-01-23T17:56:35.082173635Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Jan 23 17:56:35.089835 containerd[2025]: time="2026-01-23T17:56:35.086428331Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Jan 23 17:56:35.089835 containerd[2025]: time="2026-01-23T17:56:35.086467871Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Jan 23 17:56:35.089835 containerd[2025]: time="2026-01-23T17:56:35.086498279Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Jan 23 17:56:35.089835 containerd[2025]: time="2026-01-23T17:56:35.086557823Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Jan 23 17:56:35.089835 containerd[2025]: time="2026-01-23T17:56:35.086587499Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Jan 23 17:56:35.089835 
containerd[2025]: time="2026-01-23T17:56:35.086619935Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Jan 23 17:56:35.089835 containerd[2025]: time="2026-01-23T17:56:35.086691191Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 23 17:56:35.089835 containerd[2025]: time="2026-01-23T17:56:35.086725739Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 23 17:56:35.089835 containerd[2025]: time="2026-01-23T17:56:35.086749151Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 23 17:56:35.089835 containerd[2025]: time="2026-01-23T17:56:35.086777243Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 23 17:56:35.089835 containerd[2025]: time="2026-01-23T17:56:35.086799119Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Jan 23 17:56:35.089835 containerd[2025]: time="2026-01-23T17:56:35.086823575Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Jan 23 17:56:35.089835 containerd[2025]: time="2026-01-23T17:56:35.086850419Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Jan 23 17:56:35.090524 containerd[2025]: time="2026-01-23T17:56:35.087041579Z" level=info msg="runtime interface created" Jan 23 17:56:35.090524 containerd[2025]: time="2026-01-23T17:56:35.087061127Z" level=info msg="created NRI interface" Jan 23 17:56:35.090524 containerd[2025]: time="2026-01-23T17:56:35.088924655Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Jan 23 17:56:35.090524 containerd[2025]: 
time="2026-01-23T17:56:35.089037755Z" level=info msg="Connect containerd service" Jan 23 17:56:35.093476 containerd[2025]: time="2026-01-23T17:56:35.089302259Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 23 17:56:35.106880 containerd[2025]: time="2026-01-23T17:56:35.104128715Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 23 17:56:35.108357 dbus-daemon[1986]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.7' (uid=0 pid=2053 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Jan 23 17:56:35.119595 systemd[1]: Starting polkit.service - Authorization Manager... Jan 23 17:56:35.157469 amazon-ssm-agent[2170]: 2026-01-23 17:56:35.0387 INFO http_proxy: Jan 23 17:56:35.269477 amazon-ssm-agent[2170]: 2026-01-23 17:56:35.0387 INFO no_proxy: Jan 23 17:56:35.350935 systemd-coredump[2087]: Process 1991 (ntpd) of user 0 dumped core. Module libnss_usrfiles.so.2 without build-id. Module libgcc_s.so.1 without build-id. Module libc.so.6 without build-id. Module libcrypto.so.3 without build-id. Module libm.so.6 without build-id. Module libcap.so.2 without build-id. Module ntpd without build-id. 
Stack trace of thread 1991: #0 0x0000aaaae1180b5c n/a (ntpd + 0x60b5c) #1 0x0000aaaae112fe60 n/a (ntpd + 0xfe60) #2 0x0000aaaae1130240 n/a (ntpd + 0x10240) #3 0x0000aaaae112be14 n/a (ntpd + 0xbe14) #4 0x0000aaaae112d3ec n/a (ntpd + 0xd3ec) #5 0x0000aaaae1135a38 n/a (ntpd + 0x15a38) #6 0x0000aaaae112738c n/a (ntpd + 0x738c) #7 0x0000ffff90872034 n/a (libc.so.6 + 0x22034) #8 0x0000ffff90872118 __libc_start_main (libc.so.6 + 0x22118) #9 0x0000aaaae11273f0 n/a (ntpd + 0x73f0) ELF object binary architecture: AARCH64 Jan 23 17:56:35.358929 systemd[1]: ntpd.service: Main process exited, code=dumped, status=11/SEGV Jan 23 17:56:35.359447 systemd[1]: ntpd.service: Failed with result 'core-dump'. Jan 23 17:56:35.378713 systemd[1]: systemd-coredump@0-2055-0.service: Deactivated successfully. Jan 23 17:56:35.380058 amazon-ssm-agent[2170]: 2026-01-23 17:56:35.0387 INFO https_proxy: Jan 23 17:56:35.479844 amazon-ssm-agent[2170]: 2026-01-23 17:56:35.0389 INFO Checking if agent identity type OnPrem can be assumed Jan 23 17:56:35.507429 systemd[1]: ntpd.service: Scheduled restart job, restart counter is at 1. Jan 23 17:56:35.511311 systemd[1]: Started ntpd.service - Network Time Service. Jan 23 17:56:35.589191 amazon-ssm-agent[2170]: 2026-01-23 17:56:35.0389 INFO Checking if agent identity type EC2 can be assumed Jan 23 17:56:35.608338 ntpd[2226]: ntpd 4.2.8p18@1.4062-o Fri Jan 23 15:31:01 UTC 2026 (1): Starting Jan 23 17:56:35.610632 ntpd[2226]: 23 Jan 17:56:35 ntpd[2226]: ntpd 4.2.8p18@1.4062-o Fri Jan 23 15:31:01 UTC 2026 (1): Starting Jan 23 17:56:35.610632 ntpd[2226]: 23 Jan 17:56:35 ntpd[2226]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jan 23 17:56:35.610632 ntpd[2226]: 23 Jan 17:56:35 ntpd[2226]: ---------------------------------------------------- Jan 23 17:56:35.610632 ntpd[2226]: 23 Jan 17:56:35 ntpd[2226]: ntp-4 is maintained by Network Time Foundation, Jan 23 17:56:35.610632 ntpd[2226]: 23 Jan 17:56:35 ntpd[2226]: Inc. 
(NTF), a non-profit 501(c)(3) public-benefit Jan 23 17:56:35.610632 ntpd[2226]: 23 Jan 17:56:35 ntpd[2226]: corporation. Support and training for ntp-4 are Jan 23 17:56:35.610632 ntpd[2226]: 23 Jan 17:56:35 ntpd[2226]: available at https://www.nwtime.org/support Jan 23 17:56:35.610632 ntpd[2226]: 23 Jan 17:56:35 ntpd[2226]: ---------------------------------------------------- Jan 23 17:56:35.610632 ntpd[2226]: 23 Jan 17:56:35 ntpd[2226]: proto: precision = 0.096 usec (-23) Jan 23 17:56:35.610632 ntpd[2226]: 23 Jan 17:56:35 ntpd[2226]: basedate set to 2026-01-11 Jan 23 17:56:35.610632 ntpd[2226]: 23 Jan 17:56:35 ntpd[2226]: gps base set to 2026-01-11 (week 2401) Jan 23 17:56:35.608444 ntpd[2226]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jan 23 17:56:35.608463 ntpd[2226]: ---------------------------------------------------- Jan 23 17:56:35.608480 ntpd[2226]: ntp-4 is maintained by Network Time Foundation, Jan 23 17:56:35.608495 ntpd[2226]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jan 23 17:56:35.608511 ntpd[2226]: corporation. 
Support and training for ntp-4 are Jan 23 17:56:35.608543 ntpd[2226]: available at https://www.nwtime.org/support Jan 23 17:56:35.608561 ntpd[2226]: ---------------------------------------------------- Jan 23 17:56:35.609581 ntpd[2226]: proto: precision = 0.096 usec (-23) Jan 23 17:56:35.609923 ntpd[2226]: basedate set to 2026-01-11 Jan 23 17:56:35.609946 ntpd[2226]: gps base set to 2026-01-11 (week 2401) Jan 23 17:56:35.613072 ntpd[2226]: Listen and drop on 0 v6wildcard [::]:123 Jan 23 17:56:35.617184 ntpd[2226]: 23 Jan 17:56:35 ntpd[2226]: Listen and drop on 0 v6wildcard [::]:123 Jan 23 17:56:35.617184 ntpd[2226]: 23 Jan 17:56:35 ntpd[2226]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jan 23 17:56:35.617184 ntpd[2226]: 23 Jan 17:56:35 ntpd[2226]: Listen normally on 2 lo 127.0.0.1:123 Jan 23 17:56:35.617184 ntpd[2226]: 23 Jan 17:56:35 ntpd[2226]: Listen normally on 3 eth0 172.31.24.80:123 Jan 23 17:56:35.617184 ntpd[2226]: 23 Jan 17:56:35 ntpd[2226]: Listen normally on 4 lo [::1]:123 Jan 23 17:56:35.617184 ntpd[2226]: 23 Jan 17:56:35 ntpd[2226]: Listen normally on 5 eth0 [fe80::4c1:deff:fe15:e751%2]:123 Jan 23 17:56:35.617184 ntpd[2226]: 23 Jan 17:56:35 ntpd[2226]: Listening on routing socket on fd #22 for interface updates Jan 23 17:56:35.613202 ntpd[2226]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jan 23 17:56:35.613485 ntpd[2226]: Listen normally on 2 lo 127.0.0.1:123 Jan 23 17:56:35.613526 ntpd[2226]: Listen normally on 3 eth0 172.31.24.80:123 Jan 23 17:56:35.614276 ntpd[2226]: Listen normally on 4 lo [::1]:123 Jan 23 17:56:35.614333 ntpd[2226]: Listen normally on 5 eth0 [fe80::4c1:deff:fe15:e751%2]:123 Jan 23 17:56:35.614374 ntpd[2226]: Listening on routing socket on fd #22 for interface updates Jan 23 17:56:35.626512 ntpd[2226]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 23 17:56:35.627900 ntpd[2226]: 23 Jan 17:56:35 ntpd[2226]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 23 17:56:35.628023 ntpd[2226]: kernel reports 
TIME_ERROR: 0x41: Clock Unsynchronized Jan 23 17:56:35.628183 ntpd[2226]: 23 Jan 17:56:35 ntpd[2226]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 23 17:56:35.681871 containerd[2025]: time="2026-01-23T17:56:35.681750266Z" level=info msg="Start subscribing containerd event" Jan 23 17:56:35.682131 containerd[2025]: time="2026-01-23T17:56:35.682075070Z" level=info msg="Start recovering state" Jan 23 17:56:35.683510 containerd[2025]: time="2026-01-23T17:56:35.682530998Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 23 17:56:35.683510 containerd[2025]: time="2026-01-23T17:56:35.682650938Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 23 17:56:35.683957 containerd[2025]: time="2026-01-23T17:56:35.683918186Z" level=info msg="Start event monitor" Jan 23 17:56:35.684146 containerd[2025]: time="2026-01-23T17:56:35.684063494Z" level=info msg="Start cni network conf syncer for default" Jan 23 17:56:35.685539 containerd[2025]: time="2026-01-23T17:56:35.685490654Z" level=info msg="Start streaming server" Jan 23 17:56:35.685671 containerd[2025]: time="2026-01-23T17:56:35.685646690Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Jan 23 17:56:35.685786 containerd[2025]: time="2026-01-23T17:56:35.685747058Z" level=info msg="runtime interface starting up..." Jan 23 17:56:35.685901 containerd[2025]: time="2026-01-23T17:56:35.685866350Z" level=info msg="starting plugins..." Jan 23 17:56:35.686019 containerd[2025]: time="2026-01-23T17:56:35.685993406Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Jan 23 17:56:35.691626 amazon-ssm-agent[2170]: 2026-01-23 17:56:35.4937 INFO Agent will take identity from EC2 Jan 23 17:56:35.696254 containerd[2025]: time="2026-01-23T17:56:35.696206978Z" level=info msg="containerd successfully booted in 0.853698s" Jan 23 17:56:35.696377 systemd[1]: Started containerd.service - containerd container runtime. 
Jan 23 17:56:35.787977 polkitd[2207]: Started polkitd version 126 Jan 23 17:56:35.790865 amazon-ssm-agent[2170]: 2026-01-23 17:56:35.5207 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.3.0.0 Jan 23 17:56:35.824659 polkitd[2207]: Loading rules from directory /etc/polkit-1/rules.d Jan 23 17:56:35.829037 polkitd[2207]: Loading rules from directory /run/polkit-1/rules.d Jan 23 17:56:35.829165 polkitd[2207]: Error opening rules directory: Error opening directory “/run/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Jan 23 17:56:35.829821 polkitd[2207]: Loading rules from directory /usr/local/share/polkit-1/rules.d Jan 23 17:56:35.829905 polkitd[2207]: Error opening rules directory: Error opening directory “/usr/local/share/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Jan 23 17:56:35.829993 polkitd[2207]: Loading rules from directory /usr/share/polkit-1/rules.d Jan 23 17:56:35.835407 polkitd[2207]: Finished loading, compiling and executing 2 rules Jan 23 17:56:35.838636 systemd[1]: Started polkit.service - Authorization Manager. Jan 23 17:56:35.844320 dbus-daemon[1986]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Jan 23 17:56:35.849396 polkitd[2207]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Jan 23 17:56:35.889872 amazon-ssm-agent[2170]: 2026-01-23 17:56:35.5207 INFO [amazon-ssm-agent] OS: linux, Arch: arm64 Jan 23 17:56:35.900598 systemd-hostnamed[2053]: Hostname set to (transient) Jan 23 17:56:35.900761 systemd-resolved[1841]: System hostname changed to 'ip-172-31-24-80'. Jan 23 17:56:35.989222 amazon-ssm-agent[2170]: 2026-01-23 17:56:35.5207 INFO [amazon-ssm-agent] Starting Core Agent Jan 23 17:56:36.090104 amazon-ssm-agent[2170]: 2026-01-23 17:56:35.5207 INFO [amazon-ssm-agent] Registrar detected. 
Attempting registration Jan 23 17:56:36.189831 amazon-ssm-agent[2170]: 2026-01-23 17:56:35.5208 INFO [Registrar] Starting registrar module Jan 23 17:56:36.289623 amazon-ssm-agent[2170]: 2026-01-23 17:56:35.5309 INFO [EC2Identity] Checking disk for registration info Jan 23 17:56:36.324617 tar[2005]: linux-arm64/README.md Jan 23 17:56:36.363661 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 23 17:56:36.390283 amazon-ssm-agent[2170]: 2026-01-23 17:56:35.5310 INFO [EC2Identity] No registration info found for ec2 instance, attempting registration Jan 23 17:56:36.492030 amazon-ssm-agent[2170]: 2026-01-23 17:56:35.5310 INFO [EC2Identity] Generating registration keypair Jan 23 17:56:36.594644 amazon-ssm-agent[2170]: 2026-01-23 17:56:36.5662 INFO [EC2Identity] Checking write access before registering Jan 23 17:56:36.616857 amazon-ssm-agent[2170]: 2026/01/23 17:56:36 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 23 17:56:36.618574 amazon-ssm-agent[2170]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 23 17:56:36.618574 amazon-ssm-agent[2170]: 2026/01/23 17:56:36 processing appconfig overrides Jan 23 17:56:36.652513 sshd_keygen[2039]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 23 17:56:36.657339 amazon-ssm-agent[2170]: 2026-01-23 17:56:36.5669 INFO [EC2Identity] Registering EC2 instance with Systems Manager Jan 23 17:56:36.657972 amazon-ssm-agent[2170]: 2026-01-23 17:56:36.6166 INFO [EC2Identity] EC2 registration was successful. Jan 23 17:56:36.658068 amazon-ssm-agent[2170]: 2026-01-23 17:56:36.6166 INFO [amazon-ssm-agent] Registration attempted. Resuming core agent startup. 
Jan 23 17:56:36.658068 amazon-ssm-agent[2170]: 2026-01-23 17:56:36.6167 INFO [CredentialRefresher] credentialRefresher has started Jan 23 17:56:36.658068 amazon-ssm-agent[2170]: 2026-01-23 17:56:36.6167 INFO [CredentialRefresher] Starting credentials refresher loop Jan 23 17:56:36.658068 amazon-ssm-agent[2170]: 2026-01-23 17:56:36.6552 INFO EC2RoleProvider Successfully connected with instance profile role credentials Jan 23 17:56:36.659265 amazon-ssm-agent[2170]: 2026-01-23 17:56:36.6572 INFO [CredentialRefresher] Credentials ready Jan 23 17:56:36.694250 amazon-ssm-agent[2170]: 2026-01-23 17:56:36.6582 INFO [CredentialRefresher] Next credential rotation will be in 29.9999501964 minutes Jan 23 17:56:36.703937 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 23 17:56:36.710352 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 23 17:56:36.716827 systemd[1]: Started sshd@0-172.31.24.80:22-68.220.241.50:60458.service - OpenSSH per-connection server daemon (68.220.241.50:60458). Jan 23 17:56:36.746262 systemd[1]: issuegen.service: Deactivated successfully. Jan 23 17:56:36.746934 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 23 17:56:36.757547 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 23 17:56:36.801818 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 23 17:56:36.813627 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 23 17:56:36.821035 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 23 17:56:36.829803 systemd[1]: Reached target getty.target - Login Prompts. Jan 23 17:56:36.958927 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 17:56:36.965489 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 23 17:56:36.975190 systemd[1]: Startup finished in 3.782s (kernel) + 9.869s (initrd) + 9.471s (userspace) = 23.124s. 
Jan 23 17:56:36.977486 (kubelet)[2267]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 23 17:56:37.321625 sshd[2252]: Accepted publickey for core from 68.220.241.50 port 60458 ssh2: RSA SHA256:bT2W1VfOscVmSCRasYr+KxB4wnT28qHFQXmybiJGx88 Jan 23 17:56:37.325221 sshd-session[2252]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 17:56:37.341255 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 23 17:56:37.345507 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 23 17:56:37.365215 systemd-logind[2001]: New session 1 of user core. Jan 23 17:56:37.387926 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 23 17:56:37.397354 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 23 17:56:37.417595 (systemd)[2278]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 23 17:56:37.424280 systemd-logind[2001]: New session c1 of user core. Jan 23 17:56:37.710225 amazon-ssm-agent[2170]: 2026-01-23 17:56:37.7089 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Jan 23 17:56:37.728716 systemd[2278]: Queued start job for default target default.target. Jan 23 17:56:37.734368 systemd[2278]: Created slice app.slice - User Application Slice. Jan 23 17:56:37.734433 systemd[2278]: Reached target paths.target - Paths. Jan 23 17:56:37.736304 systemd[2278]: Reached target timers.target - Timers. Jan 23 17:56:37.743279 systemd[2278]: Starting dbus.socket - D-Bus User Message Bus Socket... 
Jan 23 17:56:37.810446 amazon-ssm-agent[2170]: 2026-01-23 17:56:37.7162 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2287) started Jan 23 17:56:37.835162 systemd[2278]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 23 17:56:37.836971 systemd[2278]: Reached target sockets.target - Sockets. Jan 23 17:56:37.837261 systemd[2278]: Reached target basic.target - Basic System. Jan 23 17:56:37.837402 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 23 17:56:37.839474 systemd[2278]: Reached target default.target - Main User Target. Jan 23 17:56:37.839619 systemd[2278]: Startup finished in 401ms. Jan 23 17:56:37.845520 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 23 17:56:37.890305 kubelet[2267]: E0123 17:56:37.890240 2267 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 23 17:56:37.895280 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 23 17:56:37.895618 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 23 17:56:37.898270 systemd[1]: kubelet.service: Consumed 1.356s CPU time, 249.2M memory peak. Jan 23 17:56:37.910783 amazon-ssm-agent[2170]: 2026-01-23 17:56:37.7162 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Jan 23 17:56:38.223492 systemd[1]: Started sshd@1-172.31.24.80:22-68.220.241.50:60474.service - OpenSSH per-connection server daemon (68.220.241.50:60474). 
Jan 23 17:56:38.746445 sshd[2304]: Accepted publickey for core from 68.220.241.50 port 60474 ssh2: RSA SHA256:bT2W1VfOscVmSCRasYr+KxB4wnT28qHFQXmybiJGx88 Jan 23 17:56:38.748808 sshd-session[2304]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 17:56:38.756695 systemd-logind[2001]: New session 2 of user core. Jan 23 17:56:38.769303 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 23 17:56:39.103318 sshd[2307]: Connection closed by 68.220.241.50 port 60474 Jan 23 17:56:39.104342 sshd-session[2304]: pam_unix(sshd:session): session closed for user core Jan 23 17:56:39.112057 systemd[1]: sshd@1-172.31.24.80:22-68.220.241.50:60474.service: Deactivated successfully. Jan 23 17:56:39.115337 systemd[1]: session-2.scope: Deactivated successfully. Jan 23 17:56:39.117406 systemd-logind[2001]: Session 2 logged out. Waiting for processes to exit. Jan 23 17:56:39.120205 systemd-logind[2001]: Removed session 2. Jan 23 17:56:39.195778 systemd[1]: Started sshd@2-172.31.24.80:22-68.220.241.50:60490.service - OpenSSH per-connection server daemon (68.220.241.50:60490). Jan 23 17:56:39.717146 sshd[2313]: Accepted publickey for core from 68.220.241.50 port 60490 ssh2: RSA SHA256:bT2W1VfOscVmSCRasYr+KxB4wnT28qHFQXmybiJGx88 Jan 23 17:56:39.719061 sshd-session[2313]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 17:56:39.726779 systemd-logind[2001]: New session 3 of user core. Jan 23 17:56:39.737316 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 23 17:56:40.068865 sshd[2316]: Connection closed by 68.220.241.50 port 60490 Jan 23 17:56:40.069901 sshd-session[2313]: pam_unix(sshd:session): session closed for user core Jan 23 17:56:40.077869 systemd-logind[2001]: Session 3 logged out. Waiting for processes to exit. Jan 23 17:56:40.078863 systemd[1]: sshd@2-172.31.24.80:22-68.220.241.50:60490.service: Deactivated successfully. 
Jan 23 17:56:40.083221 systemd[1]: session-3.scope: Deactivated successfully. Jan 23 17:56:40.087815 systemd-logind[2001]: Removed session 3. Jan 23 17:56:40.167228 systemd[1]: Started sshd@3-172.31.24.80:22-68.220.241.50:60502.service - OpenSSH per-connection server daemon (68.220.241.50:60502). Jan 23 17:56:40.692175 sshd[2322]: Accepted publickey for core from 68.220.241.50 port 60502 ssh2: RSA SHA256:bT2W1VfOscVmSCRasYr+KxB4wnT28qHFQXmybiJGx88 Jan 23 17:56:40.694041 sshd-session[2322]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 17:56:40.703127 systemd-logind[2001]: New session 4 of user core. Jan 23 17:56:40.709326 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 23 17:56:41.044246 sshd[2325]: Connection closed by 68.220.241.50 port 60502 Jan 23 17:56:41.044977 sshd-session[2322]: pam_unix(sshd:session): session closed for user core Jan 23 17:56:41.053293 systemd-logind[2001]: Session 4 logged out. Waiting for processes to exit. Jan 23 17:56:41.054214 systemd[1]: sshd@3-172.31.24.80:22-68.220.241.50:60502.service: Deactivated successfully. Jan 23 17:56:41.059041 systemd[1]: session-4.scope: Deactivated successfully. Jan 23 17:56:41.063044 systemd-logind[2001]: Removed session 4. Jan 23 17:56:41.137066 systemd[1]: Started sshd@4-172.31.24.80:22-68.220.241.50:60508.service - OpenSSH per-connection server daemon (68.220.241.50:60508). Jan 23 17:56:41.650803 sshd[2331]: Accepted publickey for core from 68.220.241.50 port 60508 ssh2: RSA SHA256:bT2W1VfOscVmSCRasYr+KxB4wnT28qHFQXmybiJGx88 Jan 23 17:56:41.653070 sshd-session[2331]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 17:56:41.662184 systemd-logind[2001]: New session 5 of user core. Jan 23 17:56:41.669357 systemd[1]: Started session-5.scope - Session 5 of User core. 
Jan 23 17:56:41.963800 sudo[2335]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 23 17:56:41.964730 sudo[2335]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 23 17:56:41.987972 sudo[2335]: pam_unix(sudo:session): session closed for user root Jan 23 17:56:42.065006 sshd[2334]: Connection closed by 68.220.241.50 port 60508 Jan 23 17:56:42.066128 sshd-session[2331]: pam_unix(sshd:session): session closed for user core Jan 23 17:56:42.073008 systemd-logind[2001]: Session 5 logged out. Waiting for processes to exit. Jan 23 17:56:42.073292 systemd[1]: sshd@4-172.31.24.80:22-68.220.241.50:60508.service: Deactivated successfully. Jan 23 17:56:42.076422 systemd[1]: session-5.scope: Deactivated successfully. Jan 23 17:56:42.081990 systemd-logind[2001]: Removed session 5. Jan 23 17:56:42.155885 systemd[1]: Started sshd@5-172.31.24.80:22-68.220.241.50:58482.service - OpenSSH per-connection server daemon (68.220.241.50:58482). Jan 23 17:56:43.006937 systemd-resolved[1841]: Clock change detected. Flushing caches. Jan 23 17:56:43.070934 sshd[2341]: Accepted publickey for core from 68.220.241.50 port 58482 ssh2: RSA SHA256:bT2W1VfOscVmSCRasYr+KxB4wnT28qHFQXmybiJGx88 Jan 23 17:56:43.072161 sshd-session[2341]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 17:56:43.080964 systemd-logind[2001]: New session 6 of user core. Jan 23 17:56:43.088161 systemd[1]: Started session-6.scope - Session 6 of User core. 
Jan 23 17:56:43.346776 sudo[2346]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 23 17:56:43.347412 sudo[2346]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 23 17:56:43.355002 sudo[2346]: pam_unix(sudo:session): session closed for user root Jan 23 17:56:43.364684 sudo[2345]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jan 23 17:56:43.365313 sudo[2345]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 23 17:56:43.381079 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 23 17:56:43.443648 augenrules[2368]: No rules Jan 23 17:56:43.446346 systemd[1]: audit-rules.service: Deactivated successfully. Jan 23 17:56:43.446904 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 23 17:56:43.450163 sudo[2345]: pam_unix(sudo:session): session closed for user root Jan 23 17:56:43.526897 sshd[2344]: Connection closed by 68.220.241.50 port 58482 Jan 23 17:56:43.527349 sshd-session[2341]: pam_unix(sshd:session): session closed for user core Jan 23 17:56:43.535703 systemd-logind[2001]: Session 6 logged out. Waiting for processes to exit. Jan 23 17:56:43.536317 systemd[1]: sshd@5-172.31.24.80:22-68.220.241.50:58482.service: Deactivated successfully. Jan 23 17:56:43.539404 systemd[1]: session-6.scope: Deactivated successfully. Jan 23 17:56:43.542899 systemd-logind[2001]: Removed session 6. Jan 23 17:56:43.634269 systemd[1]: Started sshd@6-172.31.24.80:22-68.220.241.50:58492.service - OpenSSH per-connection server daemon (68.220.241.50:58492). 
Jan 23 17:56:44.187562 sshd[2377]: Accepted publickey for core from 68.220.241.50 port 58492 ssh2: RSA SHA256:bT2W1VfOscVmSCRasYr+KxB4wnT28qHFQXmybiJGx88 Jan 23 17:56:44.189784 sshd-session[2377]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 17:56:44.196988 systemd-logind[2001]: New session 7 of user core. Jan 23 17:56:44.206138 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 23 17:56:44.483902 sudo[2381]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 23 17:56:44.485086 sudo[2381]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 23 17:56:45.499819 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 23 17:56:45.516410 (dockerd)[2399]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 23 17:56:46.113591 dockerd[2399]: time="2026-01-23T17:56:46.113495179Z" level=info msg="Starting up" Jan 23 17:56:46.115682 dockerd[2399]: time="2026-01-23T17:56:46.115630699Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Jan 23 17:56:46.139436 dockerd[2399]: time="2026-01-23T17:56:46.139361767Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Jan 23 17:56:46.194674 systemd[1]: var-lib-docker-metacopy\x2dcheck2781282089-merged.mount: Deactivated successfully. Jan 23 17:56:46.213919 dockerd[2399]: time="2026-01-23T17:56:46.213613039Z" level=info msg="Loading containers: start." Jan 23 17:56:46.228907 kernel: Initializing XFRM netlink socket Jan 23 17:56:46.582395 (udev-worker)[2421]: Network interface NamePolicy= disabled on kernel command line. Jan 23 17:56:46.660509 systemd-networkd[1840]: docker0: Link UP Jan 23 17:56:46.673604 dockerd[2399]: time="2026-01-23T17:56:46.673533322Z" level=info msg="Loading containers: done." 
Jan 23 17:56:46.698578 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck4265933197-merged.mount: Deactivated successfully. Jan 23 17:56:46.713984 dockerd[2399]: time="2026-01-23T17:56:46.713791906Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 23 17:56:46.714540 dockerd[2399]: time="2026-01-23T17:56:46.714337510Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Jan 23 17:56:46.714671 dockerd[2399]: time="2026-01-23T17:56:46.714646954Z" level=info msg="Initializing buildkit" Jan 23 17:56:46.766247 dockerd[2399]: time="2026-01-23T17:56:46.766193566Z" level=info msg="Completed buildkit initialization" Jan 23 17:56:46.784272 dockerd[2399]: time="2026-01-23T17:56:46.784204054Z" level=info msg="Daemon has completed initialization" Jan 23 17:56:46.784712 dockerd[2399]: time="2026-01-23T17:56:46.784496746Z" level=info msg="API listen on /run/docker.sock" Jan 23 17:56:46.784866 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 23 17:56:47.994771 containerd[2025]: time="2026-01-23T17:56:47.994196472Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.3\"" Jan 23 17:56:48.507365 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 23 17:56:48.513226 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 17:56:48.592801 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount959726031.mount: Deactivated successfully. Jan 23 17:56:48.960413 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 23 17:56:48.981622 (kubelet)[2630]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 23 17:56:49.089401 kubelet[2630]: E0123 17:56:49.089296 2630 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 23 17:56:49.105519 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 23 17:56:49.105843 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 23 17:56:49.106461 systemd[1]: kubelet.service: Consumed 332ms CPU time, 107.4M memory peak.
Jan 23 17:56:50.117687 containerd[2025]: time="2026-01-23T17:56:50.117601019Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 17:56:50.121586 containerd[2025]: time="2026-01-23T17:56:50.121085975Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.34.3: active requests=0, bytes read=24571040"
Jan 23 17:56:50.123889 containerd[2025]: time="2026-01-23T17:56:50.123806567Z" level=info msg="ImageCreate event name:\"sha256:cf65ae6c8f700cc27f57b7305c6e2b71276a7eed943c559a0091e1e667169896\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 17:56:50.130592 containerd[2025]: time="2026-01-23T17:56:50.130541411Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:5af1030676ceca025742ef5e73a504d11b59be0e5551cdb8c9cf0d3c1231b460\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 17:56:50.131937 containerd[2025]: time="2026-01-23T17:56:50.131830955Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.34.3\" with image id \"sha256:cf65ae6c8f700cc27f57b7305c6e2b71276a7eed943c559a0091e1e667169896\", repo tag \"registry.k8s.io/kube-apiserver:v1.34.3\", repo digest \"registry.k8s.io/kube-apiserver@sha256:5af1030676ceca025742ef5e73a504d11b59be0e5551cdb8c9cf0d3c1231b460\", size \"24567639\" in 2.137577171s"
Jan 23 17:56:50.133102 containerd[2025]: time="2026-01-23T17:56:50.133030691Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.3\" returns image reference \"sha256:cf65ae6c8f700cc27f57b7305c6e2b71276a7eed943c559a0091e1e667169896\""
Jan 23 17:56:50.135365 containerd[2025]: time="2026-01-23T17:56:50.135015923Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.3\""
Jan 23 17:56:51.424230 containerd[2025]: time="2026-01-23T17:56:51.424152793Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 17:56:51.426354 containerd[2025]: time="2026-01-23T17:56:51.426077821Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.34.3: active requests=0, bytes read=19135477"
Jan 23 17:56:51.427544 containerd[2025]: time="2026-01-23T17:56:51.427487413Z" level=info msg="ImageCreate event name:\"sha256:7ada8ff13e54bf42ca66f146b54cd7b1757797d93b3b9ba06df034cdddb5ab22\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 17:56:51.433349 containerd[2025]: time="2026-01-23T17:56:51.433281481Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:716a210d31ee5e27053ea0e1a3a3deb4910791a85ba4b1120410b5a4cbcf1954\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 17:56:51.435275 containerd[2025]: time="2026-01-23T17:56:51.435224977Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.34.3\" with image id \"sha256:7ada8ff13e54bf42ca66f146b54cd7b1757797d93b3b9ba06df034cdddb5ab22\", repo tag \"registry.k8s.io/kube-controller-manager:v1.34.3\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:716a210d31ee5e27053ea0e1a3a3deb4910791a85ba4b1120410b5a4cbcf1954\", size \"20719958\" in 1.300144614s"
Jan 23 17:56:51.435556 containerd[2025]: time="2026-01-23T17:56:51.435385369Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.3\" returns image reference \"sha256:7ada8ff13e54bf42ca66f146b54cd7b1757797d93b3b9ba06df034cdddb5ab22\""
Jan 23 17:56:51.436157 containerd[2025]: time="2026-01-23T17:56:51.436021273Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.3\""
Jan 23 17:56:52.490695 containerd[2025]: time="2026-01-23T17:56:52.490618995Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 17:56:52.493031 containerd[2025]: time="2026-01-23T17:56:52.492977235Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.34.3: active requests=0, bytes read=14191716"
Jan 23 17:56:52.494600 containerd[2025]: time="2026-01-23T17:56:52.494545263Z" level=info msg="ImageCreate event name:\"sha256:2f2aa21d34d2db37a290752f34faf1d41087c02e18aa9d046a8b4ba1e29421a6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 17:56:52.501456 containerd[2025]: time="2026-01-23T17:56:52.501016707Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:f9a9bc7948fd804ef02255fe82ac2e85d2a66534bae2fe1348c14849260a1fe2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 17:56:52.503071 containerd[2025]: time="2026-01-23T17:56:52.503025471Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.34.3\" with image id \"sha256:2f2aa21d34d2db37a290752f34faf1d41087c02e18aa9d046a8b4ba1e29421a6\", repo tag \"registry.k8s.io/kube-scheduler:v1.34.3\", repo digest \"registry.k8s.io/kube-scheduler@sha256:f9a9bc7948fd804ef02255fe82ac2e85d2a66534bae2fe1348c14849260a1fe2\", size \"15776215\" in 1.066704822s"
Jan 23 17:56:52.503231 containerd[2025]: time="2026-01-23T17:56:52.503203347Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.3\" returns image reference \"sha256:2f2aa21d34d2db37a290752f34faf1d41087c02e18aa9d046a8b4ba1e29421a6\""
Jan 23 17:56:52.504153 containerd[2025]: time="2026-01-23T17:56:52.504114723Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.3\""
Jan 23 17:56:53.757743 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount633543950.mount: Deactivated successfully.
Jan 23 17:56:54.176477 containerd[2025]: time="2026-01-23T17:56:54.175816167Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 17:56:54.177759 containerd[2025]: time="2026-01-23T17:56:54.177112011Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.34.3: active requests=0, bytes read=22805253"
Jan 23 17:56:54.178813 containerd[2025]: time="2026-01-23T17:56:54.178742547Z" level=info msg="ImageCreate event name:\"sha256:4461daf6b6af87cf200fc22cecc9a2120959aabaf5712ba54ef5b4a6361d1162\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 17:56:54.183479 containerd[2025]: time="2026-01-23T17:56:54.183426939Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:7298ab89a103523d02ff4f49bedf9359710af61df92efdc07bac873064f03ed6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 17:56:54.184614 containerd[2025]: time="2026-01-23T17:56:54.184553607Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.34.3\" with image id \"sha256:4461daf6b6af87cf200fc22cecc9a2120959aabaf5712ba54ef5b4a6361d1162\", repo tag \"registry.k8s.io/kube-proxy:v1.34.3\", repo digest \"registry.k8s.io/kube-proxy@sha256:7298ab89a103523d02ff4f49bedf9359710af61df92efdc07bac873064f03ed6\", size \"22804272\" in 1.679800448s"
Jan 23 17:56:54.184614 containerd[2025]: time="2026-01-23T17:56:54.184607571Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.3\" returns image reference \"sha256:4461daf6b6af87cf200fc22cecc9a2120959aabaf5712ba54ef5b4a6361d1162\""
Jan 23 17:56:54.185627 containerd[2025]: time="2026-01-23T17:56:54.185577315Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\""
Jan 23 17:56:54.761329 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1974454010.mount: Deactivated successfully.
Jan 23 17:56:55.938473 containerd[2025]: time="2026-01-23T17:56:55.938381708Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 17:56:55.942567 containerd[2025]: time="2026-01-23T17:56:55.942496172Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.1: active requests=0, bytes read=20395406"
Jan 23 17:56:55.944745 containerd[2025]: time="2026-01-23T17:56:55.944687552Z" level=info msg="ImageCreate event name:\"sha256:138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 17:56:55.950918 containerd[2025]: time="2026-01-23T17:56:55.950505584Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 17:56:55.953078 containerd[2025]: time="2026-01-23T17:56:55.952438160Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.1\" with image id \"sha256:138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\", size \"20392204\" in 1.766802801s"
Jan 23 17:56:55.953078 containerd[2025]: time="2026-01-23T17:56:55.952496276Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\" returns image reference \"sha256:138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc\""
Jan 23 17:56:55.953436 containerd[2025]: time="2026-01-23T17:56:55.953374976Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\""
Jan 23 17:56:56.428224 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1641845556.mount: Deactivated successfully.
Jan 23 17:56:56.441211 containerd[2025]: time="2026-01-23T17:56:56.439991562Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 17:56:56.441939 containerd[2025]: time="2026-01-23T17:56:56.441904518Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=268709"
Jan 23 17:56:56.444435 containerd[2025]: time="2026-01-23T17:56:56.444392658Z" level=info msg="ImageCreate event name:\"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 17:56:56.448814 containerd[2025]: time="2026-01-23T17:56:56.448763238Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 17:56:56.450213 containerd[2025]: time="2026-01-23T17:56:56.450149850Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"267939\" in 496.717178ms"
Jan 23 17:56:56.450213 containerd[2025]: time="2026-01-23T17:56:56.450207270Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\""
Jan 23 17:56:56.450994 containerd[2025]: time="2026-01-23T17:56:56.450930558Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\""
Jan 23 17:56:57.037621 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3243753496.mount: Deactivated successfully.
Jan 23 17:56:59.257464 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Jan 23 17:56:59.261655 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 23 17:56:59.653722 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 23 17:56:59.664429 (kubelet)[2814]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 23 17:56:59.767607 kubelet[2814]: E0123 17:56:59.767528 2814 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 23 17:56:59.779330 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 23 17:56:59.779630 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 23 17:56:59.780217 systemd[1]: kubelet.service: Consumed 332ms CPU time, 108.4M memory peak.
Jan 23 17:57:00.768114 containerd[2025]: time="2026-01-23T17:57:00.768032316Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.4-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 17:57:00.770622 containerd[2025]: time="2026-01-23T17:57:00.770087124Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.4-0: active requests=0, bytes read=98062987"
Jan 23 17:57:00.772817 containerd[2025]: time="2026-01-23T17:57:00.772758312Z" level=info msg="ImageCreate event name:\"sha256:a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 17:57:00.778697 containerd[2025]: time="2026-01-23T17:57:00.778632036Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 17:57:00.780972 containerd[2025]: time="2026-01-23T17:57:00.780916140Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.4-0\" with image id \"sha256:a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e\", repo tag \"registry.k8s.io/etcd:3.6.4-0\", repo digest \"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\", size \"98207481\" in 4.329924874s"
Jan 23 17:57:00.781063 containerd[2025]: time="2026-01-23T17:57:00.780969804Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\" returns image reference \"sha256:a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e\""
Jan 23 17:57:06.311318 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Jan 23 17:57:10.007590 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Jan 23 17:57:10.013198 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 23 17:57:10.364191 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 23 17:57:10.381585 (kubelet)[2855]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 23 17:57:10.461678 kubelet[2855]: E0123 17:57:10.461595 2855 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 23 17:57:10.467683 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 23 17:57:10.468066 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 23 17:57:10.468921 systemd[1]: kubelet.service: Consumed 310ms CPU time, 106.7M memory peak.
Jan 23 17:57:11.046776 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 23 17:57:11.047744 systemd[1]: kubelet.service: Consumed 310ms CPU time, 106.7M memory peak.
Jan 23 17:57:11.051810 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 23 17:57:11.109196 systemd[1]: Reload requested from client PID 2869 ('systemctl') (unit session-7.scope)...
Jan 23 17:57:11.109387 systemd[1]: Reloading...
Jan 23 17:57:11.351922 zram_generator::config[2919]: No configuration found.
Jan 23 17:57:11.796197 systemd[1]: Reloading finished in 686 ms.
Jan 23 17:57:11.888825 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Jan 23 17:57:11.889023 systemd[1]: kubelet.service: Failed with result 'signal'.
Jan 23 17:57:11.889552 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 23 17:57:11.889636 systemd[1]: kubelet.service: Consumed 224ms CPU time, 94.9M memory peak.
Jan 23 17:57:11.894425 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 23 17:57:12.223754 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 23 17:57:12.239396 (kubelet)[2976]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jan 23 17:57:12.313539 kubelet[2976]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Jan 23 17:57:12.314058 kubelet[2976]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 23 17:57:12.314297 kubelet[2976]: I0123 17:57:12.314252 2976 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jan 23 17:57:13.384014 kubelet[2976]: I0123 17:57:13.383692 2976 server.go:529] "Kubelet version" kubeletVersion="v1.34.1"
Jan 23 17:57:13.384014 kubelet[2976]: I0123 17:57:13.383755 2976 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jan 23 17:57:13.386305 kubelet[2976]: I0123 17:57:13.386255 2976 watchdog_linux.go:95] "Systemd watchdog is not enabled"
Jan 23 17:57:13.386305 kubelet[2976]: I0123 17:57:13.386295 2976 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Jan 23 17:57:13.386736 kubelet[2976]: I0123 17:57:13.386692 2976 server.go:956] "Client rotation is on, will bootstrap in background"
Jan 23 17:57:13.399459 kubelet[2976]: E0123 17:57:13.399345 2976 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://172.31.24.80:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.24.80:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Jan 23 17:57:13.402136 kubelet[2976]: I0123 17:57:13.401684 2976 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jan 23 17:57:13.409034 kubelet[2976]: I0123 17:57:13.408973 2976 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Jan 23 17:57:13.414833 kubelet[2976]: I0123 17:57:13.414800 2976 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /"
Jan 23 17:57:13.415898 kubelet[2976]: I0123 17:57:13.415464 2976 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jan 23 17:57:13.415898 kubelet[2976]: I0123 17:57:13.415510 2976 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-24-80","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jan 23 17:57:13.415898 kubelet[2976]: I0123 17:57:13.415735 2976 topology_manager.go:138] "Creating topology manager with none policy"
Jan 23 17:57:13.415898 kubelet[2976]: I0123 17:57:13.415752 2976 container_manager_linux.go:306] "Creating device plugin manager"
Jan 23 17:57:13.416392 kubelet[2976]: I0123 17:57:13.416369 2976 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager"
Jan 23 17:57:13.423472 kubelet[2976]: I0123 17:57:13.423421 2976 state_mem.go:36] "Initialized new in-memory state store"
Jan 23 17:57:13.426248 kubelet[2976]: I0123 17:57:13.426203 2976 kubelet.go:475] "Attempting to sync node with API server"
Jan 23 17:57:13.427738 kubelet[2976]: I0123 17:57:13.426409 2976 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests"
Jan 23 17:57:13.427738 kubelet[2976]: I0123 17:57:13.426466 2976 kubelet.go:387] "Adding apiserver pod source"
Jan 23 17:57:13.427738 kubelet[2976]: I0123 17:57:13.426489 2976 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jan 23 17:57:13.429827 kubelet[2976]: E0123 17:57:13.429757 2976 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.31.24.80:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-24-80&limit=500&resourceVersion=0\": dial tcp 172.31.24.80:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Jan 23 17:57:13.430944 kubelet[2976]: E0123 17:57:13.430844 2976 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.31.24.80:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.24.80:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Jan 23 17:57:13.431094 kubelet[2976]: I0123 17:57:13.431053 2976 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1"
Jan 23 17:57:13.432167 kubelet[2976]: I0123 17:57:13.432125 2976 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Jan 23 17:57:13.432279 kubelet[2976]: I0123 17:57:13.432189 2976 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled"
Jan 23 17:57:13.432279 kubelet[2976]: W0123 17:57:13.432254 2976 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Jan 23 17:57:13.436975 kubelet[2976]: I0123 17:57:13.436792 2976 server.go:1262] "Started kubelet"
Jan 23 17:57:13.439507 kubelet[2976]: I0123 17:57:13.439450 2976 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jan 23 17:57:13.444940 kubelet[2976]: I0123 17:57:13.444006 2976 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Jan 23 17:57:13.445846 kubelet[2976]: I0123 17:57:13.445813 2976 server.go:310] "Adding debug handlers to kubelet server"
Jan 23 17:57:13.451376 kubelet[2976]: I0123 17:57:13.451295 2976 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6"
Jan 23 17:57:13.451857 kubelet[2976]: I0123 17:57:13.451797 2976 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jan 23 17:57:13.452045 kubelet[2976]: I0123 17:57:13.452019 2976 server_v1.go:49] "podresources" method="list" useActivePods=true
Jan 23 17:57:13.452415 kubelet[2976]: I0123 17:57:13.452390 2976 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jan 23 17:57:13.457120 kubelet[2976]: I0123 17:57:13.457067 2976 volume_manager.go:313] "Starting Kubelet Volume Manager"
Jan 23 17:57:13.457531 kubelet[2976]: E0123 17:57:13.457489 2976 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ip-172-31-24-80\" not found"
Jan 23 17:57:13.458793 kubelet[2976]: I0123 17:57:13.458754 2976 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Jan 23 17:57:13.464968 kubelet[2976]: I0123 17:57:13.464896 2976 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Jan 23 17:57:13.465086 kubelet[2976]: I0123 17:57:13.465024 2976 reconciler.go:29] "Reconciler: start to sync state"
Jan 23 17:57:13.465784 kubelet[2976]: E0123 17:57:13.465715 2976 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.31.24.80:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.24.80:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Jan 23 17:57:13.469770 kubelet[2976]: E0123 17:57:13.465817 2976 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.24.80:6443/api/v1/namespaces/default/events\": dial tcp 172.31.24.80:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-24-80.188d6ddf0fcda2c3 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-24-80,UID:ip-172-31-24-80,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-24-80,},FirstTimestamp:2026-01-23 17:57:13.436742339 +0000 UTC m=+1.190960203,LastTimestamp:2026-01-23 17:57:13.436742339 +0000 UTC m=+1.190960203,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-24-80,}"
Jan 23 17:57:13.469770 kubelet[2976]: E0123 17:57:13.469040 2976 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.24.80:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-24-80?timeout=10s\": dial tcp 172.31.24.80:6443: connect: connection refused" interval="200ms"
Jan 23 17:57:13.470995 kubelet[2976]: I0123 17:57:13.470809 2976 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jan 23 17:57:13.473616 kubelet[2976]: I0123 17:57:13.473583 2976 factory.go:223] Registration of the containerd container factory successfully
Jan 23 17:57:13.473830 kubelet[2976]: I0123 17:57:13.473807 2976 factory.go:223] Registration of the systemd container factory successfully
Jan 23 17:57:13.485924 kubelet[2976]: I0123 17:57:13.484025 2976 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4"
Jan 23 17:57:13.485924 kubelet[2976]: I0123 17:57:13.484069 2976 status_manager.go:244] "Starting to sync pod status with apiserver"
Jan 23 17:57:13.485924 kubelet[2976]: I0123 17:57:13.484108 2976 kubelet.go:2427] "Starting kubelet main sync loop"
Jan 23 17:57:13.485924 kubelet[2976]: E0123 17:57:13.484172 2976 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jan 23 17:57:13.490510 kubelet[2976]: E0123 17:57:13.490468 2976 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jan 23 17:57:13.494594 kubelet[2976]: E0123 17:57:13.494536 2976 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.31.24.80:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.24.80:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Jan 23 17:57:13.527161 kubelet[2976]: I0123 17:57:13.526739 2976 cpu_manager.go:221] "Starting CPU manager" policy="none"
Jan 23 17:57:13.527161 kubelet[2976]: I0123 17:57:13.526771 2976 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Jan 23 17:57:13.527161 kubelet[2976]: I0123 17:57:13.526803 2976 state_mem.go:36] "Initialized new in-memory state store"
Jan 23 17:57:13.531206 kubelet[2976]: I0123 17:57:13.531177 2976 policy_none.go:49] "None policy: Start"
Jan 23 17:57:13.531362 kubelet[2976]: I0123 17:57:13.531344 2976 memory_manager.go:187] "Starting memorymanager" policy="None"
Jan 23 17:57:13.531481 kubelet[2976]: I0123 17:57:13.531462 2976 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint"
Jan 23 17:57:13.535216 kubelet[2976]: I0123 17:57:13.535190 2976 policy_none.go:47] "Start"
Jan 23 17:57:13.543777 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Jan 23 17:57:13.557964 kubelet[2976]: E0123 17:57:13.557920 2976 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ip-172-31-24-80\" not found"
Jan 23 17:57:13.562758 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Jan 23 17:57:13.570599 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Jan 23 17:57:13.584473 kubelet[2976]: E0123 17:57:13.584415 2976 kubelet.go:2451] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Jan 23 17:57:13.585659 kubelet[2976]: E0123 17:57:13.585515 2976 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Jan 23 17:57:13.586310 kubelet[2976]: I0123 17:57:13.586287 2976 eviction_manager.go:189] "Eviction manager: starting control loop"
Jan 23 17:57:13.588907 kubelet[2976]: I0123 17:57:13.587826 2976 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jan 23 17:57:13.588907 kubelet[2976]: I0123 17:57:13.588363 2976 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jan 23 17:57:13.591963 kubelet[2976]: E0123 17:57:13.591824 2976 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Jan 23 17:57:13.592097 kubelet[2976]: E0123 17:57:13.592046 2976 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-24-80\" not found"
Jan 23 17:57:13.671678 kubelet[2976]: E0123 17:57:13.669998 2976 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.24.80:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-24-80?timeout=10s\": dial tcp 172.31.24.80:6443: connect: connection refused" interval="400ms"
Jan 23 17:57:13.690308 kubelet[2976]: I0123 17:57:13.690243 2976 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-24-80"
Jan 23 17:57:13.691103 kubelet[2976]: E0123 17:57:13.690852 2976 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.24.80:6443/api/v1/nodes\": dial tcp 172.31.24.80:6443: connect: connection refused" node="ip-172-31-24-80"
Jan 23 17:57:13.807514 systemd[1]: Created slice kubepods-burstable-poda96c89cc58a3ca02fd4d415dff4d970a.slice - libcontainer container kubepods-burstable-poda96c89cc58a3ca02fd4d415dff4d970a.slice.
Jan 23 17:57:13.828194 kubelet[2976]: E0123 17:57:13.828085 2976 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-24-80\" not found" node="ip-172-31-24-80"
Jan 23 17:57:13.834392 systemd[1]: Created slice kubepods-burstable-pod40874e87cdc7f0c6ddd99c42986ce203.slice - libcontainer container kubepods-burstable-pod40874e87cdc7f0c6ddd99c42986ce203.slice.
Jan 23 17:57:13.838867 kubelet[2976]: E0123 17:57:13.838811 2976 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-24-80\" not found" node="ip-172-31-24-80" Jan 23 17:57:13.846244 systemd[1]: Created slice kubepods-burstable-podac0f2d78a7307bd3c0a169bc9aa73f7d.slice - libcontainer container kubepods-burstable-podac0f2d78a7307bd3c0a169bc9aa73f7d.slice. Jan 23 17:57:13.849753 kubelet[2976]: E0123 17:57:13.849420 2976 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-24-80\" not found" node="ip-172-31-24-80" Jan 23 17:57:13.867771 kubelet[2976]: I0123 17:57:13.867727 2976 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/40874e87cdc7f0c6ddd99c42986ce203-k8s-certs\") pod \"kube-controller-manager-ip-172-31-24-80\" (UID: \"40874e87cdc7f0c6ddd99c42986ce203\") " pod="kube-system/kube-controller-manager-ip-172-31-24-80" Jan 23 17:57:13.868047 kubelet[2976]: I0123 17:57:13.868020 2976 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ac0f2d78a7307bd3c0a169bc9aa73f7d-kubeconfig\") pod \"kube-scheduler-ip-172-31-24-80\" (UID: \"ac0f2d78a7307bd3c0a169bc9aa73f7d\") " pod="kube-system/kube-scheduler-ip-172-31-24-80" Jan 23 17:57:13.868194 kubelet[2976]: I0123 17:57:13.868169 2976 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a96c89cc58a3ca02fd4d415dff4d970a-ca-certs\") pod \"kube-apiserver-ip-172-31-24-80\" (UID: \"a96c89cc58a3ca02fd4d415dff4d970a\") " pod="kube-system/kube-apiserver-ip-172-31-24-80" Jan 23 17:57:13.868331 kubelet[2976]: I0123 17:57:13.868308 2976 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a96c89cc58a3ca02fd4d415dff4d970a-k8s-certs\") pod \"kube-apiserver-ip-172-31-24-80\" (UID: \"a96c89cc58a3ca02fd4d415dff4d970a\") " pod="kube-system/kube-apiserver-ip-172-31-24-80" Jan 23 17:57:13.868464 kubelet[2976]: I0123 17:57:13.868436 2976 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a96c89cc58a3ca02fd4d415dff4d970a-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-24-80\" (UID: \"a96c89cc58a3ca02fd4d415dff4d970a\") " pod="kube-system/kube-apiserver-ip-172-31-24-80" Jan 23 17:57:13.868606 kubelet[2976]: I0123 17:57:13.868582 2976 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/40874e87cdc7f0c6ddd99c42986ce203-kubeconfig\") pod \"kube-controller-manager-ip-172-31-24-80\" (UID: \"40874e87cdc7f0c6ddd99c42986ce203\") " pod="kube-system/kube-controller-manager-ip-172-31-24-80" Jan 23 17:57:13.868740 kubelet[2976]: I0123 17:57:13.868713 2976 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/40874e87cdc7f0c6ddd99c42986ce203-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-24-80\" (UID: \"40874e87cdc7f0c6ddd99c42986ce203\") " pod="kube-system/kube-controller-manager-ip-172-31-24-80" Jan 23 17:57:13.868905 kubelet[2976]: I0123 17:57:13.868860 2976 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/40874e87cdc7f0c6ddd99c42986ce203-ca-certs\") pod \"kube-controller-manager-ip-172-31-24-80\" (UID: \"40874e87cdc7f0c6ddd99c42986ce203\") " pod="kube-system/kube-controller-manager-ip-172-31-24-80" Jan 23 17:57:13.869044 kubelet[2976]: I0123 17:57:13.869021 2976 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/40874e87cdc7f0c6ddd99c42986ce203-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-24-80\" (UID: \"40874e87cdc7f0c6ddd99c42986ce203\") " pod="kube-system/kube-controller-manager-ip-172-31-24-80" Jan 23 17:57:13.894403 kubelet[2976]: I0123 17:57:13.894343 2976 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-24-80" Jan 23 17:57:13.895048 kubelet[2976]: E0123 17:57:13.894987 2976 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.24.80:6443/api/v1/nodes\": dial tcp 172.31.24.80:6443: connect: connection refused" node="ip-172-31-24-80" Jan 23 17:57:14.071343 kubelet[2976]: E0123 17:57:14.071261 2976 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.24.80:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-24-80?timeout=10s\": dial tcp 172.31.24.80:6443: connect: connection refused" interval="800ms" Jan 23 17:57:14.136149 containerd[2025]: time="2026-01-23T17:57:14.135050650Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-24-80,Uid:a96c89cc58a3ca02fd4d415dff4d970a,Namespace:kube-system,Attempt:0,}" Jan 23 17:57:14.143862 containerd[2025]: time="2026-01-23T17:57:14.143801854Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-24-80,Uid:40874e87cdc7f0c6ddd99c42986ce203,Namespace:kube-system,Attempt:0,}" Jan 23 17:57:14.154821 containerd[2025]: time="2026-01-23T17:57:14.154745782Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-24-80,Uid:ac0f2d78a7307bd3c0a169bc9aa73f7d,Namespace:kube-system,Attempt:0,}" Jan 23 17:57:14.253995 kubelet[2976]: E0123 17:57:14.253855 2976 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get 
\"https://172.31.24.80:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-24-80&limit=500&resourceVersion=0\": dial tcp 172.31.24.80:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 23 17:57:14.281164 kubelet[2976]: E0123 17:57:14.281098 2976 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.31.24.80:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.24.80:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 23 17:57:14.297727 kubelet[2976]: I0123 17:57:14.297596 2976 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-24-80" Jan 23 17:57:14.298452 kubelet[2976]: E0123 17:57:14.298400 2976 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.24.80:6443/api/v1/nodes\": dial tcp 172.31.24.80:6443: connect: connection refused" node="ip-172-31-24-80" Jan 23 17:57:14.435777 kubelet[2976]: E0123 17:57:14.435646 2976 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.31.24.80:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.24.80:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 23 17:57:14.638564 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3906694148.mount: Deactivated successfully. 
Jan 23 17:57:14.651915 containerd[2025]: time="2026-01-23T17:57:14.651584737Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 23 17:57:14.658914 containerd[2025]: time="2026-01-23T17:57:14.658831465Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268703" Jan 23 17:57:14.664570 containerd[2025]: time="2026-01-23T17:57:14.664335301Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 23 17:57:14.671913 containerd[2025]: time="2026-01-23T17:57:14.671433697Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 23 17:57:14.673115 containerd[2025]: time="2026-01-23T17:57:14.673074385Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Jan 23 17:57:14.675599 containerd[2025]: time="2026-01-23T17:57:14.675518725Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 23 17:57:14.677192 containerd[2025]: time="2026-01-23T17:57:14.677128537Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 538.756287ms" Jan 23 17:57:14.679706 containerd[2025]: 
time="2026-01-23T17:57:14.679506049Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Jan 23 17:57:14.680100 containerd[2025]: time="2026-01-23T17:57:14.680037697Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 23 17:57:14.688521 containerd[2025]: time="2026-01-23T17:57:14.688087321Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 530.611707ms" Jan 23 17:57:14.691897 containerd[2025]: time="2026-01-23T17:57:14.691813825Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 545.357379ms" Jan 23 17:57:14.745901 containerd[2025]: time="2026-01-23T17:57:14.744141805Z" level=info msg="connecting to shim 3c60a51b4211a5ce1af8896f6ea7abba79d4f67e61622b2ceff964af64fe7b94" address="unix:///run/containerd/s/d430923e715ed540f42e13bd2629585d192d739d80e1f2132aa174aacdc943d8" namespace=k8s.io protocol=ttrpc version=3 Jan 23 17:57:14.766286 containerd[2025]: time="2026-01-23T17:57:14.766211893Z" level=info msg="connecting to shim 418755a75e4f8817b66a30560b386c9ac6987906c18caf2067c5f040bb5aa4ba" address="unix:///run/containerd/s/f58dc6b90715908e55c8207fbef236881e4eb65fa8c67220fd9534b144472b89" namespace=k8s.io protocol=ttrpc version=3 Jan 23 17:57:14.778720 containerd[2025]: time="2026-01-23T17:57:14.778644133Z" level=info msg="connecting to shim 
6fe30b7ae1d33ca6f9b2cb8c79a4731bd2224161c5815cb6324cfb954abfe698" address="unix:///run/containerd/s/149a3be8b8665d4a88ef6ebe26f73051546d25abd9bc62f292603369bbcbd8d7" namespace=k8s.io protocol=ttrpc version=3 Jan 23 17:57:14.824192 systemd[1]: Started cri-containerd-3c60a51b4211a5ce1af8896f6ea7abba79d4f67e61622b2ceff964af64fe7b94.scope - libcontainer container 3c60a51b4211a5ce1af8896f6ea7abba79d4f67e61622b2ceff964af64fe7b94. Jan 23 17:57:14.838553 systemd[1]: Started cri-containerd-418755a75e4f8817b66a30560b386c9ac6987906c18caf2067c5f040bb5aa4ba.scope - libcontainer container 418755a75e4f8817b66a30560b386c9ac6987906c18caf2067c5f040bb5aa4ba. Jan 23 17:57:14.865048 systemd[1]: Started cri-containerd-6fe30b7ae1d33ca6f9b2cb8c79a4731bd2224161c5815cb6324cfb954abfe698.scope - libcontainer container 6fe30b7ae1d33ca6f9b2cb8c79a4731bd2224161c5815cb6324cfb954abfe698. Jan 23 17:57:14.873018 kubelet[2976]: E0123 17:57:14.872934 2976 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.24.80:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-24-80?timeout=10s\": dial tcp 172.31.24.80:6443: connect: connection refused" interval="1.6s" Jan 23 17:57:14.977981 kubelet[2976]: E0123 17:57:14.977793 2976 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.31.24.80:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.24.80:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 23 17:57:14.985472 containerd[2025]: time="2026-01-23T17:57:14.985336646Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-24-80,Uid:a96c89cc58a3ca02fd4d415dff4d970a,Namespace:kube-system,Attempt:0,} returns sandbox id \"3c60a51b4211a5ce1af8896f6ea7abba79d4f67e61622b2ceff964af64fe7b94\"" Jan 23 17:57:15.007429 containerd[2025]: 
time="2026-01-23T17:57:15.007362826Z" level=info msg="CreateContainer within sandbox \"3c60a51b4211a5ce1af8896f6ea7abba79d4f67e61622b2ceff964af64fe7b94\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 23 17:57:15.010283 containerd[2025]: time="2026-01-23T17:57:15.010129678Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-24-80,Uid:ac0f2d78a7307bd3c0a169bc9aa73f7d,Namespace:kube-system,Attempt:0,} returns sandbox id \"418755a75e4f8817b66a30560b386c9ac6987906c18caf2067c5f040bb5aa4ba\"" Jan 23 17:57:15.022631 containerd[2025]: time="2026-01-23T17:57:15.022581575Z" level=info msg="CreateContainer within sandbox \"418755a75e4f8817b66a30560b386c9ac6987906c18caf2067c5f040bb5aa4ba\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 23 17:57:15.024728 containerd[2025]: time="2026-01-23T17:57:15.024453239Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-24-80,Uid:40874e87cdc7f0c6ddd99c42986ce203,Namespace:kube-system,Attempt:0,} returns sandbox id \"6fe30b7ae1d33ca6f9b2cb8c79a4731bd2224161c5815cb6324cfb954abfe698\"" Jan 23 17:57:15.035470 containerd[2025]: time="2026-01-23T17:57:15.035405531Z" level=info msg="CreateContainer within sandbox \"6fe30b7ae1d33ca6f9b2cb8c79a4731bd2224161c5815cb6324cfb954abfe698\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 23 17:57:15.037923 containerd[2025]: time="2026-01-23T17:57:15.036325019Z" level=info msg="Container 90399d0cdf58476e97278cc9b06f1d55243d26e3788887fc68d8c3acc3ddecb6: CDI devices from CRI Config.CDIDevices: []" Jan 23 17:57:15.047746 containerd[2025]: time="2026-01-23T17:57:15.047694047Z" level=info msg="Container b6b91c4e5e7ecd890eed514a901684f96389acf3fdcffb19cb2e785ed0a064f4: CDI devices from CRI Config.CDIDevices: []" Jan 23 17:57:15.058322 containerd[2025]: time="2026-01-23T17:57:15.058267535Z" level=info msg="CreateContainer within sandbox 
\"3c60a51b4211a5ce1af8896f6ea7abba79d4f67e61622b2ceff964af64fe7b94\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"90399d0cdf58476e97278cc9b06f1d55243d26e3788887fc68d8c3acc3ddecb6\"" Jan 23 17:57:15.060048 containerd[2025]: time="2026-01-23T17:57:15.059961263Z" level=info msg="StartContainer for \"90399d0cdf58476e97278cc9b06f1d55243d26e3788887fc68d8c3acc3ddecb6\"" Jan 23 17:57:15.062178 containerd[2025]: time="2026-01-23T17:57:15.062115707Z" level=info msg="connecting to shim 90399d0cdf58476e97278cc9b06f1d55243d26e3788887fc68d8c3acc3ddecb6" address="unix:///run/containerd/s/d430923e715ed540f42e13bd2629585d192d739d80e1f2132aa174aacdc943d8" protocol=ttrpc version=3 Jan 23 17:57:15.069327 containerd[2025]: time="2026-01-23T17:57:15.069255671Z" level=info msg="CreateContainer within sandbox \"418755a75e4f8817b66a30560b386c9ac6987906c18caf2067c5f040bb5aa4ba\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"b6b91c4e5e7ecd890eed514a901684f96389acf3fdcffb19cb2e785ed0a064f4\"" Jan 23 17:57:15.070180 containerd[2025]: time="2026-01-23T17:57:15.070135835Z" level=info msg="StartContainer for \"b6b91c4e5e7ecd890eed514a901684f96389acf3fdcffb19cb2e785ed0a064f4\"" Jan 23 17:57:15.074251 containerd[2025]: time="2026-01-23T17:57:15.074139659Z" level=info msg="connecting to shim b6b91c4e5e7ecd890eed514a901684f96389acf3fdcffb19cb2e785ed0a064f4" address="unix:///run/containerd/s/f58dc6b90715908e55c8207fbef236881e4eb65fa8c67220fd9534b144472b89" protocol=ttrpc version=3 Jan 23 17:57:15.080989 containerd[2025]: time="2026-01-23T17:57:15.080293775Z" level=info msg="Container f884db5e007bc4704b324ef095cab2798f499521b3843dddfb5d7b811737bde8: CDI devices from CRI Config.CDIDevices: []" Jan 23 17:57:15.102343 kubelet[2976]: I0123 17:57:15.102289 2976 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-24-80" Jan 23 17:57:15.104250 kubelet[2976]: E0123 17:57:15.104181 2976 kubelet_node_status.go:107] "Unable to 
register node with API server" err="Post \"https://172.31.24.80:6443/api/v1/nodes\": dial tcp 172.31.24.80:6443: connect: connection refused" node="ip-172-31-24-80" Jan 23 17:57:15.106931 containerd[2025]: time="2026-01-23T17:57:15.106088843Z" level=info msg="CreateContainer within sandbox \"6fe30b7ae1d33ca6f9b2cb8c79a4731bd2224161c5815cb6324cfb954abfe698\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"f884db5e007bc4704b324ef095cab2798f499521b3843dddfb5d7b811737bde8\"" Jan 23 17:57:15.106624 systemd[1]: Started cri-containerd-90399d0cdf58476e97278cc9b06f1d55243d26e3788887fc68d8c3acc3ddecb6.scope - libcontainer container 90399d0cdf58476e97278cc9b06f1d55243d26e3788887fc68d8c3acc3ddecb6. Jan 23 17:57:15.111354 containerd[2025]: time="2026-01-23T17:57:15.111308183Z" level=info msg="StartContainer for \"f884db5e007bc4704b324ef095cab2798f499521b3843dddfb5d7b811737bde8\"" Jan 23 17:57:15.133445 containerd[2025]: time="2026-01-23T17:57:15.133367051Z" level=info msg="connecting to shim f884db5e007bc4704b324ef095cab2798f499521b3843dddfb5d7b811737bde8" address="unix:///run/containerd/s/149a3be8b8665d4a88ef6ebe26f73051546d25abd9bc62f292603369bbcbd8d7" protocol=ttrpc version=3 Jan 23 17:57:15.134181 systemd[1]: Started cri-containerd-b6b91c4e5e7ecd890eed514a901684f96389acf3fdcffb19cb2e785ed0a064f4.scope - libcontainer container b6b91c4e5e7ecd890eed514a901684f96389acf3fdcffb19cb2e785ed0a064f4. Jan 23 17:57:15.188463 systemd[1]: Started cri-containerd-f884db5e007bc4704b324ef095cab2798f499521b3843dddfb5d7b811737bde8.scope - libcontainer container f884db5e007bc4704b324ef095cab2798f499521b3843dddfb5d7b811737bde8. 
Jan 23 17:57:15.282375 containerd[2025]: time="2026-01-23T17:57:15.282266964Z" level=info msg="StartContainer for \"90399d0cdf58476e97278cc9b06f1d55243d26e3788887fc68d8c3acc3ddecb6\" returns successfully" Jan 23 17:57:15.318161 containerd[2025]: time="2026-01-23T17:57:15.318092796Z" level=info msg="StartContainer for \"b6b91c4e5e7ecd890eed514a901684f96389acf3fdcffb19cb2e785ed0a064f4\" returns successfully" Jan 23 17:57:15.354365 containerd[2025]: time="2026-01-23T17:57:15.354257892Z" level=info msg="StartContainer for \"f884db5e007bc4704b324ef095cab2798f499521b3843dddfb5d7b811737bde8\" returns successfully" Jan 23 17:57:15.418609 kubelet[2976]: E0123 17:57:15.418532 2976 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://172.31.24.80:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.24.80:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jan 23 17:57:15.530826 kubelet[2976]: E0123 17:57:15.530772 2976 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-24-80\" not found" node="ip-172-31-24-80" Jan 23 17:57:15.536952 kubelet[2976]: E0123 17:57:15.536791 2976 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-24-80\" not found" node="ip-172-31-24-80" Jan 23 17:57:15.544264 kubelet[2976]: E0123 17:57:15.544213 2976 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-24-80\" not found" node="ip-172-31-24-80" Jan 23 17:57:16.547400 kubelet[2976]: E0123 17:57:16.547006 2976 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-24-80\" not found" node="ip-172-31-24-80" Jan 23 17:57:16.547400 kubelet[2976]: E0123 
17:57:16.547077 2976 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-24-80\" not found" node="ip-172-31-24-80" Jan 23 17:57:16.547954 kubelet[2976]: E0123 17:57:16.547467 2976 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-24-80\" not found" node="ip-172-31-24-80" Jan 23 17:57:16.708933 kubelet[2976]: I0123 17:57:16.708753 2976 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-24-80" Jan 23 17:57:19.253555 update_engine[2002]: I20260123 17:57:19.251922 2002 update_attempter.cc:509] Updating boot flags... Jan 23 17:57:20.087364 kubelet[2976]: E0123 17:57:20.087102 2976 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-24-80\" not found" node="ip-172-31-24-80" Jan 23 17:57:20.479986 kubelet[2976]: I0123 17:57:20.478608 2976 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-24-80" Jan 23 17:57:20.480204 kubelet[2976]: E0123 17:57:20.480175 2976 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"ip-172-31-24-80\": node \"ip-172-31-24-80\" not found" Jan 23 17:57:20.524195 kubelet[2976]: E0123 17:57:20.524063 2976 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ip-172-31-24-80.188d6ddf0fcda2c3 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-24-80,UID:ip-172-31-24-80,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-24-80,},FirstTimestamp:2026-01-23 17:57:13.436742339 +0000 UTC m=+1.190960203,LastTimestamp:2026-01-23 17:57:13.436742339 +0000 UTC m=+1.190960203,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-24-80,}" Jan 23 17:57:20.558012 kubelet[2976]: I0123 17:57:20.557964 2976 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-24-80" Jan 23 17:57:20.577224 kubelet[2976]: E0123 17:57:20.577151 2976 controller.go:145] "Failed to ensure lease exists, will retry" err="namespaces \"kube-node-lease\" not found" interval="3.2s" Jan 23 17:57:20.592390 kubelet[2976]: E0123 17:57:20.592064 2976 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-24-80\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ip-172-31-24-80" Jan 23 17:57:20.592390 kubelet[2976]: I0123 17:57:20.592110 2976 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-24-80" Jan 23 17:57:20.596571 kubelet[2976]: E0123 17:57:20.596528 2976 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ip-172-31-24-80\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ip-172-31-24-80" Jan 23 17:57:20.596893 kubelet[2976]: I0123 17:57:20.596761 2976 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-24-80" Jan 23 17:57:20.600917 kubelet[2976]: E0123 17:57:20.600839 2976 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-24-80\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ip-172-31-24-80" Jan 23 17:57:21.434890 kubelet[2976]: I0123 17:57:21.434494 2976 apiserver.go:52] "Watching apiserver" Jan 23 17:57:21.467069 kubelet[2976]: I0123 17:57:21.466996 2976 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 23 17:57:22.726300 systemd[1]: Reload requested from client PID 
3445 ('systemctl') (unit session-7.scope)... Jan 23 17:57:22.726331 systemd[1]: Reloading... Jan 23 17:57:22.964918 zram_generator::config[3495]: No configuration found. Jan 23 17:57:23.457400 systemd[1]: Reloading finished in 730 ms. Jan 23 17:57:23.512561 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 17:57:23.526334 systemd[1]: kubelet.service: Deactivated successfully. Jan 23 17:57:23.528968 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 17:57:23.529057 systemd[1]: kubelet.service: Consumed 2.025s CPU time, 119.7M memory peak. Jan 23 17:57:23.533677 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 17:57:23.912939 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 17:57:23.936517 (kubelet)[3549]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 23 17:57:24.042131 kubelet[3549]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 23 17:57:24.043622 kubelet[3549]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 23 17:57:24.043622 kubelet[3549]: I0123 17:57:24.042802 3549 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 23 17:57:24.057389 kubelet[3549]: I0123 17:57:24.057328 3549 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Jan 23 17:57:24.057580 kubelet[3549]: I0123 17:57:24.057561 3549 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 23 17:57:24.057720 kubelet[3549]: I0123 17:57:24.057702 3549 watchdog_linux.go:95] "Systemd watchdog is not enabled" Jan 23 17:57:24.057833 kubelet[3549]: I0123 17:57:24.057813 3549 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jan 23 17:57:24.058342 kubelet[3549]: I0123 17:57:24.058318 3549 server.go:956] "Client rotation is on, will bootstrap in background" Jan 23 17:57:24.063332 kubelet[3549]: I0123 17:57:24.063263 3549 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Jan 23 17:57:24.073188 kubelet[3549]: I0123 17:57:24.073134 3549 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 23 17:57:24.082696 kubelet[3549]: I0123 17:57:24.082434 3549 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 23 17:57:24.088508 kubelet[3549]: I0123 17:57:24.088472 3549 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /"
Jan 23 17:57:24.089113 kubelet[3549]: I0123 17:57:24.089064 3549 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jan 23 17:57:24.089545 kubelet[3549]: I0123 17:57:24.089243 3549 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-24-80","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jan 23 17:57:24.089763 kubelet[3549]: I0123 17:57:24.089740 3549 topology_manager.go:138] "Creating topology manager with none policy"
Jan 23 17:57:24.089894 kubelet[3549]: I0123 17:57:24.089857 3549 container_manager_linux.go:306] "Creating device plugin manager"
Jan 23 17:57:24.090055 kubelet[3549]: I0123 17:57:24.090035 3549 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager"
Jan 23 17:57:24.092523 kubelet[3549]: I0123 17:57:24.092482 3549 state_mem.go:36] "Initialized new in-memory state store"
Jan 23 17:57:24.093326 kubelet[3549]: I0123 17:57:24.092976 3549 kubelet.go:475] "Attempting to sync node with API server"
Jan 23 17:57:24.093326 kubelet[3549]: I0123 17:57:24.093006 3549 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests"
Jan 23 17:57:24.093326 kubelet[3549]: I0123 17:57:24.093048 3549 kubelet.go:387] "Adding apiserver pod source"
Jan 23 17:57:24.093326 kubelet[3549]: I0123 17:57:24.093068 3549 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jan 23 17:57:24.101114 kubelet[3549]: I0123 17:57:24.101062 3549 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1"
Jan 23 17:57:24.102547 sudo[3563]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin
Jan 23 17:57:24.104251 sudo[3563]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0)
Jan 23 17:57:24.104903 kubelet[3549]: I0123 17:57:24.104838 3549 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Jan 23 17:57:24.105055 kubelet[3549]: I0123 17:57:24.104927 3549 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled"
Jan 23 17:57:24.118895 kubelet[3549]: I0123 17:57:24.118763 3549 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jan 23 17:57:24.119040 kubelet[3549]: I0123 17:57:24.118913 3549 server_v1.go:49] "podresources" method="list" useActivePods=true
Jan 23 17:57:24.119394 kubelet[3549]: I0123 17:57:24.119348 3549 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jan 23 17:57:24.122909 kubelet[3549]: I0123 17:57:24.115472 3549 server.go:1262] "Started kubelet"
Jan 23 17:57:24.124943 kubelet[3549]: I0123 17:57:24.124599 3549 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jan 23 17:57:24.137659 kubelet[3549]: I0123 17:57:24.137527 3549 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Jan 23 17:57:24.142900 kubelet[3549]: I0123 17:57:24.139393 3549 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Jan 23 17:57:24.147669 kubelet[3549]: I0123 17:57:24.147204 3549 server.go:310] "Adding debug handlers to kubelet server"
Jan 23 17:57:24.152281 kubelet[3549]: I0123 17:57:24.152107 3549 volume_manager.go:313] "Starting Kubelet Volume Manager"
Jan 23 17:57:24.154169 kubelet[3549]: E0123 17:57:24.154117 3549 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ip-172-31-24-80\" not found"
Jan 23 17:57:24.164955 kubelet[3549]: I0123 17:57:24.164438 3549 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Jan 23 17:57:24.164955 kubelet[3549]: I0123 17:57:24.165434 3549 reconciler.go:29] "Reconciler: start to sync state"
Jan 23 17:57:24.226082 kubelet[3549]: I0123 17:57:24.226030 3549 factory.go:223] Registration of the containerd container factory successfully
Jan 23 17:57:24.226082 kubelet[3549]: I0123 17:57:24.226070 3549 factory.go:223] Registration of the systemd container factory successfully
Jan 23 17:57:24.226274 kubelet[3549]: I0123 17:57:24.226229 3549 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jan 23 17:57:24.246251 kubelet[3549]: E0123 17:57:24.246167 3549 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jan 23 17:57:24.289685 kubelet[3549]: I0123 17:57:24.289413 3549 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4"
Jan 23 17:57:24.310694 kubelet[3549]: I0123 17:57:24.310454 3549 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6"
Jan 23 17:57:24.310694 kubelet[3549]: I0123 17:57:24.310503 3549 status_manager.go:244] "Starting to sync pod status with apiserver"
Jan 23 17:57:24.310694 kubelet[3549]: I0123 17:57:24.310539 3549 kubelet.go:2427] "Starting kubelet main sync loop"
Jan 23 17:57:24.310694 kubelet[3549]: E0123 17:57:24.310613 3549 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jan 23 17:57:24.411467 kubelet[3549]: E0123 17:57:24.411217 3549 kubelet.go:2451] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Jan 23 17:57:24.431517 kubelet[3549]: I0123 17:57:24.431301 3549 cpu_manager.go:221] "Starting CPU manager" policy="none"
Jan 23 17:57:24.431517 kubelet[3549]: I0123 17:57:24.431348 3549 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Jan 23 17:57:24.431517 kubelet[3549]: I0123 17:57:24.431390 3549 state_mem.go:36] "Initialized new in-memory state store"
Jan 23 17:57:24.434604 kubelet[3549]: I0123 17:57:24.434525 3549 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Jan 23 17:57:24.434604 kubelet[3549]: I0123 17:57:24.434575 3549 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Jan 23 17:57:24.434604 kubelet[3549]: I0123 17:57:24.434611 3549 policy_none.go:49] "None policy: Start"
Jan 23 17:57:24.434835 kubelet[3549]: I0123 17:57:24.434631 3549 memory_manager.go:187] "Starting memorymanager" policy="None"
Jan 23 17:57:24.434835 kubelet[3549]: I0123 17:57:24.434653 3549 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint"
Jan 23 17:57:24.434984 kubelet[3549]: I0123 17:57:24.434841 3549 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint"
Jan 23 17:57:24.434984 kubelet[3549]: I0123 17:57:24.434859 3549 policy_none.go:47] "Start"
Jan 23 17:57:24.453740 kubelet[3549]: E0123 17:57:24.453677 3549 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Jan 23 17:57:24.456856 kubelet[3549]: I0123 17:57:24.456644 3549 eviction_manager.go:189] "Eviction manager: starting control loop"
Jan 23 17:57:24.456856 kubelet[3549]: I0123 17:57:24.456699 3549 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jan 23 17:57:24.458704 kubelet[3549]: I0123 17:57:24.458593 3549 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jan 23 17:57:24.481436 kubelet[3549]: E0123 17:57:24.481081 3549 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Jan 23 17:57:24.592404 kubelet[3549]: I0123 17:57:24.592354 3549 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-24-80"
Jan 23 17:57:24.613621 kubelet[3549]: I0123 17:57:24.613558 3549 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-24-80"
Jan 23 17:57:24.616074 kubelet[3549]: I0123 17:57:24.616016 3549 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-24-80"
Jan 23 17:57:24.616517 kubelet[3549]: I0123 17:57:24.616481 3549 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-24-80"
Jan 23 17:57:24.622835 kubelet[3549]: I0123 17:57:24.619932 3549 kubelet_node_status.go:124] "Node was previously registered" node="ip-172-31-24-80"
Jan 23 17:57:24.622835 kubelet[3549]: I0123 17:57:24.620060 3549 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-24-80"
Jan 23 17:57:24.676796 kubelet[3549]: I0123 17:57:24.676730 3549 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/40874e87cdc7f0c6ddd99c42986ce203-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-24-80\" (UID: \"40874e87cdc7f0c6ddd99c42986ce203\") " pod="kube-system/kube-controller-manager-ip-172-31-24-80"
Jan 23 17:57:24.676972 kubelet[3549]: I0123 17:57:24.676817 3549 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ac0f2d78a7307bd3c0a169bc9aa73f7d-kubeconfig\") pod \"kube-scheduler-ip-172-31-24-80\" (UID: \"ac0f2d78a7307bd3c0a169bc9aa73f7d\") " pod="kube-system/kube-scheduler-ip-172-31-24-80"
Jan 23 17:57:24.676972 kubelet[3549]: I0123 17:57:24.676860 3549 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a96c89cc58a3ca02fd4d415dff4d970a-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-24-80\" (UID: \"a96c89cc58a3ca02fd4d415dff4d970a\") " pod="kube-system/kube-apiserver-ip-172-31-24-80"
Jan 23 17:57:24.676972 kubelet[3549]: I0123 17:57:24.676938 3549 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/40874e87cdc7f0c6ddd99c42986ce203-ca-certs\") pod \"kube-controller-manager-ip-172-31-24-80\" (UID: \"40874e87cdc7f0c6ddd99c42986ce203\") " pod="kube-system/kube-controller-manager-ip-172-31-24-80"
Jan 23 17:57:24.677148 kubelet[3549]: I0123 17:57:24.676977 3549 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/40874e87cdc7f0c6ddd99c42986ce203-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-24-80\" (UID: \"40874e87cdc7f0c6ddd99c42986ce203\") " pod="kube-system/kube-controller-manager-ip-172-31-24-80"
Jan 23 17:57:24.677148 kubelet[3549]: I0123 17:57:24.677026 3549 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a96c89cc58a3ca02fd4d415dff4d970a-ca-certs\") pod \"kube-apiserver-ip-172-31-24-80\" (UID: \"a96c89cc58a3ca02fd4d415dff4d970a\") " pod="kube-system/kube-apiserver-ip-172-31-24-80"
Jan 23 17:57:24.677148 kubelet[3549]: I0123 17:57:24.677064 3549 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a96c89cc58a3ca02fd4d415dff4d970a-k8s-certs\") pod \"kube-apiserver-ip-172-31-24-80\" (UID: \"a96c89cc58a3ca02fd4d415dff4d970a\") " pod="kube-system/kube-apiserver-ip-172-31-24-80"
Jan 23 17:57:24.677148 kubelet[3549]: I0123 17:57:24.677099 3549 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/40874e87cdc7f0c6ddd99c42986ce203-k8s-certs\") pod \"kube-controller-manager-ip-172-31-24-80\" (UID: \"40874e87cdc7f0c6ddd99c42986ce203\") " pod="kube-system/kube-controller-manager-ip-172-31-24-80"
Jan 23 17:57:24.677148 kubelet[3549]: I0123 17:57:24.677134 3549 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/40874e87cdc7f0c6ddd99c42986ce203-kubeconfig\") pod \"kube-controller-manager-ip-172-31-24-80\" (UID: \"40874e87cdc7f0c6ddd99c42986ce203\") " pod="kube-system/kube-controller-manager-ip-172-31-24-80"
Jan 23 17:57:24.926809 sudo[3563]: pam_unix(sudo:session): session closed for user root
Jan 23 17:57:25.106475 kubelet[3549]: I0123 17:57:25.106337 3549 apiserver.go:52] "Watching apiserver"
Jan 23 17:57:25.166277 kubelet[3549]: I0123 17:57:25.166222 3549 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Jan 23 17:57:25.363327 kubelet[3549]: I0123 17:57:25.363228 3549 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-24-80" podStartSLOduration=1.36320839 podStartE2EDuration="1.36320839s" podCreationTimestamp="2026-01-23 17:57:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 17:57:25.362642962 +0000 UTC m=+1.415785916" watchObservedRunningTime="2026-01-23 17:57:25.36320839 +0000 UTC m=+1.416351332"
Jan 23 17:57:25.363802 kubelet[3549]: I0123 17:57:25.363396 3549 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-24-80" podStartSLOduration=1.3633856180000001 podStartE2EDuration="1.363385618s" podCreationTimestamp="2026-01-23 17:57:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 17:57:25.343307254 +0000 UTC m=+1.396450208" watchObservedRunningTime="2026-01-23 17:57:25.363385618 +0000 UTC m=+1.416528572"
Jan 23 17:57:25.379899 kubelet[3549]: I0123 17:57:25.379620 3549 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-24-80"
Jan 23 17:57:25.384898 kubelet[3549]: I0123 17:57:25.382672 3549 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-24-80" podStartSLOduration=1.382651882 podStartE2EDuration="1.382651882s" podCreationTimestamp="2026-01-23 17:57:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 17:57:25.37847059 +0000 UTC m=+1.431613544" watchObservedRunningTime="2026-01-23 17:57:25.382651882 +0000 UTC m=+1.435794836"
Jan 23 17:57:25.399169 kubelet[3549]: E0123 17:57:25.397813 3549 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-24-80\" already exists" pod="kube-system/kube-apiserver-ip-172-31-24-80"
Jan 23 17:57:27.745043 kubelet[3549]: I0123 17:57:27.744993 3549 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Jan 23 17:57:27.745627 containerd[2025]: time="2026-01-23T17:57:27.745496990Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Jan 23 17:57:27.746129 kubelet[3549]: I0123 17:57:27.745997 3549 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Jan 23 17:57:27.792226 sudo[2381]: pam_unix(sudo:session): session closed for user root
Jan 23 17:57:27.875169 sshd[2380]: Connection closed by 68.220.241.50 port 58492
Jan 23 17:57:27.876004 sshd-session[2377]: pam_unix(sshd:session): session closed for user core
Jan 23 17:57:27.885136 systemd[1]: sshd@6-172.31.24.80:22-68.220.241.50:58492.service: Deactivated successfully.
Jan 23 17:57:27.891285 systemd[1]: session-7.scope: Deactivated successfully.
Jan 23 17:57:27.894073 systemd[1]: session-7.scope: Consumed 13.823s CPU time, 265.7M memory peak.
Jan 23 17:57:27.896688 systemd-logind[2001]: Session 7 logged out. Waiting for processes to exit.
Jan 23 17:57:27.900456 systemd-logind[2001]: Removed session 7.
Jan 23 17:57:28.654944 kubelet[3549]: E0123 17:57:28.654523 3549 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-proxy-4zkzq\" is forbidden: User \"system:node:ip-172-31-24-80\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ip-172-31-24-80' and this object" podUID="fa9a7834-d84a-4caf-bac2-8253f031b62d" pod="kube-system/kube-proxy-4zkzq"
Jan 23 17:57:28.654944 kubelet[3549]: E0123 17:57:28.654671 3549 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"kube-proxy\" is forbidden: User \"system:node:ip-172-31-24-80\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ip-172-31-24-80' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-proxy\"" type="*v1.ConfigMap"
Jan 23 17:57:28.654944 kubelet[3549]: E0123 17:57:28.654769 3549 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:ip-172-31-24-80\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ip-172-31-24-80' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-root-ca.crt\"" type="*v1.ConfigMap"
Jan 23 17:57:28.657561 systemd[1]: Created slice kubepods-besteffort-podfa9a7834_d84a_4caf_bac2_8253f031b62d.slice - libcontainer container kubepods-besteffort-podfa9a7834_d84a_4caf_bac2_8253f031b62d.slice.
Jan 23 17:57:28.688812 systemd[1]: Created slice kubepods-burstable-podd2f43bdd_6c9a_4f6e_952a_1f83a91833e4.slice - libcontainer container kubepods-burstable-podd2f43bdd_6c9a_4f6e_952a_1f83a91833e4.slice.
Jan 23 17:57:28.706086 kubelet[3549]: I0123 17:57:28.706001 3549 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/fa9a7834-d84a-4caf-bac2-8253f031b62d-kube-proxy\") pod \"kube-proxy-4zkzq\" (UID: \"fa9a7834-d84a-4caf-bac2-8253f031b62d\") " pod="kube-system/kube-proxy-4zkzq"
Jan 23 17:57:28.706310 kubelet[3549]: I0123 17:57:28.706282 3549 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fa9a7834-d84a-4caf-bac2-8253f031b62d-xtables-lock\") pod \"kube-proxy-4zkzq\" (UID: \"fa9a7834-d84a-4caf-bac2-8253f031b62d\") " pod="kube-system/kube-proxy-4zkzq"
Jan 23 17:57:28.706489 kubelet[3549]: I0123 17:57:28.706386 3549 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d2f43bdd-6c9a-4f6e-952a-1f83a91833e4-lib-modules\") pod \"cilium-26kqd\" (UID: \"d2f43bdd-6c9a-4f6e-952a-1f83a91833e4\") " pod="kube-system/cilium-26kqd"
Jan 23 17:57:28.707895 kubelet[3549]: I0123 17:57:28.706599 3549 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/d2f43bdd-6c9a-4f6e-952a-1f83a91833e4-clustermesh-secrets\") pod \"cilium-26kqd\" (UID: \"d2f43bdd-6c9a-4f6e-952a-1f83a91833e4\") " pod="kube-system/cilium-26kqd"
Jan 23 17:57:28.708182 kubelet[3549]: I0123 17:57:28.708097 3549 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/d2f43bdd-6c9a-4f6e-952a-1f83a91833e4-cilium-cgroup\") pod \"cilium-26kqd\" (UID: \"d2f43bdd-6c9a-4f6e-952a-1f83a91833e4\") " pod="kube-system/cilium-26kqd"
Jan 23 17:57:28.708350 kubelet[3549]: I0123 17:57:28.708324 3549 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d2f43bdd-6c9a-4f6e-952a-1f83a91833e4-cilium-config-path\") pod \"cilium-26kqd\" (UID: \"d2f43bdd-6c9a-4f6e-952a-1f83a91833e4\") " pod="kube-system/cilium-26kqd"
Jan 23 17:57:28.708535 kubelet[3549]: I0123 17:57:28.708497 3549 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/d2f43bdd-6c9a-4f6e-952a-1f83a91833e4-host-proc-sys-net\") pod \"cilium-26kqd\" (UID: \"d2f43bdd-6c9a-4f6e-952a-1f83a91833e4\") " pod="kube-system/cilium-26kqd"
Jan 23 17:57:28.708711 kubelet[3549]: I0123 17:57:28.708664 3549 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4g9t7\" (UniqueName: \"kubernetes.io/projected/fa9a7834-d84a-4caf-bac2-8253f031b62d-kube-api-access-4g9t7\") pod \"kube-proxy-4zkzq\" (UID: \"fa9a7834-d84a-4caf-bac2-8253f031b62d\") " pod="kube-system/kube-proxy-4zkzq"
Jan 23 17:57:28.708858 kubelet[3549]: I0123 17:57:28.708835 3549 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/d2f43bdd-6c9a-4f6e-952a-1f83a91833e4-hostproc\") pod \"cilium-26kqd\" (UID: \"d2f43bdd-6c9a-4f6e-952a-1f83a91833e4\") " pod="kube-system/cilium-26kqd"
Jan 23 17:57:28.709047 kubelet[3549]: I0123 17:57:28.709003 3549 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/d2f43bdd-6c9a-4f6e-952a-1f83a91833e4-cni-path\") pod \"cilium-26kqd\" (UID: \"d2f43bdd-6c9a-4f6e-952a-1f83a91833e4\") " pod="kube-system/cilium-26kqd"
Jan 23 17:57:28.709135 kubelet[3549]: I0123 17:57:28.709073 3549 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d2f43bdd-6c9a-4f6e-952a-1f83a91833e4-etc-cni-netd\") pod \"cilium-26kqd\" (UID: \"d2f43bdd-6c9a-4f6e-952a-1f83a91833e4\") " pod="kube-system/cilium-26kqd"
Jan 23 17:57:28.709135 kubelet[3549]: I0123 17:57:28.709121 3549 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d2f43bdd-6c9a-4f6e-952a-1f83a91833e4-xtables-lock\") pod \"cilium-26kqd\" (UID: \"d2f43bdd-6c9a-4f6e-952a-1f83a91833e4\") " pod="kube-system/cilium-26kqd"
Jan 23 17:57:28.709244 kubelet[3549]: I0123 17:57:28.709165 3549 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/d2f43bdd-6c9a-4f6e-952a-1f83a91833e4-host-proc-sys-kernel\") pod \"cilium-26kqd\" (UID: \"d2f43bdd-6c9a-4f6e-952a-1f83a91833e4\") " pod="kube-system/cilium-26kqd"
Jan 23 17:57:28.709244 kubelet[3549]: I0123 17:57:28.709200 3549 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/d2f43bdd-6c9a-4f6e-952a-1f83a91833e4-hubble-tls\") pod \"cilium-26kqd\" (UID: \"d2f43bdd-6c9a-4f6e-952a-1f83a91833e4\") " pod="kube-system/cilium-26kqd"
Jan 23 17:57:28.709338 kubelet[3549]: I0123 17:57:28.709251 3549 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hpwpg\" (UniqueName: \"kubernetes.io/projected/d2f43bdd-6c9a-4f6e-952a-1f83a91833e4-kube-api-access-hpwpg\") pod \"cilium-26kqd\" (UID: \"d2f43bdd-6c9a-4f6e-952a-1f83a91833e4\") " pod="kube-system/cilium-26kqd"
Jan 23 17:57:28.709421 kubelet[3549]: I0123 17:57:28.709334 3549 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fa9a7834-d84a-4caf-bac2-8253f031b62d-lib-modules\") pod \"kube-proxy-4zkzq\" (UID: \"fa9a7834-d84a-4caf-bac2-8253f031b62d\") " pod="kube-system/kube-proxy-4zkzq"
Jan 23 17:57:28.709421 kubelet[3549]: I0123 17:57:28.709394 3549 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/d2f43bdd-6c9a-4f6e-952a-1f83a91833e4-cilium-run\") pod \"cilium-26kqd\" (UID: \"d2f43bdd-6c9a-4f6e-952a-1f83a91833e4\") " pod="kube-system/cilium-26kqd"
Jan 23 17:57:28.709527 kubelet[3549]: I0123 17:57:28.709430 3549 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/d2f43bdd-6c9a-4f6e-952a-1f83a91833e4-bpf-maps\") pod \"cilium-26kqd\" (UID: \"d2f43bdd-6c9a-4f6e-952a-1f83a91833e4\") " pod="kube-system/cilium-26kqd"
Jan 23 17:57:28.780790 systemd[1]: Created slice kubepods-besteffort-pod15a17f4e_2b13_4157_9e34_4b3b31367d03.slice - libcontainer container kubepods-besteffort-pod15a17f4e_2b13_4157_9e34_4b3b31367d03.slice.
Jan 23 17:57:28.810108 kubelet[3549]: I0123 17:57:28.810046 3549 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nrcmh\" (UniqueName: \"kubernetes.io/projected/15a17f4e-2b13-4157-9e34-4b3b31367d03-kube-api-access-nrcmh\") pod \"cilium-operator-6f9c7c5859-n2m87\" (UID: \"15a17f4e-2b13-4157-9e34-4b3b31367d03\") " pod="kube-system/cilium-operator-6f9c7c5859-n2m87"
Jan 23 17:57:28.810639 kubelet[3549]: I0123 17:57:28.810219 3549 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/15a17f4e-2b13-4157-9e34-4b3b31367d03-cilium-config-path\") pod \"cilium-operator-6f9c7c5859-n2m87\" (UID: \"15a17f4e-2b13-4157-9e34-4b3b31367d03\") " pod="kube-system/cilium-operator-6f9c7c5859-n2m87"
Jan 23 17:57:29.811784 kubelet[3549]: E0123 17:57:29.811714 3549 configmap.go:193] Couldn't get configMap kube-system/kube-proxy: failed to sync configmap cache: timed out waiting for the condition
Jan 23 17:57:29.812396 kubelet[3549]: E0123 17:57:29.811901 3549 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/fa9a7834-d84a-4caf-bac2-8253f031b62d-kube-proxy podName:fa9a7834-d84a-4caf-bac2-8253f031b62d nodeName:}" failed. No retries permitted until 2026-01-23 17:57:30.311839364 +0000 UTC m=+6.364982306 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/fa9a7834-d84a-4caf-bac2-8253f031b62d-kube-proxy") pod "kube-proxy-4zkzq" (UID: "fa9a7834-d84a-4caf-bac2-8253f031b62d") : failed to sync configmap cache: timed out waiting for the condition
Jan 23 17:57:29.872841 kubelet[3549]: E0123 17:57:29.872666 3549 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Jan 23 17:57:29.872841 kubelet[3549]: E0123 17:57:29.872713 3549 projected.go:196] Error preparing data for projected volume kube-api-access-hpwpg for pod kube-system/cilium-26kqd: failed to sync configmap cache: timed out waiting for the condition
Jan 23 17:57:29.873134 kubelet[3549]: E0123 17:57:29.872809 3549 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d2f43bdd-6c9a-4f6e-952a-1f83a91833e4-kube-api-access-hpwpg podName:d2f43bdd-6c9a-4f6e-952a-1f83a91833e4 nodeName:}" failed. No retries permitted until 2026-01-23 17:57:30.372783056 +0000 UTC m=+6.425926010 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-hpwpg" (UniqueName: "kubernetes.io/projected/d2f43bdd-6c9a-4f6e-952a-1f83a91833e4-kube-api-access-hpwpg") pod "cilium-26kqd" (UID: "d2f43bdd-6c9a-4f6e-952a-1f83a91833e4") : failed to sync configmap cache: timed out waiting for the condition
Jan 23 17:57:29.877417 kubelet[3549]: E0123 17:57:29.876994 3549 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Jan 23 17:57:29.877417 kubelet[3549]: E0123 17:57:29.877050 3549 projected.go:196] Error preparing data for projected volume kube-api-access-4g9t7 for pod kube-system/kube-proxy-4zkzq: failed to sync configmap cache: timed out waiting for the condition
Jan 23 17:57:29.877417 kubelet[3549]: E0123 17:57:29.877137 3549 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/fa9a7834-d84a-4caf-bac2-8253f031b62d-kube-api-access-4g9t7 podName:fa9a7834-d84a-4caf-bac2-8253f031b62d nodeName:}" failed. No retries permitted until 2026-01-23 17:57:30.377110724 +0000 UTC m=+6.430253678 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-4g9t7" (UniqueName: "kubernetes.io/projected/fa9a7834-d84a-4caf-bac2-8253f031b62d-kube-api-access-4g9t7") pod "kube-proxy-4zkzq" (UID: "fa9a7834-d84a-4caf-bac2-8253f031b62d") : failed to sync configmap cache: timed out waiting for the condition
Jan 23 17:57:29.924216 kubelet[3549]: E0123 17:57:29.924150 3549 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Jan 23 17:57:29.924216 kubelet[3549]: E0123 17:57:29.924208 3549 projected.go:196] Error preparing data for projected volume kube-api-access-nrcmh for pod kube-system/cilium-operator-6f9c7c5859-n2m87: failed to sync configmap cache: timed out waiting for the condition
Jan 23 17:57:29.924451 kubelet[3549]: E0123 17:57:29.924311 3549 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/15a17f4e-2b13-4157-9e34-4b3b31367d03-kube-api-access-nrcmh podName:15a17f4e-2b13-4157-9e34-4b3b31367d03 nodeName:}" failed. No retries permitted until 2026-01-23 17:57:30.424282845 +0000 UTC m=+6.477425799 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-nrcmh" (UniqueName: "kubernetes.io/projected/15a17f4e-2b13-4157-9e34-4b3b31367d03-kube-api-access-nrcmh") pod "cilium-operator-6f9c7c5859-n2m87" (UID: "15a17f4e-2b13-4157-9e34-4b3b31367d03") : failed to sync configmap cache: timed out waiting for the condition
Jan 23 17:57:30.476652 containerd[2025]: time="2026-01-23T17:57:30.476578251Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-4zkzq,Uid:fa9a7834-d84a-4caf-bac2-8253f031b62d,Namespace:kube-system,Attempt:0,}"
Jan 23 17:57:30.511069 containerd[2025]: time="2026-01-23T17:57:30.511012491Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-26kqd,Uid:d2f43bdd-6c9a-4f6e-952a-1f83a91833e4,Namespace:kube-system,Attempt:0,}"
Jan 23 17:57:30.519745 containerd[2025]: time="2026-01-23T17:57:30.519676899Z" level=info msg="connecting to shim 81e94d31cca7e09794e9efae57550ad60fc37d417d21c6f9adc5c0c6eeb5e8fe" address="unix:///run/containerd/s/a2186ffe4dca054f8783997fd95b9b786e6bdc525540e1eab837302c9667603b" namespace=k8s.io protocol=ttrpc version=3
Jan 23 17:57:30.591402 containerd[2025]: time="2026-01-23T17:57:30.591100624Z" level=info msg="connecting to shim d2eab2f93666c6149ee4af351f57bc5feeb96b5f91935b9965d9d3ddfccef2d9" address="unix:///run/containerd/s/3bd98a433a0dfe685f15e792a1ddf6e414add96f798d73db926d6eb457718fb5" namespace=k8s.io protocol=ttrpc version=3
Jan 23 17:57:30.598150 containerd[2025]: time="2026-01-23T17:57:30.597603436Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6f9c7c5859-n2m87,Uid:15a17f4e-2b13-4157-9e34-4b3b31367d03,Namespace:kube-system,Attempt:0,}"
Jan 23 17:57:30.611214 systemd[1]: Started cri-containerd-81e94d31cca7e09794e9efae57550ad60fc37d417d21c6f9adc5c0c6eeb5e8fe.scope - libcontainer container 81e94d31cca7e09794e9efae57550ad60fc37d417d21c6f9adc5c0c6eeb5e8fe.
Jan 23 17:57:30.661214 containerd[2025]: time="2026-01-23T17:57:30.661141600Z" level=info msg="connecting to shim 07d520df9825264bb79784059f2147470b9716773153f5fd2a526c555ab39762" address="unix:///run/containerd/s/06275dfee1cd0e9ce42792846f1fe4f2ada2cd7e0adf84e68cbafdc6af6a5ccf" namespace=k8s.io protocol=ttrpc version=3
Jan 23 17:57:30.663390 systemd[1]: Started cri-containerd-d2eab2f93666c6149ee4af351f57bc5feeb96b5f91935b9965d9d3ddfccef2d9.scope - libcontainer container d2eab2f93666c6149ee4af351f57bc5feeb96b5f91935b9965d9d3ddfccef2d9.
Jan 23 17:57:30.722463 containerd[2025]: time="2026-01-23T17:57:30.722407709Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-4zkzq,Uid:fa9a7834-d84a-4caf-bac2-8253f031b62d,Namespace:kube-system,Attempt:0,} returns sandbox id \"81e94d31cca7e09794e9efae57550ad60fc37d417d21c6f9adc5c0c6eeb5e8fe\""
Jan 23 17:57:30.741061 containerd[2025]: time="2026-01-23T17:57:30.740866733Z" level=info msg="CreateContainer within sandbox \"81e94d31cca7e09794e9efae57550ad60fc37d417d21c6f9adc5c0c6eeb5e8fe\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Jan 23 17:57:30.758466 systemd[1]: Started cri-containerd-07d520df9825264bb79784059f2147470b9716773153f5fd2a526c555ab39762.scope - libcontainer container 07d520df9825264bb79784059f2147470b9716773153f5fd2a526c555ab39762.
Jan 23 17:57:30.773943 containerd[2025]: time="2026-01-23T17:57:30.773798177Z" level=info msg="Container 0b6913c18344b3caa08c68bf9730ed82be16037f06652775f7bc2476a4780952: CDI devices from CRI Config.CDIDevices: []"
Jan 23 17:57:30.776096 containerd[2025]: time="2026-01-23T17:57:30.776028053Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-26kqd,Uid:d2f43bdd-6c9a-4f6e-952a-1f83a91833e4,Namespace:kube-system,Attempt:0,} returns sandbox id \"d2eab2f93666c6149ee4af351f57bc5feeb96b5f91935b9965d9d3ddfccef2d9\""
Jan 23 17:57:30.781738 containerd[2025]: time="2026-01-23T17:57:30.781662533Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Jan 23 17:57:30.802041 containerd[2025]: time="2026-01-23T17:57:30.801915125Z" level=info msg="CreateContainer within sandbox \"81e94d31cca7e09794e9efae57550ad60fc37d417d21c6f9adc5c0c6eeb5e8fe\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"0b6913c18344b3caa08c68bf9730ed82be16037f06652775f7bc2476a4780952\""
Jan 23 17:57:30.804064 containerd[2025]: time="2026-01-23T17:57:30.804013505Z" level=info msg="StartContainer for \"0b6913c18344b3caa08c68bf9730ed82be16037f06652775f7bc2476a4780952\""
Jan 23 17:57:30.811730 containerd[2025]: time="2026-01-23T17:57:30.811669013Z" level=info msg="connecting to shim 0b6913c18344b3caa08c68bf9730ed82be16037f06652775f7bc2476a4780952" address="unix:///run/containerd/s/a2186ffe4dca054f8783997fd95b9b786e6bdc525540e1eab837302c9667603b" protocol=ttrpc version=3
Jan 23 17:57:30.854519 systemd[1]: Started cri-containerd-0b6913c18344b3caa08c68bf9730ed82be16037f06652775f7bc2476a4780952.scope - libcontainer container 0b6913c18344b3caa08c68bf9730ed82be16037f06652775f7bc2476a4780952.
Jan 23 17:57:30.881177 containerd[2025]: time="2026-01-23T17:57:30.880995941Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6f9c7c5859-n2m87,Uid:15a17f4e-2b13-4157-9e34-4b3b31367d03,Namespace:kube-system,Attempt:0,} returns sandbox id \"07d520df9825264bb79784059f2147470b9716773153f5fd2a526c555ab39762\""
Jan 23 17:57:31.021300 containerd[2025]: time="2026-01-23T17:57:31.021232370Z" level=info msg="StartContainer for \"0b6913c18344b3caa08c68bf9730ed82be16037f06652775f7bc2476a4780952\" returns successfully"
Jan 23 17:57:31.471938 kubelet[3549]: I0123 17:57:31.471718 3549 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-4zkzq" podStartSLOduration=3.471698752 podStartE2EDuration="3.471698752s" podCreationTimestamp="2026-01-23 17:57:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 17:57:31.471349756 +0000 UTC m=+7.524492710" watchObservedRunningTime="2026-01-23 17:57:31.471698752 +0000 UTC m=+7.524841694"
Jan 23 17:57:37.697172 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1738142855.mount: Deactivated successfully.
Jan 23 17:57:40.235419 containerd[2025]: time="2026-01-23T17:57:40.235272156Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 17:57:40.238993 containerd[2025]: time="2026-01-23T17:57:40.238867368Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710"
Jan 23 17:57:40.241615 containerd[2025]: time="2026-01-23T17:57:40.241440588Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 17:57:40.244487 containerd[2025]: time="2026-01-23T17:57:40.244298016Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 9.462573839s"
Jan 23 17:57:40.244487 containerd[2025]: time="2026-01-23T17:57:40.244362600Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\""
Jan 23 17:57:40.247478 containerd[2025]: time="2026-01-23T17:57:40.247049256Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Jan 23 17:57:40.255790 containerd[2025]: time="2026-01-23T17:57:40.255672696Z" level=info msg="CreateContainer within sandbox \"d2eab2f93666c6149ee4af351f57bc5feeb96b5f91935b9965d9d3ddfccef2d9\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Jan 23 17:57:40.274929 containerd[2025]: time="2026-01-23T17:57:40.274838940Z" level=info msg="Container cc51712bcb0ce28331c932693e1319114c035e0f81be681f4449e3769ba88bf5: CDI devices from CRI Config.CDIDevices: []"
Jan 23 17:57:40.288572 containerd[2025]: time="2026-01-23T17:57:40.288485904Z" level=info msg="CreateContainer within sandbox \"d2eab2f93666c6149ee4af351f57bc5feeb96b5f91935b9965d9d3ddfccef2d9\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"cc51712bcb0ce28331c932693e1319114c035e0f81be681f4449e3769ba88bf5\""
Jan 23 17:57:40.290195 containerd[2025]: time="2026-01-23T17:57:40.289835340Z" level=info msg="StartContainer for \"cc51712bcb0ce28331c932693e1319114c035e0f81be681f4449e3769ba88bf5\""
Jan 23 17:57:40.294576 containerd[2025]: time="2026-01-23T17:57:40.294255336Z" level=info msg="connecting to shim cc51712bcb0ce28331c932693e1319114c035e0f81be681f4449e3769ba88bf5" address="unix:///run/containerd/s/3bd98a433a0dfe685f15e792a1ddf6e414add96f798d73db926d6eb457718fb5" protocol=ttrpc version=3
Jan 23 17:57:40.339187 systemd[1]: Started cri-containerd-cc51712bcb0ce28331c932693e1319114c035e0f81be681f4449e3769ba88bf5.scope - libcontainer container cc51712bcb0ce28331c932693e1319114c035e0f81be681f4449e3769ba88bf5.
Jan 23 17:57:40.402305 containerd[2025]: time="2026-01-23T17:57:40.402196597Z" level=info msg="StartContainer for \"cc51712bcb0ce28331c932693e1319114c035e0f81be681f4449e3769ba88bf5\" returns successfully"
Jan 23 17:57:40.427054 systemd[1]: cri-containerd-cc51712bcb0ce28331c932693e1319114c035e0f81be681f4449e3769ba88bf5.scope: Deactivated successfully.
Jan 23 17:57:40.434017 containerd[2025]: time="2026-01-23T17:57:40.433815637Z" level=info msg="received container exit event container_id:\"cc51712bcb0ce28331c932693e1319114c035e0f81be681f4449e3769ba88bf5\" id:\"cc51712bcb0ce28331c932693e1319114c035e0f81be681f4449e3769ba88bf5\" pid:3975 exited_at:{seconds:1769191060 nanos:433021621}"
Jan 23 17:57:40.490006 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cc51712bcb0ce28331c932693e1319114c035e0f81be681f4449e3769ba88bf5-rootfs.mount: Deactivated successfully.
Jan 23 17:57:42.267984 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1579993405.mount: Deactivated successfully.
Jan 23 17:57:42.497535 containerd[2025]: time="2026-01-23T17:57:42.497466183Z" level=info msg="CreateContainer within sandbox \"d2eab2f93666c6149ee4af351f57bc5feeb96b5f91935b9965d9d3ddfccef2d9\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Jan 23 17:57:42.527427 containerd[2025]: time="2026-01-23T17:57:42.527230539Z" level=info msg="Container 29c7b36c81f90dd167717fc01eb8bb5c3dec3698948aee7b2c1f8b09bf73f643: CDI devices from CRI Config.CDIDevices: []"
Jan 23 17:57:42.528547 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1968085723.mount: Deactivated successfully.
Jan 23 17:57:42.561162 containerd[2025]: time="2026-01-23T17:57:42.560316663Z" level=info msg="CreateContainer within sandbox \"d2eab2f93666c6149ee4af351f57bc5feeb96b5f91935b9965d9d3ddfccef2d9\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"29c7b36c81f90dd167717fc01eb8bb5c3dec3698948aee7b2c1f8b09bf73f643\""
Jan 23 17:57:42.563388 containerd[2025]: time="2026-01-23T17:57:42.562834587Z" level=info msg="StartContainer for \"29c7b36c81f90dd167717fc01eb8bb5c3dec3698948aee7b2c1f8b09bf73f643\""
Jan 23 17:57:42.571167 containerd[2025]: time="2026-01-23T17:57:42.571109319Z" level=info msg="connecting to shim 29c7b36c81f90dd167717fc01eb8bb5c3dec3698948aee7b2c1f8b09bf73f643" address="unix:///run/containerd/s/3bd98a433a0dfe685f15e792a1ddf6e414add96f798d73db926d6eb457718fb5" protocol=ttrpc version=3
Jan 23 17:57:42.635175 systemd[1]: Started cri-containerd-29c7b36c81f90dd167717fc01eb8bb5c3dec3698948aee7b2c1f8b09bf73f643.scope - libcontainer container 29c7b36c81f90dd167717fc01eb8bb5c3dec3698948aee7b2c1f8b09bf73f643.
Jan 23 17:57:42.737059 containerd[2025]: time="2026-01-23T17:57:42.737010808Z" level=info msg="StartContainer for \"29c7b36c81f90dd167717fc01eb8bb5c3dec3698948aee7b2c1f8b09bf73f643\" returns successfully"
Jan 23 17:57:42.778357 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 23 17:57:42.780430 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jan 23 17:57:42.781250 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables...
Jan 23 17:57:42.785253 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 23 17:57:42.791272 systemd[1]: cri-containerd-29c7b36c81f90dd167717fc01eb8bb5c3dec3698948aee7b2c1f8b09bf73f643.scope: Deactivated successfully.
Jan 23 17:57:42.799807 containerd[2025]: time="2026-01-23T17:57:42.799736572Z" level=info msg="received container exit event container_id:\"29c7b36c81f90dd167717fc01eb8bb5c3dec3698948aee7b2c1f8b09bf73f643\" id:\"29c7b36c81f90dd167717fc01eb8bb5c3dec3698948aee7b2c1f8b09bf73f643\" pid:4036 exited_at:{seconds:1769191062 nanos:798070564}"
Jan 23 17:57:42.852793 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 23 17:57:43.150289 containerd[2025]: time="2026-01-23T17:57:43.149254070Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 17:57:43.152126 containerd[2025]: time="2026-01-23T17:57:43.152080766Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306"
Jan 23 17:57:43.153092 containerd[2025]: time="2026-01-23T17:57:43.153035510Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 17:57:43.155284 containerd[2025]: time="2026-01-23T17:57:43.155220254Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 2.908106342s"
Jan 23 17:57:43.155424 containerd[2025]: time="2026-01-23T17:57:43.155281862Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\""
Jan 23 17:57:43.163502 containerd[2025]: time="2026-01-23T17:57:43.163423682Z" level=info msg="CreateContainer within sandbox \"07d520df9825264bb79784059f2147470b9716773153f5fd2a526c555ab39762\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Jan 23 17:57:43.173844 containerd[2025]: time="2026-01-23T17:57:43.172746926Z" level=info msg="Container c09d2f9e76894b28943d55ed2c5ab9f7ec705d83d2b3e95b2ba46a15e73f36b3: CDI devices from CRI Config.CDIDevices: []"
Jan 23 17:57:43.193238 containerd[2025]: time="2026-01-23T17:57:43.193165970Z" level=info msg="CreateContainer within sandbox \"07d520df9825264bb79784059f2147470b9716773153f5fd2a526c555ab39762\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"c09d2f9e76894b28943d55ed2c5ab9f7ec705d83d2b3e95b2ba46a15e73f36b3\""
Jan 23 17:57:43.195774 containerd[2025]: time="2026-01-23T17:57:43.194206442Z" level=info msg="StartContainer for \"c09d2f9e76894b28943d55ed2c5ab9f7ec705d83d2b3e95b2ba46a15e73f36b3\""
Jan 23 17:57:43.196196 containerd[2025]: time="2026-01-23T17:57:43.196150106Z" level=info msg="connecting to shim c09d2f9e76894b28943d55ed2c5ab9f7ec705d83d2b3e95b2ba46a15e73f36b3" address="unix:///run/containerd/s/06275dfee1cd0e9ce42792846f1fe4f2ada2cd7e0adf84e68cbafdc6af6a5ccf" protocol=ttrpc version=3
Jan 23 17:57:43.226199 systemd[1]: Started cri-containerd-c09d2f9e76894b28943d55ed2c5ab9f7ec705d83d2b3e95b2ba46a15e73f36b3.scope - libcontainer container c09d2f9e76894b28943d55ed2c5ab9f7ec705d83d2b3e95b2ba46a15e73f36b3.
Jan 23 17:57:43.252083 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-29c7b36c81f90dd167717fc01eb8bb5c3dec3698948aee7b2c1f8b09bf73f643-rootfs.mount: Deactivated successfully.
Jan 23 17:57:43.300260 containerd[2025]: time="2026-01-23T17:57:43.299958903Z" level=info msg="StartContainer for \"c09d2f9e76894b28943d55ed2c5ab9f7ec705d83d2b3e95b2ba46a15e73f36b3\" returns successfully"
Jan 23 17:57:43.510186 containerd[2025]: time="2026-01-23T17:57:43.508867864Z" level=info msg="CreateContainer within sandbox \"d2eab2f93666c6149ee4af351f57bc5feeb96b5f91935b9965d9d3ddfccef2d9\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jan 23 17:57:43.536216 containerd[2025]: time="2026-01-23T17:57:43.536137240Z" level=info msg="Container bf71a3954b961eb6c3a28b316f04f12ac066d710bf71b04d26a47af3fce29c1a: CDI devices from CRI Config.CDIDevices: []"
Jan 23 17:57:43.573190 containerd[2025]: time="2026-01-23T17:57:43.572864116Z" level=info msg="CreateContainer within sandbox \"d2eab2f93666c6149ee4af351f57bc5feeb96b5f91935b9965d9d3ddfccef2d9\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"bf71a3954b961eb6c3a28b316f04f12ac066d710bf71b04d26a47af3fce29c1a\""
Jan 23 17:57:43.575533 containerd[2025]: time="2026-01-23T17:57:43.575423704Z" level=info msg="StartContainer for \"bf71a3954b961eb6c3a28b316f04f12ac066d710bf71b04d26a47af3fce29c1a\""
Jan 23 17:57:43.583173 containerd[2025]: time="2026-01-23T17:57:43.581850352Z" level=info msg="connecting to shim bf71a3954b961eb6c3a28b316f04f12ac066d710bf71b04d26a47af3fce29c1a" address="unix:///run/containerd/s/3bd98a433a0dfe685f15e792a1ddf6e414add96f798d73db926d6eb457718fb5" protocol=ttrpc version=3
Jan 23 17:57:43.648638 systemd[1]: Started cri-containerd-bf71a3954b961eb6c3a28b316f04f12ac066d710bf71b04d26a47af3fce29c1a.scope - libcontainer container bf71a3954b961eb6c3a28b316f04f12ac066d710bf71b04d26a47af3fce29c1a.
Jan 23 17:57:43.676618 kubelet[3549]: I0123 17:57:43.676478 3549 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6f9c7c5859-n2m87" podStartSLOduration=3.40536094 podStartE2EDuration="15.676453277s" podCreationTimestamp="2026-01-23 17:57:28 +0000 UTC" firstStartedPulling="2026-01-23 17:57:30.886100153 +0000 UTC m=+6.939243107" lastFinishedPulling="2026-01-23 17:57:43.157192502 +0000 UTC m=+19.210335444" observedRunningTime="2026-01-23 17:57:43.55482136 +0000 UTC m=+19.607964326" watchObservedRunningTime="2026-01-23 17:57:43.676453277 +0000 UTC m=+19.729596243"
Jan 23 17:57:43.901149 containerd[2025]: time="2026-01-23T17:57:43.900995034Z" level=info msg="StartContainer for \"bf71a3954b961eb6c3a28b316f04f12ac066d710bf71b04d26a47af3fce29c1a\" returns successfully"
Jan 23 17:57:43.930138 systemd[1]: cri-containerd-bf71a3954b961eb6c3a28b316f04f12ac066d710bf71b04d26a47af3fce29c1a.scope: Deactivated successfully.
Jan 23 17:57:43.939306 containerd[2025]: time="2026-01-23T17:57:43.939236970Z" level=info msg="received container exit event container_id:\"bf71a3954b961eb6c3a28b316f04f12ac066d710bf71b04d26a47af3fce29c1a\" id:\"bf71a3954b961eb6c3a28b316f04f12ac066d710bf71b04d26a47af3fce29c1a\" pid:4121 exited_at:{seconds:1769191063 nanos:938929422}"
Jan 23 17:57:44.007708 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bf71a3954b961eb6c3a28b316f04f12ac066d710bf71b04d26a47af3fce29c1a-rootfs.mount: Deactivated successfully.
Jan 23 17:57:44.527902 containerd[2025]: time="2026-01-23T17:57:44.526078757Z" level=info msg="CreateContainer within sandbox \"d2eab2f93666c6149ee4af351f57bc5feeb96b5f91935b9965d9d3ddfccef2d9\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jan 23 17:57:44.555288 containerd[2025]: time="2026-01-23T17:57:44.555234689Z" level=info msg="Container 0909a7cb61e5b526d43a2afce2aa79fbd02216acccedd471bd1dd295fb665888: CDI devices from CRI Config.CDIDevices: []"
Jan 23 17:57:44.586560 containerd[2025]: time="2026-01-23T17:57:44.586492541Z" level=info msg="CreateContainer within sandbox \"d2eab2f93666c6149ee4af351f57bc5feeb96b5f91935b9965d9d3ddfccef2d9\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"0909a7cb61e5b526d43a2afce2aa79fbd02216acccedd471bd1dd295fb665888\""
Jan 23 17:57:44.593942 containerd[2025]: time="2026-01-23T17:57:44.592209521Z" level=info msg="StartContainer for \"0909a7cb61e5b526d43a2afce2aa79fbd02216acccedd471bd1dd295fb665888\""
Jan 23 17:57:44.602435 containerd[2025]: time="2026-01-23T17:57:44.602379605Z" level=info msg="connecting to shim 0909a7cb61e5b526d43a2afce2aa79fbd02216acccedd471bd1dd295fb665888" address="unix:///run/containerd/s/3bd98a433a0dfe685f15e792a1ddf6e414add96f798d73db926d6eb457718fb5" protocol=ttrpc version=3
Jan 23 17:57:44.696635 systemd[1]: Started cri-containerd-0909a7cb61e5b526d43a2afce2aa79fbd02216acccedd471bd1dd295fb665888.scope - libcontainer container 0909a7cb61e5b526d43a2afce2aa79fbd02216acccedd471bd1dd295fb665888.
Jan 23 17:57:44.847352 systemd[1]: cri-containerd-0909a7cb61e5b526d43a2afce2aa79fbd02216acccedd471bd1dd295fb665888.scope: Deactivated successfully.
Jan 23 17:57:44.852394 containerd[2025]: time="2026-01-23T17:57:44.852263863Z" level=info msg="StartContainer for \"0909a7cb61e5b526d43a2afce2aa79fbd02216acccedd471bd1dd295fb665888\" returns successfully"
Jan 23 17:57:44.856163 containerd[2025]: time="2026-01-23T17:57:44.856090183Z" level=info msg="received container exit event container_id:\"0909a7cb61e5b526d43a2afce2aa79fbd02216acccedd471bd1dd295fb665888\" id:\"0909a7cb61e5b526d43a2afce2aa79fbd02216acccedd471bd1dd295fb665888\" pid:4161 exited_at:{seconds:1769191064 nanos:855529159}"
Jan 23 17:57:44.944206 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0909a7cb61e5b526d43a2afce2aa79fbd02216acccedd471bd1dd295fb665888-rootfs.mount: Deactivated successfully.
Jan 23 17:57:45.536765 containerd[2025]: time="2026-01-23T17:57:45.536602806Z" level=info msg="CreateContainer within sandbox \"d2eab2f93666c6149ee4af351f57bc5feeb96b5f91935b9965d9d3ddfccef2d9\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jan 23 17:57:45.568571 containerd[2025]: time="2026-01-23T17:57:45.568497342Z" level=info msg="Container 65fbb457d36f61946fd2665f7c489211d5d9ae38a9be68e46fb1f36e16720638: CDI devices from CRI Config.CDIDevices: []"
Jan 23 17:57:45.578940 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4172660917.mount: Deactivated successfully.
Jan 23 17:57:45.594447 containerd[2025]: time="2026-01-23T17:57:45.594369918Z" level=info msg="CreateContainer within sandbox \"d2eab2f93666c6149ee4af351f57bc5feeb96b5f91935b9965d9d3ddfccef2d9\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"65fbb457d36f61946fd2665f7c489211d5d9ae38a9be68e46fb1f36e16720638\""
Jan 23 17:57:45.596171 containerd[2025]: time="2026-01-23T17:57:45.596102778Z" level=info msg="StartContainer for \"65fbb457d36f61946fd2665f7c489211d5d9ae38a9be68e46fb1f36e16720638\""
Jan 23 17:57:45.598903 containerd[2025]: time="2026-01-23T17:57:45.598227150Z" level=info msg="connecting to shim 65fbb457d36f61946fd2665f7c489211d5d9ae38a9be68e46fb1f36e16720638" address="unix:///run/containerd/s/3bd98a433a0dfe685f15e792a1ddf6e414add96f798d73db926d6eb457718fb5" protocol=ttrpc version=3
Jan 23 17:57:45.645671 systemd[1]: Started cri-containerd-65fbb457d36f61946fd2665f7c489211d5d9ae38a9be68e46fb1f36e16720638.scope - libcontainer container 65fbb457d36f61946fd2665f7c489211d5d9ae38a9be68e46fb1f36e16720638.
Jan 23 17:57:45.722843 containerd[2025]: time="2026-01-23T17:57:45.722770831Z" level=info msg="StartContainer for \"65fbb457d36f61946fd2665f7c489211d5d9ae38a9be68e46fb1f36e16720638\" returns successfully"
Jan 23 17:57:45.883281 kubelet[3549]: I0123 17:57:45.882646 3549 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
Jan 23 17:57:45.955589 systemd[1]: Created slice kubepods-burstable-pod23d9e418_87f2_43f8_9aaf_3ad1cb5fc659.slice - libcontainer container kubepods-burstable-pod23d9e418_87f2_43f8_9aaf_3ad1cb5fc659.slice.
Jan 23 17:57:45.979205 systemd[1]: Created slice kubepods-burstable-pod80ea793e_5208_4438_bedd_614d4e7c445e.slice - libcontainer container kubepods-burstable-pod80ea793e_5208_4438_bedd_614d4e7c445e.slice.
Jan 23 17:57:46.070788 kubelet[3549]: I0123 17:57:46.070587 3549 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4tpm4\" (UniqueName: \"kubernetes.io/projected/80ea793e-5208-4438-bedd-614d4e7c445e-kube-api-access-4tpm4\") pod \"coredns-66bc5c9577-82jz6\" (UID: \"80ea793e-5208-4438-bedd-614d4e7c445e\") " pod="kube-system/coredns-66bc5c9577-82jz6"
Jan 23 17:57:46.071346 kubelet[3549]: I0123 17:57:46.071292 3549 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4pq8f\" (UniqueName: \"kubernetes.io/projected/23d9e418-87f2-43f8-9aaf-3ad1cb5fc659-kube-api-access-4pq8f\") pod \"coredns-66bc5c9577-48d8x\" (UID: \"23d9e418-87f2-43f8-9aaf-3ad1cb5fc659\") " pod="kube-system/coredns-66bc5c9577-48d8x"
Jan 23 17:57:46.071553 kubelet[3549]: I0123 17:57:46.071500 3549 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/23d9e418-87f2-43f8-9aaf-3ad1cb5fc659-config-volume\") pod \"coredns-66bc5c9577-48d8x\" (UID: \"23d9e418-87f2-43f8-9aaf-3ad1cb5fc659\") " pod="kube-system/coredns-66bc5c9577-48d8x"
Jan 23 17:57:46.071655 kubelet[3549]: I0123 17:57:46.071607 3549 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/80ea793e-5208-4438-bedd-614d4e7c445e-config-volume\") pod \"coredns-66bc5c9577-82jz6\" (UID: \"80ea793e-5208-4438-bedd-614d4e7c445e\") " pod="kube-system/coredns-66bc5c9577-82jz6"
Jan 23 17:57:46.273909 containerd[2025]: time="2026-01-23T17:57:46.273725334Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-48d8x,Uid:23d9e418-87f2-43f8-9aaf-3ad1cb5fc659,Namespace:kube-system,Attempt:0,}"
Jan 23 17:57:46.293272 containerd[2025]: time="2026-01-23T17:57:46.293218914Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-82jz6,Uid:80ea793e-5208-4438-bedd-614d4e7c445e,Namespace:kube-system,Attempt:0,}"
Jan 23 17:57:46.581865 kubelet[3549]: I0123 17:57:46.581630 3549 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-26kqd" podStartSLOduration=9.114478124 podStartE2EDuration="18.581606947s" podCreationTimestamp="2026-01-23 17:57:28 +0000 UTC" firstStartedPulling="2026-01-23 17:57:30.779507153 +0000 UTC m=+6.832650119" lastFinishedPulling="2026-01-23 17:57:40.246635916 +0000 UTC m=+16.299778942" observedRunningTime="2026-01-23 17:57:46.580570147 +0000 UTC m=+22.633713101" watchObservedRunningTime="2026-01-23 17:57:46.581606947 +0000 UTC m=+22.634749889"
Jan 23 17:57:49.024217 (udev-worker)[4295]: Network interface NamePolicy= disabled on kernel command line.
Jan 23 17:57:49.026246 (udev-worker)[4293]: Network interface NamePolicy= disabled on kernel command line.
Jan 23 17:57:49.026757 systemd-networkd[1840]: cilium_host: Link UP
Jan 23 17:57:49.027951 systemd-networkd[1840]: cilium_net: Link UP
Jan 23 17:57:49.028336 systemd-networkd[1840]: cilium_net: Gained carrier
Jan 23 17:57:49.028653 systemd-networkd[1840]: cilium_host: Gained carrier
Jan 23 17:57:49.191688 (udev-worker)[4337]: Network interface NamePolicy= disabled on kernel command line.
Jan 23 17:57:49.203656 systemd-networkd[1840]: cilium_vxlan: Link UP
Jan 23 17:57:49.203679 systemd-networkd[1840]: cilium_vxlan: Gained carrier
Jan 23 17:57:49.320054 systemd-networkd[1840]: cilium_net: Gained IPv6LL
Jan 23 17:57:49.576099 systemd-networkd[1840]: cilium_host: Gained IPv6LL
Jan 23 17:57:49.780954 kernel: NET: Registered PF_ALG protocol family
Jan 23 17:57:50.857840 systemd-networkd[1840]: cilium_vxlan: Gained IPv6LL
Jan 23 17:57:51.144139 systemd-networkd[1840]: lxc_health: Link UP
Jan 23 17:57:51.161196 systemd-networkd[1840]: lxc_health: Gained carrier
Jan 23 17:57:51.912898 systemd-networkd[1840]: lxca0d73f55a2d4: Link UP
Jan 23 17:57:51.915966 kernel: eth0: renamed from tmp9b7c5
Jan 23 17:57:51.924511 systemd-networkd[1840]: lxca0d73f55a2d4: Gained carrier
Jan 23 17:57:51.928644 systemd-networkd[1840]: lxcbd3c4277f542: Link UP
Jan 23 17:57:51.939987 (udev-worker)[4338]: Network interface NamePolicy= disabled on kernel command line.
Jan 23 17:57:51.945197 kernel: eth0: renamed from tmpa9579
Jan 23 17:57:51.950596 systemd-networkd[1840]: lxcbd3c4277f542: Gained carrier
Jan 23 17:57:52.328983 systemd-networkd[1840]: lxc_health: Gained IPv6LL
Jan 23 17:57:52.968241 systemd-networkd[1840]: lxca0d73f55a2d4: Gained IPv6LL
Jan 23 17:57:53.161046 systemd-networkd[1840]: lxcbd3c4277f542: Gained IPv6LL
Jan 23 17:57:53.692311 kubelet[3549]: I0123 17:57:53.692012 3549 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 23 17:57:56.006997 ntpd[2226]: Listen normally on 6 cilium_host 192.168.0.115:123
Jan 23 17:57:56.007079 ntpd[2226]: Listen normally on 7 cilium_net [fe80::e82b:80ff:fef6:cb20%4]:123
Jan 23 17:57:56.007126 ntpd[2226]: Listen normally on 8 cilium_host [fe80::c1d:5aff:fec5:dda2%5]:123
Jan 23 17:57:56.007170 ntpd[2226]: Listen normally on 9 cilium_vxlan [fe80::b0e3:46ff:fe12:6a63%6]:123
Jan 23 17:57:56.007213 ntpd[2226]: Listen normally on 10 lxc_health [fe80::a0c8:23ff:fe12:e351%8]:123
Jan 23 17:57:56.007256 ntpd[2226]: Listen normally on 11 lxca0d73f55a2d4 [fe80::24fd:47ff:fe5d:d810%10]:123
Jan 23 17:57:56.007299 ntpd[2226]: Listen normally on 12 lxcbd3c4277f542 [fe80::a893:42ff:fe9c:cc79%12]:123
Jan 23 17:58:00.374083 containerd[2025]: time="2026-01-23T17:58:00.373975160Z" level=info msg="connecting to shim a95791a43ddc0472d9ef982b5e669c9f6f5f3408b2e3577447836642cce60080" address="unix:///run/containerd/s/37655014cf37ee614d2a023fafd6ead12095d934c6cc6bc9aed3e9c9a5c8f9d4" namespace=k8s.io protocol=ttrpc version=3
Jan 23 17:58:00.410331 containerd[2025]: time="2026-01-23T17:58:00.410083112Z" level=info msg="connecting to shim 9b7c537d945a760b181b3c4c6ec27769bd2a65f226d0c59ba894f50427de61d6" address="unix:///run/containerd/s/05152aea7a39d2406f8d88630567bbf86865f2075262a45761960f5e115c65f5" namespace=k8s.io protocol=ttrpc version=3
Jan 23 17:58:00.464375 systemd[1]: Started cri-containerd-a95791a43ddc0472d9ef982b5e669c9f6f5f3408b2e3577447836642cce60080.scope - libcontainer container a95791a43ddc0472d9ef982b5e669c9f6f5f3408b2e3577447836642cce60080.
Jan 23 17:58:00.516419 systemd[1]: Started cri-containerd-9b7c537d945a760b181b3c4c6ec27769bd2a65f226d0c59ba894f50427de61d6.scope - libcontainer container 9b7c537d945a760b181b3c4c6ec27769bd2a65f226d0c59ba894f50427de61d6.
Jan 23 17:58:00.625776 containerd[2025]: time="2026-01-23T17:58:00.625451385Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-82jz6,Uid:80ea793e-5208-4438-bedd-614d4e7c445e,Namespace:kube-system,Attempt:0,} returns sandbox id \"a95791a43ddc0472d9ef982b5e669c9f6f5f3408b2e3577447836642cce60080\""
Jan 23 17:58:00.637906 containerd[2025]: time="2026-01-23T17:58:00.637764777Z" level=info msg="CreateContainer within sandbox \"a95791a43ddc0472d9ef982b5e669c9f6f5f3408b2e3577447836642cce60080\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Jan 23 17:58:00.680746 containerd[2025]: time="2026-01-23T17:58:00.680668893Z" level=info msg="Container 82490ff1da52142875406ef2eb1b5200d0be21d23eb2f6dad572e0a341490c27: CDI devices from CRI Config.CDIDevices: []"
Jan 23 17:58:00.692774 containerd[2025]: time="2026-01-23T17:58:00.692697801Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-48d8x,Uid:23d9e418-87f2-43f8-9aaf-3ad1cb5fc659,Namespace:kube-system,Attempt:0,} returns sandbox id \"9b7c537d945a760b181b3c4c6ec27769bd2a65f226d0c59ba894f50427de61d6\""
Jan 23 17:58:00.700793 containerd[2025]: time="2026-01-23T17:58:00.700723941Z" level=info msg="CreateContainer within sandbox \"a95791a43ddc0472d9ef982b5e669c9f6f5f3408b2e3577447836642cce60080\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"82490ff1da52142875406ef2eb1b5200d0be21d23eb2f6dad572e0a341490c27\""
Jan 23 17:58:00.702334 containerd[2025]: time="2026-01-23T17:58:00.702272277Z" level=info msg="StartContainer for \"82490ff1da52142875406ef2eb1b5200d0be21d23eb2f6dad572e0a341490c27\""
Jan 23 17:58:00.706344 containerd[2025]: time="2026-01-23T17:58:00.706010745Z" level=info msg="CreateContainer within sandbox \"9b7c537d945a760b181b3c4c6ec27769bd2a65f226d0c59ba894f50427de61d6\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Jan 23 17:58:00.710301 containerd[2025]: time="2026-01-23T17:58:00.710179041Z" level=info msg="connecting to shim 82490ff1da52142875406ef2eb1b5200d0be21d23eb2f6dad572e0a341490c27" address="unix:///run/containerd/s/37655014cf37ee614d2a023fafd6ead12095d934c6cc6bc9aed3e9c9a5c8f9d4" protocol=ttrpc version=3
Jan 23 17:58:00.730241 containerd[2025]: time="2026-01-23T17:58:00.728092582Z" level=info msg="Container 52d1989dd3d0b0507a630020f9ec4f0a38e5a1b87cf641a2fc2c4215e9103f72: CDI devices from CRI Config.CDIDevices: []"
Jan 23 17:58:00.747927 containerd[2025]: time="2026-01-23T17:58:00.747820414Z" level=info msg="CreateContainer within sandbox \"9b7c537d945a760b181b3c4c6ec27769bd2a65f226d0c59ba894f50427de61d6\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"52d1989dd3d0b0507a630020f9ec4f0a38e5a1b87cf641a2fc2c4215e9103f72\""
Jan 23 17:58:00.750660 containerd[2025]: time="2026-01-23T17:58:00.750593614Z" level=info msg="StartContainer for \"52d1989dd3d0b0507a630020f9ec4f0a38e5a1b87cf641a2fc2c4215e9103f72\""
Jan 23 17:58:00.751452 systemd[1]: Started cri-containerd-82490ff1da52142875406ef2eb1b5200d0be21d23eb2f6dad572e0a341490c27.scope - libcontainer container 82490ff1da52142875406ef2eb1b5200d0be21d23eb2f6dad572e0a341490c27.
Jan 23 17:58:00.755544 containerd[2025]: time="2026-01-23T17:58:00.755441134Z" level=info msg="connecting to shim 52d1989dd3d0b0507a630020f9ec4f0a38e5a1b87cf641a2fc2c4215e9103f72" address="unix:///run/containerd/s/05152aea7a39d2406f8d88630567bbf86865f2075262a45761960f5e115c65f5" protocol=ttrpc version=3
Jan 23 17:58:00.796179 systemd[1]: Started cri-containerd-52d1989dd3d0b0507a630020f9ec4f0a38e5a1b87cf641a2fc2c4215e9103f72.scope - libcontainer container 52d1989dd3d0b0507a630020f9ec4f0a38e5a1b87cf641a2fc2c4215e9103f72.
Jan 23 17:58:00.865907 containerd[2025]: time="2026-01-23T17:58:00.865696510Z" level=info msg="StartContainer for \"82490ff1da52142875406ef2eb1b5200d0be21d23eb2f6dad572e0a341490c27\" returns successfully"
Jan 23 17:58:00.896963 containerd[2025]: time="2026-01-23T17:58:00.896209678Z" level=info msg="StartContainer for \"52d1989dd3d0b0507a630020f9ec4f0a38e5a1b87cf641a2fc2c4215e9103f72\" returns successfully"
Jan 23 17:58:01.335210 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2082200121.mount: Deactivated successfully.
Jan 23 17:58:01.629854 kubelet[3549]: I0123 17:58:01.629602 3549 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-48d8x" podStartSLOduration=33.628972078 podStartE2EDuration="33.628972078s" podCreationTimestamp="2026-01-23 17:57:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 17:58:01.627386734 +0000 UTC m=+37.680529712" watchObservedRunningTime="2026-01-23 17:58:01.628972078 +0000 UTC m=+37.682115032"
Jan 23 17:58:01.655030 kubelet[3549]: I0123 17:58:01.654868 3549 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-82jz6" podStartSLOduration=33.654844318 podStartE2EDuration="33.654844318s" podCreationTimestamp="2026-01-23 17:57:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 17:58:01.652143022 +0000 UTC m=+37.705286048" watchObservedRunningTime="2026-01-23 17:58:01.654844318 +0000 UTC m=+37.707987272"
Jan 23 17:58:09.064297 systemd[1]: Started sshd@7-172.31.24.80:22-68.220.241.50:35214.service - OpenSSH per-connection server daemon (68.220.241.50:35214).
Jan 23 17:58:09.584469 sshd[4866]: Accepted publickey for core from 68.220.241.50 port 35214 ssh2: RSA SHA256:bT2W1VfOscVmSCRasYr+KxB4wnT28qHFQXmybiJGx88 Jan 23 17:58:09.586859 sshd-session[4866]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 17:58:09.595145 systemd-logind[2001]: New session 8 of user core. Jan 23 17:58:09.602120 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 23 17:58:10.094661 sshd[4869]: Connection closed by 68.220.241.50 port 35214 Jan 23 17:58:10.095507 sshd-session[4866]: pam_unix(sshd:session): session closed for user core Jan 23 17:58:10.103369 systemd-logind[2001]: Session 8 logged out. Waiting for processes to exit. Jan 23 17:58:10.103811 systemd[1]: sshd@7-172.31.24.80:22-68.220.241.50:35214.service: Deactivated successfully. Jan 23 17:58:10.108836 systemd[1]: session-8.scope: Deactivated successfully. Jan 23 17:58:10.114938 systemd-logind[2001]: Removed session 8. Jan 23 17:58:15.202378 systemd[1]: Started sshd@8-172.31.24.80:22-68.220.241.50:59350.service - OpenSSH per-connection server daemon (68.220.241.50:59350). Jan 23 17:58:15.757517 sshd[4887]: Accepted publickey for core from 68.220.241.50 port 59350 ssh2: RSA SHA256:bT2W1VfOscVmSCRasYr+KxB4wnT28qHFQXmybiJGx88 Jan 23 17:58:15.759850 sshd-session[4887]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 17:58:15.768979 systemd-logind[2001]: New session 9 of user core. Jan 23 17:58:15.773419 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 23 17:58:16.261166 sshd[4890]: Connection closed by 68.220.241.50 port 59350 Jan 23 17:58:16.262115 sshd-session[4887]: pam_unix(sshd:session): session closed for user core Jan 23 17:58:16.269354 systemd[1]: sshd@8-172.31.24.80:22-68.220.241.50:59350.service: Deactivated successfully. Jan 23 17:58:16.273100 systemd[1]: session-9.scope: Deactivated successfully. Jan 23 17:58:16.276079 systemd-logind[2001]: Session 9 logged out. 
Waiting for processes to exit. Jan 23 17:58:16.279162 systemd-logind[2001]: Removed session 9. Jan 23 17:58:21.347298 systemd[1]: Started sshd@9-172.31.24.80:22-68.220.241.50:59364.service - OpenSSH per-connection server daemon (68.220.241.50:59364). Jan 23 17:58:21.865161 sshd[4903]: Accepted publickey for core from 68.220.241.50 port 59364 ssh2: RSA SHA256:bT2W1VfOscVmSCRasYr+KxB4wnT28qHFQXmybiJGx88 Jan 23 17:58:21.866797 sshd-session[4903]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 17:58:21.877143 systemd-logind[2001]: New session 10 of user core. Jan 23 17:58:21.882109 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 23 17:58:22.331016 sshd[4906]: Connection closed by 68.220.241.50 port 59364 Jan 23 17:58:22.331543 sshd-session[4903]: pam_unix(sshd:session): session closed for user core Jan 23 17:58:22.341400 systemd-logind[2001]: Session 10 logged out. Waiting for processes to exit. Jan 23 17:58:22.341636 systemd[1]: sshd@9-172.31.24.80:22-68.220.241.50:59364.service: Deactivated successfully. Jan 23 17:58:22.348797 systemd[1]: session-10.scope: Deactivated successfully. Jan 23 17:58:22.353032 systemd-logind[2001]: Removed session 10. Jan 23 17:58:27.435323 systemd[1]: Started sshd@10-172.31.24.80:22-68.220.241.50:47066.service - OpenSSH per-connection server daemon (68.220.241.50:47066). Jan 23 17:58:27.959080 sshd[4921]: Accepted publickey for core from 68.220.241.50 port 47066 ssh2: RSA SHA256:bT2W1VfOscVmSCRasYr+KxB4wnT28qHFQXmybiJGx88 Jan 23 17:58:27.961694 sshd-session[4921]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 17:58:27.972838 systemd-logind[2001]: New session 11 of user core. Jan 23 17:58:27.979224 systemd[1]: Started session-11.scope - Session 11 of User core. 
Jan 23 17:58:28.448391 sshd[4924]: Connection closed by 68.220.241.50 port 47066 Jan 23 17:58:28.449441 sshd-session[4921]: pam_unix(sshd:session): session closed for user core Jan 23 17:58:28.455224 systemd-logind[2001]: Session 11 logged out. Waiting for processes to exit. Jan 23 17:58:28.455937 systemd[1]: sshd@10-172.31.24.80:22-68.220.241.50:47066.service: Deactivated successfully. Jan 23 17:58:28.460444 systemd[1]: session-11.scope: Deactivated successfully. Jan 23 17:58:28.466956 systemd-logind[2001]: Removed session 11. Jan 23 17:58:28.542651 systemd[1]: Started sshd@11-172.31.24.80:22-68.220.241.50:47074.service - OpenSSH per-connection server daemon (68.220.241.50:47074). Jan 23 17:58:29.059608 sshd[4937]: Accepted publickey for core from 68.220.241.50 port 47074 ssh2: RSA SHA256:bT2W1VfOscVmSCRasYr+KxB4wnT28qHFQXmybiJGx88 Jan 23 17:58:29.062155 sshd-session[4937]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 17:58:29.070475 systemd-logind[2001]: New session 12 of user core. Jan 23 17:58:29.087214 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 23 17:58:29.618088 sshd[4940]: Connection closed by 68.220.241.50 port 47074 Jan 23 17:58:29.619230 sshd-session[4937]: pam_unix(sshd:session): session closed for user core Jan 23 17:58:29.626658 systemd[1]: sshd@11-172.31.24.80:22-68.220.241.50:47074.service: Deactivated successfully. Jan 23 17:58:29.627393 systemd-logind[2001]: Session 12 logged out. Waiting for processes to exit. Jan 23 17:58:29.633590 systemd[1]: session-12.scope: Deactivated successfully. Jan 23 17:58:29.637620 systemd-logind[2001]: Removed session 12. Jan 23 17:58:29.715359 systemd[1]: Started sshd@12-172.31.24.80:22-68.220.241.50:47084.service - OpenSSH per-connection server daemon (68.220.241.50:47084). 
Jan 23 17:58:30.242748 sshd[4950]: Accepted publickey for core from 68.220.241.50 port 47084 ssh2: RSA SHA256:bT2W1VfOscVmSCRasYr+KxB4wnT28qHFQXmybiJGx88 Jan 23 17:58:30.244306 sshd-session[4950]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 17:58:30.253487 systemd-logind[2001]: New session 13 of user core. Jan 23 17:58:30.261166 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 23 17:58:30.719512 sshd[4953]: Connection closed by 68.220.241.50 port 47084 Jan 23 17:58:30.718671 sshd-session[4950]: pam_unix(sshd:session): session closed for user core Jan 23 17:58:30.724569 systemd[1]: sshd@12-172.31.24.80:22-68.220.241.50:47084.service: Deactivated successfully. Jan 23 17:58:30.729306 systemd[1]: session-13.scope: Deactivated successfully. Jan 23 17:58:30.732400 systemd-logind[2001]: Session 13 logged out. Waiting for processes to exit. Jan 23 17:58:30.736219 systemd-logind[2001]: Removed session 13. Jan 23 17:58:35.823464 systemd[1]: Started sshd@13-172.31.24.80:22-68.220.241.50:50622.service - OpenSSH per-connection server daemon (68.220.241.50:50622). Jan 23 17:58:36.375710 sshd[4971]: Accepted publickey for core from 68.220.241.50 port 50622 ssh2: RSA SHA256:bT2W1VfOscVmSCRasYr+KxB4wnT28qHFQXmybiJGx88 Jan 23 17:58:36.377994 sshd-session[4971]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 17:58:36.385754 systemd-logind[2001]: New session 14 of user core. Jan 23 17:58:36.403235 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 23 17:58:36.880917 sshd[4974]: Connection closed by 68.220.241.50 port 50622 Jan 23 17:58:36.879951 sshd-session[4971]: pam_unix(sshd:session): session closed for user core Jan 23 17:58:36.886820 systemd[1]: sshd@13-172.31.24.80:22-68.220.241.50:50622.service: Deactivated successfully. Jan 23 17:58:36.892154 systemd[1]: session-14.scope: Deactivated successfully. Jan 23 17:58:36.893854 systemd-logind[2001]: Session 14 logged out. 
Waiting for processes to exit. Jan 23 17:58:36.897175 systemd-logind[2001]: Removed session 14. Jan 23 17:58:41.971527 systemd[1]: Started sshd@14-172.31.24.80:22-68.220.241.50:50626.service - OpenSSH per-connection server daemon (68.220.241.50:50626). Jan 23 17:58:42.490200 sshd[4986]: Accepted publickey for core from 68.220.241.50 port 50626 ssh2: RSA SHA256:bT2W1VfOscVmSCRasYr+KxB4wnT28qHFQXmybiJGx88 Jan 23 17:58:42.492556 sshd-session[4986]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 17:58:42.501322 systemd-logind[2001]: New session 15 of user core. Jan 23 17:58:42.517187 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 23 17:58:42.964023 sshd[4989]: Connection closed by 68.220.241.50 port 50626 Jan 23 17:58:42.964497 sshd-session[4986]: pam_unix(sshd:session): session closed for user core Jan 23 17:58:42.974781 systemd[1]: sshd@14-172.31.24.80:22-68.220.241.50:50626.service: Deactivated successfully. Jan 23 17:58:42.981528 systemd[1]: session-15.scope: Deactivated successfully. Jan 23 17:58:42.984080 systemd-logind[2001]: Session 15 logged out. Waiting for processes to exit. Jan 23 17:58:42.988944 systemd-logind[2001]: Removed session 15. Jan 23 17:58:48.056387 systemd[1]: Started sshd@15-172.31.24.80:22-68.220.241.50:50152.service - OpenSSH per-connection server daemon (68.220.241.50:50152). Jan 23 17:58:48.576724 sshd[5001]: Accepted publickey for core from 68.220.241.50 port 50152 ssh2: RSA SHA256:bT2W1VfOscVmSCRasYr+KxB4wnT28qHFQXmybiJGx88 Jan 23 17:58:48.579158 sshd-session[5001]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 17:58:48.588237 systemd-logind[2001]: New session 16 of user core. Jan 23 17:58:48.598177 systemd[1]: Started session-16.scope - Session 16 of User core. 
Jan 23 17:58:49.054907 sshd[5004]: Connection closed by 68.220.241.50 port 50152 Jan 23 17:58:49.054745 sshd-session[5001]: pam_unix(sshd:session): session closed for user core Jan 23 17:58:49.063643 systemd[1]: sshd@15-172.31.24.80:22-68.220.241.50:50152.service: Deactivated successfully. Jan 23 17:58:49.070140 systemd[1]: session-16.scope: Deactivated successfully. Jan 23 17:58:49.073841 systemd-logind[2001]: Session 16 logged out. Waiting for processes to exit. Jan 23 17:58:49.076989 systemd-logind[2001]: Removed session 16. Jan 23 17:58:49.151715 systemd[1]: Started sshd@16-172.31.24.80:22-68.220.241.50:50162.service - OpenSSH per-connection server daemon (68.220.241.50:50162). Jan 23 17:58:49.670376 sshd[5016]: Accepted publickey for core from 68.220.241.50 port 50162 ssh2: RSA SHA256:bT2W1VfOscVmSCRasYr+KxB4wnT28qHFQXmybiJGx88 Jan 23 17:58:49.672799 sshd-session[5016]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 17:58:49.681288 systemd-logind[2001]: New session 17 of user core. Jan 23 17:58:49.690121 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 23 17:58:50.227966 sshd[5019]: Connection closed by 68.220.241.50 port 50162 Jan 23 17:58:50.229094 sshd-session[5016]: pam_unix(sshd:session): session closed for user core Jan 23 17:58:50.236029 systemd[1]: sshd@16-172.31.24.80:22-68.220.241.50:50162.service: Deactivated successfully. Jan 23 17:58:50.241380 systemd[1]: session-17.scope: Deactivated successfully. Jan 23 17:58:50.246198 systemd-logind[2001]: Session 17 logged out. Waiting for processes to exit. Jan 23 17:58:50.248675 systemd-logind[2001]: Removed session 17. Jan 23 17:58:50.326493 systemd[1]: Started sshd@17-172.31.24.80:22-68.220.241.50:50168.service - OpenSSH per-connection server daemon (68.220.241.50:50168). 
Jan 23 17:58:50.851169 sshd[5029]: Accepted publickey for core from 68.220.241.50 port 50168 ssh2: RSA SHA256:bT2W1VfOscVmSCRasYr+KxB4wnT28qHFQXmybiJGx88 Jan 23 17:58:50.854005 sshd-session[5029]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 17:58:50.862152 systemd-logind[2001]: New session 18 of user core. Jan 23 17:58:50.874154 systemd[1]: Started session-18.scope - Session 18 of User core. Jan 23 17:58:51.990459 sshd[5032]: Connection closed by 68.220.241.50 port 50168 Jan 23 17:58:51.990943 sshd-session[5029]: pam_unix(sshd:session): session closed for user core Jan 23 17:58:51.998234 systemd-logind[2001]: Session 18 logged out. Waiting for processes to exit. Jan 23 17:58:51.998496 systemd[1]: sshd@17-172.31.24.80:22-68.220.241.50:50168.service: Deactivated successfully. Jan 23 17:58:52.001914 systemd[1]: session-18.scope: Deactivated successfully. Jan 23 17:58:52.009600 systemd-logind[2001]: Removed session 18. Jan 23 17:58:52.087389 systemd[1]: Started sshd@18-172.31.24.80:22-68.220.241.50:50174.service - OpenSSH per-connection server daemon (68.220.241.50:50174). Jan 23 17:58:52.629909 sshd[5047]: Accepted publickey for core from 68.220.241.50 port 50174 ssh2: RSA SHA256:bT2W1VfOscVmSCRasYr+KxB4wnT28qHFQXmybiJGx88 Jan 23 17:58:52.632254 sshd-session[5047]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 17:58:52.640307 systemd-logind[2001]: New session 19 of user core. Jan 23 17:58:52.648201 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 23 17:58:53.368164 sshd[5050]: Connection closed by 68.220.241.50 port 50174 Jan 23 17:58:53.369164 sshd-session[5047]: pam_unix(sshd:session): session closed for user core Jan 23 17:58:53.375771 systemd[1]: sshd@18-172.31.24.80:22-68.220.241.50:50174.service: Deactivated successfully. Jan 23 17:58:53.381551 systemd[1]: session-19.scope: Deactivated successfully. Jan 23 17:58:53.384600 systemd-logind[2001]: Session 19 logged out. 
Waiting for processes to exit. Jan 23 17:58:53.388241 systemd-logind[2001]: Removed session 19. Jan 23 17:58:53.463834 systemd[1]: Started sshd@19-172.31.24.80:22-68.220.241.50:55074.service - OpenSSH per-connection server daemon (68.220.241.50:55074). Jan 23 17:58:53.995426 sshd[5062]: Accepted publickey for core from 68.220.241.50 port 55074 ssh2: RSA SHA256:bT2W1VfOscVmSCRasYr+KxB4wnT28qHFQXmybiJGx88 Jan 23 17:58:53.998028 sshd-session[5062]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 17:58:54.006323 systemd-logind[2001]: New session 20 of user core. Jan 23 17:58:54.012154 systemd[1]: Started session-20.scope - Session 20 of User core. Jan 23 17:58:54.472280 sshd[5065]: Connection closed by 68.220.241.50 port 55074 Jan 23 17:58:54.473324 sshd-session[5062]: pam_unix(sshd:session): session closed for user core Jan 23 17:58:54.484841 systemd[1]: sshd@19-172.31.24.80:22-68.220.241.50:55074.service: Deactivated successfully. Jan 23 17:58:54.493559 systemd[1]: session-20.scope: Deactivated successfully. Jan 23 17:58:54.499725 systemd-logind[2001]: Session 20 logged out. Waiting for processes to exit. Jan 23 17:58:54.505484 systemd-logind[2001]: Removed session 20. Jan 23 17:58:59.563356 systemd[1]: Started sshd@20-172.31.24.80:22-68.220.241.50:55086.service - OpenSSH per-connection server daemon (68.220.241.50:55086). Jan 23 17:59:00.082941 sshd[5079]: Accepted publickey for core from 68.220.241.50 port 55086 ssh2: RSA SHA256:bT2W1VfOscVmSCRasYr+KxB4wnT28qHFQXmybiJGx88 Jan 23 17:59:00.085126 sshd-session[5079]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 17:59:00.092670 systemd-logind[2001]: New session 21 of user core. Jan 23 17:59:00.104182 systemd[1]: Started session-21.scope - Session 21 of User core. 
Jan 23 17:59:00.552166 sshd[5082]: Connection closed by 68.220.241.50 port 55086 Jan 23 17:59:00.554169 sshd-session[5079]: pam_unix(sshd:session): session closed for user core Jan 23 17:59:00.561528 systemd[1]: sshd@20-172.31.24.80:22-68.220.241.50:55086.service: Deactivated successfully. Jan 23 17:59:00.566159 systemd[1]: session-21.scope: Deactivated successfully. Jan 23 17:59:00.568022 systemd-logind[2001]: Session 21 logged out. Waiting for processes to exit. Jan 23 17:59:00.571727 systemd-logind[2001]: Removed session 21. Jan 23 17:59:05.647640 systemd[1]: Started sshd@21-172.31.24.80:22-68.220.241.50:47500.service - OpenSSH per-connection server daemon (68.220.241.50:47500). Jan 23 17:59:06.166070 sshd[5098]: Accepted publickey for core from 68.220.241.50 port 47500 ssh2: RSA SHA256:bT2W1VfOscVmSCRasYr+KxB4wnT28qHFQXmybiJGx88 Jan 23 17:59:06.168955 sshd-session[5098]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 17:59:06.178549 systemd-logind[2001]: New session 22 of user core. Jan 23 17:59:06.184157 systemd[1]: Started session-22.scope - Session 22 of User core. Jan 23 17:59:06.650973 sshd[5101]: Connection closed by 68.220.241.50 port 47500 Jan 23 17:59:06.649300 sshd-session[5098]: pam_unix(sshd:session): session closed for user core Jan 23 17:59:06.659374 systemd[1]: sshd@21-172.31.24.80:22-68.220.241.50:47500.service: Deactivated successfully. Jan 23 17:59:06.665552 systemd[1]: session-22.scope: Deactivated successfully. Jan 23 17:59:06.669034 systemd-logind[2001]: Session 22 logged out. Waiting for processes to exit. Jan 23 17:59:06.672450 systemd-logind[2001]: Removed session 22. Jan 23 17:59:06.739666 systemd[1]: Started sshd@22-172.31.24.80:22-68.220.241.50:47516.service - OpenSSH per-connection server daemon (68.220.241.50:47516). 
Jan 23 17:59:07.258151 sshd[5112]: Accepted publickey for core from 68.220.241.50 port 47516 ssh2: RSA SHA256:bT2W1VfOscVmSCRasYr+KxB4wnT28qHFQXmybiJGx88 Jan 23 17:59:07.260468 sshd-session[5112]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 17:59:07.269972 systemd-logind[2001]: New session 23 of user core. Jan 23 17:59:07.277153 systemd[1]: Started session-23.scope - Session 23 of User core. Jan 23 17:59:09.772282 containerd[2025]: time="2026-01-23T17:59:09.772174084Z" level=info msg="StopContainer for \"c09d2f9e76894b28943d55ed2c5ab9f7ec705d83d2b3e95b2ba46a15e73f36b3\" with timeout 30 (s)" Jan 23 17:59:09.773693 containerd[2025]: time="2026-01-23T17:59:09.773351764Z" level=info msg="Stop container \"c09d2f9e76894b28943d55ed2c5ab9f7ec705d83d2b3e95b2ba46a15e73f36b3\" with signal terminated" Jan 23 17:59:09.833754 systemd[1]: cri-containerd-c09d2f9e76894b28943d55ed2c5ab9f7ec705d83d2b3e95b2ba46a15e73f36b3.scope: Deactivated successfully. Jan 23 17:59:09.839369 containerd[2025]: time="2026-01-23T17:59:09.839127005Z" level=info msg="received container exit event container_id:\"c09d2f9e76894b28943d55ed2c5ab9f7ec705d83d2b3e95b2ba46a15e73f36b3\" id:\"c09d2f9e76894b28943d55ed2c5ab9f7ec705d83d2b3e95b2ba46a15e73f36b3\" pid:4088 exited_at:{seconds:1769191149 nanos:837203501}" Jan 23 17:59:09.884934 containerd[2025]: time="2026-01-23T17:59:09.884702381Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 23 17:59:09.919468 containerd[2025]: time="2026-01-23T17:59:09.919399325Z" level=info msg="StopContainer for \"65fbb457d36f61946fd2665f7c489211d5d9ae38a9be68e46fb1f36e16720638\" with timeout 2 (s)" Jan 23 17:59:09.919459 systemd[1]: 
run-containerd-io.containerd.runtime.v2.task-k8s.io-c09d2f9e76894b28943d55ed2c5ab9f7ec705d83d2b3e95b2ba46a15e73f36b3-rootfs.mount: Deactivated successfully. Jan 23 17:59:09.922480 containerd[2025]: time="2026-01-23T17:59:09.922298513Z" level=info msg="Stop container \"65fbb457d36f61946fd2665f7c489211d5d9ae38a9be68e46fb1f36e16720638\" with signal terminated" Jan 23 17:59:09.940212 systemd-networkd[1840]: lxc_health: Link DOWN Jan 23 17:59:09.940232 systemd-networkd[1840]: lxc_health: Lost carrier Jan 23 17:59:09.955558 containerd[2025]: time="2026-01-23T17:59:09.955203173Z" level=info msg="StopContainer for \"c09d2f9e76894b28943d55ed2c5ab9f7ec705d83d2b3e95b2ba46a15e73f36b3\" returns successfully" Jan 23 17:59:09.957856 containerd[2025]: time="2026-01-23T17:59:09.957808577Z" level=info msg="StopPodSandbox for \"07d520df9825264bb79784059f2147470b9716773153f5fd2a526c555ab39762\"" Jan 23 17:59:09.958218 containerd[2025]: time="2026-01-23T17:59:09.958181825Z" level=info msg="Container to stop \"c09d2f9e76894b28943d55ed2c5ab9f7ec705d83d2b3e95b2ba46a15e73f36b3\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 23 17:59:09.967859 systemd[1]: cri-containerd-65fbb457d36f61946fd2665f7c489211d5d9ae38a9be68e46fb1f36e16720638.scope: Deactivated successfully. Jan 23 17:59:09.970057 systemd[1]: cri-containerd-65fbb457d36f61946fd2665f7c489211d5d9ae38a9be68e46fb1f36e16720638.scope: Consumed 14.401s CPU time, 124.7M memory peak, 120K read from disk, 12.9M written to disk. Jan 23 17:59:09.973288 containerd[2025]: time="2026-01-23T17:59:09.972868205Z" level=info msg="received container exit event container_id:\"65fbb457d36f61946fd2665f7c489211d5d9ae38a9be68e46fb1f36e16720638\" id:\"65fbb457d36f61946fd2665f7c489211d5d9ae38a9be68e46fb1f36e16720638\" pid:4200 exited_at:{seconds:1769191149 nanos:969814997}" Jan 23 17:59:09.993252 systemd[1]: cri-containerd-07d520df9825264bb79784059f2147470b9716773153f5fd2a526c555ab39762.scope: Deactivated successfully. 
Jan 23 17:59:10.004149 containerd[2025]: time="2026-01-23T17:59:10.003989594Z" level=info msg="received sandbox exit event container_id:\"07d520df9825264bb79784059f2147470b9716773153f5fd2a526c555ab39762\" id:\"07d520df9825264bb79784059f2147470b9716773153f5fd2a526c555ab39762\" exit_status:137 exited_at:{seconds:1769191150 nanos:3444878}" monitor_name=podsandbox Jan 23 17:59:10.039367 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-65fbb457d36f61946fd2665f7c489211d5d9ae38a9be68e46fb1f36e16720638-rootfs.mount: Deactivated successfully. Jan 23 17:59:10.062195 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-07d520df9825264bb79784059f2147470b9716773153f5fd2a526c555ab39762-rootfs.mount: Deactivated successfully. Jan 23 17:59:10.065564 containerd[2025]: time="2026-01-23T17:59:10.065090894Z" level=info msg="shim disconnected" id=07d520df9825264bb79784059f2147470b9716773153f5fd2a526c555ab39762 namespace=k8s.io Jan 23 17:59:10.065564 containerd[2025]: time="2026-01-23T17:59:10.065143742Z" level=warning msg="cleaning up after shim disconnected" id=07d520df9825264bb79784059f2147470b9716773153f5fd2a526c555ab39762 namespace=k8s.io Jan 23 17:59:10.065564 containerd[2025]: time="2026-01-23T17:59:10.065189942Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 23 17:59:10.069717 containerd[2025]: time="2026-01-23T17:59:10.069633542Z" level=info msg="StopContainer for \"65fbb457d36f61946fd2665f7c489211d5d9ae38a9be68e46fb1f36e16720638\" returns successfully" Jan 23 17:59:10.070370 containerd[2025]: time="2026-01-23T17:59:10.070293626Z" level=info msg="StopPodSandbox for \"d2eab2f93666c6149ee4af351f57bc5feeb96b5f91935b9965d9d3ddfccef2d9\"" Jan 23 17:59:10.070473 containerd[2025]: time="2026-01-23T17:59:10.070448078Z" level=info msg="Container to stop \"65fbb457d36f61946fd2665f7c489211d5d9ae38a9be68e46fb1f36e16720638\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 23 17:59:10.070532 containerd[2025]: 
time="2026-01-23T17:59:10.070475942Z" level=info msg="Container to stop \"29c7b36c81f90dd167717fc01eb8bb5c3dec3698948aee7b2c1f8b09bf73f643\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 23 17:59:10.070586 containerd[2025]: time="2026-01-23T17:59:10.070521914Z" level=info msg="Container to stop \"bf71a3954b961eb6c3a28b316f04f12ac066d710bf71b04d26a47af3fce29c1a\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 23 17:59:10.070586 containerd[2025]: time="2026-01-23T17:59:10.070548134Z" level=info msg="Container to stop \"0909a7cb61e5b526d43a2afce2aa79fbd02216acccedd471bd1dd295fb665888\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 23 17:59:10.070696 containerd[2025]: time="2026-01-23T17:59:10.070572458Z" level=info msg="Container to stop \"cc51712bcb0ce28331c932693e1319114c035e0f81be681f4449e3769ba88bf5\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 23 17:59:10.085786 systemd[1]: cri-containerd-d2eab2f93666c6149ee4af351f57bc5feeb96b5f91935b9965d9d3ddfccef2d9.scope: Deactivated successfully. 
Jan 23 17:59:10.094187 containerd[2025]: time="2026-01-23T17:59:10.094052726Z" level=info msg="received sandbox exit event container_id:\"d2eab2f93666c6149ee4af351f57bc5feeb96b5f91935b9965d9d3ddfccef2d9\" id:\"d2eab2f93666c6149ee4af351f57bc5feeb96b5f91935b9965d9d3ddfccef2d9\" exit_status:137 exited_at:{seconds:1769191150 nanos:91216106}" monitor_name=podsandbox Jan 23 17:59:10.105967 containerd[2025]: time="2026-01-23T17:59:10.105103958Z" level=info msg="TearDown network for sandbox \"07d520df9825264bb79784059f2147470b9716773153f5fd2a526c555ab39762\" successfully" Jan 23 17:59:10.105967 containerd[2025]: time="2026-01-23T17:59:10.105155006Z" level=info msg="StopPodSandbox for \"07d520df9825264bb79784059f2147470b9716773153f5fd2a526c555ab39762\" returns successfully" Jan 23 17:59:10.110103 containerd[2025]: time="2026-01-23T17:59:10.108786338Z" level=info msg="received sandbox container exit event sandbox_id:\"07d520df9825264bb79784059f2147470b9716773153f5fd2a526c555ab39762\" exit_status:137 exited_at:{seconds:1769191150 nanos:3444878}" monitor_name=criService Jan 23 17:59:10.109686 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-07d520df9825264bb79784059f2147470b9716773153f5fd2a526c555ab39762-shm.mount: Deactivated successfully. Jan 23 17:59:10.157438 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d2eab2f93666c6149ee4af351f57bc5feeb96b5f91935b9965d9d3ddfccef2d9-rootfs.mount: Deactivated successfully. 
Jan 23 17:59:10.167193 containerd[2025]: time="2026-01-23T17:59:10.167119634Z" level=info msg="shim disconnected" id=d2eab2f93666c6149ee4af351f57bc5feeb96b5f91935b9965d9d3ddfccef2d9 namespace=k8s.io Jan 23 17:59:10.167432 containerd[2025]: time="2026-01-23T17:59:10.167188346Z" level=warning msg="cleaning up after shim disconnected" id=d2eab2f93666c6149ee4af351f57bc5feeb96b5f91935b9965d9d3ddfccef2d9 namespace=k8s.io Jan 23 17:59:10.167432 containerd[2025]: time="2026-01-23T17:59:10.167238518Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 23 17:59:10.188191 containerd[2025]: time="2026-01-23T17:59:10.188044815Z" level=info msg="received sandbox container exit event sandbox_id:\"d2eab2f93666c6149ee4af351f57bc5feeb96b5f91935b9965d9d3ddfccef2d9\" exit_status:137 exited_at:{seconds:1769191150 nanos:91216106}" monitor_name=criService Jan 23 17:59:10.188191 containerd[2025]: time="2026-01-23T17:59:10.188127999Z" level=info msg="TearDown network for sandbox \"d2eab2f93666c6149ee4af351f57bc5feeb96b5f91935b9965d9d3ddfccef2d9\" successfully" Jan 23 17:59:10.188191 containerd[2025]: time="2026-01-23T17:59:10.188155383Z" level=info msg="StopPodSandbox for \"d2eab2f93666c6149ee4af351f57bc5feeb96b5f91935b9965d9d3ddfccef2d9\" returns successfully" Jan 23 17:59:10.281701 kubelet[3549]: I0123 17:59:10.281394 3549 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nrcmh\" (UniqueName: \"kubernetes.io/projected/15a17f4e-2b13-4157-9e34-4b3b31367d03-kube-api-access-nrcmh\") pod \"15a17f4e-2b13-4157-9e34-4b3b31367d03\" (UID: \"15a17f4e-2b13-4157-9e34-4b3b31367d03\") " Jan 23 17:59:10.281701 kubelet[3549]: I0123 17:59:10.281460 3549 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/15a17f4e-2b13-4157-9e34-4b3b31367d03-cilium-config-path\") pod \"15a17f4e-2b13-4157-9e34-4b3b31367d03\" (UID: \"15a17f4e-2b13-4157-9e34-4b3b31367d03\") " Jan 23 
17:59:10.294516 kubelet[3549]: I0123 17:59:10.292720 3549 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/15a17f4e-2b13-4157-9e34-4b3b31367d03-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "15a17f4e-2b13-4157-9e34-4b3b31367d03" (UID: "15a17f4e-2b13-4157-9e34-4b3b31367d03"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 23 17:59:10.295888 kubelet[3549]: I0123 17:59:10.295749 3549 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/15a17f4e-2b13-4157-9e34-4b3b31367d03-kube-api-access-nrcmh" (OuterVolumeSpecName: "kube-api-access-nrcmh") pod "15a17f4e-2b13-4157-9e34-4b3b31367d03" (UID: "15a17f4e-2b13-4157-9e34-4b3b31367d03"). InnerVolumeSpecName "kube-api-access-nrcmh". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 23 17:59:10.335501 systemd[1]: Removed slice kubepods-besteffort-pod15a17f4e_2b13_4157_9e34_4b3b31367d03.slice - libcontainer container kubepods-besteffort-pod15a17f4e_2b13_4157_9e34_4b3b31367d03.slice. 
Jan 23 17:59:10.382588 kubelet[3549]: I0123 17:59:10.382515 3549 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/d2f43bdd-6c9a-4f6e-952a-1f83a91833e4-cilium-run\") pod \"d2f43bdd-6c9a-4f6e-952a-1f83a91833e4\" (UID: \"d2f43bdd-6c9a-4f6e-952a-1f83a91833e4\") " Jan 23 17:59:10.382588 kubelet[3549]: I0123 17:59:10.382588 3549 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d2f43bdd-6c9a-4f6e-952a-1f83a91833e4-lib-modules\") pod \"d2f43bdd-6c9a-4f6e-952a-1f83a91833e4\" (UID: \"d2f43bdd-6c9a-4f6e-952a-1f83a91833e4\") " Jan 23 17:59:10.382810 kubelet[3549]: I0123 17:59:10.382625 3549 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d2f43bdd-6c9a-4f6e-952a-1f83a91833e4-xtables-lock\") pod \"d2f43bdd-6c9a-4f6e-952a-1f83a91833e4\" (UID: \"d2f43bdd-6c9a-4f6e-952a-1f83a91833e4\") " Jan 23 17:59:10.382810 kubelet[3549]: I0123 17:59:10.382664 3549 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/d2f43bdd-6c9a-4f6e-952a-1f83a91833e4-cilium-cgroup\") pod \"d2f43bdd-6c9a-4f6e-952a-1f83a91833e4\" (UID: \"d2f43bdd-6c9a-4f6e-952a-1f83a91833e4\") " Jan 23 17:59:10.382810 kubelet[3549]: I0123 17:59:10.382695 3549 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d2f43bdd-6c9a-4f6e-952a-1f83a91833e4-etc-cni-netd\") pod \"d2f43bdd-6c9a-4f6e-952a-1f83a91833e4\" (UID: \"d2f43bdd-6c9a-4f6e-952a-1f83a91833e4\") " Jan 23 17:59:10.382810 kubelet[3549]: I0123 17:59:10.382734 3549 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hpwpg\" (UniqueName: 
\"kubernetes.io/projected/d2f43bdd-6c9a-4f6e-952a-1f83a91833e4-kube-api-access-hpwpg\") pod \"d2f43bdd-6c9a-4f6e-952a-1f83a91833e4\" (UID: \"d2f43bdd-6c9a-4f6e-952a-1f83a91833e4\") " Jan 23 17:59:10.382810 kubelet[3549]: I0123 17:59:10.382771 3549 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/d2f43bdd-6c9a-4f6e-952a-1f83a91833e4-host-proc-sys-net\") pod \"d2f43bdd-6c9a-4f6e-952a-1f83a91833e4\" (UID: \"d2f43bdd-6c9a-4f6e-952a-1f83a91833e4\") " Jan 23 17:59:10.382810 kubelet[3549]: I0123 17:59:10.382806 3549 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/d2f43bdd-6c9a-4f6e-952a-1f83a91833e4-bpf-maps\") pod \"d2f43bdd-6c9a-4f6e-952a-1f83a91833e4\" (UID: \"d2f43bdd-6c9a-4f6e-952a-1f83a91833e4\") " Jan 23 17:59:10.383149 kubelet[3549]: I0123 17:59:10.382844 3549 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d2f43bdd-6c9a-4f6e-952a-1f83a91833e4-cilium-config-path\") pod \"d2f43bdd-6c9a-4f6e-952a-1f83a91833e4\" (UID: \"d2f43bdd-6c9a-4f6e-952a-1f83a91833e4\") " Jan 23 17:59:10.383149 kubelet[3549]: I0123 17:59:10.382905 3549 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/d2f43bdd-6c9a-4f6e-952a-1f83a91833e4-hostproc\") pod \"d2f43bdd-6c9a-4f6e-952a-1f83a91833e4\" (UID: \"d2f43bdd-6c9a-4f6e-952a-1f83a91833e4\") " Jan 23 17:59:10.383149 kubelet[3549]: I0123 17:59:10.382941 3549 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/d2f43bdd-6c9a-4f6e-952a-1f83a91833e4-host-proc-sys-kernel\") pod \"d2f43bdd-6c9a-4f6e-952a-1f83a91833e4\" (UID: \"d2f43bdd-6c9a-4f6e-952a-1f83a91833e4\") " Jan 23 17:59:10.383149 kubelet[3549]: I0123 17:59:10.382980 
3549 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/d2f43bdd-6c9a-4f6e-952a-1f83a91833e4-clustermesh-secrets\") pod \"d2f43bdd-6c9a-4f6e-952a-1f83a91833e4\" (UID: \"d2f43bdd-6c9a-4f6e-952a-1f83a91833e4\") " Jan 23 17:59:10.383149 kubelet[3549]: I0123 17:59:10.383013 3549 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/d2f43bdd-6c9a-4f6e-952a-1f83a91833e4-cni-path\") pod \"d2f43bdd-6c9a-4f6e-952a-1f83a91833e4\" (UID: \"d2f43bdd-6c9a-4f6e-952a-1f83a91833e4\") " Jan 23 17:59:10.383149 kubelet[3549]: I0123 17:59:10.383049 3549 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/d2f43bdd-6c9a-4f6e-952a-1f83a91833e4-hubble-tls\") pod \"d2f43bdd-6c9a-4f6e-952a-1f83a91833e4\" (UID: \"d2f43bdd-6c9a-4f6e-952a-1f83a91833e4\") " Jan 23 17:59:10.383437 kubelet[3549]: I0123 17:59:10.383113 3549 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-nrcmh\" (UniqueName: \"kubernetes.io/projected/15a17f4e-2b13-4157-9e34-4b3b31367d03-kube-api-access-nrcmh\") on node \"ip-172-31-24-80\" DevicePath \"\"" Jan 23 17:59:10.383437 kubelet[3549]: I0123 17:59:10.383136 3549 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/15a17f4e-2b13-4157-9e34-4b3b31367d03-cilium-config-path\") on node \"ip-172-31-24-80\" DevicePath \"\"" Jan 23 17:59:10.384917 kubelet[3549]: I0123 17:59:10.383710 3549 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d2f43bdd-6c9a-4f6e-952a-1f83a91833e4-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "d2f43bdd-6c9a-4f6e-952a-1f83a91833e4" (UID: "d2f43bdd-6c9a-4f6e-952a-1f83a91833e4"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 23 17:59:10.384917 kubelet[3549]: I0123 17:59:10.383792 3549 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d2f43bdd-6c9a-4f6e-952a-1f83a91833e4-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "d2f43bdd-6c9a-4f6e-952a-1f83a91833e4" (UID: "d2f43bdd-6c9a-4f6e-952a-1f83a91833e4"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 23 17:59:10.384917 kubelet[3549]: I0123 17:59:10.383830 3549 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d2f43bdd-6c9a-4f6e-952a-1f83a91833e4-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "d2f43bdd-6c9a-4f6e-952a-1f83a91833e4" (UID: "d2f43bdd-6c9a-4f6e-952a-1f83a91833e4"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 23 17:59:10.384917 kubelet[3549]: I0123 17:59:10.383867 3549 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d2f43bdd-6c9a-4f6e-952a-1f83a91833e4-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "d2f43bdd-6c9a-4f6e-952a-1f83a91833e4" (UID: "d2f43bdd-6c9a-4f6e-952a-1f83a91833e4"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 23 17:59:10.384917 kubelet[3549]: I0123 17:59:10.383953 3549 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d2f43bdd-6c9a-4f6e-952a-1f83a91833e4-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "d2f43bdd-6c9a-4f6e-952a-1f83a91833e4" (UID: "d2f43bdd-6c9a-4f6e-952a-1f83a91833e4"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 23 17:59:10.385270 kubelet[3549]: I0123 17:59:10.383991 3549 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d2f43bdd-6c9a-4f6e-952a-1f83a91833e4-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "d2f43bdd-6c9a-4f6e-952a-1f83a91833e4" (UID: "d2f43bdd-6c9a-4f6e-952a-1f83a91833e4"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 23 17:59:10.386339 kubelet[3549]: I0123 17:59:10.386251 3549 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d2f43bdd-6c9a-4f6e-952a-1f83a91833e4-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "d2f43bdd-6c9a-4f6e-952a-1f83a91833e4" (UID: "d2f43bdd-6c9a-4f6e-952a-1f83a91833e4"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 23 17:59:10.386577 kubelet[3549]: I0123 17:59:10.386543 3549 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d2f43bdd-6c9a-4f6e-952a-1f83a91833e4-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "d2f43bdd-6c9a-4f6e-952a-1f83a91833e4" (UID: "d2f43bdd-6c9a-4f6e-952a-1f83a91833e4"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 23 17:59:10.386726 kubelet[3549]: I0123 17:59:10.386697 3549 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d2f43bdd-6c9a-4f6e-952a-1f83a91833e4-hostproc" (OuterVolumeSpecName: "hostproc") pod "d2f43bdd-6c9a-4f6e-952a-1f83a91833e4" (UID: "d2f43bdd-6c9a-4f6e-952a-1f83a91833e4"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 23 17:59:10.387155 kubelet[3549]: I0123 17:59:10.387077 3549 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d2f43bdd-6c9a-4f6e-952a-1f83a91833e4-cni-path" (OuterVolumeSpecName: "cni-path") pod "d2f43bdd-6c9a-4f6e-952a-1f83a91833e4" (UID: "d2f43bdd-6c9a-4f6e-952a-1f83a91833e4"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 23 17:59:10.391274 kubelet[3549]: I0123 17:59:10.391205 3549 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d2f43bdd-6c9a-4f6e-952a-1f83a91833e4-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "d2f43bdd-6c9a-4f6e-952a-1f83a91833e4" (UID: "d2f43bdd-6c9a-4f6e-952a-1f83a91833e4"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 23 17:59:10.393277 kubelet[3549]: I0123 17:59:10.393205 3549 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d2f43bdd-6c9a-4f6e-952a-1f83a91833e4-kube-api-access-hpwpg" (OuterVolumeSpecName: "kube-api-access-hpwpg") pod "d2f43bdd-6c9a-4f6e-952a-1f83a91833e4" (UID: "d2f43bdd-6c9a-4f6e-952a-1f83a91833e4"). InnerVolumeSpecName "kube-api-access-hpwpg". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 23 17:59:10.394153 kubelet[3549]: I0123 17:59:10.394110 3549 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d2f43bdd-6c9a-4f6e-952a-1f83a91833e4-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "d2f43bdd-6c9a-4f6e-952a-1f83a91833e4" (UID: "d2f43bdd-6c9a-4f6e-952a-1f83a91833e4"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 23 17:59:10.396272 kubelet[3549]: I0123 17:59:10.396185 3549 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d2f43bdd-6c9a-4f6e-952a-1f83a91833e4-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "d2f43bdd-6c9a-4f6e-952a-1f83a91833e4" (UID: "d2f43bdd-6c9a-4f6e-952a-1f83a91833e4"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 23 17:59:10.484434 kubelet[3549]: I0123 17:59:10.484281 3549 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d2f43bdd-6c9a-4f6e-952a-1f83a91833e4-lib-modules\") on node \"ip-172-31-24-80\" DevicePath \"\"" Jan 23 17:59:10.484434 kubelet[3549]: I0123 17:59:10.484339 3549 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d2f43bdd-6c9a-4f6e-952a-1f83a91833e4-xtables-lock\") on node \"ip-172-31-24-80\" DevicePath \"\"" Jan 23 17:59:10.484434 kubelet[3549]: I0123 17:59:10.484361 3549 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/d2f43bdd-6c9a-4f6e-952a-1f83a91833e4-cilium-cgroup\") on node \"ip-172-31-24-80\" DevicePath \"\"" Jan 23 17:59:10.484434 kubelet[3549]: I0123 17:59:10.484383 3549 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d2f43bdd-6c9a-4f6e-952a-1f83a91833e4-etc-cni-netd\") on node \"ip-172-31-24-80\" DevicePath \"\"" Jan 23 17:59:10.484434 kubelet[3549]: I0123 17:59:10.484408 3549 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-hpwpg\" (UniqueName: \"kubernetes.io/projected/d2f43bdd-6c9a-4f6e-952a-1f83a91833e4-kube-api-access-hpwpg\") on node \"ip-172-31-24-80\" DevicePath \"\"" Jan 23 17:59:10.484811 kubelet[3549]: I0123 17:59:10.484458 3549 reconciler_common.go:299] "Volume detached for 
volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/d2f43bdd-6c9a-4f6e-952a-1f83a91833e4-host-proc-sys-net\") on node \"ip-172-31-24-80\" DevicePath \"\"" Jan 23 17:59:10.484811 kubelet[3549]: I0123 17:59:10.484483 3549 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/d2f43bdd-6c9a-4f6e-952a-1f83a91833e4-bpf-maps\") on node \"ip-172-31-24-80\" DevicePath \"\"" Jan 23 17:59:10.484811 kubelet[3549]: I0123 17:59:10.484504 3549 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d2f43bdd-6c9a-4f6e-952a-1f83a91833e4-cilium-config-path\") on node \"ip-172-31-24-80\" DevicePath \"\"" Jan 23 17:59:10.484811 kubelet[3549]: I0123 17:59:10.484524 3549 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/d2f43bdd-6c9a-4f6e-952a-1f83a91833e4-hostproc\") on node \"ip-172-31-24-80\" DevicePath \"\"" Jan 23 17:59:10.484811 kubelet[3549]: I0123 17:59:10.484542 3549 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/d2f43bdd-6c9a-4f6e-952a-1f83a91833e4-host-proc-sys-kernel\") on node \"ip-172-31-24-80\" DevicePath \"\"" Jan 23 17:59:10.484811 kubelet[3549]: I0123 17:59:10.484562 3549 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/d2f43bdd-6c9a-4f6e-952a-1f83a91833e4-clustermesh-secrets\") on node \"ip-172-31-24-80\" DevicePath \"\"" Jan 23 17:59:10.484811 kubelet[3549]: I0123 17:59:10.484581 3549 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/d2f43bdd-6c9a-4f6e-952a-1f83a91833e4-cni-path\") on node \"ip-172-31-24-80\" DevicePath \"\"" Jan 23 17:59:10.484811 kubelet[3549]: I0123 17:59:10.484600 3549 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: 
\"kubernetes.io/projected/d2f43bdd-6c9a-4f6e-952a-1f83a91833e4-hubble-tls\") on node \"ip-172-31-24-80\" DevicePath \"\"" Jan 23 17:59:10.485262 kubelet[3549]: I0123 17:59:10.484619 3549 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/d2f43bdd-6c9a-4f6e-952a-1f83a91833e4-cilium-run\") on node \"ip-172-31-24-80\" DevicePath \"\"" Jan 23 17:59:10.820910 kubelet[3549]: I0123 17:59:10.820251 3549 scope.go:117] "RemoveContainer" containerID="c09d2f9e76894b28943d55ed2c5ab9f7ec705d83d2b3e95b2ba46a15e73f36b3" Jan 23 17:59:10.829980 containerd[2025]: time="2026-01-23T17:59:10.829917450Z" level=info msg="RemoveContainer for \"c09d2f9e76894b28943d55ed2c5ab9f7ec705d83d2b3e95b2ba46a15e73f36b3\"" Jan 23 17:59:10.845750 containerd[2025]: time="2026-01-23T17:59:10.845602626Z" level=info msg="RemoveContainer for \"c09d2f9e76894b28943d55ed2c5ab9f7ec705d83d2b3e95b2ba46a15e73f36b3\" returns successfully" Jan 23 17:59:10.846434 kubelet[3549]: I0123 17:59:10.846250 3549 scope.go:117] "RemoveContainer" containerID="c09d2f9e76894b28943d55ed2c5ab9f7ec705d83d2b3e95b2ba46a15e73f36b3" Jan 23 17:59:10.849650 containerd[2025]: time="2026-01-23T17:59:10.848963430Z" level=error msg="ContainerStatus for \"c09d2f9e76894b28943d55ed2c5ab9f7ec705d83d2b3e95b2ba46a15e73f36b3\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c09d2f9e76894b28943d55ed2c5ab9f7ec705d83d2b3e95b2ba46a15e73f36b3\": not found" Jan 23 17:59:10.850422 kubelet[3549]: E0123 17:59:10.850383 3549 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c09d2f9e76894b28943d55ed2c5ab9f7ec705d83d2b3e95b2ba46a15e73f36b3\": not found" containerID="c09d2f9e76894b28943d55ed2c5ab9f7ec705d83d2b3e95b2ba46a15e73f36b3" Jan 23 17:59:10.850736 kubelet[3549]: I0123 17:59:10.850656 3549 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"containerd","ID":"c09d2f9e76894b28943d55ed2c5ab9f7ec705d83d2b3e95b2ba46a15e73f36b3"} err="failed to get container status \"c09d2f9e76894b28943d55ed2c5ab9f7ec705d83d2b3e95b2ba46a15e73f36b3\": rpc error: code = NotFound desc = an error occurred when try to find container \"c09d2f9e76894b28943d55ed2c5ab9f7ec705d83d2b3e95b2ba46a15e73f36b3\": not found" Jan 23 17:59:10.850906 kubelet[3549]: I0123 17:59:10.850854 3549 scope.go:117] "RemoveContainer" containerID="65fbb457d36f61946fd2665f7c489211d5d9ae38a9be68e46fb1f36e16720638" Jan 23 17:59:10.860405 containerd[2025]: time="2026-01-23T17:59:10.860149914Z" level=info msg="RemoveContainer for \"65fbb457d36f61946fd2665f7c489211d5d9ae38a9be68e46fb1f36e16720638\"" Jan 23 17:59:10.865921 systemd[1]: Removed slice kubepods-burstable-podd2f43bdd_6c9a_4f6e_952a_1f83a91833e4.slice - libcontainer container kubepods-burstable-podd2f43bdd_6c9a_4f6e_952a_1f83a91833e4.slice. Jan 23 17:59:10.866168 systemd[1]: kubepods-burstable-podd2f43bdd_6c9a_4f6e_952a_1f83a91833e4.slice: Consumed 14.621s CPU time, 125.1M memory peak, 120K read from disk, 12.9M written to disk. 
Jan 23 17:59:10.873352 containerd[2025]: time="2026-01-23T17:59:10.873279906Z" level=info msg="RemoveContainer for \"65fbb457d36f61946fd2665f7c489211d5d9ae38a9be68e46fb1f36e16720638\" returns successfully" Jan 23 17:59:10.874068 kubelet[3549]: I0123 17:59:10.873710 3549 scope.go:117] "RemoveContainer" containerID="0909a7cb61e5b526d43a2afce2aa79fbd02216acccedd471bd1dd295fb665888" Jan 23 17:59:10.883178 containerd[2025]: time="2026-01-23T17:59:10.882753978Z" level=info msg="RemoveContainer for \"0909a7cb61e5b526d43a2afce2aa79fbd02216acccedd471bd1dd295fb665888\"" Jan 23 17:59:10.895553 containerd[2025]: time="2026-01-23T17:59:10.895366578Z" level=info msg="RemoveContainer for \"0909a7cb61e5b526d43a2afce2aa79fbd02216acccedd471bd1dd295fb665888\" returns successfully" Jan 23 17:59:10.896089 kubelet[3549]: I0123 17:59:10.895975 3549 scope.go:117] "RemoveContainer" containerID="bf71a3954b961eb6c3a28b316f04f12ac066d710bf71b04d26a47af3fce29c1a" Jan 23 17:59:10.904089 containerd[2025]: time="2026-01-23T17:59:10.903907338Z" level=info msg="RemoveContainer for \"bf71a3954b961eb6c3a28b316f04f12ac066d710bf71b04d26a47af3fce29c1a\"" Jan 23 17:59:10.911254 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d2eab2f93666c6149ee4af351f57bc5feeb96b5f91935b9965d9d3ddfccef2d9-shm.mount: Deactivated successfully. Jan 23 17:59:10.911498 systemd[1]: var-lib-kubelet-pods-15a17f4e\x2d2b13\x2d4157\x2d9e34\x2d4b3b31367d03-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dnrcmh.mount: Deactivated successfully. Jan 23 17:59:10.911706 systemd[1]: var-lib-kubelet-pods-d2f43bdd\x2d6c9a\x2d4f6e\x2d952a\x2d1f83a91833e4-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dhpwpg.mount: Deactivated successfully. Jan 23 17:59:10.911948 systemd[1]: var-lib-kubelet-pods-d2f43bdd\x2d6c9a\x2d4f6e\x2d952a\x2d1f83a91833e4-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. 
Jan 23 17:59:10.912174 systemd[1]: var-lib-kubelet-pods-d2f43bdd\x2d6c9a\x2d4f6e\x2d952a\x2d1f83a91833e4-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jan 23 17:59:10.915866 containerd[2025]: time="2026-01-23T17:59:10.915813834Z" level=info msg="RemoveContainer for \"bf71a3954b961eb6c3a28b316f04f12ac066d710bf71b04d26a47af3fce29c1a\" returns successfully" Jan 23 17:59:10.916953 kubelet[3549]: I0123 17:59:10.916484 3549 scope.go:117] "RemoveContainer" containerID="29c7b36c81f90dd167717fc01eb8bb5c3dec3698948aee7b2c1f8b09bf73f643" Jan 23 17:59:10.921963 containerd[2025]: time="2026-01-23T17:59:10.921118386Z" level=info msg="RemoveContainer for \"29c7b36c81f90dd167717fc01eb8bb5c3dec3698948aee7b2c1f8b09bf73f643\"" Jan 23 17:59:10.928046 containerd[2025]: time="2026-01-23T17:59:10.927994806Z" level=info msg="RemoveContainer for \"29c7b36c81f90dd167717fc01eb8bb5c3dec3698948aee7b2c1f8b09bf73f643\" returns successfully" Jan 23 17:59:10.928540 kubelet[3549]: I0123 17:59:10.928511 3549 scope.go:117] "RemoveContainer" containerID="cc51712bcb0ce28331c932693e1319114c035e0f81be681f4449e3769ba88bf5" Jan 23 17:59:10.932367 containerd[2025]: time="2026-01-23T17:59:10.932323830Z" level=info msg="RemoveContainer for \"cc51712bcb0ce28331c932693e1319114c035e0f81be681f4449e3769ba88bf5\"" Jan 23 17:59:10.938922 containerd[2025]: time="2026-01-23T17:59:10.938853114Z" level=info msg="RemoveContainer for \"cc51712bcb0ce28331c932693e1319114c035e0f81be681f4449e3769ba88bf5\" returns successfully" Jan 23 17:59:10.939466 kubelet[3549]: I0123 17:59:10.939437 3549 scope.go:117] "RemoveContainer" containerID="65fbb457d36f61946fd2665f7c489211d5d9ae38a9be68e46fb1f36e16720638" Jan 23 17:59:10.940282 containerd[2025]: time="2026-01-23T17:59:10.940208562Z" level=error msg="ContainerStatus for \"65fbb457d36f61946fd2665f7c489211d5d9ae38a9be68e46fb1f36e16720638\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container 
\"65fbb457d36f61946fd2665f7c489211d5d9ae38a9be68e46fb1f36e16720638\": not found" Jan 23 17:59:10.940574 kubelet[3549]: E0123 17:59:10.940528 3549 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"65fbb457d36f61946fd2665f7c489211d5d9ae38a9be68e46fb1f36e16720638\": not found" containerID="65fbb457d36f61946fd2665f7c489211d5d9ae38a9be68e46fb1f36e16720638" Jan 23 17:59:10.940660 kubelet[3549]: I0123 17:59:10.940587 3549 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"65fbb457d36f61946fd2665f7c489211d5d9ae38a9be68e46fb1f36e16720638"} err="failed to get container status \"65fbb457d36f61946fd2665f7c489211d5d9ae38a9be68e46fb1f36e16720638\": rpc error: code = NotFound desc = an error occurred when try to find container \"65fbb457d36f61946fd2665f7c489211d5d9ae38a9be68e46fb1f36e16720638\": not found" Jan 23 17:59:10.940660 kubelet[3549]: I0123 17:59:10.940621 3549 scope.go:117] "RemoveContainer" containerID="0909a7cb61e5b526d43a2afce2aa79fbd02216acccedd471bd1dd295fb665888" Jan 23 17:59:10.941271 containerd[2025]: time="2026-01-23T17:59:10.941210442Z" level=error msg="ContainerStatus for \"0909a7cb61e5b526d43a2afce2aa79fbd02216acccedd471bd1dd295fb665888\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"0909a7cb61e5b526d43a2afce2aa79fbd02216acccedd471bd1dd295fb665888\": not found" Jan 23 17:59:10.942567 kubelet[3549]: E0123 17:59:10.942241 3549 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"0909a7cb61e5b526d43a2afce2aa79fbd02216acccedd471bd1dd295fb665888\": not found" containerID="0909a7cb61e5b526d43a2afce2aa79fbd02216acccedd471bd1dd295fb665888" Jan 23 17:59:10.942567 kubelet[3549]: I0123 17:59:10.942324 3549 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"containerd","ID":"0909a7cb61e5b526d43a2afce2aa79fbd02216acccedd471bd1dd295fb665888"} err="failed to get container status \"0909a7cb61e5b526d43a2afce2aa79fbd02216acccedd471bd1dd295fb665888\": rpc error: code = NotFound desc = an error occurred when try to find container \"0909a7cb61e5b526d43a2afce2aa79fbd02216acccedd471bd1dd295fb665888\": not found" Jan 23 17:59:10.942567 kubelet[3549]: I0123 17:59:10.942358 3549 scope.go:117] "RemoveContainer" containerID="bf71a3954b961eb6c3a28b316f04f12ac066d710bf71b04d26a47af3fce29c1a" Jan 23 17:59:10.963709 containerd[2025]: time="2026-01-23T17:59:10.962832078Z" level=error msg="ContainerStatus for \"bf71a3954b961eb6c3a28b316f04f12ac066d710bf71b04d26a47af3fce29c1a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"bf71a3954b961eb6c3a28b316f04f12ac066d710bf71b04d26a47af3fce29c1a\": not found" Jan 23 17:59:10.963920 kubelet[3549]: E0123 17:59:10.963368 3549 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"bf71a3954b961eb6c3a28b316f04f12ac066d710bf71b04d26a47af3fce29c1a\": not found" containerID="bf71a3954b961eb6c3a28b316f04f12ac066d710bf71b04d26a47af3fce29c1a" Jan 23 17:59:10.963920 kubelet[3549]: I0123 17:59:10.963442 3549 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"bf71a3954b961eb6c3a28b316f04f12ac066d710bf71b04d26a47af3fce29c1a"} err="failed to get container status \"bf71a3954b961eb6c3a28b316f04f12ac066d710bf71b04d26a47af3fce29c1a\": rpc error: code = NotFound desc = an error occurred when try to find container \"bf71a3954b961eb6c3a28b316f04f12ac066d710bf71b04d26a47af3fce29c1a\": not found" Jan 23 17:59:10.963920 kubelet[3549]: I0123 17:59:10.963477 3549 scope.go:117] "RemoveContainer" containerID="29c7b36c81f90dd167717fc01eb8bb5c3dec3698948aee7b2c1f8b09bf73f643" Jan 23 17:59:10.964595 containerd[2025]: 
time="2026-01-23T17:59:10.964508574Z" level=error msg="ContainerStatus for \"29c7b36c81f90dd167717fc01eb8bb5c3dec3698948aee7b2c1f8b09bf73f643\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"29c7b36c81f90dd167717fc01eb8bb5c3dec3698948aee7b2c1f8b09bf73f643\": not found" Jan 23 17:59:10.965243 kubelet[3549]: E0123 17:59:10.964998 3549 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"29c7b36c81f90dd167717fc01eb8bb5c3dec3698948aee7b2c1f8b09bf73f643\": not found" containerID="29c7b36c81f90dd167717fc01eb8bb5c3dec3698948aee7b2c1f8b09bf73f643" Jan 23 17:59:10.965243 kubelet[3549]: I0123 17:59:10.965047 3549 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"29c7b36c81f90dd167717fc01eb8bb5c3dec3698948aee7b2c1f8b09bf73f643"} err="failed to get container status \"29c7b36c81f90dd167717fc01eb8bb5c3dec3698948aee7b2c1f8b09bf73f643\": rpc error: code = NotFound desc = an error occurred when try to find container \"29c7b36c81f90dd167717fc01eb8bb5c3dec3698948aee7b2c1f8b09bf73f643\": not found" Jan 23 17:59:10.965243 kubelet[3549]: I0123 17:59:10.965096 3549 scope.go:117] "RemoveContainer" containerID="cc51712bcb0ce28331c932693e1319114c035e0f81be681f4449e3769ba88bf5" Jan 23 17:59:10.966481 containerd[2025]: time="2026-01-23T17:59:10.966291354Z" level=error msg="ContainerStatus for \"cc51712bcb0ce28331c932693e1319114c035e0f81be681f4449e3769ba88bf5\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"cc51712bcb0ce28331c932693e1319114c035e0f81be681f4449e3769ba88bf5\": not found" Jan 23 17:59:10.967411 kubelet[3549]: E0123 17:59:10.967349 3549 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"cc51712bcb0ce28331c932693e1319114c035e0f81be681f4449e3769ba88bf5\": not 
found" containerID="cc51712bcb0ce28331c932693e1319114c035e0f81be681f4449e3769ba88bf5" Jan 23 17:59:10.967500 kubelet[3549]: I0123 17:59:10.967408 3549 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"cc51712bcb0ce28331c932693e1319114c035e0f81be681f4449e3769ba88bf5"} err="failed to get container status \"cc51712bcb0ce28331c932693e1319114c035e0f81be681f4449e3769ba88bf5\": rpc error: code = NotFound desc = an error occurred when try to find container \"cc51712bcb0ce28331c932693e1319114c035e0f81be681f4449e3769ba88bf5\": not found" Jan 23 17:59:11.769356 sshd[5115]: Connection closed by 68.220.241.50 port 47516 Jan 23 17:59:11.768481 sshd-session[5112]: pam_unix(sshd:session): session closed for user core Jan 23 17:59:11.777125 systemd[1]: sshd@22-172.31.24.80:22-68.220.241.50:47516.service: Deactivated successfully. Jan 23 17:59:11.781449 systemd[1]: session-23.scope: Deactivated successfully. Jan 23 17:59:11.782003 systemd[1]: session-23.scope: Consumed 1.577s CPU time, 23.7M memory peak. Jan 23 17:59:11.783525 systemd-logind[2001]: Session 23 logged out. Waiting for processes to exit. Jan 23 17:59:11.787283 systemd-logind[2001]: Removed session 23. Jan 23 17:59:11.860315 systemd[1]: Started sshd@23-172.31.24.80:22-68.220.241.50:47532.service - OpenSSH per-connection server daemon (68.220.241.50:47532). 
Jan 23 17:59:12.006954 ntpd[2226]: Deleting 10 lxc_health, [fe80::a0c8:23ff:fe12:e351%8]:123, stats: received=0, sent=0, dropped=0, active_time=76 secs Jan 23 17:59:12.007452 ntpd[2226]: 23 Jan 17:59:12 ntpd[2226]: Deleting 10 lxc_health, [fe80::a0c8:23ff:fe12:e351%8]:123, stats: received=0, sent=0, dropped=0, active_time=76 secs Jan 23 17:59:12.316770 kubelet[3549]: I0123 17:59:12.316719 3549 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="15a17f4e-2b13-4157-9e34-4b3b31367d03" path="/var/lib/kubelet/pods/15a17f4e-2b13-4157-9e34-4b3b31367d03/volumes" Jan 23 17:59:12.317754 kubelet[3549]: I0123 17:59:12.317706 3549 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d2f43bdd-6c9a-4f6e-952a-1f83a91833e4" path="/var/lib/kubelet/pods/d2f43bdd-6c9a-4f6e-952a-1f83a91833e4/volumes" Jan 23 17:59:12.386183 sshd[5258]: Accepted publickey for core from 68.220.241.50 port 47532 ssh2: RSA SHA256:bT2W1VfOscVmSCRasYr+KxB4wnT28qHFQXmybiJGx88 Jan 23 17:59:12.388540 sshd-session[5258]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 17:59:12.396977 systemd-logind[2001]: New session 24 of user core. Jan 23 17:59:12.406147 systemd[1]: Started session-24.scope - Session 24 of User core. Jan 23 17:59:13.824998 sshd[5261]: Connection closed by 68.220.241.50 port 47532 Jan 23 17:59:13.827094 systemd[1]: Created slice kubepods-burstable-pod70ddaa20_c13c_4013_a964_faa4fdbbdb8d.slice - libcontainer container kubepods-burstable-pod70ddaa20_c13c_4013_a964_faa4fdbbdb8d.slice. Jan 23 17:59:13.828164 sshd-session[5258]: pam_unix(sshd:session): session closed for user core Jan 23 17:59:13.845198 systemd[1]: sshd@23-172.31.24.80:22-68.220.241.50:47532.service: Deactivated successfully. Jan 23 17:59:13.853147 systemd[1]: session-24.scope: Deactivated successfully. Jan 23 17:59:13.861355 systemd-logind[2001]: Session 24 logged out. Waiting for processes to exit. Jan 23 17:59:13.869333 systemd-logind[2001]: Removed session 24. 
Jan 23 17:59:13.934738 systemd[1]: Started sshd@24-172.31.24.80:22-68.220.241.50:40672.service - OpenSSH per-connection server daemon (68.220.241.50:40672). Jan 23 17:59:14.005899 kubelet[3549]: I0123 17:59:14.005796 3549 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/70ddaa20-c13c-4013-a964-faa4fdbbdb8d-cilium-cgroup\") pod \"cilium-2sslj\" (UID: \"70ddaa20-c13c-4013-a964-faa4fdbbdb8d\") " pod="kube-system/cilium-2sslj" Jan 23 17:59:14.007017 kubelet[3549]: I0123 17:59:14.005859 3549 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/70ddaa20-c13c-4013-a964-faa4fdbbdb8d-etc-cni-netd\") pod \"cilium-2sslj\" (UID: \"70ddaa20-c13c-4013-a964-faa4fdbbdb8d\") " pod="kube-system/cilium-2sslj" Jan 23 17:59:14.007017 kubelet[3549]: I0123 17:59:14.006661 3549 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/70ddaa20-c13c-4013-a964-faa4fdbbdb8d-host-proc-sys-net\") pod \"cilium-2sslj\" (UID: \"70ddaa20-c13c-4013-a964-faa4fdbbdb8d\") " pod="kube-system/cilium-2sslj" Jan 23 17:59:14.007668 kubelet[3549]: I0123 17:59:14.007241 3549 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/70ddaa20-c13c-4013-a964-faa4fdbbdb8d-host-proc-sys-kernel\") pod \"cilium-2sslj\" (UID: \"70ddaa20-c13c-4013-a964-faa4fdbbdb8d\") " pod="kube-system/cilium-2sslj" Jan 23 17:59:14.007668 kubelet[3549]: I0123 17:59:14.007353 3549 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/70ddaa20-c13c-4013-a964-faa4fdbbdb8d-cni-path\") pod \"cilium-2sslj\" (UID: \"70ddaa20-c13c-4013-a964-faa4fdbbdb8d\") 
" pod="kube-system/cilium-2sslj" Jan 23 17:59:14.007668 kubelet[3549]: I0123 17:59:14.007542 3549 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/70ddaa20-c13c-4013-a964-faa4fdbbdb8d-lib-modules\") pod \"cilium-2sslj\" (UID: \"70ddaa20-c13c-4013-a964-faa4fdbbdb8d\") " pod="kube-system/cilium-2sslj" Jan 23 17:59:14.008020 kubelet[3549]: I0123 17:59:14.007862 3549 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/70ddaa20-c13c-4013-a964-faa4fdbbdb8d-clustermesh-secrets\") pod \"cilium-2sslj\" (UID: \"70ddaa20-c13c-4013-a964-faa4fdbbdb8d\") " pod="kube-system/cilium-2sslj" Jan 23 17:59:14.008201 kubelet[3549]: I0123 17:59:14.007965 3549 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/70ddaa20-c13c-4013-a964-faa4fdbbdb8d-hostproc\") pod \"cilium-2sslj\" (UID: \"70ddaa20-c13c-4013-a964-faa4fdbbdb8d\") " pod="kube-system/cilium-2sslj" Jan 23 17:59:14.008370 kubelet[3549]: I0123 17:59:14.008284 3549 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/70ddaa20-c13c-4013-a964-faa4fdbbdb8d-cilium-config-path\") pod \"cilium-2sslj\" (UID: \"70ddaa20-c13c-4013-a964-faa4fdbbdb8d\") " pod="kube-system/cilium-2sslj" Jan 23 17:59:14.008533 kubelet[3549]: I0123 17:59:14.008474 3549 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nlxfh\" (UniqueName: \"kubernetes.io/projected/70ddaa20-c13c-4013-a964-faa4fdbbdb8d-kube-api-access-nlxfh\") pod \"cilium-2sslj\" (UID: \"70ddaa20-c13c-4013-a964-faa4fdbbdb8d\") " pod="kube-system/cilium-2sslj" Jan 23 17:59:14.008636 kubelet[3549]: I0123 17:59:14.008614 3549 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/70ddaa20-c13c-4013-a964-faa4fdbbdb8d-cilium-ipsec-secrets\") pod \"cilium-2sslj\" (UID: \"70ddaa20-c13c-4013-a964-faa4fdbbdb8d\") " pod="kube-system/cilium-2sslj" Jan 23 17:59:14.008819 kubelet[3549]: I0123 17:59:14.008795 3549 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/70ddaa20-c13c-4013-a964-faa4fdbbdb8d-hubble-tls\") pod \"cilium-2sslj\" (UID: \"70ddaa20-c13c-4013-a964-faa4fdbbdb8d\") " pod="kube-system/cilium-2sslj" Jan 23 17:59:14.009203 kubelet[3549]: I0123 17:59:14.008915 3549 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/70ddaa20-c13c-4013-a964-faa4fdbbdb8d-xtables-lock\") pod \"cilium-2sslj\" (UID: \"70ddaa20-c13c-4013-a964-faa4fdbbdb8d\") " pod="kube-system/cilium-2sslj" Jan 23 17:59:14.009203 kubelet[3549]: I0123 17:59:14.008951 3549 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/70ddaa20-c13c-4013-a964-faa4fdbbdb8d-bpf-maps\") pod \"cilium-2sslj\" (UID: \"70ddaa20-c13c-4013-a964-faa4fdbbdb8d\") " pod="kube-system/cilium-2sslj" Jan 23 17:59:14.009203 kubelet[3549]: I0123 17:59:14.008988 3549 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/70ddaa20-c13c-4013-a964-faa4fdbbdb8d-cilium-run\") pod \"cilium-2sslj\" (UID: \"70ddaa20-c13c-4013-a964-faa4fdbbdb8d\") " pod="kube-system/cilium-2sslj" Jan 23 17:59:14.453153 containerd[2025]: time="2026-01-23T17:59:14.453041756Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:cilium-2sslj,Uid:70ddaa20-c13c-4013-a964-faa4fdbbdb8d,Namespace:kube-system,Attempt:0,}" Jan 23 17:59:14.489363 containerd[2025]: time="2026-01-23T17:59:14.489284456Z" level=info msg="connecting to shim 0a68add00463981be189d0b8c85f658072181472295daf2938fcdd707a1fd0a0" address="unix:///run/containerd/s/cf289da797bcc27318e7ddd2b0c59a3c3b139f0b2505db1adb76384d0bf493bb" namespace=k8s.io protocol=ttrpc version=3 Jan 23 17:59:14.501009 kubelet[3549]: E0123 17:59:14.500944 3549 kubelet.go:3011] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 23 17:59:14.510489 sshd[5271]: Accepted publickey for core from 68.220.241.50 port 40672 ssh2: RSA SHA256:bT2W1VfOscVmSCRasYr+KxB4wnT28qHFQXmybiJGx88 Jan 23 17:59:14.516184 sshd-session[5271]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 17:59:14.530692 systemd-logind[2001]: New session 25 of user core. Jan 23 17:59:14.537174 systemd[1]: Started session-25.scope - Session 25 of User core. Jan 23 17:59:14.551208 systemd[1]: Started cri-containerd-0a68add00463981be189d0b8c85f658072181472295daf2938fcdd707a1fd0a0.scope - libcontainer container 0a68add00463981be189d0b8c85f658072181472295daf2938fcdd707a1fd0a0. 
Jan 23 17:59:14.602626 containerd[2025]: time="2026-01-23T17:59:14.602579372Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-2sslj,Uid:70ddaa20-c13c-4013-a964-faa4fdbbdb8d,Namespace:kube-system,Attempt:0,} returns sandbox id \"0a68add00463981be189d0b8c85f658072181472295daf2938fcdd707a1fd0a0\"" Jan 23 17:59:14.612388 containerd[2025]: time="2026-01-23T17:59:14.612340005Z" level=info msg="CreateContainer within sandbox \"0a68add00463981be189d0b8c85f658072181472295daf2938fcdd707a1fd0a0\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 23 17:59:14.625923 containerd[2025]: time="2026-01-23T17:59:14.625836525Z" level=info msg="Container 65d6183dc9f32df141d92746de8a4292086afbbb2ae80ad27aedff3bbc5733e4: CDI devices from CRI Config.CDIDevices: []" Jan 23 17:59:14.637377 containerd[2025]: time="2026-01-23T17:59:14.637232553Z" level=info msg="CreateContainer within sandbox \"0a68add00463981be189d0b8c85f658072181472295daf2938fcdd707a1fd0a0\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"65d6183dc9f32df141d92746de8a4292086afbbb2ae80ad27aedff3bbc5733e4\"" Jan 23 17:59:14.638387 containerd[2025]: time="2026-01-23T17:59:14.638319225Z" level=info msg="StartContainer for \"65d6183dc9f32df141d92746de8a4292086afbbb2ae80ad27aedff3bbc5733e4\"" Jan 23 17:59:14.641735 containerd[2025]: time="2026-01-23T17:59:14.641611665Z" level=info msg="connecting to shim 65d6183dc9f32df141d92746de8a4292086afbbb2ae80ad27aedff3bbc5733e4" address="unix:///run/containerd/s/cf289da797bcc27318e7ddd2b0c59a3c3b139f0b2505db1adb76384d0bf493bb" protocol=ttrpc version=3 Jan 23 17:59:14.673209 systemd[1]: Started cri-containerd-65d6183dc9f32df141d92746de8a4292086afbbb2ae80ad27aedff3bbc5733e4.scope - libcontainer container 65d6183dc9f32df141d92746de8a4292086afbbb2ae80ad27aedff3bbc5733e4. 
Jan 23 17:59:14.731072 containerd[2025]: time="2026-01-23T17:59:14.730784061Z" level=info msg="StartContainer for \"65d6183dc9f32df141d92746de8a4292086afbbb2ae80ad27aedff3bbc5733e4\" returns successfully" Jan 23 17:59:14.749688 systemd[1]: cri-containerd-65d6183dc9f32df141d92746de8a4292086afbbb2ae80ad27aedff3bbc5733e4.scope: Deactivated successfully. Jan 23 17:59:14.754928 containerd[2025]: time="2026-01-23T17:59:14.754789977Z" level=info msg="received container exit event container_id:\"65d6183dc9f32df141d92746de8a4292086afbbb2ae80ad27aedff3bbc5733e4\" id:\"65d6183dc9f32df141d92746de8a4292086afbbb2ae80ad27aedff3bbc5733e4\" pid:5338 exited_at:{seconds:1769191154 nanos:753530793}" Jan 23 17:59:14.885625 sshd[5309]: Connection closed by 68.220.241.50 port 40672 Jan 23 17:59:14.886734 sshd-session[5271]: pam_unix(sshd:session): session closed for user core Jan 23 17:59:14.889768 containerd[2025]: time="2026-01-23T17:59:14.889472986Z" level=info msg="CreateContainer within sandbox \"0a68add00463981be189d0b8c85f658072181472295daf2938fcdd707a1fd0a0\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 23 17:59:14.901454 systemd[1]: sshd@24-172.31.24.80:22-68.220.241.50:40672.service: Deactivated successfully. Jan 23 17:59:14.902081 systemd-logind[2001]: Session 25 logged out. Waiting for processes to exit. Jan 23 17:59:14.908512 systemd[1]: session-25.scope: Deactivated successfully. Jan 23 17:59:14.916960 systemd-logind[2001]: Removed session 25. 
Jan 23 17:59:14.920761 containerd[2025]: time="2026-01-23T17:59:14.919295386Z" level=info msg="Container 17f24ef0251d6e1114c736aefcb8db8726a808be504e0414a2cf9c0568204ccc: CDI devices from CRI Config.CDIDevices: []" Jan 23 17:59:14.933903 containerd[2025]: time="2026-01-23T17:59:14.933810418Z" level=info msg="CreateContainer within sandbox \"0a68add00463981be189d0b8c85f658072181472295daf2938fcdd707a1fd0a0\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"17f24ef0251d6e1114c736aefcb8db8726a808be504e0414a2cf9c0568204ccc\"" Jan 23 17:59:14.936358 containerd[2025]: time="2026-01-23T17:59:14.936279178Z" level=info msg="StartContainer for \"17f24ef0251d6e1114c736aefcb8db8726a808be504e0414a2cf9c0568204ccc\"" Jan 23 17:59:14.940188 containerd[2025]: time="2026-01-23T17:59:14.940055650Z" level=info msg="connecting to shim 17f24ef0251d6e1114c736aefcb8db8726a808be504e0414a2cf9c0568204ccc" address="unix:///run/containerd/s/cf289da797bcc27318e7ddd2b0c59a3c3b139f0b2505db1adb76384d0bf493bb" protocol=ttrpc version=3 Jan 23 17:59:14.971403 systemd[1]: Started sshd@25-172.31.24.80:22-68.220.241.50:40688.service - OpenSSH per-connection server daemon (68.220.241.50:40688). Jan 23 17:59:14.987422 systemd[1]: Started cri-containerd-17f24ef0251d6e1114c736aefcb8db8726a808be504e0414a2cf9c0568204ccc.scope - libcontainer container 17f24ef0251d6e1114c736aefcb8db8726a808be504e0414a2cf9c0568204ccc. Jan 23 17:59:15.071750 containerd[2025]: time="2026-01-23T17:59:15.071685691Z" level=info msg="StartContainer for \"17f24ef0251d6e1114c736aefcb8db8726a808be504e0414a2cf9c0568204ccc\" returns successfully" Jan 23 17:59:15.093579 systemd[1]: cri-containerd-17f24ef0251d6e1114c736aefcb8db8726a808be504e0414a2cf9c0568204ccc.scope: Deactivated successfully. 
Jan 23 17:59:15.097856 containerd[2025]: time="2026-01-23T17:59:15.097716067Z" level=info msg="received container exit event container_id:\"17f24ef0251d6e1114c736aefcb8db8726a808be504e0414a2cf9c0568204ccc\" id:\"17f24ef0251d6e1114c736aefcb8db8726a808be504e0414a2cf9c0568204ccc\" pid:5390 exited_at:{seconds:1769191155 nanos:96866767}" Jan 23 17:59:15.150887 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-17f24ef0251d6e1114c736aefcb8db8726a808be504e0414a2cf9c0568204ccc-rootfs.mount: Deactivated successfully. Jan 23 17:59:15.505194 sshd[5388]: Accepted publickey for core from 68.220.241.50 port 40688 ssh2: RSA SHA256:bT2W1VfOscVmSCRasYr+KxB4wnT28qHFQXmybiJGx88 Jan 23 17:59:15.507690 sshd-session[5388]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 17:59:15.517724 systemd-logind[2001]: New session 26 of user core. Jan 23 17:59:15.529170 systemd[1]: Started session-26.scope - Session 26 of User core. Jan 23 17:59:15.909313 containerd[2025]: time="2026-01-23T17:59:15.909225935Z" level=info msg="CreateContainer within sandbox \"0a68add00463981be189d0b8c85f658072181472295daf2938fcdd707a1fd0a0\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jan 23 17:59:15.944642 containerd[2025]: time="2026-01-23T17:59:15.944576423Z" level=info msg="Container 7d068f9856e0c9e95ef02589187e3afbf5cbd770ddda1871bff809b9544a2389: CDI devices from CRI Config.CDIDevices: []" Jan 23 17:59:15.956544 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2269895864.mount: Deactivated successfully. 
Jan 23 17:59:15.982995 containerd[2025]: time="2026-01-23T17:59:15.982843055Z" level=info msg="CreateContainer within sandbox \"0a68add00463981be189d0b8c85f658072181472295daf2938fcdd707a1fd0a0\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"7d068f9856e0c9e95ef02589187e3afbf5cbd770ddda1871bff809b9544a2389\"" Jan 23 17:59:15.985024 containerd[2025]: time="2026-01-23T17:59:15.984316043Z" level=info msg="StartContainer for \"7d068f9856e0c9e95ef02589187e3afbf5cbd770ddda1871bff809b9544a2389\"" Jan 23 17:59:15.990935 containerd[2025]: time="2026-01-23T17:59:15.990730463Z" level=info msg="connecting to shim 7d068f9856e0c9e95ef02589187e3afbf5cbd770ddda1871bff809b9544a2389" address="unix:///run/containerd/s/cf289da797bcc27318e7ddd2b0c59a3c3b139f0b2505db1adb76384d0bf493bb" protocol=ttrpc version=3 Jan 23 17:59:16.035186 systemd[1]: Started cri-containerd-7d068f9856e0c9e95ef02589187e3afbf5cbd770ddda1871bff809b9544a2389.scope - libcontainer container 7d068f9856e0c9e95ef02589187e3afbf5cbd770ddda1871bff809b9544a2389. Jan 23 17:59:16.169291 containerd[2025]: time="2026-01-23T17:59:16.168762572Z" level=info msg="StartContainer for \"7d068f9856e0c9e95ef02589187e3afbf5cbd770ddda1871bff809b9544a2389\" returns successfully" Jan 23 17:59:16.173582 systemd[1]: cri-containerd-7d068f9856e0c9e95ef02589187e3afbf5cbd770ddda1871bff809b9544a2389.scope: Deactivated successfully. Jan 23 17:59:16.182647 containerd[2025]: time="2026-01-23T17:59:16.182566592Z" level=info msg="received container exit event container_id:\"7d068f9856e0c9e95ef02589187e3afbf5cbd770ddda1871bff809b9544a2389\" id:\"7d068f9856e0c9e95ef02589187e3afbf5cbd770ddda1871bff809b9544a2389\" pid:5442 exited_at:{seconds:1769191156 nanos:181390532}" Jan 23 17:59:16.224352 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7d068f9856e0c9e95ef02589187e3afbf5cbd770ddda1871bff809b9544a2389-rootfs.mount: Deactivated successfully. 
Jan 23 17:59:16.915675 containerd[2025]: time="2026-01-23T17:59:16.915603900Z" level=info msg="CreateContainer within sandbox \"0a68add00463981be189d0b8c85f658072181472295daf2938fcdd707a1fd0a0\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jan 23 17:59:16.937376 containerd[2025]: time="2026-01-23T17:59:16.935190984Z" level=info msg="Container aaa0e2d31252a6e2a1bad7726de1540ff1913b7253850a6325b9e5ca7ecd2daf: CDI devices from CRI Config.CDIDevices: []" Jan 23 17:59:16.948120 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount528278272.mount: Deactivated successfully. Jan 23 17:59:16.957420 containerd[2025]: time="2026-01-23T17:59:16.957284604Z" level=info msg="CreateContainer within sandbox \"0a68add00463981be189d0b8c85f658072181472295daf2938fcdd707a1fd0a0\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"aaa0e2d31252a6e2a1bad7726de1540ff1913b7253850a6325b9e5ca7ecd2daf\"" Jan 23 17:59:16.959251 containerd[2025]: time="2026-01-23T17:59:16.959117784Z" level=info msg="StartContainer for \"aaa0e2d31252a6e2a1bad7726de1540ff1913b7253850a6325b9e5ca7ecd2daf\"" Jan 23 17:59:16.962151 containerd[2025]: time="2026-01-23T17:59:16.962029044Z" level=info msg="connecting to shim aaa0e2d31252a6e2a1bad7726de1540ff1913b7253850a6325b9e5ca7ecd2daf" address="unix:///run/containerd/s/cf289da797bcc27318e7ddd2b0c59a3c3b139f0b2505db1adb76384d0bf493bb" protocol=ttrpc version=3 Jan 23 17:59:17.013386 systemd[1]: Started cri-containerd-aaa0e2d31252a6e2a1bad7726de1540ff1913b7253850a6325b9e5ca7ecd2daf.scope - libcontainer container aaa0e2d31252a6e2a1bad7726de1540ff1913b7253850a6325b9e5ca7ecd2daf. Jan 23 17:59:17.076085 systemd[1]: cri-containerd-aaa0e2d31252a6e2a1bad7726de1540ff1913b7253850a6325b9e5ca7ecd2daf.scope: Deactivated successfully. 
Jan 23 17:59:17.081667 containerd[2025]: time="2026-01-23T17:59:17.079561593Z" level=info msg="received container exit event container_id:\"aaa0e2d31252a6e2a1bad7726de1540ff1913b7253850a6325b9e5ca7ecd2daf\" id:\"aaa0e2d31252a6e2a1bad7726de1540ff1913b7253850a6325b9e5ca7ecd2daf\" pid:5482 exited_at:{seconds:1769191157 nanos:78738321}" Jan 23 17:59:17.096794 containerd[2025]: time="2026-01-23T17:59:17.096735729Z" level=info msg="StartContainer for \"aaa0e2d31252a6e2a1bad7726de1540ff1913b7253850a6325b9e5ca7ecd2daf\" returns successfully" Jan 23 17:59:17.133178 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-aaa0e2d31252a6e2a1bad7726de1540ff1913b7253850a6325b9e5ca7ecd2daf-rootfs.mount: Deactivated successfully. Jan 23 17:59:17.809234 kubelet[3549]: I0123 17:59:17.809176 3549 setters.go:543] "Node became not ready" node="ip-172-31-24-80" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T17:59:17Z","lastTransitionTime":"2026-01-23T17:59:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Jan 23 17:59:17.927988 containerd[2025]: time="2026-01-23T17:59:17.927902065Z" level=info msg="CreateContainer within sandbox \"0a68add00463981be189d0b8c85f658072181472295daf2938fcdd707a1fd0a0\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jan 23 17:59:17.974901 containerd[2025]: time="2026-01-23T17:59:17.973493365Z" level=info msg="Container 326a36798d9d5a6e74913210d1976b724280a2b55e87c7482ff90ad1f938f5fc: CDI devices from CRI Config.CDIDevices: []" Jan 23 17:59:17.977920 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3248014441.mount: Deactivated successfully. Jan 23 17:59:17.988924 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount433386416.mount: Deactivated successfully. 
Jan 23 17:59:18.007601 containerd[2025]: time="2026-01-23T17:59:18.007519245Z" level=info msg="CreateContainer within sandbox \"0a68add00463981be189d0b8c85f658072181472295daf2938fcdd707a1fd0a0\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"326a36798d9d5a6e74913210d1976b724280a2b55e87c7482ff90ad1f938f5fc\"" Jan 23 17:59:18.012347 containerd[2025]: time="2026-01-23T17:59:18.012283053Z" level=info msg="StartContainer for \"326a36798d9d5a6e74913210d1976b724280a2b55e87c7482ff90ad1f938f5fc\"" Jan 23 17:59:18.018637 containerd[2025]: time="2026-01-23T17:59:18.018556737Z" level=info msg="connecting to shim 326a36798d9d5a6e74913210d1976b724280a2b55e87c7482ff90ad1f938f5fc" address="unix:///run/containerd/s/cf289da797bcc27318e7ddd2b0c59a3c3b139f0b2505db1adb76384d0bf493bb" protocol=ttrpc version=3 Jan 23 17:59:18.081213 systemd[1]: Started cri-containerd-326a36798d9d5a6e74913210d1976b724280a2b55e87c7482ff90ad1f938f5fc.scope - libcontainer container 326a36798d9d5a6e74913210d1976b724280a2b55e87c7482ff90ad1f938f5fc. 
Jan 23 17:59:18.161904 containerd[2025]: time="2026-01-23T17:59:18.161823358Z" level=info msg="StartContainer for \"326a36798d9d5a6e74913210d1976b724280a2b55e87c7482ff90ad1f938f5fc\" returns successfully" Jan 23 17:59:19.030922 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce)) Jan 23 17:59:19.042462 kubelet[3549]: I0123 17:59:19.042328 3549 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-2sslj" podStartSLOduration=6.042302219 podStartE2EDuration="6.042302219s" podCreationTimestamp="2026-01-23 17:59:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 17:59:19.035150999 +0000 UTC m=+115.088294037" watchObservedRunningTime="2026-01-23 17:59:19.042302219 +0000 UTC m=+115.095445185" Jan 23 17:59:23.476688 (udev-worker)[6055]: Network interface NamePolicy= disabled on kernel command line. Jan 23 17:59:23.481751 (udev-worker)[6056]: Network interface NamePolicy= disabled on kernel command line. 
Jan 23 17:59:23.489696 systemd-networkd[1840]: lxc_health: Link UP Jan 23 17:59:23.511249 systemd-networkd[1840]: lxc_health: Gained carrier Jan 23 17:59:24.237247 containerd[2025]: time="2026-01-23T17:59:24.236460808Z" level=info msg="StopPodSandbox for \"d2eab2f93666c6149ee4af351f57bc5feeb96b5f91935b9965d9d3ddfccef2d9\"" Jan 23 17:59:24.237247 containerd[2025]: time="2026-01-23T17:59:24.236678956Z" level=info msg="TearDown network for sandbox \"d2eab2f93666c6149ee4af351f57bc5feeb96b5f91935b9965d9d3ddfccef2d9\" successfully" Jan 23 17:59:24.237247 containerd[2025]: time="2026-01-23T17:59:24.236703148Z" level=info msg="StopPodSandbox for \"d2eab2f93666c6149ee4af351f57bc5feeb96b5f91935b9965d9d3ddfccef2d9\" returns successfully" Jan 23 17:59:24.242157 containerd[2025]: time="2026-01-23T17:59:24.242019832Z" level=info msg="RemovePodSandbox for \"d2eab2f93666c6149ee4af351f57bc5feeb96b5f91935b9965d9d3ddfccef2d9\"" Jan 23 17:59:24.242407 containerd[2025]: time="2026-01-23T17:59:24.242129800Z" level=info msg="Forcibly stopping sandbox \"d2eab2f93666c6149ee4af351f57bc5feeb96b5f91935b9965d9d3ddfccef2d9\"" Jan 23 17:59:24.242781 containerd[2025]: time="2026-01-23T17:59:24.242700100Z" level=info msg="TearDown network for sandbox \"d2eab2f93666c6149ee4af351f57bc5feeb96b5f91935b9965d9d3ddfccef2d9\" successfully" Jan 23 17:59:24.248187 containerd[2025]: time="2026-01-23T17:59:24.248028592Z" level=info msg="Ensure that sandbox d2eab2f93666c6149ee4af351f57bc5feeb96b5f91935b9965d9d3ddfccef2d9 in task-service has been cleanup successfully" Jan 23 17:59:24.261480 containerd[2025]: time="2026-01-23T17:59:24.261405820Z" level=info msg="RemovePodSandbox \"d2eab2f93666c6149ee4af351f57bc5feeb96b5f91935b9965d9d3ddfccef2d9\" returns successfully" Jan 23 17:59:24.264638 containerd[2025]: time="2026-01-23T17:59:24.264193192Z" level=info msg="StopPodSandbox for \"07d520df9825264bb79784059f2147470b9716773153f5fd2a526c555ab39762\"" Jan 23 17:59:24.264638 containerd[2025]: 
time="2026-01-23T17:59:24.264469912Z" level=info msg="TearDown network for sandbox \"07d520df9825264bb79784059f2147470b9716773153f5fd2a526c555ab39762\" successfully" Jan 23 17:59:24.264638 containerd[2025]: time="2026-01-23T17:59:24.264500584Z" level=info msg="StopPodSandbox for \"07d520df9825264bb79784059f2147470b9716773153f5fd2a526c555ab39762\" returns successfully" Jan 23 17:59:24.265233 containerd[2025]: time="2026-01-23T17:59:24.265174756Z" level=info msg="RemovePodSandbox for \"07d520df9825264bb79784059f2147470b9716773153f5fd2a526c555ab39762\"" Jan 23 17:59:24.265345 containerd[2025]: time="2026-01-23T17:59:24.265241872Z" level=info msg="Forcibly stopping sandbox \"07d520df9825264bb79784059f2147470b9716773153f5fd2a526c555ab39762\"" Jan 23 17:59:24.265456 containerd[2025]: time="2026-01-23T17:59:24.265413064Z" level=info msg="TearDown network for sandbox \"07d520df9825264bb79784059f2147470b9716773153f5fd2a526c555ab39762\" successfully" Jan 23 17:59:24.267890 containerd[2025]: time="2026-01-23T17:59:24.267785680Z" level=info msg="Ensure that sandbox 07d520df9825264bb79784059f2147470b9716773153f5fd2a526c555ab39762 in task-service has been cleanup successfully" Jan 23 17:59:24.277261 containerd[2025]: time="2026-01-23T17:59:24.277183709Z" level=info msg="RemovePodSandbox \"07d520df9825264bb79784059f2147470b9716773153f5fd2a526c555ab39762\" returns successfully" Jan 23 17:59:24.937000 systemd-networkd[1840]: lxc_health: Gained IPv6LL Jan 23 17:59:27.006965 ntpd[2226]: Listen normally on 13 lxc_health [fe80::205e:47ff:fe46:ceb6%14]:123 Jan 23 17:59:27.007463 ntpd[2226]: 23 Jan 17:59:27 ntpd[2226]: Listen normally on 13 lxc_health [fe80::205e:47ff:fe46:ceb6%14]:123 Jan 23 17:59:29.410651 kubelet[3549]: E0123 17:59:29.410604 3549 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:37568->127.0.0.1:35915: write tcp 127.0.0.1:37568->127.0.0.1:35915: write: broken pipe Jan 23 17:59:31.747928 sshd[5423]: Connection closed by 
68.220.241.50 port 40688 Jan 23 17:59:31.753185 sshd-session[5388]: pam_unix(sshd:session): session closed for user core Jan 23 17:59:31.764405 systemd[1]: sshd@25-172.31.24.80:22-68.220.241.50:40688.service: Deactivated successfully. Jan 23 17:59:31.773035 systemd[1]: session-26.scope: Deactivated successfully. Jan 23 17:59:31.775423 systemd-logind[2001]: Session 26 logged out. Waiting for processes to exit. Jan 23 17:59:31.781759 systemd-logind[2001]: Removed session 26. Jan 23 18:00:09.189797 systemd[1]: cri-containerd-f884db5e007bc4704b324ef095cab2798f499521b3843dddfb5d7b811737bde8.scope: Deactivated successfully. Jan 23 18:00:09.190761 systemd[1]: cri-containerd-f884db5e007bc4704b324ef095cab2798f499521b3843dddfb5d7b811737bde8.scope: Consumed 4.298s CPU time, 55.3M memory peak. Jan 23 18:00:09.196025 containerd[2025]: time="2026-01-23T18:00:09.195862068Z" level=info msg="received container exit event container_id:\"f884db5e007bc4704b324ef095cab2798f499521b3843dddfb5d7b811737bde8\" id:\"f884db5e007bc4704b324ef095cab2798f499521b3843dddfb5d7b811737bde8\" pid:3208 exit_status:1 exited_at:{seconds:1769191209 nanos:193221060}" Jan 23 18:00:09.244416 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f884db5e007bc4704b324ef095cab2798f499521b3843dddfb5d7b811737bde8-rootfs.mount: Deactivated successfully. 
Jan 23 18:00:10.108943 kubelet[3549]: I0123 18:00:10.108197 3549 scope.go:117] "RemoveContainer" containerID="f884db5e007bc4704b324ef095cab2798f499521b3843dddfb5d7b811737bde8" Jan 23 18:00:10.113327 containerd[2025]: time="2026-01-23T18:00:10.112789512Z" level=info msg="CreateContainer within sandbox \"6fe30b7ae1d33ca6f9b2cb8c79a4731bd2224161c5815cb6324cfb954abfe698\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Jan 23 18:00:10.128672 containerd[2025]: time="2026-01-23T18:00:10.128163180Z" level=info msg="Container 2b5793c6a02edebef42babef815695a382fec135f16bc37c52b4dee5f3c0bb35: CDI devices from CRI Config.CDIDevices: []" Jan 23 18:00:10.145498 containerd[2025]: time="2026-01-23T18:00:10.145423608Z" level=info msg="CreateContainer within sandbox \"6fe30b7ae1d33ca6f9b2cb8c79a4731bd2224161c5815cb6324cfb954abfe698\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"2b5793c6a02edebef42babef815695a382fec135f16bc37c52b4dee5f3c0bb35\"" Jan 23 18:00:10.146333 containerd[2025]: time="2026-01-23T18:00:10.146284140Z" level=info msg="StartContainer for \"2b5793c6a02edebef42babef815695a382fec135f16bc37c52b4dee5f3c0bb35\"" Jan 23 18:00:10.148538 containerd[2025]: time="2026-01-23T18:00:10.148469340Z" level=info msg="connecting to shim 2b5793c6a02edebef42babef815695a382fec135f16bc37c52b4dee5f3c0bb35" address="unix:///run/containerd/s/149a3be8b8665d4a88ef6ebe26f73051546d25abd9bc62f292603369bbcbd8d7" protocol=ttrpc version=3 Jan 23 18:00:10.190175 systemd[1]: Started cri-containerd-2b5793c6a02edebef42babef815695a382fec135f16bc37c52b4dee5f3c0bb35.scope - libcontainer container 2b5793c6a02edebef42babef815695a382fec135f16bc37c52b4dee5f3c0bb35. 
Jan 23 18:00:10.276418 containerd[2025]: time="2026-01-23T18:00:10.276362773Z" level=info msg="StartContainer for \"2b5793c6a02edebef42babef815695a382fec135f16bc37c52b4dee5f3c0bb35\" returns successfully" Jan 23 18:00:15.334614 systemd[1]: cri-containerd-b6b91c4e5e7ecd890eed514a901684f96389acf3fdcffb19cb2e785ed0a064f4.scope: Deactivated successfully. Jan 23 18:00:15.335187 systemd[1]: cri-containerd-b6b91c4e5e7ecd890eed514a901684f96389acf3fdcffb19cb2e785ed0a064f4.scope: Consumed 5.428s CPU time, 20.9M memory peak. Jan 23 18:00:15.341091 containerd[2025]: time="2026-01-23T18:00:15.341024370Z" level=info msg="received container exit event container_id:\"b6b91c4e5e7ecd890eed514a901684f96389acf3fdcffb19cb2e785ed0a064f4\" id:\"b6b91c4e5e7ecd890eed514a901684f96389acf3fdcffb19cb2e785ed0a064f4\" pid:3187 exit_status:1 exited_at:{seconds:1769191215 nanos:340360338}" Jan 23 18:00:15.400811 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b6b91c4e5e7ecd890eed514a901684f96389acf3fdcffb19cb2e785ed0a064f4-rootfs.mount: Deactivated successfully. 
Jan 23 18:00:16.134684 kubelet[3549]: I0123 18:00:16.134640 3549 scope.go:117] "RemoveContainer" containerID="b6b91c4e5e7ecd890eed514a901684f96389acf3fdcffb19cb2e785ed0a064f4" Jan 23 18:00:16.138801 containerd[2025]: time="2026-01-23T18:00:16.138746598Z" level=info msg="CreateContainer within sandbox \"418755a75e4f8817b66a30560b386c9ac6987906c18caf2067c5f040bb5aa4ba\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}" Jan 23 18:00:16.154917 containerd[2025]: time="2026-01-23T18:00:16.154131594Z" level=info msg="Container b63ebf721c83459084b2759b186f259dbb68b2f98abc14146a93e2fab418a609: CDI devices from CRI Config.CDIDevices: []" Jan 23 18:00:16.172576 containerd[2025]: time="2026-01-23T18:00:16.172497810Z" level=info msg="CreateContainer within sandbox \"418755a75e4f8817b66a30560b386c9ac6987906c18caf2067c5f040bb5aa4ba\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"b63ebf721c83459084b2759b186f259dbb68b2f98abc14146a93e2fab418a609\"" Jan 23 18:00:16.173458 containerd[2025]: time="2026-01-23T18:00:16.173399106Z" level=info msg="StartContainer for \"b63ebf721c83459084b2759b186f259dbb68b2f98abc14146a93e2fab418a609\"" Jan 23 18:00:16.175544 containerd[2025]: time="2026-01-23T18:00:16.175485462Z" level=info msg="connecting to shim b63ebf721c83459084b2759b186f259dbb68b2f98abc14146a93e2fab418a609" address="unix:///run/containerd/s/f58dc6b90715908e55c8207fbef236881e4eb65fa8c67220fd9534b144472b89" protocol=ttrpc version=3 Jan 23 18:00:16.223192 systemd[1]: Started cri-containerd-b63ebf721c83459084b2759b186f259dbb68b2f98abc14146a93e2fab418a609.scope - libcontainer container b63ebf721c83459084b2759b186f259dbb68b2f98abc14146a93e2fab418a609. 
Jan 23 18:00:16.312058 containerd[2025]: time="2026-01-23T18:00:16.312006295Z" level=info msg="StartContainer for \"b63ebf721c83459084b2759b186f259dbb68b2f98abc14146a93e2fab418a609\" returns successfully" Jan 23 18:00:17.895179 kubelet[3549]: E0123 18:00:17.894639 3549 controller.go:195] "Failed to update lease" err="Put \"https://172.31.24.80:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-24-80?timeout=10s\": context deadline exceeded" Jan 23 18:00:27.895213 kubelet[3549]: E0123 18:00:27.895089 3549 controller.go:195] "Failed to update lease" err="Put \"https://172.31.24.80:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-24-80?timeout=10s\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"