Jan 23 17:56:14.156536 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083]
Jan 23 17:56:14.156582 kernel: Linux version 6.12.66-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT Fri Jan 23 16:10:02 -00 2026
Jan 23 17:56:14.156606 kernel: KASLR disabled due to lack of seed
Jan 23 17:56:14.156622 kernel: efi: EFI v2.7 by EDK II
Jan 23 17:56:14.156638 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7a734a98 MEMRESERVE=0x78551598
Jan 23 17:56:14.156654 kernel: secureboot: Secure boot disabled
Jan 23 17:56:14.156671 kernel: ACPI: Early table checksum verification disabled
Jan 23 17:56:14.156687 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON)
Jan 23 17:56:14.156702 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001 01000013)
Jan 23 17:56:14.156717 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001)
Jan 23 17:56:14.156733 kernel: ACPI: DSDT 0x0000000078640000 0013D2 (v02 AMAZON AMZNDSDT 00000001 AMZN 00000001)
Jan 23 17:56:14.156752 kernel: ACPI: FACS 0x0000000078630000 000040
Jan 23 17:56:14.156768 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Jan 23 17:56:14.156784 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001)
Jan 23 17:56:14.156802 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001)
Jan 23 17:56:14.156817 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001)
Jan 23 17:56:14.156838 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Jan 23 17:56:14.156854 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001)
Jan 23 17:56:14.156870 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001)
Jan 23 17:56:14.156886 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200
Jan 23 17:56:14.156902 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200')
Jan 23 17:56:14.156918 kernel: printk: legacy bootconsole [uart0] enabled
Jan 23 17:56:14.156934 kernel: ACPI: Use ACPI SPCR as default console: Yes
Jan 23 17:56:14.156950 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff]
Jan 23 17:56:14.156967 kernel: NODE_DATA(0) allocated [mem 0x4b584da00-0x4b5854fff]
Jan 23 17:56:14.156983 kernel: Zone ranges:
Jan 23 17:56:14.157000 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff]
Jan 23 17:56:14.157020 kernel: DMA32 empty
Jan 23 17:56:14.157036 kernel: Normal [mem 0x0000000100000000-0x00000004b5ffffff]
Jan 23 17:56:14.157052 kernel: Device empty
Jan 23 17:56:14.157067 kernel: Movable zone start for each node
Jan 23 17:56:14.157083 kernel: Early memory node ranges
Jan 23 17:56:14.157099 kernel: node 0: [mem 0x0000000040000000-0x000000007862ffff]
Jan 23 17:56:14.157115 kernel: node 0: [mem 0x0000000078630000-0x000000007863ffff]
Jan 23 17:56:14.157131 kernel: node 0: [mem 0x0000000078640000-0x00000000786effff]
Jan 23 17:56:14.157148 kernel: node 0: [mem 0x00000000786f0000-0x000000007872ffff]
Jan 23 17:56:14.157188 kernel: node 0: [mem 0x0000000078730000-0x000000007bbfffff]
Jan 23 17:56:14.157208 kernel: node 0: [mem 0x000000007bc00000-0x000000007bfdffff]
Jan 23 17:56:14.157224 kernel: node 0: [mem 0x000000007bfe0000-0x000000007fffffff]
Jan 23 17:56:14.157247 kernel: node 0: [mem 0x0000000400000000-0x00000004b5ffffff]
Jan 23 17:56:14.157270 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000004b5ffffff]
Jan 23 17:56:14.157287 kernel: On node 0, zone Normal: 8192 pages in unavailable ranges
Jan 23 17:56:14.157304 kernel: cma: Reserved 16 MiB at 0x000000007f000000 on node -1
Jan 23 17:56:14.157321 kernel: psci: probing for conduit method from ACPI.
Jan 23 17:56:14.157342 kernel: psci: PSCIv1.0 detected in firmware.
Jan 23 17:56:14.157359 kernel: psci: Using standard PSCI v0.2 function IDs
Jan 23 17:56:14.157376 kernel: psci: Trusted OS migration not required
Jan 23 17:56:14.157392 kernel: psci: SMC Calling Convention v1.1
Jan 23 17:56:14.157410 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000001)
Jan 23 17:56:14.157427 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168
Jan 23 17:56:14.157443 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096
Jan 23 17:56:14.157461 kernel: pcpu-alloc: [0] 0 [0] 1
Jan 23 17:56:14.157478 kernel: Detected PIPT I-cache on CPU0
Jan 23 17:56:14.157495 kernel: CPU features: detected: GIC system register CPU interface
Jan 23 17:56:14.157512 kernel: CPU features: detected: Spectre-v2
Jan 23 17:56:14.157532 kernel: CPU features: detected: Spectre-v3a
Jan 23 17:56:14.157549 kernel: CPU features: detected: Spectre-BHB
Jan 23 17:56:14.157565 kernel: CPU features: detected: ARM erratum 1742098
Jan 23 17:56:14.157582 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923
Jan 23 17:56:14.157598 kernel: alternatives: applying boot alternatives
Jan 23 17:56:14.157618 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=5fc6d8e43735a6d26d13c2f5b234025ac82c601a45144671feeb457ddade8f9d
Jan 23 17:56:14.157636 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 23 17:56:14.157652 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 23 17:56:14.157669 kernel: Fallback order for Node 0: 0
Jan 23 17:56:14.157686 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1007616
Jan 23 17:56:14.157703 kernel: Policy zone: Normal
Jan 23 17:56:14.157724 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 23 17:56:14.157740 kernel: software IO TLB: area num 2.
Jan 23 17:56:14.157757 kernel: software IO TLB: mapped [mem 0x0000000074551000-0x0000000078551000] (64MB)
Jan 23 17:56:14.157774 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jan 23 17:56:14.157791 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 23 17:56:14.157809 kernel: rcu: RCU event tracing is enabled.
Jan 23 17:56:14.157826 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jan 23 17:56:14.157844 kernel: Trampoline variant of Tasks RCU enabled.
Jan 23 17:56:14.157861 kernel: Tracing variant of Tasks RCU enabled.
Jan 23 17:56:14.157878 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 23 17:56:14.157895 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jan 23 17:56:14.157915 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 23 17:56:14.157933 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 23 17:56:14.157949 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Jan 23 17:56:14.157966 kernel: GICv3: 96 SPIs implemented
Jan 23 17:56:14.157982 kernel: GICv3: 0 Extended SPIs implemented
Jan 23 17:56:14.157999 kernel: Root IRQ handler: gic_handle_irq
Jan 23 17:56:14.158015 kernel: GICv3: GICv3 features: 16 PPIs
Jan 23 17:56:14.158032 kernel: GICv3: GICD_CTRL.DS=1, SCR_EL3.FIQ=0
Jan 23 17:56:14.158049 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000
Jan 23 17:56:14.158066 kernel: ITS [mem 0x10080000-0x1009ffff]
Jan 23 17:56:14.158083 kernel: ITS@0x0000000010080000: allocated 8192 Devices @4000f0000 (indirect, esz 8, psz 64K, shr 1)
Jan 23 17:56:14.158100 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @400100000 (flat, esz 8, psz 64K, shr 1)
Jan 23 17:56:14.158122 kernel: GICv3: using LPI property table @0x0000000400110000
Jan 23 17:56:14.158138 kernel: ITS: Using hypervisor restricted LPI range [128]
Jan 23 17:56:14.158155 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000400120000
Jan 23 17:56:14.159328 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 23 17:56:14.159348 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt).
Jan 23 17:56:14.159366 kernel: clocksource: arch_sys_counter: mask: 0x1ffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns
Jan 23 17:56:14.159383 kernel: sched_clock: 57 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns
Jan 23 17:56:14.159400 kernel: Console: colour dummy device 80x25
Jan 23 17:56:14.159418 kernel: printk: legacy console [tty1] enabled
Jan 23 17:56:14.159435 kernel: ACPI: Core revision 20240827
Jan 23 17:56:14.159452 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 166.66 BogoMIPS (lpj=83333)
Jan 23 17:56:14.159478 kernel: pid_max: default: 32768 minimum: 301
Jan 23 17:56:14.159496 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Jan 23 17:56:14.159513 kernel: landlock: Up and running.
Jan 23 17:56:14.159530 kernel: SELinux: Initializing.
Jan 23 17:56:14.159547 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 23 17:56:14.159565 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 23 17:56:14.159582 kernel: rcu: Hierarchical SRCU implementation.
Jan 23 17:56:14.159600 kernel: rcu: Max phase no-delay instances is 400.
Jan 23 17:56:14.159621 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Jan 23 17:56:14.159638 kernel: Remapping and enabling EFI services.
Jan 23 17:56:14.159656 kernel: smp: Bringing up secondary CPUs ...
Jan 23 17:56:14.159673 kernel: Detected PIPT I-cache on CPU1
Jan 23 17:56:14.159691 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000
Jan 23 17:56:14.159709 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000400130000
Jan 23 17:56:14.159781 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083]
Jan 23 17:56:14.159800 kernel: smp: Brought up 1 node, 2 CPUs
Jan 23 17:56:14.159817 kernel: SMP: Total of 2 processors activated.
Jan 23 17:56:14.159841 kernel: CPU: All CPU(s) started at EL1
Jan 23 17:56:14.159871 kernel: CPU features: detected: 32-bit EL0 Support
Jan 23 17:56:14.159890 kernel: CPU features: detected: 32-bit EL1 Support
Jan 23 17:56:14.159912 kernel: CPU features: detected: CRC32 instructions
Jan 23 17:56:14.159930 kernel: alternatives: applying system-wide alternatives
Jan 23 17:56:14.159949 kernel: Memory: 3796332K/4030464K available (11200K kernel code, 2458K rwdata, 9088K rodata, 39552K init, 1038K bss, 212788K reserved, 16384K cma-reserved)
Jan 23 17:56:14.159968 kernel: devtmpfs: initialized
Jan 23 17:56:14.159986 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 23 17:56:14.160009 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jan 23 17:56:14.160027 kernel: 16880 pages in range for non-PLT usage
Jan 23 17:56:14.160045 kernel: 508400 pages in range for PLT usage
Jan 23 17:56:14.160062 kernel: pinctrl core: initialized pinctrl subsystem
Jan 23 17:56:14.160080 kernel: SMBIOS 3.0.0 present.
Jan 23 17:56:14.160098 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018
Jan 23 17:56:14.160116 kernel: DMI: Memory slots populated: 0/0
Jan 23 17:56:14.160133 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 23 17:56:14.160151 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Jan 23 17:56:14.160240 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jan 23 17:56:14.160261 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jan 23 17:56:14.160279 kernel: audit: initializing netlink subsys (disabled)
Jan 23 17:56:14.160297 kernel: audit: type=2000 audit(0.225:1): state=initialized audit_enabled=0 res=1
Jan 23 17:56:14.160315 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 23 17:56:14.160332 kernel: cpuidle: using governor menu
Jan 23 17:56:14.160350 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Jan 23 17:56:14.160369 kernel: ASID allocator initialised with 65536 entries
Jan 23 17:56:14.160387 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 23 17:56:14.160410 kernel: Serial: AMBA PL011 UART driver
Jan 23 17:56:14.160428 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 23 17:56:14.160446 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Jan 23 17:56:14.160464 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Jan 23 17:56:14.160482 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Jan 23 17:56:14.160500 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 23 17:56:14.160518 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Jan 23 17:56:14.160536 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Jan 23 17:56:14.160554 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Jan 23 17:56:14.160576 kernel: ACPI: Added _OSI(Module Device)
Jan 23 17:56:14.160594 kernel: ACPI: Added _OSI(Processor Device)
Jan 23 17:56:14.160611 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 23 17:56:14.160629 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 23 17:56:14.160647 kernel: ACPI: Interpreter enabled
Jan 23 17:56:14.160665 kernel: ACPI: Using GIC for interrupt routing
Jan 23 17:56:14.160683 kernel: ACPI: MCFG table detected, 1 entries
Jan 23 17:56:14.160701 kernel: ACPI: CPU0 has been hot-added
Jan 23 17:56:14.160719 kernel: ACPI: CPU1 has been hot-added
Jan 23 17:56:14.160740 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00])
Jan 23 17:56:14.163267 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jan 23 17:56:14.163537 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Jan 23 17:56:14.163732 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Jan 23 17:56:14.163918 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x200fffff] reserved by PNP0C02:00
Jan 23 17:56:14.164102 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x200fffff] for [bus 00]
Jan 23 17:56:14.164126 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io 0x0000-0xffff window]
Jan 23 17:56:14.164155 kernel: acpiphp: Slot [1] registered
Jan 23 17:56:14.164227 kernel: acpiphp: Slot [2] registered
Jan 23 17:56:14.164247 kernel: acpiphp: Slot [3] registered
Jan 23 17:56:14.164266 kernel: acpiphp: Slot [4] registered
Jan 23 17:56:14.164284 kernel: acpiphp: Slot [5] registered
Jan 23 17:56:14.164302 kernel: acpiphp: Slot [6] registered
Jan 23 17:56:14.164320 kernel: acpiphp: Slot [7] registered
Jan 23 17:56:14.164337 kernel: acpiphp: Slot [8] registered
Jan 23 17:56:14.164355 kernel: acpiphp: Slot [9] registered
Jan 23 17:56:14.164373 kernel: acpiphp: Slot [10] registered
Jan 23 17:56:14.164397 kernel: acpiphp: Slot [11] registered
Jan 23 17:56:14.164415 kernel: acpiphp: Slot [12] registered
Jan 23 17:56:14.164433 kernel: acpiphp: Slot [13] registered
Jan 23 17:56:14.164450 kernel: acpiphp: Slot [14] registered
Jan 23 17:56:14.164468 kernel: acpiphp: Slot [15] registered
Jan 23 17:56:14.164486 kernel: acpiphp: Slot [16] registered
Jan 23 17:56:14.164503 kernel: acpiphp: Slot [17] registered
Jan 23 17:56:14.164521 kernel: acpiphp: Slot [18] registered
Jan 23 17:56:14.164539 kernel: acpiphp: Slot [19] registered
Jan 23 17:56:14.164560 kernel: acpiphp: Slot [20] registered
Jan 23 17:56:14.164578 kernel: acpiphp: Slot [21] registered
Jan 23 17:56:14.164596 kernel: acpiphp: Slot [22] registered
Jan 23 17:56:14.164614 kernel: acpiphp: Slot [23] registered
Jan 23 17:56:14.164632 kernel: acpiphp: Slot [24] registered
Jan 23 17:56:14.164650 kernel: acpiphp: Slot [25] registered
Jan 23 17:56:14.164667 kernel: acpiphp: Slot [26] registered
Jan 23 17:56:14.164685 kernel: acpiphp: Slot [27] registered
Jan 23 17:56:14.164703 kernel: acpiphp: Slot [28] registered
Jan 23 17:56:14.164720 kernel: acpiphp: Slot [29] registered
Jan 23 17:56:14.164742 kernel: acpiphp: Slot [30] registered
Jan 23 17:56:14.164760 kernel: acpiphp: Slot [31] registered
Jan 23 17:56:14.164778 kernel: PCI host bridge to bus 0000:00
Jan 23 17:56:14.164973 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window]
Jan 23 17:56:14.165144 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Jan 23 17:56:14.165376 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window]
Jan 23 17:56:14.165546 kernel: pci_bus 0000:00: root bus resource [bus 00]
Jan 23 17:56:14.165773 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000 conventional PCI endpoint
Jan 23 17:56:14.166003 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003 conventional PCI endpoint
Jan 23 17:56:14.167967 kernel: pci 0000:00:01.0: BAR 0 [mem 0x80118000-0x80118fff]
Jan 23 17:56:14.168218 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802 PCIe Root Complex Integrated Endpoint
Jan 23 17:56:14.168417 kernel: pci 0000:00:04.0: BAR 0 [mem 0x80114000-0x80117fff]
Jan 23 17:56:14.168609 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold
Jan 23 17:56:14.168817 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000 PCIe Root Complex Integrated Endpoint
Jan 23 17:56:14.169008 kernel: pci 0000:00:05.0: BAR 0 [mem 0x80110000-0x80113fff]
Jan 23 17:56:14.171300 kernel: pci 0000:00:05.0: BAR 2 [mem 0x80000000-0x800fffff pref]
Jan 23 17:56:14.171560 kernel: pci 0000:00:05.0: BAR 4 [mem 0x80100000-0x8010ffff]
Jan 23 17:56:14.171754 kernel: pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold
Jan 23 17:56:14.171931 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window]
Jan 23 17:56:14.172100 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Jan 23 17:56:14.172317 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window]
Jan 23 17:56:14.172343 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Jan 23 17:56:14.172362 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Jan 23 17:56:14.172381 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Jan 23 17:56:14.172399 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Jan 23 17:56:14.172417 kernel: iommu: Default domain type: Translated
Jan 23 17:56:14.172435 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Jan 23 17:56:14.172453 kernel: efivars: Registered efivars operations
Jan 23 17:56:14.172471 kernel: vgaarb: loaded
Jan 23 17:56:14.172495 kernel: clocksource: Switched to clocksource arch_sys_counter
Jan 23 17:56:14.172514 kernel: VFS: Disk quotas dquot_6.6.0
Jan 23 17:56:14.172532 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 23 17:56:14.172549 kernel: pnp: PnP ACPI init
Jan 23 17:56:14.172760 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved
Jan 23 17:56:14.172788 kernel: pnp: PnP ACPI: found 1 devices
Jan 23 17:56:14.172807 kernel: NET: Registered PF_INET protocol family
Jan 23 17:56:14.172826 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 23 17:56:14.172850 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jan 23 17:56:14.172869 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 23 17:56:14.172888 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 23 17:56:14.172906 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jan 23 17:56:14.172924 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jan 23 17:56:14.172942 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 23 17:56:14.172960 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 23 17:56:14.172979 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 23 17:56:14.172997 kernel: PCI: CLS 0 bytes, default 64
Jan 23 17:56:14.173019 kernel: kvm [1]: HYP mode not available
Jan 23 17:56:14.173037 kernel: Initialise system trusted keyrings
Jan 23 17:56:14.173055 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jan 23 17:56:14.173072 kernel: Key type asymmetric registered
Jan 23 17:56:14.173091 kernel: Asymmetric key parser 'x509' registered
Jan 23 17:56:14.173110 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Jan 23 17:56:14.173128 kernel: io scheduler mq-deadline registered
Jan 23 17:56:14.173147 kernel: io scheduler kyber registered
Jan 23 17:56:14.175214 kernel: io scheduler bfq registered
Jan 23 17:56:14.175503 kernel: pl061_gpio ARMH0061:00: PL061 GPIO chip registered
Jan 23 17:56:14.175532 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Jan 23 17:56:14.175552 kernel: ACPI: button: Power Button [PWRB]
Jan 23 17:56:14.175570 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input1
Jan 23 17:56:14.175588 kernel: ACPI: button: Sleep Button [SLPB]
Jan 23 17:56:14.175607 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 23 17:56:14.175626 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37
Jan 23 17:56:14.175828 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012)
Jan 23 17:56:14.175860 kernel: printk: legacy console [ttyS0] disabled
Jan 23 17:56:14.175880 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A
Jan 23 17:56:14.175898 kernel: printk: legacy console [ttyS0] enabled
Jan 23 17:56:14.175916 kernel: printk: legacy bootconsole [uart0] disabled
Jan 23 17:56:14.175934 kernel: thunder_xcv, ver 1.0
Jan 23 17:56:14.175953 kernel: thunder_bgx, ver 1.0
Jan 23 17:56:14.175971 kernel: nicpf, ver 1.0
Jan 23 17:56:14.175989 kernel: nicvf, ver 1.0
Jan 23 17:56:14.176238 kernel: rtc-efi rtc-efi.0: registered as rtc0
Jan 23 17:56:14.176435 kernel: rtc-efi rtc-efi.0: setting system clock to 2026-01-23T17:56:13 UTC (1769190973)
Jan 23 17:56:14.176461 kernel: hid: raw HID events driver (C) Jiri Kosina
Jan 23 17:56:14.176479 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 (0,80000003) counters available
Jan 23 17:56:14.176497 kernel: watchdog: NMI not fully supported
Jan 23 17:56:14.176515 kernel: NET: Registered PF_INET6 protocol family
Jan 23 17:56:14.176534 kernel: watchdog: Hard watchdog permanently disabled
Jan 23 17:56:14.176551 kernel: Segment Routing with IPv6
Jan 23 17:56:14.176569 kernel: In-situ OAM (IOAM) with IPv6
Jan 23 17:56:14.176587 kernel: NET: Registered PF_PACKET protocol family
Jan 23 17:56:14.176610 kernel: Key type dns_resolver registered
Jan 23 17:56:14.176628 kernel: registered taskstats version 1
Jan 23 17:56:14.176646 kernel: Loading compiled-in X.509 certificates
Jan 23 17:56:14.176665 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.66-flatcar: 3b281aa2bfe49764dd224485ec54e6070c82b8fb'
Jan 23 17:56:14.176683 kernel: Demotion targets for Node 0: null
Jan 23 17:56:14.176701 kernel: Key type .fscrypt registered
Jan 23 17:56:14.176719 kernel: Key type fscrypt-provisioning registered
Jan 23 17:56:14.176736 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 23 17:56:14.176754 kernel: ima: Allocated hash algorithm: sha1
Jan 23 17:56:14.176777 kernel: ima: No architecture policies found
Jan 23 17:56:14.176795 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Jan 23 17:56:14.176813 kernel: clk: Disabling unused clocks
Jan 23 17:56:14.176831 kernel: PM: genpd: Disabling unused power domains
Jan 23 17:56:14.176849 kernel: Warning: unable to open an initial console.
Jan 23 17:56:14.176867 kernel: Freeing unused kernel memory: 39552K
Jan 23 17:56:14.176885 kernel: Run /init as init process
Jan 23 17:56:14.176903 kernel: with arguments:
Jan 23 17:56:14.176921 kernel: /init
Jan 23 17:56:14.176942 kernel: with environment:
Jan 23 17:56:14.176960 kernel: HOME=/
Jan 23 17:56:14.176978 kernel: TERM=linux
Jan 23 17:56:14.176998 systemd[1]: Successfully made /usr/ read-only.
Jan 23 17:56:14.177022 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jan 23 17:56:14.177043 systemd[1]: Detected virtualization amazon.
Jan 23 17:56:14.177062 systemd[1]: Detected architecture arm64.
Jan 23 17:56:14.177086 systemd[1]: Running in initrd.
Jan 23 17:56:14.177105 systemd[1]: No hostname configured, using default hostname.
Jan 23 17:56:14.177124 systemd[1]: Hostname set to .
Jan 23 17:56:14.177143 systemd[1]: Initializing machine ID from VM UUID.
Jan 23 17:56:14.177265 systemd[1]: Queued start job for default target initrd.target.
Jan 23 17:56:14.177292 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 23 17:56:14.177312 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 23 17:56:14.177333 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 23 17:56:14.177359 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 23 17:56:14.177380 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 23 17:56:14.177401 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 23 17:56:14.177424 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 23 17:56:14.177445 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 23 17:56:14.177464 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 23 17:56:14.177485 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 23 17:56:14.177509 systemd[1]: Reached target paths.target - Path Units.
Jan 23 17:56:14.177529 systemd[1]: Reached target slices.target - Slice Units.
Jan 23 17:56:14.177548 systemd[1]: Reached target swap.target - Swaps.
Jan 23 17:56:14.177567 systemd[1]: Reached target timers.target - Timer Units.
Jan 23 17:56:14.177586 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 23 17:56:14.177606 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 23 17:56:14.177625 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 23 17:56:14.177644 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Jan 23 17:56:14.177664 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 23 17:56:14.177688 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 23 17:56:14.177804 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 23 17:56:14.177826 systemd[1]: Reached target sockets.target - Socket Units.
Jan 23 17:56:14.177847 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 23 17:56:14.177867 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 23 17:56:14.177887 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 23 17:56:14.177907 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Jan 23 17:56:14.177927 systemd[1]: Starting systemd-fsck-usr.service...
Jan 23 17:56:14.177952 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 23 17:56:14.177972 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 23 17:56:14.177991 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 23 17:56:14.178011 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 23 17:56:14.178032 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 23 17:56:14.178056 systemd[1]: Finished systemd-fsck-usr.service.
Jan 23 17:56:14.178076 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 23 17:56:14.178143 systemd-journald[259]: Collecting audit messages is disabled.
Jan 23 17:56:14.178213 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 23 17:56:14.178239 kernel: Bridge firewalling registered
Jan 23 17:56:14.178259 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 23 17:56:14.178280 systemd-journald[259]: Journal started
Jan 23 17:56:14.178317 systemd-journald[259]: Runtime Journal (/run/log/journal/ec29bbda97382920a6394df04c9b063b) is 8M, max 75.3M, 67.3M free.
Jan 23 17:56:14.113225 systemd-modules-load[260]: Inserted module 'overlay'
Jan 23 17:56:14.159803 systemd-modules-load[260]: Inserted module 'br_netfilter'
Jan 23 17:56:14.189285 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 23 17:56:14.198581 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 23 17:56:14.199458 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 23 17:56:14.206824 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 23 17:56:14.217500 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 23 17:56:14.228578 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 23 17:56:14.240424 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 23 17:56:14.257229 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 23 17:56:14.277109 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 23 17:56:14.278443 systemd-tmpfiles[284]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Jan 23 17:56:14.290284 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 23 17:56:14.299417 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 23 17:56:14.315573 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 23 17:56:14.326801 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 23 17:56:14.369800 dracut-cmdline[302]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=5fc6d8e43735a6d26d13c2f5b234025ac82c601a45144671feeb457ddade8f9d
Jan 23 17:56:14.413544 systemd-resolved[297]: Positive Trust Anchors:
Jan 23 17:56:14.419361 systemd-resolved[297]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 23 17:56:14.422755 systemd-resolved[297]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 23 17:56:14.531200 kernel: SCSI subsystem initialized
Jan 23 17:56:14.539198 kernel: Loading iSCSI transport class v2.0-870.
Jan 23 17:56:14.551223 kernel: iscsi: registered transport (tcp)
Jan 23 17:56:14.573276 kernel: iscsi: registered transport (qla4xxx)
Jan 23 17:56:14.573373 kernel: QLogic iSCSI HBA Driver
Jan 23 17:56:14.607348 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jan 23 17:56:14.649605 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jan 23 17:56:14.662673 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jan 23 17:56:14.692209 kernel: random: crng init done
Jan 23 17:56:14.692493 systemd-resolved[297]: Defaulting to hostname 'linux'.
Jan 23 17:56:14.697223 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 23 17:56:14.702998 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 23 17:56:14.758647 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 23 17:56:14.765543 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 23 17:56:14.852218 kernel: raid6: neonx8 gen() 6553 MB/s
Jan 23 17:56:14.869219 kernel: raid6: neonx4 gen() 6572 MB/s
Jan 23 17:56:14.886212 kernel: raid6: neonx2 gen() 5449 MB/s
Jan 23 17:56:14.903216 kernel: raid6: neonx1 gen() 3947 MB/s
Jan 23 17:56:14.920210 kernel: raid6: int64x8 gen() 3663 MB/s
Jan 23 17:56:14.937215 kernel: raid6: int64x4 gen() 3717 MB/s
Jan 23 17:56:14.954211 kernel: raid6: int64x2 gen() 3607 MB/s
Jan 23 17:56:14.972323 kernel: raid6: int64x1 gen() 2758 MB/s
Jan 23 17:56:14.972380 kernel: raid6: using algorithm neonx4 gen() 6572 MB/s
Jan 23 17:56:14.991294 kernel: raid6: .... xor() 4636 MB/s, rmw enabled
Jan 23 17:56:14.991358 kernel: raid6: using neon recovery algorithm
Jan 23 17:56:15.000120 kernel: xor: measuring software checksum speed
Jan 23 17:56:15.000221 kernel: 8regs : 12275 MB/sec
Jan 23 17:56:15.002724 kernel: 32regs : 12417 MB/sec
Jan 23 17:56:15.002764 kernel: arm64_neon : 9119 MB/sec
Jan 23 17:56:15.002788 kernel: xor: using function: 32regs (12417 MB/sec)
Jan 23 17:56:15.095224 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 23 17:56:15.106575 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 23 17:56:15.113676 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 23 17:56:15.164869 systemd-udevd[510]: Using default interface naming scheme 'v255'.
Jan 23 17:56:15.175107 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 23 17:56:15.192071 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 23 17:56:15.231276 dracut-pre-trigger[518]: rd.md=0: removing MD RAID activation
Jan 23 17:56:15.276386 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 23 17:56:15.285917 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 23 17:56:15.416527 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 23 17:56:15.426990 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 23 17:56:15.603189 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Jan 23 17:56:15.614629 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012)
Jan 23 17:56:15.610995 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 23 17:56:15.611300 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 23 17:56:15.614740 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 23 17:56:15.620361 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 23 17:56:15.634462 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Jan 23 17:56:15.644576 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35
Jan 23 17:56:15.644647 kernel: nvme nvme0: pci function 0000:00:04.0
Jan 23 17:56:15.647927 kernel: ena 0000:00:05.0: ENA device version: 0.10
Jan 23 17:56:15.648308 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
Jan 23 17:56:15.654399 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Jan 23 17:56:15.661205 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80110000, mac addr 06:b1:79:5f:1f:85
Jan 23 17:56:15.666652 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jan 23 17:56:15.666722 kernel: GPT:9289727 != 33554431 Jan 23 17:56:15.668115 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 23 17:56:15.670111 kernel: GPT:9289727 != 33554431 Jan 23 17:56:15.670199 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 23 17:56:15.671186 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jan 23 17:56:15.676751 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 23 17:56:15.678059 (udev-worker)[553]: Network interface NamePolicy= disabled on kernel command line. Jan 23 17:56:15.713314 kernel: nvme nvme0: using unchecked data buffer Jan 23 17:56:15.812582 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM. Jan 23 17:56:15.855531 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A. Jan 23 17:56:15.859110 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A. Jan 23 17:56:15.863977 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 23 17:56:15.901663 disk-uuid[678]: Primary Header is updated. Jan 23 17:56:15.901663 disk-uuid[678]: Secondary Entries is updated. Jan 23 17:56:15.901663 disk-uuid[678]: Secondary Header is updated. Jan 23 17:56:15.965564 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT. Jan 23 17:56:16.007092 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Jan 23 17:56:16.312446 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 23 17:56:16.326312 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 23 17:56:16.331850 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 23 17:56:16.334693 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 23 17:56:16.343220 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 23 17:56:16.379419 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 23 17:56:16.932109 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jan 23 17:56:16.933450 disk-uuid[680]: The operation has completed successfully. Jan 23 17:56:17.137839 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 23 17:56:17.138062 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 23 17:56:17.223098 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 23 17:56:17.247631 sh[958]: Success Jan 23 17:56:17.277089 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jan 23 17:56:17.277205 kernel: device-mapper: uevent: version 1.0.3 Jan 23 17:56:17.279330 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Jan 23 17:56:17.293221 kernel: device-mapper: verity: sha256 using shash "sha256-ce" Jan 23 17:56:17.402057 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 23 17:56:17.407821 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 23 17:56:17.426210 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Jan 23 17:56:17.447213 kernel: BTRFS: device fsid 8784b097-3924-47e8-98b3-06e8cbe78a64 devid 1 transid 37 /dev/mapper/usr (254:0) scanned by mount (981)
Jan 23 17:56:17.452069 kernel: BTRFS info (device dm-0): first mount of filesystem 8784b097-3924-47e8-98b3-06e8cbe78a64
Jan 23 17:56:17.452136 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Jan 23 17:56:17.494765 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Jan 23 17:56:17.494860 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 23 17:56:17.494888 kernel: BTRFS info (device dm-0): enabling free space tree
Jan 23 17:56:17.515673 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 23 17:56:17.520391 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Jan 23 17:56:17.525597 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 23 17:56:17.531294 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 23 17:56:17.540435 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 23 17:56:17.598252 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (1012)
Jan 23 17:56:17.602700 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem fef013c8-c90f-4bd4-8573-9f69d2a021ca
Jan 23 17:56:17.602776 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Jan 23 17:56:17.620260 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Jan 23 17:56:17.620333 kernel: BTRFS info (device nvme0n1p6): enabling free space tree
Jan 23 17:56:17.629232 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem fef013c8-c90f-4bd4-8573-9f69d2a021ca
Jan 23 17:56:17.633486 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 23 17:56:17.639742 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 23 17:56:17.734271 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 23 17:56:17.745667 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 23 17:56:17.815925 systemd-networkd[1150]: lo: Link UP
Jan 23 17:56:17.816427 systemd-networkd[1150]: lo: Gained carrier
Jan 23 17:56:17.820791 systemd-networkd[1150]: Enumeration completed
Jan 23 17:56:17.822608 systemd-networkd[1150]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 23 17:56:17.822616 systemd-networkd[1150]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 23 17:56:17.824889 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 23 17:56:17.837764 systemd[1]: Reached target network.target - Network.
Jan 23 17:56:17.843338 systemd-networkd[1150]: eth0: Link UP
Jan 23 17:56:17.843352 systemd-networkd[1150]: eth0: Gained carrier
Jan 23 17:56:17.843374 systemd-networkd[1150]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 23 17:56:17.864279 systemd-networkd[1150]: eth0: DHCPv4 address 172.31.17.161/20, gateway 172.31.16.1 acquired from 172.31.16.1
Jan 23 17:56:18.115694 ignition[1075]: Ignition 2.22.0
Jan 23 17:56:18.115723 ignition[1075]: Stage: fetch-offline
Jan 23 17:56:18.119355 ignition[1075]: no configs at "/usr/lib/ignition/base.d"
Jan 23 17:56:18.119394 ignition[1075]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 23 17:56:18.124148 ignition[1075]: Ignition finished successfully
Jan 23 17:56:18.127573 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 23 17:56:18.134881 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jan 23 17:56:18.185453 ignition[1162]: Ignition 2.22.0
Jan 23 17:56:18.185982 ignition[1162]: Stage: fetch
Jan 23 17:56:18.186948 ignition[1162]: no configs at "/usr/lib/ignition/base.d"
Jan 23 17:56:18.186973 ignition[1162]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 23 17:56:18.187221 ignition[1162]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 23 17:56:18.206997 ignition[1162]: PUT result: OK
Jan 23 17:56:18.210833 ignition[1162]: parsed url from cmdline: ""
Jan 23 17:56:18.210852 ignition[1162]: no config URL provided
Jan 23 17:56:18.210868 ignition[1162]: reading system config file "/usr/lib/ignition/user.ign"
Jan 23 17:56:18.210893 ignition[1162]: no config at "/usr/lib/ignition/user.ign"
Jan 23 17:56:18.210925 ignition[1162]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 23 17:56:18.212929 ignition[1162]: PUT result: OK
Jan 23 17:56:18.213007 ignition[1162]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1
Jan 23 17:56:18.218096 ignition[1162]: GET result: OK
Jan 23 17:56:18.218338 ignition[1162]: parsing config with SHA512: 64635f92dbc235e7d52f976c2c6f9e3b14d5c3c6ad5690d872960ccc6de05fc5deb6897667aad3eb7bff3258f09f0ee826eb373b53074e2b4396a55c3c586c06
Jan 23 17:56:18.234456 unknown[1162]: fetched base config from "system"
Jan 23 17:56:18.234505 unknown[1162]: fetched base config from "system"
Jan 23 17:56:18.236114 ignition[1162]: fetch: fetch complete
Jan 23 17:56:18.234521 unknown[1162]: fetched user config from "aws"
Jan 23 17:56:18.236127 ignition[1162]: fetch: fetch passed
Jan 23 17:56:18.245580 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jan 23 17:56:18.236260 ignition[1162]: Ignition finished successfully
Jan 23 17:56:18.252842 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 23 17:56:18.317082 ignition[1168]: Ignition 2.22.0
Jan 23 17:56:18.317115 ignition[1168]: Stage: kargs
Jan 23 17:56:18.318484 ignition[1168]: no configs at "/usr/lib/ignition/base.d"
Jan 23 17:56:18.318868 ignition[1168]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 23 17:56:18.319036 ignition[1168]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 23 17:56:18.322894 ignition[1168]: PUT result: OK
Jan 23 17:56:18.336908 ignition[1168]: kargs: kargs passed
Jan 23 17:56:18.337239 ignition[1168]: Ignition finished successfully
Jan 23 17:56:18.346089 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 23 17:56:18.355495 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 23 17:56:18.402505 ignition[1174]: Ignition 2.22.0
Jan 23 17:56:18.403034 ignition[1174]: Stage: disks
Jan 23 17:56:18.403671 ignition[1174]: no configs at "/usr/lib/ignition/base.d"
Jan 23 17:56:18.403696 ignition[1174]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 23 17:56:18.403830 ignition[1174]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 23 17:56:18.415331 ignition[1174]: PUT result: OK
Jan 23 17:56:18.420391 ignition[1174]: disks: disks passed
Jan 23 17:56:18.421520 ignition[1174]: Ignition finished successfully
Jan 23 17:56:18.426661 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 23 17:56:18.429761 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 23 17:56:18.434587 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 23 17:56:18.437571 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 23 17:56:18.440546 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 23 17:56:18.448568 systemd[1]: Reached target basic.target - Basic System.
Jan 23 17:56:18.457335 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 23 17:56:18.525385 systemd-fsck[1182]: ROOT: clean, 15/553520 files, 52789/553472 blocks
Jan 23 17:56:18.536280 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 23 17:56:18.544914 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 23 17:56:18.683202 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 5f1f19a2-81b4-48e9-bfdb-d3843ff70e8e r/w with ordered data mode. Quota mode: none.
Jan 23 17:56:18.684611 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 23 17:56:18.689218 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 23 17:56:18.695707 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 23 17:56:18.699716 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 23 17:56:18.705865 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jan 23 17:56:18.710338 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 23 17:56:18.712043 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 23 17:56:18.733952 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 23 17:56:18.740676 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 23 17:56:18.754603 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (1201)
Jan 23 17:56:18.754666 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem fef013c8-c90f-4bd4-8573-9f69d2a021ca
Jan 23 17:56:18.760207 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Jan 23 17:56:18.768147 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Jan 23 17:56:18.768247 kernel: BTRFS info (device nvme0n1p6): enabling free space tree
Jan 23 17:56:18.772377 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 23 17:56:18.957445 initrd-setup-root[1225]: cut: /sysroot/etc/passwd: No such file or directory
Jan 23 17:56:18.970522 initrd-setup-root[1232]: cut: /sysroot/etc/group: No such file or directory
Jan 23 17:56:18.979144 initrd-setup-root[1239]: cut: /sysroot/etc/shadow: No such file or directory
Jan 23 17:56:18.988445 initrd-setup-root[1246]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 23 17:56:19.162398 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 23 17:56:19.173352 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 23 17:56:19.178866 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 23 17:56:19.207276 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 23 17:56:19.210775 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem fef013c8-c90f-4bd4-8573-9f69d2a021ca
Jan 23 17:56:19.239879 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 23 17:56:19.263200 ignition[1314]: INFO : Ignition 2.22.0
Jan 23 17:56:19.263200 ignition[1314]: INFO : Stage: mount
Jan 23 17:56:19.267230 ignition[1314]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 23 17:56:19.267230 ignition[1314]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 23 17:56:19.267230 ignition[1314]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 23 17:56:19.278475 ignition[1314]: INFO : PUT result: OK
Jan 23 17:56:19.286211 ignition[1314]: INFO : mount: mount passed
Jan 23 17:56:19.288040 ignition[1314]: INFO : Ignition finished successfully
Jan 23 17:56:19.294244 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 23 17:56:19.299304 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 23 17:56:19.645534 systemd-networkd[1150]: eth0: Gained IPv6LL
Jan 23 17:56:19.688036 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 23 17:56:19.732370 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (1325)
Jan 23 17:56:19.736497 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem fef013c8-c90f-4bd4-8573-9f69d2a021ca
Jan 23 17:56:19.736675 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Jan 23 17:56:19.743715 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Jan 23 17:56:19.743804 kernel: BTRFS info (device nvme0n1p6): enabling free space tree
Jan 23 17:56:19.747286 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 23 17:56:19.800861 ignition[1342]: INFO : Ignition 2.22.0
Jan 23 17:56:19.803085 ignition[1342]: INFO : Stage: files
Jan 23 17:56:19.805605 ignition[1342]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 23 17:56:19.808016 ignition[1342]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 23 17:56:19.810789 ignition[1342]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 23 17:56:19.814440 ignition[1342]: INFO : PUT result: OK
Jan 23 17:56:19.822386 ignition[1342]: DEBUG : files: compiled without relabeling support, skipping
Jan 23 17:56:19.827598 ignition[1342]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jan 23 17:56:19.827598 ignition[1342]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 23 17:56:19.838762 ignition[1342]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 23 17:56:19.842184 ignition[1342]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jan 23 17:56:19.847812 unknown[1342]: wrote ssh authorized keys file for user: core
Jan 23 17:56:19.850466 ignition[1342]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 23 17:56:19.854271 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
Jan 23 17:56:19.858716 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz: attempt #1
Jan 23 17:56:19.938068 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jan 23 17:56:20.194286 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
Jan 23 17:56:20.194286 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jan 23 17:56:20.194286 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1
Jan 23 17:56:20.402525 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Jan 23 17:56:20.528094 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jan 23 17:56:20.532097 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Jan 23 17:56:20.532097 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Jan 23 17:56:20.532097 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Jan 23 17:56:20.532097 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jan 23 17:56:20.532097 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 23 17:56:20.532097 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 23 17:56:20.532097 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 23 17:56:20.532097 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 23 17:56:20.564938 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jan 23 17:56:20.564938 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 23 17:56:20.564938 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Jan 23 17:56:20.564938 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Jan 23 17:56:20.564938 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Jan 23 17:56:20.564938 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-arm64.raw: attempt #1
Jan 23 17:56:20.958329 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Jan 23 17:56:21.317239 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Jan 23 17:56:21.322503 ignition[1342]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Jan 23 17:56:21.322503 ignition[1342]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 23 17:56:21.329688 ignition[1342]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 23 17:56:21.329688 ignition[1342]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Jan 23 17:56:21.329688 ignition[1342]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service"
Jan 23 17:56:21.329688 ignition[1342]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service"
Jan 23 17:56:21.329688 ignition[1342]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json"
Jan 23 17:56:21.329688 ignition[1342]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jan 23 17:56:21.329688 ignition[1342]: INFO : files: files passed
Jan 23 17:56:21.329688 ignition[1342]: INFO : Ignition finished successfully
Jan 23 17:56:21.344713 systemd[1]: Finished ignition-files.service - Ignition (files).
Jan 23 17:56:21.353583 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jan 23 17:56:21.362708 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jan 23 17:56:21.395581 systemd[1]: ignition-quench.service: Deactivated successfully.
Jan 23 17:56:21.396638 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jan 23 17:56:21.414333 initrd-setup-root-after-ignition[1372]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 23 17:56:21.422717 initrd-setup-root-after-ignition[1372]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 23 17:56:21.418927 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 23 17:56:21.429985 initrd-setup-root-after-ignition[1376]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 23 17:56:21.437134 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 23 17:56:21.444132 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 23 17:56:21.548224 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 23 17:56:21.549360 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 23 17:56:21.554332 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 23 17:56:21.557995 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 23 17:56:21.565833 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 23 17:56:21.575439 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 23 17:56:21.619634 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 23 17:56:21.628005 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 23 17:56:21.684474 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 23 17:56:21.688117 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 23 17:56:21.694159 systemd[1]: Stopped target timers.target - Timer Units. Jan 23 17:56:21.695890 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 23 17:56:21.696295 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 23 17:56:21.705420 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 23 17:56:21.708409 systemd[1]: Stopped target basic.target - Basic System. Jan 23 17:56:21.715001 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 23 17:56:21.718511 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 23 17:56:21.727270 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 23 17:56:21.732864 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Jan 23 17:56:21.737085 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 23 17:56:21.741954 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 23 17:56:21.751497 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 23 17:56:21.757719 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 23 17:56:21.762157 systemd[1]: Stopped target swap.target - Swaps. Jan 23 17:56:21.768811 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 23 17:56:21.769992 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 23 17:56:21.777305 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 23 17:56:21.783034 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 23 17:56:21.787153 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. 
Jan 23 17:56:21.791602 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 23 17:56:21.798601 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 23 17:56:21.798880 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 23 17:56:21.802844 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 23 17:56:21.803270 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 23 17:56:21.808711 systemd[1]: ignition-files.service: Deactivated successfully. Jan 23 17:56:21.809005 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 23 17:56:21.815130 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 23 17:56:21.830557 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 23 17:56:21.836385 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 23 17:56:21.836761 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 23 17:56:21.841019 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 23 17:56:21.841300 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 23 17:56:21.865884 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 23 17:56:21.870293 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 23 17:56:21.901719 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 23 17:56:21.911851 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 23 17:56:21.914135 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 23 17:56:21.926224 ignition[1396]: INFO : Ignition 2.22.0 Jan 23 17:56:21.926224 ignition[1396]: INFO : Stage: umount Jan 23 17:56:21.930332 ignition[1396]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 23 17:56:21.930332 ignition[1396]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 23 17:56:21.930332 ignition[1396]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 23 17:56:21.941503 ignition[1396]: INFO : PUT result: OK Jan 23 17:56:21.945310 ignition[1396]: INFO : umount: umount passed Jan 23 17:56:21.948025 ignition[1396]: INFO : Ignition finished successfully Jan 23 17:56:21.951735 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 23 17:56:21.956618 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 23 17:56:21.963550 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 23 17:56:21.963892 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 23 17:56:21.978016 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 23 17:56:21.978357 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 23 17:56:21.985086 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 23 17:56:21.985812 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 23 17:56:21.989619 systemd[1]: Stopped target network.target - Network. Jan 23 17:56:21.992467 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 23 17:56:21.992588 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 23 17:56:21.998766 systemd[1]: Stopped target paths.target - Path Units. Jan 23 17:56:22.001453 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. 
Jan 23 17:56:22.003284 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 23 17:56:22.006536 systemd[1]: Stopped target slices.target - Slice Units. Jan 23 17:56:22.010733 systemd[1]: Stopped target sockets.target - Socket Units. Jan 23 17:56:22.016559 systemd[1]: iscsid.socket: Deactivated successfully. Jan 23 17:56:22.016652 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 23 17:56:22.020018 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 23 17:56:22.020111 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 23 17:56:22.028157 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 23 17:56:22.028322 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 23 17:56:22.032750 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 23 17:56:22.032858 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 23 17:56:22.035554 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 23 17:56:22.035678 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 23 17:56:22.040664 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 23 17:56:22.043649 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 23 17:56:22.072066 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 23 17:56:22.077092 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 23 17:56:22.109749 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Jan 23 17:56:22.110606 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 23 17:56:22.110873 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 23 17:56:22.123791 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Jan 23 17:56:22.126937 systemd[1]: Stopped target network-pre.target - Preparation for Network. Jan 23 17:56:22.132708 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 23 17:56:22.132804 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 23 17:56:22.140006 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 23 17:56:22.148586 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 23 17:56:22.149683 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 23 17:56:22.160900 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 23 17:56:22.161036 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 23 17:56:22.165409 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 23 17:56:22.165517 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 23 17:56:22.170558 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 23 17:56:22.170677 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 23 17:56:22.180048 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 23 17:56:22.191497 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Jan 23 17:56:22.191645 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Jan 23 17:56:22.219780 systemd[1]: systemd-udevd.service: Deactivated successfully. 
Jan 23 17:56:22.222942 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 23 17:56:22.226870 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 23 17:56:22.227301 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 23 17:56:22.237673 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 23 17:56:22.237817 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 23 17:56:22.244843 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 23 17:56:22.244938 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 23 17:56:22.248227 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 23 17:56:22.248359 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 23 17:56:22.255520 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 23 17:56:22.255647 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 23 17:56:22.263839 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 23 17:56:22.263989 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 23 17:56:22.276702 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 23 17:56:22.279786 systemd[1]: systemd-network-generator.service: Deactivated successfully. Jan 23 17:56:22.279935 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Jan 23 17:56:22.291402 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 23 17:56:22.291521 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 23 17:56:22.298120 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jan 23 17:56:22.298256 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 23 17:56:22.310484 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 23 17:56:22.310736 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 23 17:56:22.318552 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 23 17:56:22.318645 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 23 17:56:22.329399 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully. Jan 23 17:56:22.329531 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev\x2dearly.service.mount: Deactivated successfully. Jan 23 17:56:22.329611 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Jan 23 17:56:22.329700 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Jan 23 17:56:22.346814 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 23 17:56:22.347232 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 23 17:56:22.358506 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 23 17:56:22.364444 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 23 17:56:22.404366 systemd[1]: Switching root. Jan 23 17:56:22.458270 systemd-journald[259]: Journal stopped Jan 23 17:56:24.653989 systemd-journald[259]: Received SIGTERM from PID 1 (systemd). 
Jan 23 17:56:24.654137 kernel: SELinux: policy capability network_peer_controls=1 Jan 23 17:56:24.657849 kernel: SELinux: policy capability open_perms=1 Jan 23 17:56:24.657909 kernel: SELinux: policy capability extended_socket_class=1 Jan 23 17:56:24.657944 kernel: SELinux: policy capability always_check_network=0 Jan 23 17:56:24.657977 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 23 17:56:24.658010 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 23 17:56:24.658053 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 23 17:56:24.658102 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 23 17:56:24.658138 kernel: SELinux: policy capability userspace_initial_context=0 Jan 23 17:56:24.658249 kernel: audit: type=1403 audit(1769190982.821:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 23 17:56:24.660910 systemd[1]: Successfully loaded SELinux policy in 87.136ms. Jan 23 17:56:24.660994 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 17.142ms. Jan 23 17:56:24.661033 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jan 23 17:56:24.661065 systemd[1]: Detected virtualization amazon. Jan 23 17:56:24.661097 systemd[1]: Detected architecture arm64. Jan 23 17:56:24.661129 systemd[1]: Detected first boot. Jan 23 17:56:24.661556 systemd[1]: Initializing machine ID from VM UUID. Jan 23 17:56:24.661620 zram_generator::config[1439]: No configuration found. Jan 23 17:56:24.661659 kernel: NET: Registered PF_VSOCK protocol family Jan 23 17:56:24.661693 systemd[1]: Populated /etc with preset unit settings. Jan 23 17:56:24.661732 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Jan 23 17:56:24.661767 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 23 17:56:24.661798 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 23 17:56:24.661831 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 23 17:56:24.661873 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 23 17:56:24.661908 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 23 17:56:24.661938 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 23 17:56:24.661967 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 23 17:56:24.662003 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 23 17:56:24.662036 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 23 17:56:24.662068 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 23 17:56:24.662097 systemd[1]: Created slice user.slice - User and Session Slice. Jan 23 17:56:24.662129 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 23 17:56:24.668678 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 23 17:56:24.668767 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. 
Jan 23 17:56:24.668804 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 23 17:56:24.668835 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 23 17:56:24.668870 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 23 17:56:24.668904 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 23 17:56:24.668936 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 23 17:56:24.668968 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 23 17:56:24.669010 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 23 17:56:24.669040 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 23 17:56:24.669071 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 23 17:56:24.669104 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 23 17:56:24.669132 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 23 17:56:24.669239 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 23 17:56:24.669283 systemd[1]: Reached target slices.target - Slice Units. Jan 23 17:56:24.669320 systemd[1]: Reached target swap.target - Swaps. Jan 23 17:56:24.669353 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 23 17:56:24.669394 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 23 17:56:24.669426 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Jan 23 17:56:24.669457 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 23 17:56:24.669491 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 23 17:56:24.669520 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 23 17:56:24.669549 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 23 17:56:24.669580 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 23 17:56:24.669609 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 23 17:56:24.669639 systemd[1]: Mounting media.mount - External Media Directory... Jan 23 17:56:24.669675 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 23 17:56:24.669705 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 23 17:56:24.669734 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 23 17:56:24.669766 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 23 17:56:24.669798 systemd[1]: Reached target machines.target - Containers. Jan 23 17:56:24.669848 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 23 17:56:24.669887 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 23 17:56:24.669916 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 23 17:56:24.669951 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 23 17:56:24.669981 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... 
Jan 23 17:56:24.670012 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 23 17:56:24.670040 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 23 17:56:24.670068 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 23 17:56:24.670097 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 23 17:56:24.670126 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 23 17:56:24.670157 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 23 17:56:24.688823 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 23 17:56:24.688872 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 23 17:56:24.688903 systemd[1]: Stopped systemd-fsck-usr.service. Jan 23 17:56:24.688937 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 23 17:56:24.688971 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 23 17:56:24.689001 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 23 17:56:24.689034 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 23 17:56:24.689062 kernel: loop: module loaded Jan 23 17:56:24.689094 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 23 17:56:24.689123 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Jan 23 17:56:24.689158 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 23 17:56:24.689239 systemd[1]: verity-setup.service: Deactivated successfully. Jan 23 17:56:24.689270 systemd[1]: Stopped verity-setup.service. Jan 23 17:56:24.689299 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 23 17:56:24.689333 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 23 17:56:24.689362 systemd[1]: Mounted media.mount - External Media Directory. Jan 23 17:56:24.689390 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 23 17:56:24.689422 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 23 17:56:24.689451 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 23 17:56:24.689483 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 23 17:56:24.689521 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 23 17:56:24.689551 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 23 17:56:24.689579 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 23 17:56:24.689608 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 23 17:56:24.689637 kernel: fuse: init (API version 7.41) Jan 23 17:56:24.689665 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 23 17:56:24.689694 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 23 17:56:24.689722 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 23 17:56:24.689750 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. 
Jan 23 17:56:24.689786 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 23 17:56:24.689893 systemd-journald[1518]: Collecting audit messages is disabled. Jan 23 17:56:24.689954 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 23 17:56:24.689985 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 23 17:56:24.690014 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 23 17:56:24.690060 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 23 17:56:24.690096 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 23 17:56:24.690136 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 23 17:56:24.697224 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 23 17:56:24.697298 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Jan 23 17:56:24.697334 systemd-journald[1518]: Journal started Jan 23 17:56:24.700633 systemd-journald[1518]: Runtime Journal (/run/log/journal/ec29bbda97382920a6394df04c9b063b) is 8M, max 75.3M, 67.3M free. Jan 23 17:56:24.030375 systemd[1]: Queued start job for default target multi-user.target. Jan 23 17:56:24.058977 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. Jan 23 17:56:24.059854 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 23 17:56:24.715380 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 23 17:56:24.715470 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 23 17:56:24.720934 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 23 17:56:24.736915 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Jan 23 17:56:24.749642 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 23 17:56:24.754216 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 23 17:56:24.763919 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 23 17:56:24.764020 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 23 17:56:24.779138 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 23 17:56:24.789787 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 23 17:56:24.807177 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 23 17:56:24.807290 systemd[1]: Started systemd-journald.service - Journal Service. Jan 23 17:56:24.813548 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 23 17:56:24.820186 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 23 17:56:24.848192 kernel: ACPI: bus type drm_connector registered Jan 23 17:56:24.848905 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 23 17:56:24.850813 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 23 17:56:24.870312 systemd-tmpfiles[1531]: ACLs are not supported, ignoring. 
Jan 23 17:56:24.870344 systemd-tmpfiles[1531]: ACLs are not supported, ignoring. Jan 23 17:56:24.893471 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 23 17:56:24.899914 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 23 17:56:24.916453 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 23 17:56:24.931857 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 23 17:56:24.936393 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 23 17:56:24.944580 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Jan 23 17:56:24.992207 kernel: loop0: detected capacity change from 0 to 100632 Jan 23 17:56:25.053266 systemd-journald[1518]: Time spent on flushing to /var/log/journal/ec29bbda97382920a6394df04c9b063b is 94.363ms for 936 entries. Jan 23 17:56:25.053266 systemd-journald[1518]: System Journal (/var/log/journal/ec29bbda97382920a6394df04c9b063b) is 8M, max 195.6M, 187.6M free. Jan 23 17:56:25.161360 systemd-journald[1518]: Received client request to flush runtime journal. Jan 23 17:56:25.161490 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 23 17:56:25.090343 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 23 17:56:25.096081 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 23 17:56:25.099584 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 23 17:56:25.104575 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 23 17:56:25.109337 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Jan 23 17:56:25.132817 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 23 17:56:25.165878 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 23 17:56:25.174211 kernel: loop1: detected capacity change from 0 to 119840 Jan 23 17:56:25.239245 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 23 17:56:25.250635 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 23 17:56:25.319218 kernel: loop2: detected capacity change from 0 to 61264 Jan 23 17:56:25.336437 systemd-tmpfiles[1595]: ACLs are not supported, ignoring. Jan 23 17:56:25.336496 systemd-tmpfiles[1595]: ACLs are not supported, ignoring. Jan 23 17:56:25.361513 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 23 17:56:25.459212 kernel: loop3: detected capacity change from 0 to 207008 Jan 23 17:56:25.750742 ldconfig[1539]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 23 17:56:25.756298 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 23 17:56:25.800218 kernel: loop4: detected capacity change from 0 to 100632 Jan 23 17:56:25.820203 kernel: loop5: detected capacity change from 0 to 119840 Jan 23 17:56:25.839233 kernel: loop6: detected capacity change from 0 to 61264 Jan 23 17:56:25.860234 kernel: loop7: detected capacity change from 0 to 207008 Jan 23 17:56:25.885662 (sd-merge)[1604]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'. Jan 23 17:56:25.889241 (sd-merge)[1604]: Merged extensions into '/usr'. 
Jan 23 17:56:25.897854 systemd[1]: Reload requested from client PID 1547 ('systemd-sysext') (unit systemd-sysext.service)... Jan 23 17:56:25.898109 systemd[1]: Reloading... Jan 23 17:56:26.000200 zram_generator::config[1629]: No configuration found. Jan 23 17:56:26.446331 systemd[1]: Reloading finished in 547 ms. Jan 23 17:56:26.490836 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 23 17:56:26.494499 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 23 17:56:26.513680 systemd[1]: Starting ensure-sysext.service... Jan 23 17:56:26.519443 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 23 17:56:26.539462 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 23 17:56:26.567856 systemd[1]: Reload requested from client PID 1681 ('systemctl') (unit ensure-sysext.service)... Jan 23 17:56:26.567883 systemd[1]: Reloading... Jan 23 17:56:26.602784 systemd-tmpfiles[1682]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Jan 23 17:56:26.602866 systemd-tmpfiles[1682]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Jan 23 17:56:26.603576 systemd-tmpfiles[1682]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 23 17:56:26.604139 systemd-tmpfiles[1682]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 23 17:56:26.610600 systemd-tmpfiles[1682]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 23 17:56:26.615578 systemd-tmpfiles[1682]: ACLs are not supported, ignoring. Jan 23 17:56:26.616441 systemd-tmpfiles[1682]: ACLs are not supported, ignoring. Jan 23 17:56:26.637450 systemd-tmpfiles[1682]: Detected autofs mount point /boot during canonicalization of boot. Jan 23 17:56:26.637665 systemd-tmpfiles[1682]: Skipping /boot Jan 23 17:56:26.659733 systemd-udevd[1683]: Using default interface naming scheme 'v255'. Jan 23 17:56:26.683766 systemd-tmpfiles[1682]: Detected autofs mount point /boot during canonicalization of boot. Jan 23 17:56:26.685246 systemd-tmpfiles[1682]: Skipping /boot Jan 23 17:56:26.810200 zram_generator::config[1724]: No configuration found. Jan 23 17:56:27.106839 (udev-worker)[1736]: Network interface NamePolicy= disabled on kernel command line. Jan 23 17:56:27.369636 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jan 23 17:56:27.372346 systemd[1]: Reloading finished in 803 ms. Jan 23 17:56:27.387977 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 23 17:56:27.410371 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 23 17:56:27.443878 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 23 17:56:27.451094 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 23 17:56:27.458590 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 23 17:56:27.470607 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 23 17:56:27.482444 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 23 17:56:27.489396 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... 
Jan 23 17:56:27.508063 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 23 17:56:27.514580 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 23 17:56:27.526722 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 23 17:56:27.536713 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 23 17:56:27.539669 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 23 17:56:27.539939 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 23 17:56:27.552303 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 23 17:56:27.552730 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 23 17:56:27.552986 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 23 17:56:27.563074 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 23 17:56:27.577284 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 23 17:56:27.582519 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 23 17:56:27.585612 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 23 17:56:27.585861 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 23 17:56:27.586239 systemd[1]: Reached target time-set.target - System Time Set. Jan 23 17:56:27.611789 systemd[1]: Finished ensure-sysext.service. Jan 23 17:56:27.674319 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 23 17:56:27.678773 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 23 17:56:27.681023 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 23 17:56:27.682305 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 23 17:56:27.716438 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 23 17:56:27.719474 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 23 17:56:27.724300 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 23 17:56:27.736289 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 23 17:56:27.753362 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 23 17:56:27.757083 systemd[1]: modprobe@loop.service: Deactivated successfully. 
Jan 23 17:56:27.819524 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 23 17:56:27.829436 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 23 17:56:27.833011 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 23 17:56:27.834586 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 23 17:56:27.839643 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 23 17:56:27.909935 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 23 17:56:27.915448 augenrules[1918]: No rules Jan 23 17:56:27.918891 systemd[1]: audit-rules.service: Deactivated successfully. Jan 23 17:56:27.919542 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 23 17:56:28.033583 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 23 17:56:28.121052 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Jan 23 17:56:28.132369 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 23 17:56:28.201155 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 23 17:56:28.251626 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 23 17:56:28.269464 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 23 17:56:28.387347 systemd-networkd[1830]: lo: Link UP Jan 23 17:56:28.387372 systemd-networkd[1830]: lo: Gained carrier Jan 23 17:56:28.390482 systemd-networkd[1830]: Enumeration completed Jan 23 17:56:28.390670 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 23 17:56:28.392685 systemd-networkd[1830]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 23 17:56:28.392694 systemd-networkd[1830]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 23 17:56:28.396776 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Jan 23 17:56:28.403596 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 23 17:56:28.406822 systemd-networkd[1830]: eth0: Link UP Jan 23 17:56:28.407439 systemd-networkd[1830]: eth0: Gained carrier Jan 23 17:56:28.407499 systemd-networkd[1830]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 23 17:56:28.409056 systemd-resolved[1832]: Positive Trust Anchors: Jan 23 17:56:28.411273 systemd-resolved[1832]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 23 17:56:28.411352 systemd-resolved[1832]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 23 17:56:28.420342 systemd-networkd[1830]: eth0: DHCPv4 address 172.31.17.161/20, gateway 172.31.16.1 acquired from 172.31.16.1 Jan 23 17:56:28.435688 systemd-resolved[1832]: Defaulting to hostname 'linux'. Jan 23 17:56:28.438882 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 23 17:56:28.441849 systemd[1]: Reached target network.target - Network. Jan 23 17:56:28.444472 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 23 17:56:28.447391 systemd[1]: Reached target sysinit.target - System Initialization. Jan 23 17:56:28.450133 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 23 17:56:28.453450 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 23 17:56:28.456778 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 23 17:56:28.459590 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 23 17:56:28.462671 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 23 17:56:28.465776 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 23 17:56:28.465844 systemd[1]: Reached target paths.target - Path Units. Jan 23 17:56:28.468130 systemd[1]: Reached target timers.target - Timer Units. Jan 23 17:56:28.471982 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 23 17:56:28.477774 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 23 17:56:28.485652 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Jan 23 17:56:28.488933 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Jan 23 17:56:28.491826 systemd[1]: Reached target ssh-access.target - SSH Access Available. Jan 23 17:56:28.506552 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 23 17:56:28.509738 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Jan 23 17:56:28.514104 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Jan 23 17:56:28.519213 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 23 17:56:28.522857 systemd[1]: Reached target sockets.target - Socket Units. Jan 23 17:56:28.525386 systemd[1]: Reached target basic.target - Basic System. Jan 23 17:56:28.527911 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 23 17:56:28.527971 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. 
Jan 23 17:56:28.530346 systemd[1]: Starting containerd.service - containerd container runtime... Jan 23 17:56:28.536512 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 23 17:56:28.543579 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 23 17:56:28.554458 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 23 17:56:28.563614 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 23 17:56:28.570915 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 23 17:56:28.576840 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 23 17:56:28.582670 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 23 17:56:28.590431 systemd[1]: Started ntpd.service - Network Time Service. Jan 23 17:56:28.610701 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 23 17:56:28.617213 systemd[1]: Starting setup-oem.service - Setup OEM... Jan 23 17:56:28.630980 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 23 17:56:28.644365 jq[1969]: false Jan 23 17:56:28.652339 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 23 17:56:28.666466 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 23 17:56:28.670956 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 23 17:56:28.672459 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 23 17:56:28.680370 systemd[1]: Starting update-engine.service - Update Engine... Jan 23 17:56:28.692094 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 23 17:56:28.704209 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 23 17:56:28.707670 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 23 17:56:28.708238 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 23 17:56:28.712585 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 23 17:56:28.713123 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 23 17:56:28.786893 systemd[1]: motdgen.service: Deactivated successfully. Jan 23 17:56:28.787442 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 23 17:56:28.810447 jq[1982]: true Jan 23 17:56:28.822312 extend-filesystems[1970]: Found /dev/nvme0n1p6 Jan 23 17:56:28.847343 extend-filesystems[1970]: Found /dev/nvme0n1p9 Jan 23 17:56:28.874358 tar[1985]: linux-arm64/LICENSE Jan 23 17:56:28.874358 tar[1985]: linux-arm64/helm Jan 23 17:56:28.871845 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 23 17:56:28.871522 dbus-daemon[1967]: [system] SELinux support is enabled Jan 23 17:56:28.888335 extend-filesystems[1970]: Checking size of /dev/nvme0n1p9 Jan 23 17:56:28.880764 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). 
Jan 23 17:56:28.880839 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 23 17:56:28.884479 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 23 17:56:28.884527 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 23 17:56:28.897290 systemd[1]: Finished setup-oem.service - Setup OEM. Jan 23 17:56:28.916238 ntpd[1972]: ntpd 4.2.8p18@1.4062-o Fri Jan 23 15:31:01 UTC 2026 (1): Starting Jan 23 17:56:28.921081 ntpd[1972]: 23 Jan 17:56:28 ntpd[1972]: ntpd 4.2.8p18@1.4062-o Fri Jan 23 15:31:01 UTC 2026 (1): Starting Jan 23 17:56:28.921081 ntpd[1972]: 23 Jan 17:56:28 ntpd[1972]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jan 23 17:56:28.921081 ntpd[1972]: 23 Jan 17:56:28 ntpd[1972]: ---------------------------------------------------- Jan 23 17:56:28.921081 ntpd[1972]: 23 Jan 17:56:28 ntpd[1972]: ntp-4 is maintained by Network Time Foundation, Jan 23 17:56:28.921081 ntpd[1972]: 23 Jan 17:56:28 ntpd[1972]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jan 23 17:56:28.921081 ntpd[1972]: 23 Jan 17:56:28 ntpd[1972]: corporation. Support and training for ntp-4 are Jan 23 17:56:28.921081 ntpd[1972]: 23 Jan 17:56:28 ntpd[1972]: available at https://www.nwtime.org/support Jan 23 17:56:28.921081 ntpd[1972]: 23 Jan 17:56:28 ntpd[1972]: ---------------------------------------------------- Jan 23 17:56:28.919391 ntpd[1972]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jan 23 17:56:28.919413 ntpd[1972]: ---------------------------------------------------- Jan 23 17:56:28.919431 ntpd[1972]: ntp-4 is maintained by Network Time Foundation, Jan 23 17:56:28.919448 ntpd[1972]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jan 23 17:56:28.919464 ntpd[1972]: corporation. Support and training for ntp-4 are Jan 23 17:56:28.919504 ntpd[1972]: available at https://www.nwtime.org/support Jan 23 17:56:28.919524 ntpd[1972]: ----------------------------------------------------
Jan 23 17:56:28.930931 (ntainerd)[2009]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 23 17:56:28.928914 ntpd[1972]: proto: precision = 0.096 usec (-23) Jan 23 17:56:28.941527 ntpd[1972]: 23 Jan 17:56:28 ntpd[1972]: proto: precision = 0.096 usec (-23) Jan 23 17:56:28.941527 ntpd[1972]: 23 Jan 17:56:28 ntpd[1972]: basedate set to 2026-01-11 Jan 23 17:56:28.941527 ntpd[1972]: 23 Jan 17:56:28 ntpd[1972]: gps base set to 2026-01-11 (week 2401) Jan 23 17:56:28.941527 ntpd[1972]: 23 Jan 17:56:28 ntpd[1972]: Listen and drop on 0 v6wildcard [::]:123 Jan 23 17:56:28.941527 ntpd[1972]: 23 Jan 17:56:28 ntpd[1972]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jan 23 17:56:28.941527 ntpd[1972]: 23 Jan 17:56:28 ntpd[1972]: Listen normally on 2 lo 127.0.0.1:123 Jan 23 17:56:28.941527 ntpd[1972]: 23 Jan 17:56:28 ntpd[1972]: Listen normally on 3 eth0 172.31.17.161:123 Jan 23 17:56:28.941527 ntpd[1972]: 23 Jan 17:56:28 ntpd[1972]: Listen normally on 4 lo [::1]:123 Jan 23 17:56:28.941527 ntpd[1972]: 23 Jan 17:56:28 ntpd[1972]: bind(21) AF_INET6 [fe80::4b1:79ff:fe5f:1f85%2]:123 flags 0x811 failed: Cannot assign requested address Jan 23 17:56:28.941527 ntpd[1972]: 23 Jan 17:56:28 ntpd[1972]: unable to create socket on eth0 (5) for [fe80::4b1:79ff:fe5f:1f85%2]:123 Jan 23 17:56:28.930554 ntpd[1972]: basedate set to 2026-01-11 Jan 23 17:56:28.943825 systemd-coredump[2020]: Process 1972 (ntpd) of user 0 terminated abnormally with signal 11/SEGV, processing... Jan 23 17:56:28.930590 ntpd[1972]: gps base set to 2026-01-11 (week 2401) Jan 23 17:56:28.930810 ntpd[1972]: Listen and drop on 0 v6wildcard [::]:123 Jan 23 17:56:28.930864 ntpd[1972]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jan 23 17:56:28.931276 ntpd[1972]: Listen normally on 2 lo 127.0.0.1:123 Jan 23 17:56:28.931345 ntpd[1972]: Listen normally on 3 eth0 172.31.17.161:123 Jan 23 17:56:28.931396 ntpd[1972]: Listen normally on 4 lo [::1]:123 Jan 23 17:56:28.931449 ntpd[1972]: bind(21) AF_INET6 [fe80::4b1:79ff:fe5f:1f85%2]:123 flags 0x811 failed: Cannot assign requested address Jan 23 17:56:28.951676 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Jan 23 17:56:28.931490 ntpd[1972]: unable to create socket on eth0 (5) for [fe80::4b1:79ff:fe5f:1f85%2]:123 Jan 23 17:56:28.939732 dbus-daemon[1967]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1830 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Jan 23 17:56:28.961585 systemd[1]: Created slice system-systemd\x2dcoredump.slice - Slice /system/systemd-coredump. Jan 23 17:56:28.973734 systemd[1]: Started systemd-coredump@0-2020-0.service - Process Core Dump (PID 2020/UID 0).
Jan 23 17:56:29.004118 jq[2008]: true Jan 23 17:56:29.013894 extend-filesystems[1970]: Resized partition /dev/nvme0n1p9 Jan 23 17:56:29.033103 extend-filesystems[2026]: resize2fs 1.47.3 (8-Jul-2025) Jan 23 17:56:29.050236 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 3587067 blocks Jan 23 17:56:29.065707 coreos-metadata[1966]: Jan 23 17:56:29.065 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Jan 23 17:56:29.078720 coreos-metadata[1966]: Jan 23 17:56:29.077 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Jan 23 17:56:29.085877 coreos-metadata[1966]: Jan 23 17:56:29.084 INFO Fetch successful Jan 23 17:56:29.085877 coreos-metadata[1966]: Jan 23 17:56:29.084 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Jan 23 17:56:29.086101 update_engine[1980]: I20260123 17:56:29.085644 1980 main.cc:92] Flatcar Update Engine starting Jan 23 17:56:29.090846 coreos-metadata[1966]: Jan 23 17:56:29.090 INFO Fetch successful Jan 23 17:56:29.090846 coreos-metadata[1966]: Jan 23 17:56:29.090 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Jan 23 17:56:29.095815 coreos-metadata[1966]: Jan 23 17:56:29.095 INFO Fetch successful Jan 23 17:56:29.095815 coreos-metadata[1966]: Jan 23 17:56:29.095 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Jan 23 17:56:29.099408 coreos-metadata[1966]: Jan 23 17:56:29.099 INFO Fetch successful Jan 23 17:56:29.099408 coreos-metadata[1966]: Jan 23 17:56:29.099 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Jan 23 17:56:29.108210 coreos-metadata[1966]: Jan 23 17:56:29.107 INFO Fetch failed with 404: resource not found Jan 23 17:56:29.108210 coreos-metadata[1966]: Jan 23 17:56:29.107 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Jan 23 17:56:29.110943 systemd[1]: Started update-engine.service - Update Engine. Jan 23 17:56:29.117535 coreos-metadata[1966]: Jan 23 17:56:29.117 INFO Fetch successful Jan 23 17:56:29.117535 coreos-metadata[1966]: Jan 23 17:56:29.117 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Jan 23 17:56:29.118601 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 23 17:56:29.129296 update_engine[1980]: I20260123 17:56:29.126649 1980 update_check_scheduler.cc:74] Next update check in 2m24s Jan 23 17:56:29.131671 coreos-metadata[1966]: Jan 23 17:56:29.131 INFO Fetch successful Jan 23 17:56:29.131671 coreos-metadata[1966]: Jan 23 17:56:29.131 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Jan 23 17:56:29.132848 coreos-metadata[1966]: Jan 23 17:56:29.132 INFO Fetch successful Jan 23 17:56:29.132848 coreos-metadata[1966]: Jan 23 17:56:29.132 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Jan 23 17:56:29.134345 coreos-metadata[1966]: Jan 23 17:56:29.134 INFO Fetch successful Jan 23 17:56:29.134345 coreos-metadata[1966]: Jan 23 17:56:29.134 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Jan 23 17:56:29.145298 coreos-metadata[1966]: Jan 23 17:56:29.141 INFO Fetch successful Jan 23 17:56:29.218131 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 23 17:56:29.221978 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. 
Jan 23 17:56:29.237358 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 3587067 Jan 23 17:56:29.262250 bash[2053]: Updated "/home/core/.ssh/authorized_keys" Jan 23 17:56:29.262499 extend-filesystems[2026]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Jan 23 17:56:29.262499 extend-filesystems[2026]: old_desc_blocks = 1, new_desc_blocks = 2 Jan 23 17:56:29.262499 extend-filesystems[2026]: The filesystem on /dev/nvme0n1p9 is now 3587067 (4k) blocks long. Jan 23 17:56:29.267100 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 23 17:56:29.297744 extend-filesystems[1970]: Resized filesystem in /dev/nvme0n1p9 Jan 23 17:56:29.268724 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 23 17:56:29.274111 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 23 17:56:29.288558 systemd[1]: Starting sshkeys.service... Jan 23 17:56:29.374964 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jan 23 17:56:29.382358 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jan 23 17:56:29.449114 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 23 17:56:29.482602 systemd-logind[1979]: Watching system buttons on /dev/input/event0 (Power Button) Jan 23 17:56:29.482646 systemd-logind[1979]: Watching system buttons on /dev/input/event1 (Sleep Button) Jan 23 17:56:29.490054 systemd-logind[1979]: New seat seat0. Jan 23 17:56:29.500641 systemd[1]: Started systemd-logind.service - User Login Management. Jan 23 17:56:29.629614 systemd-networkd[1830]: eth0: Gained IPv6LL Jan 23 17:56:29.643947 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 23 17:56:29.649298 systemd[1]: Reached target network-online.target - Network is Online. Jan 23 17:56:29.657296 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. Jan 23 17:56:29.664638 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 17:56:29.678188 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... 
Jan 23 17:56:29.795643 locksmithd[2029]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 23 17:56:29.897380 coreos-metadata[2061]: Jan 23 17:56:29.893 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Jan 23 17:56:29.903219 coreos-metadata[2061]: Jan 23 17:56:29.899 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Jan 23 17:56:29.903371 containerd[2009]: time="2026-01-23T17:56:29Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Jan 23 17:56:29.910600 coreos-metadata[2061]: Jan 23 17:56:29.905 INFO Fetch successful Jan 23 17:56:29.910600 coreos-metadata[2061]: Jan 23 17:56:29.905 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Jan 23 17:56:29.911376 containerd[2009]: time="2026-01-23T17:56:29.908158489Z" level=info msg="starting containerd" revision=4ac6c20c7bbf8177f29e46bbdc658fec02ffb8ad version=v2.0.7 Jan 23 17:56:29.915252 coreos-metadata[2061]: Jan 23 17:56:29.914 INFO Fetch successful Jan 23 17:56:29.918333 unknown[2061]: wrote ssh authorized keys file for user: core Jan 23 17:56:29.959109 containerd[2009]: time="2026-01-23T17:56:29.955663693Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="16.092µs" Jan 23 17:56:29.959109 containerd[2009]: time="2026-01-23T17:56:29.955725745Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Jan 23 17:56:29.959109 containerd[2009]: time="2026-01-23T17:56:29.955766245Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Jan 23 17:56:29.959109 containerd[2009]: time="2026-01-23T17:56:29.956087473Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Jan 23 17:56:29.959109 containerd[2009]: time="2026-01-23T17:56:29.956141833Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Jan 23 17:56:29.959109 containerd[2009]: time="2026-01-23T17:56:29.956323309Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jan 23 17:56:29.959109 containerd[2009]: time="2026-01-23T17:56:29.956482729Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jan 23 17:56:29.959109 containerd[2009]: time="2026-01-23T17:56:29.956511325Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jan 23 17:56:29.959109 containerd[2009]: time="2026-01-23T17:56:29.956939221Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jan 23 17:56:29.959109 containerd[2009]: time="2026-01-23T17:56:29.956983069Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jan 23 17:56:29.959109 containerd[2009]: time="2026-01-23T17:56:29.957015217Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper 
type=io.containerd.snapshotter.v1 Jan 23 17:56:29.959109 containerd[2009]: time="2026-01-23T17:56:29.957037789Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Jan 23 17:56:29.959726 containerd[2009]: time="2026-01-23T17:56:29.958476205Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Jan 23 17:56:29.959726 containerd[2009]: time="2026-01-23T17:56:29.958926985Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jan 23 17:56:29.959726 containerd[2009]: time="2026-01-23T17:56:29.958996789Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jan 23 17:56:29.959726 containerd[2009]: time="2026-01-23T17:56:29.959022757Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Jan 23 17:56:29.959726 containerd[2009]: time="2026-01-23T17:56:29.959123509Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Jan 23 17:56:29.959936 containerd[2009]: time="2026-01-23T17:56:29.959836549Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Jan 23 17:56:29.961308 containerd[2009]: time="2026-01-23T17:56:29.960144421Z" level=info msg="metadata content store policy set" policy=shared Jan 23 17:56:29.973510 containerd[2009]: time="2026-01-23T17:56:29.968531017Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Jan 23 17:56:29.973510 containerd[2009]: time="2026-01-23T17:56:29.968662381Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Jan 23 17:56:29.973510 containerd[2009]: time="2026-01-23T17:56:29.968789785Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Jan 23 17:56:29.973510 containerd[2009]: time="2026-01-23T17:56:29.968829493Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Jan 23 17:56:29.973510 containerd[2009]: time="2026-01-23T17:56:29.968870893Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Jan 23 17:56:29.973510 containerd[2009]: time="2026-01-23T17:56:29.968899561Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Jan 23 17:56:29.973510 containerd[2009]: time="2026-01-23T17:56:29.968929501Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Jan 23 17:56:29.973510 containerd[2009]: time="2026-01-23T17:56:29.968959933Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Jan 23 17:56:29.973510 containerd[2009]: time="2026-01-23T17:56:29.968992993Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Jan 23 17:56:29.973510 containerd[2009]: time="2026-01-23T17:56:29.969020701Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Jan 23 17:56:29.973510 containerd[2009]: time="2026-01-23T17:56:29.969046393Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 
Jan 23 17:56:29.973510 containerd[2009]: time="2026-01-23T17:56:29.969077557Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Jan 23 17:56:29.973510 containerd[2009]: time="2026-01-23T17:56:29.969431485Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Jan 23 17:56:29.973510 containerd[2009]: time="2026-01-23T17:56:29.969481561Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Jan 23 17:56:29.974202 containerd[2009]: time="2026-01-23T17:56:29.969519373Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Jan 23 17:56:29.974202 containerd[2009]: time="2026-01-23T17:56:29.969548101Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Jan 23 17:56:29.974202 containerd[2009]: time="2026-01-23T17:56:29.969577609Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Jan 23 17:56:29.974202 containerd[2009]: time="2026-01-23T17:56:29.969604777Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Jan 23 17:56:29.974202 containerd[2009]: time="2026-01-23T17:56:29.969631321Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Jan 23 17:56:29.974202 containerd[2009]: time="2026-01-23T17:56:29.969657457Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Jan 23 17:56:29.974202 containerd[2009]: time="2026-01-23T17:56:29.969696373Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Jan 23 17:56:29.974202 containerd[2009]: time="2026-01-23T17:56:29.969723697Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Jan 23 17:56:29.974202 containerd[2009]: time="2026-01-23T17:56:29.969749713Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Jan 23 17:56:29.974202 containerd[2009]: time="2026-01-23T17:56:29.970145677Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Jan 23 17:56:29.974202 containerd[2009]: time="2026-01-23T17:56:29.970210573Z" level=info msg="Start snapshots syncer" Jan 23 17:56:29.974202 containerd[2009]: time="2026-01-23T17:56:29.970285237Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Jan 23 17:56:29.974706 containerd[2009]: time="2026-01-23T17:56:29.970991485Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Jan 23 17:56:29.974706 containerd[2009]: time="2026-01-23T17:56:29.971114713Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Jan 23 17:56:29.974923 containerd[2009]: time="2026-01-23T17:56:29.971292997Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Jan 23 17:56:29.974923 containerd[2009]: time="2026-01-23T17:56:29.971772985Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Jan 23 17:56:29.974923 containerd[2009]: time="2026-01-23T17:56:29.971825845Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Jan 23 17:56:29.974923 containerd[2009]: time="2026-01-23T17:56:29.971853001Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Jan 23 17:56:29.974923 containerd[2009]: time="2026-01-23T17:56:29.971883709Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Jan 23 17:56:29.974923 containerd[2009]: time="2026-01-23T17:56:29.971913793Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Jan 23 17:56:29.974923 containerd[2009]: time="2026-01-23T17:56:29.971941225Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Jan 23 17:56:29.974923 containerd[2009]: time="2026-01-23T17:56:29.971967469Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Jan 23 17:56:29.974923 containerd[2009]: time="2026-01-23T17:56:29.972039361Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Jan 23 17:56:29.974923 containerd[2009]: 
time="2026-01-23T17:56:29.972081553Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Jan 23 17:56:29.974923 containerd[2009]: time="2026-01-23T17:56:29.972117517Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Jan 23 17:56:29.974923 containerd[2009]: time="2026-01-23T17:56:29.972222337Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 23 17:56:29.974923 containerd[2009]: time="2026-01-23T17:56:29.972335413Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 23 17:56:29.974923 containerd[2009]: time="2026-01-23T17:56:29.972364357Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 23 17:56:29.990929 containerd[2009]: time="2026-01-23T17:56:29.972390325Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 23 17:56:29.990929 containerd[2009]: time="2026-01-23T17:56:29.972413053Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Jan 23 17:56:29.990929 containerd[2009]: time="2026-01-23T17:56:29.972438265Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Jan 23 17:56:29.990929 containerd[2009]: time="2026-01-23T17:56:29.972463921Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Jan 23 17:56:29.990929 containerd[2009]: time="2026-01-23T17:56:29.972641581Z" level=info msg="runtime interface created" Jan 23 17:56:29.990929 containerd[2009]: time="2026-01-23T17:56:29.972659005Z" level=info msg="created NRI interface" Jan 23 17:56:29.990929 containerd[2009]: time="2026-01-23T17:56:29.972686017Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Jan 23 17:56:29.990929 containerd[2009]: time="2026-01-23T17:56:29.972716497Z" level=info msg="Connect containerd service" Jan 23 17:56:29.990929 containerd[2009]: time="2026-01-23T17:56:29.972760957Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 23 17:56:29.990929 containerd[2009]: time="2026-01-23T17:56:29.974340385Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 23 17:56:30.029959 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 23 17:56:30.107454 systemd-coredump[2022]: Process 1972 (ntpd) of user 0 dumped core. Module libnss_usrfiles.so.2 without build-id. Module libgcc_s.so.1 without build-id. Module libc.so.6 without build-id. Module libcrypto.so.3 without build-id. Module libm.so.6 without build-id. Module libcap.so.2 without build-id. Module ntpd without build-id. 
Stack trace of thread 1972: #0 0x0000aaaabfcd0b5c n/a (ntpd + 0x60b5c) #1 0x0000aaaabfc7fe60 n/a (ntpd + 0xfe60) #2 0x0000aaaabfc80240 n/a (ntpd + 0x10240) #3 0x0000aaaabfc7be14 n/a (ntpd + 0xbe14) #4 0x0000aaaabfc7d3ec n/a (ntpd + 0xd3ec) #5 0x0000aaaabfc85a38 n/a (ntpd + 0x15a38) #6 0x0000aaaabfc7738c n/a (ntpd + 0x738c) #7 0x0000ffff838a2034 n/a (libc.so.6 + 0x22034) #8 0x0000ffff838a2118 __libc_start_main (libc.so.6 + 0x22118) #9 0x0000aaaabfc773f0 n/a (ntpd + 0x73f0) ELF object binary architecture: AARCH64 Jan 23 17:56:30.146712 systemd[1]: ntpd.service: Main process exited, code=dumped, status=11/SEGV Jan 23 17:56:30.147025 systemd[1]: ntpd.service: Failed with result 'core-dump'. Jan 23 17:56:30.160258 systemd[1]: systemd-coredump@0-2020-0.service: Deactivated successfully. Jan 23 17:56:30.198338 update-ssh-keys[2156]: Updated "/home/core/.ssh/authorized_keys" Jan 23 17:56:30.198406 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jan 23 17:56:30.214788 systemd[1]: Started containerd.service - containerd container runtime. Jan 23 17:56:30.218445 containerd[2009]: time="2026-01-23T17:56:30.213233543Z" level=info msg="Start subscribing containerd event" Jan 23 17:56:30.218445 containerd[2009]: time="2026-01-23T17:56:30.213333983Z" level=info msg="Start recovering state" Jan 23 17:56:30.218445 containerd[2009]: time="2026-01-23T17:56:30.213486851Z" level=info msg="Start event monitor" Jan 23 17:56:30.218445 containerd[2009]: time="2026-01-23T17:56:30.213517523Z" level=info msg="Start cni network conf syncer for default" Jan 23 17:56:30.218445 containerd[2009]: time="2026-01-23T17:56:30.213538691Z" level=info msg="Start streaming server" Jan 23 17:56:30.218445 containerd[2009]: time="2026-01-23T17:56:30.213560387Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Jan 23 17:56:30.218445 containerd[2009]: time="2026-01-23T17:56:30.213578423Z" level=info msg="runtime interface starting up..." Jan 23 17:56:30.218445 containerd[2009]: time="2026-01-23T17:56:30.213593039Z" level=info msg="starting plugins..." Jan 23 17:56:30.218445 containerd[2009]: time="2026-01-23T17:56:30.213622523Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Jan 23 17:56:30.218445 containerd[2009]: time="2026-01-23T17:56:30.214376387Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 23 17:56:30.218445 containerd[2009]: time="2026-01-23T17:56:30.214498679Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 23 17:56:30.218445 containerd[2009]: time="2026-01-23T17:56:30.214615415Z" level=info msg="containerd successfully booted in 0.318630s" Jan 23 17:56:30.220154 systemd[1]: Finished sshkeys.service. Jan 23 17:56:30.232297 amazon-ssm-agent[2112]: Initializing new seelog logger Jan 23 17:56:30.232297 amazon-ssm-agent[2112]: New Seelog Logger Creation Complete Jan 23 17:56:30.232297 amazon-ssm-agent[2112]: 2026/01/23 17:56:30 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 23 17:56:30.232297 amazon-ssm-agent[2112]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 23 17:56:30.232297 amazon-ssm-agent[2112]: 2026/01/23 17:56:30 processing appconfig overrides Jan 23 17:56:30.238703 amazon-ssm-agent[2112]: 2026/01/23 17:56:30 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 23 17:56:30.238703 amazon-ssm-agent[2112]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. 
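The ntpd segfault above was caught by systemd-coredump, which is why the journal carries a stack trace (symbol-poor, since the listed modules lack build-ids) and a systemd-coredump@ service instance. The stored dump can be inspected after the fact with coredumpctl; a sketch, assuming the crash is still in the coredump store:

    import subprocess

    # List captured ntpd crashes, then show metadata and the saved stack
    # trace for the most recent one. Both commands exit non-zero if no
    # matching dump exists, so check=True doubles as a presence check.
    subprocess.run(["coredumpctl", "list", "ntpd"], check=True)
    subprocess.run(["coredumpctl", "info", "ntpd"], check=True)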
Jan 23 17:56:30.238703 amazon-ssm-agent[2112]: 2026/01/23 17:56:30 processing appconfig overrides Jan 23 17:56:30.238703 amazon-ssm-agent[2112]: 2026/01/23 17:56:30 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 23 17:56:30.238703 amazon-ssm-agent[2112]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 23 17:56:30.238703 amazon-ssm-agent[2112]: 2026/01/23 17:56:30 processing appconfig overrides Jan 23 17:56:30.244304 amazon-ssm-agent[2112]: 2026-01-23 17:56:30.2362 INFO Proxy environment variables: Jan 23 17:56:30.249066 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Jan 23 17:56:30.253861 systemd[1]: ntpd.service: Scheduled restart job, restart counter is at 1. Jan 23 17:56:30.260679 systemd[1]: Started ntpd.service - Network Time Service. Jan 23 17:56:30.263861 amazon-ssm-agent[2112]: 2026/01/23 17:56:30 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 23 17:56:30.263861 amazon-ssm-agent[2112]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 23 17:56:30.263861 amazon-ssm-agent[2112]: 2026/01/23 17:56:30 processing appconfig overrides Jan 23 17:56:30.296898 dbus-daemon[1967]: [system] Successfully activated service 'org.freedesktop.hostname1' Jan 23 17:56:30.343835 amazon-ssm-agent[2112]: 2026-01-23 17:56:30.2362 INFO https_proxy: Jan 23 17:56:30.349392 dbus-daemon[1967]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.7' (uid=0 pid=2021 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Jan 23 17:56:30.375014 systemd[1]: Starting polkit.service - Authorization Manager... Jan 23 17:56:30.445311 amazon-ssm-agent[2112]: 2026-01-23 17:56:30.2363 INFO http_proxy: Jan 23 17:56:30.479867 ntpd[2196]: ntpd 4.2.8p18@1.4062-o Fri Jan 23 15:31:01 UTC 2026 (1): Starting Jan 23 17:56:30.479985 ntpd[2196]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jan 23 17:56:30.480004 ntpd[2196]: ---------------------------------------------------- Jan 23 17:56:30.480022 ntpd[2196]: ntp-4 is maintained by Network Time Foundation, Jan 23 17:56:30.480038 ntpd[2196]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jan 23 17:56:30.480055 ntpd[2196]: corporation. Support and training for ntp-4 are Jan 23 17:56:30.480071 ntpd[2196]: available at https://www.nwtime.org/support Jan 23 17:56:30.480088 ntpd[2196]: ----------------------------------------------------
Jan 23 17:56:30.485766 ntpd[2196]: proto: precision = 0.096 usec (-23) Jan 23 17:56:30.486135 ntpd[2196]: basedate set to 2026-01-11 Jan 23 17:56:30.486192 ntpd[2196]: gps base set to 2026-01-11 (week 2401) Jan 23 17:56:30.486789 ntpd[2196]: Listen and drop on 0 v6wildcard [::]:123 Jan 23 17:56:30.486967 ntpd[2196]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jan 23 17:56:30.489538 ntpd[2196]: Listen normally on 2 lo 127.0.0.1:123 Jan 23 17:56:30.489585 ntpd[2196]: Listen normally on 3 eth0 172.31.17.161:123 Jan 23 17:56:30.489635 ntpd[2196]: Listen normally on 4 lo [::1]:123 Jan 23 17:56:30.489703 ntpd[2196]: Listen normally on 5 eth0 [fe80::4b1:79ff:fe5f:1f85%2]:123 Jan 23 17:56:30.489748 ntpd[2196]: Listening on routing socket on fd #22 for interface updates Jan 23 17:56:30.517771 ntpd[2196]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 23 17:56:30.517841 ntpd[2196]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 23 17:56:30.545289 amazon-ssm-agent[2112]: 2026-01-23 17:56:30.2363 INFO no_proxy: Jan 23 17:56:30.646791 amazon-ssm-agent[2112]: 2026-01-23 17:56:30.2365 INFO Checking if agent identity type OnPrem can be assumed Jan 23 17:56:30.746198 amazon-ssm-agent[2112]: 2026-01-23 17:56:30.2366 INFO Checking if agent identity type EC2 can be assumed Jan 23 17:56:30.831057 amazon-ssm-agent[2112]: 2026/01/23 17:56:30 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 23 17:56:30.831274 amazon-ssm-agent[2112]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
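The TIME_ERROR entries above report that the kernel clock is still unsynchronized; they persist until ntpd selects a server and disciplines the clock. Peer and sync state can be polled with the ntpq utility shipped alongside ntpd; a sketch, assuming ntpq is installed on the host:

    import subprocess

    # "-p" prints the peer billboard; "-c rv" prints system variables,
    # whose leap/sync fields clear once the clock is synchronized.
    subprocess.run(["ntpq", "-p"], check=True)
    subprocess.run(["ntpq", "-c", "rv"], check=True)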
Jan 23 17:56:30.832930 polkitd[2208]: Started polkitd version 126 Jan 23 17:56:30.835069 amazon-ssm-agent[2112]: 2026/01/23 17:56:30 processing appconfig overrides Jan 23 17:56:30.844257 amazon-ssm-agent[2112]: 2026-01-23 17:56:30.5433 INFO Agent will take identity from EC2 Jan 23 17:56:30.844729 sshd_keygen[2012]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 23 17:56:30.859201 polkitd[2208]: Loading rules from directory /etc/polkit-1/rules.d Jan 23 17:56:30.859882 polkitd[2208]: Loading rules from directory /run/polkit-1/rules.d Jan 23 17:56:30.859989 polkitd[2208]: Error opening rules directory: Error opening directory “/run/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Jan 23 17:56:30.864784 polkitd[2208]: Loading rules from directory /usr/local/share/polkit-1/rules.d Jan 23 17:56:30.864891 polkitd[2208]: Error opening rules directory: Error opening directory “/usr/local/share/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Jan 23 17:56:30.864993 polkitd[2208]: Loading rules from directory /usr/share/polkit-1/rules.d Jan 23 17:56:30.868242 polkitd[2208]: Finished loading, compiling and executing 2 rules Jan 23 17:56:30.868740 systemd[1]: Started polkit.service - Authorization Manager. Jan 23 17:56:30.878298 dbus-daemon[1967]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Jan 23 17:56:30.881936 polkitd[2208]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Jan 23 17:56:30.891407 amazon-ssm-agent[2112]: 2026-01-23 17:56:30.5460 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.3.0.0 Jan 23 17:56:30.891596 amazon-ssm-agent[2112]: 2026-01-23 17:56:30.5462 INFO [amazon-ssm-agent] OS: linux, Arch: arm64 Jan 23 17:56:30.892405 amazon-ssm-agent[2112]: 2026-01-23 17:56:30.5462 INFO [amazon-ssm-agent] Starting Core Agent Jan 23 17:56:30.893156 amazon-ssm-agent[2112]: 2026-01-23 17:56:30.5462 INFO [amazon-ssm-agent] Registrar detected. Attempting registration Jan 23 17:56:30.893156 amazon-ssm-agent[2112]: 2026-01-23 17:56:30.5462 INFO [Registrar] Starting registrar module Jan 23 17:56:30.893156 amazon-ssm-agent[2112]: 2026-01-23 17:56:30.5486 INFO [EC2Identity] Checking disk for registration info Jan 23 17:56:30.893156 amazon-ssm-agent[2112]: 2026-01-23 17:56:30.5487 INFO [EC2Identity] No registration info found for ec2 instance, attempting registration Jan 23 17:56:30.893156 amazon-ssm-agent[2112]: 2026-01-23 17:56:30.5487 INFO [EC2Identity] Generating registration keypair Jan 23 17:56:30.893156 amazon-ssm-agent[2112]: 2026-01-23 17:56:30.7764 INFO [EC2Identity] Checking write access before registering Jan 23 17:56:30.893156 amazon-ssm-agent[2112]: 2026-01-23 17:56:30.7782 INFO [EC2Identity] Registering EC2 instance with Systems Manager Jan 23 17:56:30.894658 amazon-ssm-agent[2112]: 2026-01-23 17:56:30.8294 INFO [EC2Identity] EC2 registration was successful. Jan 23 17:56:30.895282 amazon-ssm-agent[2112]: 2026-01-23 17:56:30.8294 INFO [amazon-ssm-agent] Registration attempted. Resuming core agent startup. 
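The two polkitd "Error opening rules directory" entries above are benign: polkitd scans a fixed list of rules directories, warns for any that are absent, and loads whatever it finds (two rules here). A quick check of which of the scanned directories exist, using the paths from the log:

    import os

    RULES_DIRS = [
        "/etc/polkit-1/rules.d",
        "/run/polkit-1/rules.d",
        "/usr/local/share/polkit-1/rules.d",
        "/usr/share/polkit-1/rules.d",
    ]
    for d in RULES_DIRS:
        print(f"{d}: {'present' if os.path.isdir(d) else 'missing'}")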
Jan 23 17:56:30.895282 amazon-ssm-agent[2112]: 2026-01-23 17:56:30.8296 INFO [CredentialRefresher] credentialRefresher has started Jan 23 17:56:30.895282 amazon-ssm-agent[2112]: 2026-01-23 17:56:30.8296 INFO [CredentialRefresher] Starting credentials refresher loop Jan 23 17:56:30.895282 amazon-ssm-agent[2112]: 2026-01-23 17:56:30.8886 INFO EC2RoleProvider Successfully connected with instance profile role credentials Jan 23 17:56:30.895282 amazon-ssm-agent[2112]: 2026-01-23 17:56:30.8912 INFO [CredentialRefresher] Credentials ready Jan 23 17:56:30.922910 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 23 17:56:30.936715 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 23 17:56:30.942260 amazon-ssm-agent[2112]: 2026-01-23 17:56:30.8956 INFO [CredentialRefresher] Next credential rotation will be in 29.9998841395 minutes Jan 23 17:56:30.942907 systemd[1]: Started sshd@0-172.31.17.161:22-68.220.241.50:49474.service - OpenSSH per-connection server daemon (68.220.241.50:49474). Jan 23 17:56:30.962427 systemd-resolved[1832]: System hostname changed to 'ip-172-31-17-161'. Jan 23 17:56:30.962428 systemd-hostnamed[2021]: Hostname set to <ip-172-31-17-161> (transient) Jan 23 17:56:31.001979 systemd[1]: issuegen.service: Deactivated successfully. Jan 23 17:56:31.005309 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 23 17:56:31.015803 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 23 17:56:31.059966 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 23 17:56:31.071803 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 23 17:56:31.079430 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 23 17:56:31.082722 systemd[1]: Reached target getty.target - Login Prompts. Jan 23 17:56:31.229484 tar[1985]: linux-arm64/README.md Jan 23 17:56:31.259817 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 23 17:56:31.545786 sshd[2230]: Accepted publickey for core from 68.220.241.50 port 49474 ssh2: RSA SHA256:bT2W1VfOscVmSCRasYr+KxB4wnT28qHFQXmybiJGx88 Jan 23 17:56:31.550426 sshd-session[2230]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 17:56:31.566741 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 23 17:56:31.573278 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 23 17:56:31.605704 systemd-logind[1979]: New session 1 of user core. Jan 23 17:56:31.621667 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 23 17:56:31.632139 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 23 17:56:31.660356 (systemd)[2246]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 23 17:56:31.665998 systemd-logind[1979]: New session c1 of user core. Jan 23 17:56:31.929348 amazon-ssm-agent[2112]: 2026-01-23 17:56:31.9291 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Jan 23 17:56:32.002541 systemd[2246]: Queued start job for default target default.target. Jan 23 17:56:32.010356 systemd[2246]: Created slice app.slice - User Application Slice. Jan 23 17:56:32.010432 systemd[2246]: Reached target paths.target - Paths. Jan 23 17:56:32.010527 systemd[2246]: Reached target timers.target - Timers. Jan 23 17:56:32.015392 systemd[2246]: Starting dbus.socket - D-Bus User Message Bus Socket...
Jan 23 17:56:32.030707 amazon-ssm-agent[2112]: 2026-01-23 17:56:31.9342 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2254) started Jan 23 17:56:32.041829 systemd[2246]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 23 17:56:32.042099 systemd[2246]: Reached target sockets.target - Sockets. Jan 23 17:56:32.042252 systemd[2246]: Reached target basic.target - Basic System. Jan 23 17:56:32.042341 systemd[2246]: Reached target default.target - Main User Target. Jan 23 17:56:32.042402 systemd[2246]: Startup finished in 354ms. Jan 23 17:56:32.044241 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 23 17:56:32.057602 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 23 17:56:32.131296 amazon-ssm-agent[2112]: 2026-01-23 17:56:31.9343 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Jan 23 17:56:32.433078 systemd[1]: Started sshd@1-172.31.17.161:22-68.220.241.50:49482.service - OpenSSH per-connection server daemon (68.220.241.50:49482). Jan 23 17:56:32.958088 sshd[2270]: Accepted publickey for core from 68.220.241.50 port 49482 ssh2: RSA SHA256:bT2W1VfOscVmSCRasYr+KxB4wnT28qHFQXmybiJGx88 Jan 23 17:56:32.960116 sshd-session[2270]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 17:56:32.971776 systemd-logind[1979]: New session 2 of user core. Jan 23 17:56:32.981501 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 23 17:56:33.313805 sshd[2273]: Connection closed by 68.220.241.50 port 49482 Jan 23 17:56:33.315073 sshd-session[2270]: pam_unix(sshd:session): session closed for user core Jan 23 17:56:33.323272 systemd[1]: sshd@1-172.31.17.161:22-68.220.241.50:49482.service: Deactivated successfully. Jan 23 17:56:33.327198 systemd[1]: session-2.scope: Deactivated successfully. Jan 23 17:56:33.329595 systemd-logind[1979]: Session 2 logged out. Waiting for processes to exit. Jan 23 17:56:33.333026 systemd-logind[1979]: Removed session 2. Jan 23 17:56:33.410865 systemd[1]: Started sshd@2-172.31.17.161:22-68.220.241.50:57834.service - OpenSSH per-connection server daemon (68.220.241.50:57834). Jan 23 17:56:33.937792 sshd[2279]: Accepted publickey for core from 68.220.241.50 port 57834 ssh2: RSA SHA256:bT2W1VfOscVmSCRasYr+KxB4wnT28qHFQXmybiJGx88 Jan 23 17:56:33.943483 sshd-session[2279]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 17:56:33.957280 systemd-logind[1979]: New session 3 of user core. Jan 23 17:56:33.962518 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 23 17:56:34.299786 sshd[2282]: Connection closed by 68.220.241.50 port 57834 Jan 23 17:56:34.300696 sshd-session[2279]: pam_unix(sshd:session): session closed for user core Jan 23 17:56:34.313145 systemd[1]: sshd@2-172.31.17.161:22-68.220.241.50:57834.service: Deactivated successfully. Jan 23 17:56:34.318459 systemd[1]: session-3.scope: Deactivated successfully. Jan 23 17:56:34.322923 systemd-logind[1979]: Session 3 logged out. Waiting for processes to exit. Jan 23 17:56:34.326929 systemd-logind[1979]: Removed session 3. Jan 23 17:56:34.733679 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 17:56:34.737754 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 23 17:56:34.744360 systemd[1]: Startup finished in 3.746s (kernel) + 9.101s (initrd) + 12.009s (userspace) = 24.858s. 
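The "Startup finished" summary above sums the kernel, initrd, and userspace phases from systemd's monotonic timestamps; the printed total can differ from the naive sum of the three figures by a millisecond or two because each figure is rounded independently for display. The same numbers, plus a per-unit breakdown, are available after boot; a sketch:

    import subprocess

    # "time" reprints the boot-phase summary; "blame" lists units by
    # time spent initializing, slowest first.
    subprocess.run(["systemd-analyze", "time"], check=True)
    subprocess.run(["systemd-analyze", "blame"], check=True)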
Jan 23 17:56:34.751747 (kubelet)[2292]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 23 17:56:36.520307 kubelet[2292]: E0123 17:56:36.520203 2292 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 23 17:56:36.524700 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 23 17:56:36.525425 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 23 17:56:36.526314 systemd[1]: kubelet.service: Consumed 1.509s CPU time, 258.5M memory peak. Jan 23 17:56:37.207905 systemd-resolved[1832]: Clock change detected. Flushing caches. Jan 23 17:56:44.117381 systemd[1]: Started sshd@3-172.31.17.161:22-68.220.241.50:60546.service - OpenSSH per-connection server daemon (68.220.241.50:60546). Jan 23 17:56:44.630656 sshd[2304]: Accepted publickey for core from 68.220.241.50 port 60546 ssh2: RSA SHA256:bT2W1VfOscVmSCRasYr+KxB4wnT28qHFQXmybiJGx88 Jan 23 17:56:44.632554 sshd-session[2304]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 17:56:44.639909 systemd-logind[1979]: New session 4 of user core. Jan 23 17:56:44.651886 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 23 17:56:44.982048 sshd[2307]: Connection closed by 68.220.241.50 port 60546 Jan 23 17:56:44.982962 sshd-session[2304]: pam_unix(sshd:session): session closed for user core Jan 23 17:56:44.991057 systemd[1]: sshd@3-172.31.17.161:22-68.220.241.50:60546.service: Deactivated successfully. Jan 23 17:56:44.996137 systemd[1]: session-4.scope: Deactivated successfully. Jan 23 17:56:44.997868 systemd-logind[1979]: Session 4 logged out. Waiting for processes to exit. Jan 23 17:56:45.000576 systemd-logind[1979]: Removed session 4. Jan 23 17:56:45.076502 systemd[1]: Started sshd@4-172.31.17.161:22-68.220.241.50:60558.service - OpenSSH per-connection server daemon (68.220.241.50:60558). Jan 23 17:56:45.603777 sshd[2313]: Accepted publickey for core from 68.220.241.50 port 60558 ssh2: RSA SHA256:bT2W1VfOscVmSCRasYr+KxB4wnT28qHFQXmybiJGx88 Jan 23 17:56:45.606110 sshd-session[2313]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 17:56:45.613623 systemd-logind[1979]: New session 5 of user core. Jan 23 17:56:45.629833 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 23 17:56:45.949349 sshd[2316]: Connection closed by 68.220.241.50 port 60558 Jan 23 17:56:45.950369 sshd-session[2313]: pam_unix(sshd:session): session closed for user core Jan 23 17:56:45.958194 systemd-logind[1979]: Session 5 logged out. Waiting for processes to exit. Jan 23 17:56:45.958205 systemd[1]: sshd@4-172.31.17.161:22-68.220.241.50:60558.service: Deactivated successfully. Jan 23 17:56:45.963594 systemd[1]: session-5.scope: Deactivated successfully. Jan 23 17:56:45.966424 systemd-logind[1979]: Removed session 5. Jan 23 17:56:46.038578 systemd[1]: Started sshd@5-172.31.17.161:22-68.220.241.50:60570.service - OpenSSH per-connection server daemon (68.220.241.50:60570). Jan 23 17:56:46.304023 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 23 17:56:46.307793 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
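The kubelet failure above is the expected first-boot state on a node that has not yet been joined to a cluster: /var/lib/kubelet/config.yaml is written by kubeadm during init/join (the unit's KUBELET_KUBEADM_ARGS drop-in is already referenced above), so until then the service exits and systemd keeps rescheduling restarts, as the "Scheduled restart job" counter lines show. A trivial preflight check mirroring the reported error:

    import os
    import sys

    CFG = "/var/lib/kubelet/config.yaml"
    if not os.path.exists(CFG):
        # Same condition the kubelet reports; kubeadm init/join creates
        # this file when the node is bootstrapped into a cluster.
        sys.exit(f"kubelet not configured yet: {CFG} is missing")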
Jan 23 17:56:46.555472 sshd[2322]: Accepted publickey for core from 68.220.241.50 port 60570 ssh2: RSA SHA256:bT2W1VfOscVmSCRasYr+KxB4wnT28qHFQXmybiJGx88 Jan 23 17:56:46.558922 sshd-session[2322]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 17:56:46.569294 systemd-logind[1979]: New session 6 of user core. Jan 23 17:56:46.576906 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 23 17:56:46.649969 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 17:56:46.663079 (kubelet)[2334]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 23 17:56:46.730265 kubelet[2334]: E0123 17:56:46.730171 2334 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 23 17:56:46.737188 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 23 17:56:46.737739 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 23 17:56:46.738598 systemd[1]: kubelet.service: Consumed 308ms CPU time, 104.9M memory peak. Jan 23 17:56:46.907988 sshd[2328]: Connection closed by 68.220.241.50 port 60570 Jan 23 17:56:46.909896 sshd-session[2322]: pam_unix(sshd:session): session closed for user core Jan 23 17:56:46.916451 systemd[1]: sshd@5-172.31.17.161:22-68.220.241.50:60570.service: Deactivated successfully. Jan 23 17:56:46.921123 systemd[1]: session-6.scope: Deactivated successfully. Jan 23 17:56:46.924729 systemd-logind[1979]: Session 6 logged out. Waiting for processes to exit. Jan 23 17:56:46.927490 systemd-logind[1979]: Removed session 6. Jan 23 17:56:47.025506 systemd[1]: Started sshd@6-172.31.17.161:22-68.220.241.50:60572.service - OpenSSH per-connection server daemon (68.220.241.50:60572). Jan 23 17:56:47.594868 sshd[2346]: Accepted publickey for core from 68.220.241.50 port 60572 ssh2: RSA SHA256:bT2W1VfOscVmSCRasYr+KxB4wnT28qHFQXmybiJGx88 Jan 23 17:56:47.597039 sshd-session[2346]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 17:56:47.604155 systemd-logind[1979]: New session 7 of user core. Jan 23 17:56:47.614053 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 23 17:56:47.909256 sudo[2350]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 23 17:56:47.910155 sudo[2350]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 23 17:56:47.924898 sudo[2350]: pam_unix(sudo:session): session closed for user root Jan 23 17:56:48.009547 sshd[2349]: Connection closed by 68.220.241.50 port 60572 Jan 23 17:56:48.010580 sshd-session[2346]: pam_unix(sshd:session): session closed for user core Jan 23 17:56:48.018526 systemd[1]: sshd@6-172.31.17.161:22-68.220.241.50:60572.service: Deactivated successfully. Jan 23 17:56:48.021412 systemd[1]: session-7.scope: Deactivated successfully. Jan 23 17:56:48.025019 systemd-logind[1979]: Session 7 logged out. Waiting for processes to exit. Jan 23 17:56:48.027875 systemd-logind[1979]: Removed session 7. Jan 23 17:56:48.098720 systemd[1]: Started sshd@7-172.31.17.161:22-68.220.241.50:60582.service - OpenSSH per-connection server daemon (68.220.241.50:60582). 
Jan 23 17:56:48.627939 sshd[2356]: Accepted publickey for core from 68.220.241.50 port 60582 ssh2: RSA SHA256:bT2W1VfOscVmSCRasYr+KxB4wnT28qHFQXmybiJGx88 Jan 23 17:56:48.630321 sshd-session[2356]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 17:56:48.639638 systemd-logind[1979]: New session 8 of user core. Jan 23 17:56:48.646888 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 23 17:56:48.908569 sudo[2361]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 23 17:56:48.909294 sudo[2361]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 23 17:56:48.917363 sudo[2361]: pam_unix(sudo:session): session closed for user root Jan 23 17:56:48.927022 sudo[2360]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jan 23 17:56:48.928152 sudo[2360]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 23 17:56:48.945481 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 23 17:56:49.015653 augenrules[2383]: No rules Jan 23 17:56:49.017805 systemd[1]: audit-rules.service: Deactivated successfully. Jan 23 17:56:49.018332 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 23 17:56:49.020491 sudo[2360]: pam_unix(sudo:session): session closed for user root Jan 23 17:56:49.098508 sshd[2359]: Connection closed by 68.220.241.50 port 60582 Jan 23 17:56:49.099308 sshd-session[2356]: pam_unix(sshd:session): session closed for user core Jan 23 17:56:49.105692 systemd-logind[1979]: Session 8 logged out. Waiting for processes to exit. Jan 23 17:56:49.105870 systemd[1]: sshd@7-172.31.17.161:22-68.220.241.50:60582.service: Deactivated successfully. Jan 23 17:56:49.109150 systemd[1]: session-8.scope: Deactivated successfully. Jan 23 17:56:49.115676 systemd-logind[1979]: Removed session 8. Jan 23 17:56:49.191749 systemd[1]: Started sshd@8-172.31.17.161:22-68.220.241.50:60588.service - OpenSSH per-connection server daemon (68.220.241.50:60588). Jan 23 17:56:49.702338 sshd[2392]: Accepted publickey for core from 68.220.241.50 port 60588 ssh2: RSA SHA256:bT2W1VfOscVmSCRasYr+KxB4wnT28qHFQXmybiJGx88 Jan 23 17:56:49.704518 sshd-session[2392]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 17:56:49.712015 systemd-logind[1979]: New session 9 of user core. Jan 23 17:56:49.725848 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 23 17:56:49.977938 sudo[2396]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 23 17:56:49.978511 sudo[2396]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 23 17:56:50.503567 systemd[1]: Starting docker.service - Docker Application Container Engine... 
Jan 23 17:56:50.519356 (dockerd)[2413]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 23 17:56:50.900766 dockerd[2413]: time="2026-01-23T17:56:50.900501700Z" level=info msg="Starting up" Jan 23 17:56:50.903153 dockerd[2413]: time="2026-01-23T17:56:50.903088660Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Jan 23 17:56:50.923640 dockerd[2413]: time="2026-01-23T17:56:50.923439904Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Jan 23 17:56:50.977310 dockerd[2413]: time="2026-01-23T17:56:50.976942612Z" level=info msg="Loading containers: start." Jan 23 17:56:50.989646 kernel: Initializing XFRM netlink socket Jan 23 17:56:51.310583 (udev-worker)[2433]: Network interface NamePolicy= disabled on kernel command line. Jan 23 17:56:51.388357 systemd-networkd[1830]: docker0: Link UP Jan 23 17:56:51.394859 dockerd[2413]: time="2026-01-23T17:56:51.394430450Z" level=info msg="Loading containers: done." Jan 23 17:56:51.419440 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck46017676-merged.mount: Deactivated successfully. Jan 23 17:56:51.420410 dockerd[2413]: time="2026-01-23T17:56:51.420340742Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 23 17:56:51.420522 dockerd[2413]: time="2026-01-23T17:56:51.420458966Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Jan 23 17:56:51.421765 dockerd[2413]: time="2026-01-23T17:56:51.421339094Z" level=info msg="Initializing buildkit" Jan 23 17:56:51.457950 dockerd[2413]: time="2026-01-23T17:56:51.457904138Z" level=info msg="Completed buildkit initialization" Jan 23 17:56:51.475374 dockerd[2413]: time="2026-01-23T17:56:51.475319654Z" level=info msg="Daemon has completed initialization" Jan 23 17:56:51.476397 dockerd[2413]: time="2026-01-23T17:56:51.475663070Z" level=info msg="API listen on /run/docker.sock" Jan 23 17:56:51.475723 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 23 17:56:52.592406 containerd[2009]: time="2026-01-23T17:56:52.591755488Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.11\"" Jan 23 17:56:53.218385 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3159170783.mount: Deactivated successfully. 
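"API listen on /run/docker.sock" above means dockerd is serving its HTTP API over that Unix socket, and the cheapest liveness probe against it is the /_ping endpoint. A stdlib-only sketch (needs permission to open the socket, i.e. root or the docker group):

    import http.client
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        # Minimal HTTP-over-Unix-socket transport; enough for /_ping.
        def __init__(self, socket_path):
            super().__init__("localhost")
            self.socket_path = socket_path

        def connect(self):
            self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            self.sock.connect(self.socket_path)

    conn = UnixHTTPConnection("/run/docker.sock")
    conn.request("GET", "/_ping")
    resp = conn.getresponse()
    print(resp.status, resp.read())  # 200 b'OK' when the daemon is healthy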
Jan 23 17:56:54.509692 containerd[2009]: time="2026-01-23T17:56:54.509585273Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 17:56:54.511512 containerd[2009]: time="2026-01-23T17:56:54.511426589Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.11: active requests=0, bytes read=26441982" Jan 23 17:56:54.512753 containerd[2009]: time="2026-01-23T17:56:54.512678105Z" level=info msg="ImageCreate event name:\"sha256:58951ea1a0b5de44646ea292c94b9350f33f22d147fccfd84bdc405eaabc442c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 17:56:54.517806 containerd[2009]: time="2026-01-23T17:56:54.517748489Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:41eaecaed9af0ca8ab36d7794819c7df199e68c6c6ee0649114d713c495f8bd5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 17:56:54.520031 containerd[2009]: time="2026-01-23T17:56:54.519752561Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.11\" with image id \"sha256:58951ea1a0b5de44646ea292c94b9350f33f22d147fccfd84bdc405eaabc442c\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:41eaecaed9af0ca8ab36d7794819c7df199e68c6c6ee0649114d713c495f8bd5\", size \"26438581\" in 1.927933065s" Jan 23 17:56:54.520031 containerd[2009]: time="2026-01-23T17:56:54.519818585Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.11\" returns image reference \"sha256:58951ea1a0b5de44646ea292c94b9350f33f22d147fccfd84bdc405eaabc442c\"" Jan 23 17:56:54.520779 containerd[2009]: time="2026-01-23T17:56:54.520717853Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.11\"" Jan 23 17:56:55.876095 containerd[2009]: time="2026-01-23T17:56:55.876011084Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 17:56:55.878196 containerd[2009]: time="2026-01-23T17:56:55.877806260Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.11: active requests=0, bytes read=22622086" Jan 23 17:56:55.879091 containerd[2009]: time="2026-01-23T17:56:55.879036824Z" level=info msg="ImageCreate event name:\"sha256:82766e5f2d560b930b7069c03ec1366dc8fdb4a490c3005266d2fdc4ca21c2fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 17:56:55.883954 containerd[2009]: time="2026-01-23T17:56:55.883890704Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:ce7b2ead5eef1a1554ef28b2b79596c6a8c6d506a87a7ab1381e77fe3d72f55f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 17:56:55.886120 containerd[2009]: time="2026-01-23T17:56:55.886071032Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.11\" with image id \"sha256:82766e5f2d560b930b7069c03ec1366dc8fdb4a490c3005266d2fdc4ca21c2fc\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.11\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:ce7b2ead5eef1a1554ef28b2b79596c6a8c6d506a87a7ab1381e77fe3d72f55f\", size \"24206567\" in 1.365291727s" Jan 23 17:56:55.886297 containerd[2009]: time="2026-01-23T17:56:55.886265792Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.11\" returns image reference \"sha256:82766e5f2d560b930b7069c03ec1366dc8fdb4a490c3005266d2fdc4ca21c2fc\"" Jan 23 
17:56:55.887556 containerd[2009]: time="2026-01-23T17:56:55.887268860Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.11\"" Jan 23 17:56:56.804294 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 23 17:56:56.808883 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 17:56:57.192667 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 17:56:57.199675 containerd[2009]: time="2026-01-23T17:56:57.198122143Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 17:56:57.200196 containerd[2009]: time="2026-01-23T17:56:57.200076691Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.11: active requests=0, bytes read=17616747" Jan 23 17:56:57.201709 containerd[2009]: time="2026-01-23T17:56:57.201656671Z" level=info msg="ImageCreate event name:\"sha256:cfa17ff3d66343f03eadbc235264b0615de49cc1f43da12cddba27d80c61f2c6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 17:56:57.209281 containerd[2009]: time="2026-01-23T17:56:57.209218279Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:b3039587bbe70e61a6aeaff56c21fdeeef104524a31f835bcc80887d40b8e6b2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 17:56:57.210190 (kubelet)[2696]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 23 17:56:57.218162 containerd[2009]: time="2026-01-23T17:56:57.218095867Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.11\" with image id \"sha256:cfa17ff3d66343f03eadbc235264b0615de49cc1f43da12cddba27d80c61f2c6\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:b3039587bbe70e61a6aeaff56c21fdeeef104524a31f835bcc80887d40b8e6b2\", size \"19201246\" in 1.330769431s" Jan 23 17:56:57.218565 containerd[2009]: time="2026-01-23T17:56:57.218409655Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.11\" returns image reference \"sha256:cfa17ff3d66343f03eadbc235264b0615de49cc1f43da12cddba27d80c61f2c6\"" Jan 23 17:56:57.219899 containerd[2009]: time="2026-01-23T17:56:57.219842791Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.11\"" Jan 23 17:56:57.290909 kubelet[2696]: E0123 17:56:57.290818 2696 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 23 17:56:57.295168 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 23 17:56:57.295479 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 23 17:56:57.296449 systemd[1]: kubelet.service: Consumed 329ms CPU time, 107M memory peak. Jan 23 17:56:58.532099 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount736006247.mount: Deactivated successfully. 
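Each "Pulled image" record pairs the image size in bytes with a wall-clock duration, so a rough pull throughput falls out directly. For the kube-apiserver pull earlier:

    # 26,438,581 bytes in 1.927933065 s ~= 13.7 MB/s (decimal megabytes).
    size_bytes = 26_438_581
    seconds = 1.927933065
    print(f"{size_bytes / seconds / 1e6:.1f} MB/s")

The tiny pause-image pull further down (267933 bytes in about 467 ms) shows far lower apparent throughput, since per-request overhead rather than bandwidth dominates small transfers.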
Jan 23 17:56:59.067132 containerd[2009]: time="2026-01-23T17:56:59.067075340Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 17:56:59.068794 containerd[2009]: time="2026-01-23T17:56:59.068751932Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.11: active requests=0, bytes read=27558724" Jan 23 17:56:59.069533 containerd[2009]: time="2026-01-23T17:56:59.069492752Z" level=info msg="ImageCreate event name:\"sha256:dcdb790dc2bfe6e0b86f702c7f336a38eaef34f6370eb6ff68f4e5b03ed4d425\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 17:56:59.074013 containerd[2009]: time="2026-01-23T17:56:59.072656420Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:4204f9136c23a867929d32046032fe069b49ad94cf168042405e7d0ec88bdba9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 17:56:59.074013 containerd[2009]: time="2026-01-23T17:56:59.073821632Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.11\" with image id \"sha256:dcdb790dc2bfe6e0b86f702c7f336a38eaef34f6370eb6ff68f4e5b03ed4d425\", repo tag \"registry.k8s.io/kube-proxy:v1.32.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:4204f9136c23a867929d32046032fe069b49ad94cf168042405e7d0ec88bdba9\", size \"27557743\" in 1.853759157s" Jan 23 17:56:59.074013 containerd[2009]: time="2026-01-23T17:56:59.073864268Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.11\" returns image reference \"sha256:dcdb790dc2bfe6e0b86f702c7f336a38eaef34f6370eb6ff68f4e5b03ed4d425\"" Jan 23 17:56:59.074816 containerd[2009]: time="2026-01-23T17:56:59.074778008Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jan 23 17:56:59.536532 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3764589218.mount: Deactivated successfully. 
Jan 23 17:57:00.603826 containerd[2009]: time="2026-01-23T17:57:00.603738804Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 17:57:00.607483 containerd[2009]: time="2026-01-23T17:57:00.607416324Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951622" Jan 23 17:57:00.608643 containerd[2009]: time="2026-01-23T17:57:00.608553540Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 17:57:00.615097 containerd[2009]: time="2026-01-23T17:57:00.615021324Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 17:57:00.618832 containerd[2009]: time="2026-01-23T17:57:00.618758856Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.54364948s" Jan 23 17:57:00.618832 containerd[2009]: time="2026-01-23T17:57:00.618820356Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" Jan 23 17:57:00.619466 containerd[2009]: time="2026-01-23T17:57:00.619404396Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jan 23 17:57:00.705359 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Jan 23 17:57:01.073014 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount500250833.mount: Deactivated successfully. 
Jan 23 17:57:01.078416 containerd[2009]: time="2026-01-23T17:57:01.078353086Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 23 17:57:01.079657 containerd[2009]: time="2026-01-23T17:57:01.079591522Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268703" Jan 23 17:57:01.080390 containerd[2009]: time="2026-01-23T17:57:01.080324974Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 23 17:57:01.085119 containerd[2009]: time="2026-01-23T17:57:01.084981538Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 23 17:57:01.086967 containerd[2009]: time="2026-01-23T17:57:01.086333902Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 466.870682ms" Jan 23 17:57:01.086967 containerd[2009]: time="2026-01-23T17:57:01.086390554Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Jan 23 17:57:01.087260 containerd[2009]: time="2026-01-23T17:57:01.087209398Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Jan 23 17:57:01.633415 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3480684181.mount: Deactivated successfully. 
Jan 23 17:57:03.629506 containerd[2009]: time="2026-01-23T17:57:03.629445351Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 17:57:03.633962 containerd[2009]: time="2026-01-23T17:57:03.633909303Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=67943165" Jan 23 17:57:03.636028 containerd[2009]: time="2026-01-23T17:57:03.635978643Z" level=info msg="ImageCreate event name:\"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 17:57:03.646988 containerd[2009]: time="2026-01-23T17:57:03.646890819Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 17:57:03.651320 containerd[2009]: time="2026-01-23T17:57:03.651123711Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"67941650\" in 2.563858093s" Jan 23 17:57:03.651320 containerd[2009]: time="2026-01-23T17:57:03.651179559Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\"" Jan 23 17:57:07.303671 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jan 23 17:57:07.308923 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 17:57:07.653847 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 17:57:07.666093 (kubelet)[2852]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 23 17:57:07.744404 kubelet[2852]: E0123 17:57:07.744328 2852 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 23 17:57:07.748565 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 23 17:57:07.748916 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 23 17:57:07.749857 systemd[1]: kubelet.service: Consumed 292ms CPU time, 105M memory peak. Jan 23 17:57:13.626360 update_engine[1980]: I20260123 17:57:13.625563 1980 update_attempter.cc:509] Updating boot flags... Jan 23 17:57:15.748582 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 17:57:15.749521 systemd[1]: kubelet.service: Consumed 292ms CPU time, 105M memory peak. Jan 23 17:57:15.754158 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 17:57:15.805336 systemd[1]: Reload requested from client PID 3137 ('systemctl') (unit session-9.scope)... Jan 23 17:57:15.805568 systemd[1]: Reloading... Jan 23 17:57:16.025676 zram_generator::config[3184]: No configuration found. Jan 23 17:57:16.485808 systemd[1]: Reloading finished in 679 ms. 
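With the etcd pull this node now holds the full kubeadm control-plane image set. The "restart counter is at 3" line reflects the unit's Restart= policy; kubeadm-packaged kubelet units typically retry every 10 seconds. A drop-in of the following shape controls that cadence (hypothetical path and values, shown only to make the retry loop concrete):

    # /etc/systemd/system/kubelet.service.d/10-restart.conf (hypothetical)
    [Service]
    Restart=always
    RestartSec=10

Editing such a drop-in requires a daemon reload, which is exactly the "Reload requested ... Reloading... Reloading finished" sequence visible above.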
Jan 23 17:57:16.575580 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 23 17:57:16.575805 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 23 17:57:16.576331 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 17:57:16.576415 systemd[1]: kubelet.service: Consumed 223ms CPU time, 94.9M memory peak. Jan 23 17:57:16.579321 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 17:57:16.943962 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 17:57:16.960141 (kubelet)[3244]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 23 17:57:17.029001 kubelet[3244]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 23 17:57:17.029535 kubelet[3244]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 23 17:57:17.029658 kubelet[3244]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 23 17:57:17.030112 kubelet[3244]: I0123 17:57:17.029967 3244 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 23 17:57:18.633028 kubelet[3244]: I0123 17:57:18.632670 3244 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jan 23 17:57:18.635636 kubelet[3244]: I0123 17:57:18.633213 3244 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 23 17:57:18.635636 kubelet[3244]: I0123 17:57:18.634442 3244 server.go:954] "Client rotation is on, will bootstrap in background" Jan 23 17:57:18.677245 kubelet[3244]: E0123 17:57:18.677182 3244 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.31.17.161:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.17.161:6443: connect: connection refused" logger="UnhandledError" Jan 23 17:57:18.679193 kubelet[3244]: I0123 17:57:18.679158 3244 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 23 17:57:18.692078 kubelet[3244]: I0123 17:57:18.692034 3244 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 23 17:57:18.697979 kubelet[3244]: I0123 17:57:18.697939 3244 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jan 23 17:57:18.698548 kubelet[3244]: I0123 17:57:18.698493 3244 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 23 17:57:18.698877 kubelet[3244]: I0123 17:57:18.698549 3244 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-17-161","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 23 17:57:18.699074 kubelet[3244]: I0123 17:57:18.699025 3244 topology_manager.go:138] "Creating topology manager with none policy" Jan 23 17:57:18.699074 kubelet[3244]: I0123 17:57:18.699047 3244 container_manager_linux.go:304] "Creating device plugin manager" Jan 23 17:57:18.699402 kubelet[3244]: I0123 17:57:18.699373 3244 state_mem.go:36] "Initialized new in-memory state store" Jan 23 17:57:18.706816 kubelet[3244]: I0123 17:57:18.706783 3244 kubelet.go:446] "Attempting to sync node with API server" Jan 23 17:57:18.707543 kubelet[3244]: I0123 17:57:18.707113 3244 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 23 17:57:18.707543 kubelet[3244]: I0123 17:57:18.707166 3244 kubelet.go:352] "Adding apiserver pod source" Jan 23 17:57:18.707543 kubelet[3244]: I0123 17:57:18.707193 3244 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 23 17:57:18.715581 kubelet[3244]: W0123 17:57:18.715500 3244 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.17.161:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-17-161&limit=500&resourceVersion=0": dial tcp 172.31.17.161:6443: connect: connection refused Jan 23 17:57:18.715801 kubelet[3244]: E0123 17:57:18.715595 3244 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.17.161:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-17-161&limit=500&resourceVersion=0\": dial tcp 172.31.17.161:6443: connect: connection refused" logger="UnhandledError" Jan 23 17:57:18.716813 kubelet[3244]: W0123 17:57:18.716418 3244 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.17.161:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.17.161:6443: connect: connection refused
Jan 23 17:57:18.716813 kubelet[3244]: E0123 17:57:18.716507 3244 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.17.161:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.17.161:6443: connect: connection refused" logger="UnhandledError" Jan 23 17:57:18.716813 kubelet[3244]: I0123 17:57:18.716673 3244 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Jan 23 17:57:18.718404 kubelet[3244]: I0123 17:57:18.717751 3244 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 23 17:57:18.718404 kubelet[3244]: W0123 17:57:18.717982 3244 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 23 17:57:18.721243 kubelet[3244]: I0123 17:57:18.721193 3244 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 23 17:57:18.721408 kubelet[3244]: I0123 17:57:18.721261 3244 server.go:1287] "Started kubelet" Jan 23 17:57:18.734558 kubelet[3244]: I0123 17:57:18.734317 3244 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jan 23 17:57:18.737804 kubelet[3244]: I0123 17:57:18.737720 3244 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 23 17:57:18.738643 kubelet[3244]: I0123 17:57:18.738462 3244 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 23 17:57:18.739157 kubelet[3244]: I0123 17:57:18.739093 3244 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 23 17:57:18.742365 kubelet[3244]: E0123 17:57:18.741883 3244 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.17.161:6443/api/v1/namespaces/default/events\": dial tcp 172.31.17.161:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-17-161.188d6de04ac87c52 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-17-161,UID:ip-172-31-17-161,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-17-161,},FirstTimestamp:2026-01-23 17:57:18.721227858 +0000 UTC m=+1.754604742,LastTimestamp:2026-01-23 17:57:18.721227858 +0000 UTC m=+1.754604742,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-17-161,}" Jan 23 17:57:18.746669 kubelet[3244]: I0123 17:57:18.745264 3244 server.go:479] "Adding debug handlers to kubelet server" Jan 23 17:57:18.747376 kubelet[3244]: I0123 17:57:18.747336 3244 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 23 17:57:18.748753 kubelet[3244]: I0123 17:57:18.748709 3244 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 23 17:57:18.749260 kubelet[3244]: E0123 17:57:18.749217 3244 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-17-161\" not found"
Jan 23 17:57:18.751372 kubelet[3244]: I0123 17:57:18.751317 3244 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 23 17:57:18.751516 kubelet[3244]: I0123 17:57:18.751441 3244 reconciler.go:26] "Reconciler: start to sync state" Jan 23 17:57:18.753433 kubelet[3244]: W0123 17:57:18.753341 3244 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.17.161:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.17.161:6443: connect: connection refused Jan 23 17:57:18.753702 kubelet[3244]: E0123 17:57:18.753658 3244 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.17.161:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.17.161:6443: connect: connection refused" logger="UnhandledError" Jan 23 17:57:18.754214 kubelet[3244]: E0123 17:57:18.754147 3244 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.17.161:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-17-161?timeout=10s\": dial tcp 172.31.17.161:6443: connect: connection refused" interval="200ms" Jan 23 17:57:18.756011 kubelet[3244]: I0123 17:57:18.755969 3244 factory.go:221] Registration of the systemd container factory successfully Jan 23 17:57:18.756354 kubelet[3244]: I0123 17:57:18.756314 3244 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 23 17:57:18.759443 kubelet[3244]: E0123 17:57:18.759406 3244 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 23 17:57:18.760040 kubelet[3244]: I0123 17:57:18.760012 3244 factory.go:221] Registration of the containerd container factory successfully Jan 23 17:57:18.776656 kubelet[3244]: I0123 17:57:18.776542 3244 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 23 17:57:18.778784 kubelet[3244]: I0123 17:57:18.778706 3244 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 23 17:57:18.778784 kubelet[3244]: I0123 17:57:18.778759 3244 status_manager.go:227] "Starting to sync pod status with apiserver" Jan 23 17:57:18.778982 kubelet[3244]: I0123 17:57:18.778794 3244 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
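Every "connection refused" against 172.31.17.161:6443 in this stretch — the CSR bootstrap, the Node/Service/CSIDriver reflectors, the event POST, the lease controller (whose retry interval backs off 200ms, 400ms, 800ms in the lines around here) — has one cause: the kubelet is up before the kube-apiserver static pod it is about to launch, so nothing is listening on 6443 yet. The errors are self-healing once the apiserver container started further down binds the port. A one-line wait against the same endpoint (a sketch; -k skips verification of the not-yet-trusted serving cert):

    until curl -ksf https://172.31.17.161:6443/healthz >/dev/null; do sleep 1; done && echo "apiserver is answering"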
Jan 23 17:57:18.778982 kubelet[3244]: I0123 17:57:18.778808 3244 kubelet.go:2382] "Starting kubelet main sync loop" Jan 23 17:57:18.778982 kubelet[3244]: E0123 17:57:18.778883 3244 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 23 17:57:18.793177 kubelet[3244]: W0123 17:57:18.793082 3244 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.17.161:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.17.161:6443: connect: connection refused Jan 23 17:57:18.793474 kubelet[3244]: E0123 17:57:18.793181 3244 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.17.161:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.17.161:6443: connect: connection refused" logger="UnhandledError" Jan 23 17:57:18.805562 kubelet[3244]: I0123 17:57:18.805497 3244 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 23 17:57:18.805821 kubelet[3244]: I0123 17:57:18.805761 3244 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 23 17:57:18.805983 kubelet[3244]: I0123 17:57:18.805801 3244 state_mem.go:36] "Initialized new in-memory state store" Jan 23 17:57:18.808896 kubelet[3244]: I0123 17:57:18.808867 3244 policy_none.go:49] "None policy: Start" Jan 23 17:57:18.809060 kubelet[3244]: I0123 17:57:18.809042 3244 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 23 17:57:18.809238 kubelet[3244]: I0123 17:57:18.809134 3244 state_mem.go:35] "Initializing new in-memory state store" Jan 23 17:57:18.819373 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 23 17:57:18.838558 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 23 17:57:18.846435 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 23 17:57:18.850404 kubelet[3244]: E0123 17:57:18.850359 3244 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-17-161\" not found" Jan 23 17:57:18.862793 kubelet[3244]: I0123 17:57:18.862661 3244 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 23 17:57:18.863088 kubelet[3244]: I0123 17:57:18.862954 3244 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 23 17:57:18.863088 kubelet[3244]: I0123 17:57:18.862985 3244 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 23 17:57:18.866129 kubelet[3244]: I0123 17:57:18.865867 3244 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 23 17:57:18.867281 kubelet[3244]: E0123 17:57:18.867249 3244 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jan 23 17:57:18.867502 kubelet[3244]: E0123 17:57:18.867481 3244 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-17-161\" not found" Jan 23 17:57:18.901064 systemd[1]: Created slice kubepods-burstable-poda9b954b228121f8dbf9c046a7717cf19.slice - libcontainer container kubepods-burstable-poda9b954b228121f8dbf9c046a7717cf19.slice. 
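The HardEvictionThresholds block in the nodeConfig dump above is the kubelet's hard-eviction set serialized as JSON; translated back into KubeletConfiguration YAML, the same numbers read as follows (a straight re-encoding of the logged quantities and percentages, not a tuning suggestion):

    evictionHard:
      memory.available: "100Mi"
      nodefs.available: "10%"
      nodefs.inodesFree: "5%"
      imagefs.available: "15%"
      imagefs.inodesFree: "5%"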
Jan 23 17:57:18.919644 kubelet[3244]: E0123 17:57:18.919183 3244 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-17-161\" not found" node="ip-172-31-17-161" Jan 23 17:57:18.929101 systemd[1]: Created slice kubepods-burstable-pod66cd67e0ccff790e14018a73ec4c433f.slice - libcontainer container kubepods-burstable-pod66cd67e0ccff790e14018a73ec4c433f.slice. Jan 23 17:57:18.935471 kubelet[3244]: E0123 17:57:18.934818 3244 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-17-161\" not found" node="ip-172-31-17-161" Jan 23 17:57:18.937255 systemd[1]: Created slice kubepods-burstable-pod8dcd17f8e688100e6584ea426bfdb135.slice - libcontainer container kubepods-burstable-pod8dcd17f8e688100e6584ea426bfdb135.slice. Jan 23 17:57:18.942025 kubelet[3244]: E0123 17:57:18.941698 3244 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-17-161\" not found" node="ip-172-31-17-161" Jan 23 17:57:18.955039 kubelet[3244]: E0123 17:57:18.954976 3244 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.17.161:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-17-161?timeout=10s\": dial tcp 172.31.17.161:6443: connect: connection refused" interval="400ms" Jan 23 17:57:18.966024 kubelet[3244]: I0123 17:57:18.965965 3244 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-17-161" Jan 23 17:57:18.966733 kubelet[3244]: E0123 17:57:18.966688 3244 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.17.161:6443/api/v1/nodes\": dial tcp 172.31.17.161:6443: connect: connection refused" node="ip-172-31-17-161" Jan 23 17:57:19.052596 kubelet[3244]: I0123 17:57:19.052459 3244 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a9b954b228121f8dbf9c046a7717cf19-kubeconfig\") pod \"kube-controller-manager-ip-172-31-17-161\" (UID: \"a9b954b228121f8dbf9c046a7717cf19\") " pod="kube-system/kube-controller-manager-ip-172-31-17-161" Jan 23 17:57:19.052770 kubelet[3244]: I0123 17:57:19.052706 3244 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a9b954b228121f8dbf9c046a7717cf19-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-17-161\" (UID: \"a9b954b228121f8dbf9c046a7717cf19\") " pod="kube-system/kube-controller-manager-ip-172-31-17-161" Jan 23 17:57:19.052833 kubelet[3244]: I0123 17:57:19.052812 3244 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/66cd67e0ccff790e14018a73ec4c433f-kubeconfig\") pod \"kube-scheduler-ip-172-31-17-161\" (UID: \"66cd67e0ccff790e14018a73ec4c433f\") " pod="kube-system/kube-scheduler-ip-172-31-17-161" Jan 23 17:57:19.052883 kubelet[3244]: I0123 17:57:19.052866 3244 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8dcd17f8e688100e6584ea426bfdb135-ca-certs\") pod \"kube-apiserver-ip-172-31-17-161\" (UID: \"8dcd17f8e688100e6584ea426bfdb135\") " pod="kube-system/kube-apiserver-ip-172-31-17-161" Jan 23 17:57:19.052938 kubelet[3244]: I0123 17:57:19.052912 3244 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8dcd17f8e688100e6584ea426bfdb135-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-17-161\" (UID: \"8dcd17f8e688100e6584ea426bfdb135\") " pod="kube-system/kube-apiserver-ip-172-31-17-161"
Jan 23 17:57:19.052997 kubelet[3244]: I0123 17:57:19.052950 3244 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a9b954b228121f8dbf9c046a7717cf19-ca-certs\") pod \"kube-controller-manager-ip-172-31-17-161\" (UID: \"a9b954b228121f8dbf9c046a7717cf19\") " pod="kube-system/kube-controller-manager-ip-172-31-17-161" Jan 23 17:57:19.052997 kubelet[3244]: I0123 17:57:19.052989 3244 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/a9b954b228121f8dbf9c046a7717cf19-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-17-161\" (UID: \"a9b954b228121f8dbf9c046a7717cf19\") " pod="kube-system/kube-controller-manager-ip-172-31-17-161" Jan 23 17:57:19.053092 kubelet[3244]: I0123 17:57:19.053023 3244 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a9b954b228121f8dbf9c046a7717cf19-k8s-certs\") pod \"kube-controller-manager-ip-172-31-17-161\" (UID: \"a9b954b228121f8dbf9c046a7717cf19\") " pod="kube-system/kube-controller-manager-ip-172-31-17-161" Jan 23 17:57:19.053092 kubelet[3244]: I0123 17:57:19.053057 3244 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8dcd17f8e688100e6584ea426bfdb135-k8s-certs\") pod \"kube-apiserver-ip-172-31-17-161\" (UID: \"8dcd17f8e688100e6584ea426bfdb135\") " pod="kube-system/kube-apiserver-ip-172-31-17-161" Jan 23 17:57:19.170084 kubelet[3244]: I0123 17:57:19.169953 3244 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-17-161" Jan 23 17:57:19.171131 kubelet[3244]: E0123 17:57:19.171069 3244 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.17.161:6443/api/v1/nodes\": dial tcp 172.31.17.161:6443: connect: connection refused" node="ip-172-31-17-161" Jan 23 17:57:19.222856 containerd[2009]: time="2026-01-23T17:57:19.221746288Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-17-161,Uid:a9b954b228121f8dbf9c046a7717cf19,Namespace:kube-system,Attempt:0,}" Jan 23 17:57:19.241313 containerd[2009]: time="2026-01-23T17:57:19.240842452Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-17-161,Uid:66cd67e0ccff790e14018a73ec4c433f,Namespace:kube-system,Attempt:0,}" Jan 23 17:57:19.249829 containerd[2009]: time="2026-01-23T17:57:19.249211984Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-17-161,Uid:8dcd17f8e688100e6584ea426bfdb135,Namespace:kube-system,Attempt:0,}" Jan 23 17:57:19.264027 containerd[2009]: time="2026-01-23T17:57:19.263971120Z" level=info msg="connecting to shim b0762eec3c636c3447bbc62dc816f02f3b2effc8e882b33152a195d9c01198b1" address="unix:///run/containerd/s/4f0d9d6e3aa73bfa26451fd08e64d2465719da5e2cc788fbaac4fda8945419f1" namespace=k8s.io protocol=ttrpc version=3 Jan 23 17:57:19.315968 containerd[2009]: time="2026-01-23T17:57:19.315895697Z" level=info msg="connecting to shim f0e325d1cd615b7dad7f64d72b5ccbdf9b92c5f9fac6d5be22a4b1aa95cff1e3" address="unix:///run/containerd/s/69a963a6b4d9a6b1e7a5c77872fa5d0f9a1da601d04c2388778d4dd534de72ba" namespace=k8s.io protocol=ttrpc version=3
Jan 23 17:57:19.320708 containerd[2009]: time="2026-01-23T17:57:19.320635685Z" level=info msg="connecting to shim fbd3416147fe1905f5078b060f8db823c039ac1b70c9c15cbd19c427a48414f3" address="unix:///run/containerd/s/c02bb6caa2b0fd073dfb2de29d47cf3b870d0f9ecdc701e94925af505c3c6529" namespace=k8s.io protocol=ttrpc version=3 Jan 23 17:57:19.356733 kubelet[3244]: E0123 17:57:19.356662 3244 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.17.161:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-17-161?timeout=10s\": dial tcp 172.31.17.161:6443: connect: connection refused" interval="800ms" Jan 23 17:57:19.368972 systemd[1]: Started cri-containerd-b0762eec3c636c3447bbc62dc816f02f3b2effc8e882b33152a195d9c01198b1.scope - libcontainer container b0762eec3c636c3447bbc62dc816f02f3b2effc8e882b33152a195d9c01198b1. Jan 23 17:57:19.395343 systemd[1]: Started cri-containerd-fbd3416147fe1905f5078b060f8db823c039ac1b70c9c15cbd19c427a48414f3.scope - libcontainer container fbd3416147fe1905f5078b060f8db823c039ac1b70c9c15cbd19c427a48414f3. Jan 23 17:57:19.417021 systemd[1]: Started cri-containerd-f0e325d1cd615b7dad7f64d72b5ccbdf9b92c5f9fac6d5be22a4b1aa95cff1e3.scope - libcontainer container f0e325d1cd615b7dad7f64d72b5ccbdf9b92c5f9fac6d5be22a4b1aa95cff1e3. Jan 23 17:57:19.525265 containerd[2009]: time="2026-01-23T17:57:19.524050494Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-17-161,Uid:a9b954b228121f8dbf9c046a7717cf19,Namespace:kube-system,Attempt:0,} returns sandbox id \"b0762eec3c636c3447bbc62dc816f02f3b2effc8e882b33152a195d9c01198b1\"" Jan 23 17:57:19.535567 containerd[2009]: time="2026-01-23T17:57:19.535481874Z" level=info msg="CreateContainer within sandbox \"b0762eec3c636c3447bbc62dc816f02f3b2effc8e882b33152a195d9c01198b1\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 23 17:57:19.562361 containerd[2009]: time="2026-01-23T17:57:19.562108938Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-17-161,Uid:8dcd17f8e688100e6584ea426bfdb135,Namespace:kube-system,Attempt:0,} returns sandbox id \"fbd3416147fe1905f5078b060f8db823c039ac1b70c9c15cbd19c427a48414f3\"" Jan 23 17:57:19.569085 containerd[2009]: time="2026-01-23T17:57:19.568831194Z" level=info msg="Container 53e5f09e4a22f44da53c0ba67298ad88e77d99595fb356b14320e44bd1211a63: CDI devices from CRI Config.CDIDevices: []" Jan 23 17:57:19.571369 containerd[2009]: time="2026-01-23T17:57:19.570676026Z" level=info msg="CreateContainer within sandbox \"fbd3416147fe1905f5078b060f8db823c039ac1b70c9c15cbd19c427a48414f3\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 23 17:57:19.576079 kubelet[3244]: I0123 17:57:19.576031 3244 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-17-161" Jan 23 17:57:19.576540 kubelet[3244]: E0123 17:57:19.576496 3244 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.17.161:6443/api/v1/nodes\": dial tcp 172.31.17.161:6443: connect: connection refused" node="ip-172-31-17-161" Jan 23 17:57:19.583981 containerd[2009]: time="2026-01-23T17:57:19.583899762Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-17-161,Uid:66cd67e0ccff790e14018a73ec4c433f,Namespace:kube-system,Attempt:0,} returns sandbox id \"f0e325d1cd615b7dad7f64d72b5ccbdf9b92c5f9fac6d5be22a4b1aa95cff1e3\""
Jan 23 17:57:19.585898 containerd[2009]: time="2026-01-23T17:57:19.585580998Z" level=info msg="CreateContainer within sandbox \"b0762eec3c636c3447bbc62dc816f02f3b2effc8e882b33152a195d9c01198b1\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"53e5f09e4a22f44da53c0ba67298ad88e77d99595fb356b14320e44bd1211a63\"" Jan 23 17:57:19.586924 containerd[2009]: time="2026-01-23T17:57:19.586835178Z" level=info msg="StartContainer for \"53e5f09e4a22f44da53c0ba67298ad88e77d99595fb356b14320e44bd1211a63\"" Jan 23 17:57:19.590322 containerd[2009]: time="2026-01-23T17:57:19.590264070Z" level=info msg="CreateContainer within sandbox \"f0e325d1cd615b7dad7f64d72b5ccbdf9b92c5f9fac6d5be22a4b1aa95cff1e3\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 23 17:57:19.591631 containerd[2009]: time="2026-01-23T17:57:19.591457578Z" level=info msg="connecting to shim 53e5f09e4a22f44da53c0ba67298ad88e77d99595fb356b14320e44bd1211a63" address="unix:///run/containerd/s/4f0d9d6e3aa73bfa26451fd08e64d2465719da5e2cc788fbaac4fda8945419f1" protocol=ttrpc version=3 Jan 23 17:57:19.594769 containerd[2009]: time="2026-01-23T17:57:19.594528222Z" level=info msg="Container e57164f592d9f1868f6296fa537a3143cc3c363cdf3841b4d134e81857ba9346: CDI devices from CRI Config.CDIDevices: []" Jan 23 17:57:19.607706 containerd[2009]: time="2026-01-23T17:57:19.607435842Z" level=info msg="Container b8283f05825578b17b7efafdf8649bdb9a8657cf1fa265460c5b96befde3ab8b: CDI devices from CRI Config.CDIDevices: []" Jan 23 17:57:19.613996 containerd[2009]: time="2026-01-23T17:57:19.613919298Z" level=info msg="CreateContainer within sandbox \"fbd3416147fe1905f5078b060f8db823c039ac1b70c9c15cbd19c427a48414f3\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"e57164f592d9f1868f6296fa537a3143cc3c363cdf3841b4d134e81857ba9346\"" Jan 23 17:57:19.615139 containerd[2009]: time="2026-01-23T17:57:19.615052470Z" level=info msg="StartContainer for \"e57164f592d9f1868f6296fa537a3143cc3c363cdf3841b4d134e81857ba9346\"" Jan 23 17:57:19.618977 containerd[2009]: time="2026-01-23T17:57:19.618704106Z" level=info msg="connecting to shim e57164f592d9f1868f6296fa537a3143cc3c363cdf3841b4d134e81857ba9346" address="unix:///run/containerd/s/c02bb6caa2b0fd073dfb2de29d47cf3b870d0f9ecdc701e94925af505c3c6529" protocol=ttrpc version=3 Jan 23 17:57:19.631376 containerd[2009]: time="2026-01-23T17:57:19.631281810Z" level=info msg="CreateContainer within sandbox \"f0e325d1cd615b7dad7f64d72b5ccbdf9b92c5f9fac6d5be22a4b1aa95cff1e3\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"b8283f05825578b17b7efafdf8649bdb9a8657cf1fa265460c5b96befde3ab8b\"" Jan 23 17:57:19.632272 containerd[2009]: time="2026-01-23T17:57:19.632210982Z" level=info msg="StartContainer for \"b8283f05825578b17b7efafdf8649bdb9a8657cf1fa265460c5b96befde3ab8b\"" Jan 23 17:57:19.633937 systemd[1]: Started cri-containerd-53e5f09e4a22f44da53c0ba67298ad88e77d99595fb356b14320e44bd1211a63.scope - libcontainer container 53e5f09e4a22f44da53c0ba67298ad88e77d99595fb356b14320e44bd1211a63.
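The three static pods walk the standard CRI lifecycle here: RunPodSandbox returns a sandbox id, CreateContainer within that sandbox returns a container id, and StartContainer runs it; each "connecting to shim" line is containerd 2.x dialing the per-pod shim over ttrpc. The same state can be read back with crictl (a sketch):

    crictl pods      # sandboxes, e.g. kube-scheduler-ip-172-31-17-161
    crictl ps -a     # containers created inside those sandboxes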
Jan 23 17:57:19.636693 containerd[2009]: time="2026-01-23T17:57:19.636126498Z" level=info msg="connecting to shim b8283f05825578b17b7efafdf8649bdb9a8657cf1fa265460c5b96befde3ab8b" address="unix:///run/containerd/s/69a963a6b4d9a6b1e7a5c77872fa5d0f9a1da601d04c2388778d4dd534de72ba" protocol=ttrpc version=3 Jan 23 17:57:19.637919 kubelet[3244]: W0123 17:57:19.637850 3244 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.17.161:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.17.161:6443: connect: connection refused Jan 23 17:57:19.638996 kubelet[3244]: E0123 17:57:19.638479 3244 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.17.161:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.17.161:6443: connect: connection refused" logger="UnhandledError" Jan 23 17:57:19.681935 systemd[1]: Started cri-containerd-e57164f592d9f1868f6296fa537a3143cc3c363cdf3841b4d134e81857ba9346.scope - libcontainer container e57164f592d9f1868f6296fa537a3143cc3c363cdf3841b4d134e81857ba9346. Jan 23 17:57:19.702066 systemd[1]: Started cri-containerd-b8283f05825578b17b7efafdf8649bdb9a8657cf1fa265460c5b96befde3ab8b.scope - libcontainer container b8283f05825578b17b7efafdf8649bdb9a8657cf1fa265460c5b96befde3ab8b. Jan 23 17:57:19.825340 containerd[2009]: time="2026-01-23T17:57:19.825274843Z" level=info msg="StartContainer for \"e57164f592d9f1868f6296fa537a3143cc3c363cdf3841b4d134e81857ba9346\" returns successfully" Jan 23 17:57:19.846023 containerd[2009]: time="2026-01-23T17:57:19.845975911Z" level=info msg="StartContainer for \"53e5f09e4a22f44da53c0ba67298ad88e77d99595fb356b14320e44bd1211a63\" returns successfully" Jan 23 17:57:19.889358 kubelet[3244]: W0123 17:57:19.889223 3244 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.17.161:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-17-161&limit=500&resourceVersion=0": dial tcp 172.31.17.161:6443: connect: connection refused Jan 23 17:57:19.889720 kubelet[3244]: E0123 17:57:19.889635 3244 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.17.161:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-17-161&limit=500&resourceVersion=0\": dial tcp 172.31.17.161:6443: connect: connection refused" logger="UnhandledError" Jan 23 17:57:19.913790 containerd[2009]: time="2026-01-23T17:57:19.913645292Z" level=info msg="StartContainer for \"b8283f05825578b17b7efafdf8649bdb9a8657cf1fa265460c5b96befde3ab8b\" returns successfully" Jan 23 17:57:19.985097 kubelet[3244]: W0123 17:57:19.984962 3244 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.17.161:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.17.161:6443: connect: connection refused Jan 23 17:57:19.985097 kubelet[3244]: E0123 17:57:19.985058 3244 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.17.161:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.17.161:6443: connect: connection refused" logger="UnhandledError"
Jan 23 17:57:20.381103 kubelet[3244]: I0123 17:57:20.380916 3244 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-17-161" Jan 23 17:57:20.854673 kubelet[3244]: E0123 17:57:20.854404 3244 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-17-161\" not found" node="ip-172-31-17-161" Jan 23 17:57:20.860039 kubelet[3244]: E0123 17:57:20.859986 3244 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-17-161\" not found" node="ip-172-31-17-161" Jan 23 17:57:20.866743 kubelet[3244]: E0123 17:57:20.866695 3244 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-17-161\" not found" node="ip-172-31-17-161" Jan 23 17:57:21.871423 kubelet[3244]: E0123 17:57:21.871357 3244 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-17-161\" not found" node="ip-172-31-17-161" Jan 23 17:57:21.871970 kubelet[3244]: E0123 17:57:21.871863 3244 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-17-161\" not found" node="ip-172-31-17-161" Jan 23 17:57:21.872219 kubelet[3244]: E0123 17:57:21.872171 3244 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-17-161\" not found" node="ip-172-31-17-161" Jan 23 17:57:22.873255 kubelet[3244]: E0123 17:57:22.873188 3244 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-17-161\" not found" node="ip-172-31-17-161" Jan 23 17:57:22.875016 kubelet[3244]: E0123 17:57:22.874962 3244 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-17-161\" not found" node="ip-172-31-17-161" Jan 23 17:57:23.573725 kubelet[3244]: E0123 17:57:23.573651 3244 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-17-161\" not found" node="ip-172-31-17-161" Jan 23 17:57:23.636361 kubelet[3244]: I0123 17:57:23.636276 3244 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-17-161" Jan 23 17:57:23.652053 kubelet[3244]: I0123 17:57:23.651997 3244 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-17-161" Jan 23 17:57:23.717929 kubelet[3244]: I0123 17:57:23.717848 3244 apiserver.go:52] "Watching apiserver" Jan 23 17:57:23.722202 kubelet[3244]: E0123 17:57:23.722136 3244 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ip-172-31-17-161\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ip-172-31-17-161" Jan 23 17:57:23.722202 kubelet[3244]: I0123 17:57:23.722187 3244 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-17-161" Jan 23 17:57:23.733508 kubelet[3244]: E0123 17:57:23.733437 3244 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-17-161\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ip-172-31-17-161" Jan 23 17:57:23.733508 kubelet[3244]: I0123 17:57:23.733491 3244 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-17-161"
Jan 23 17:57:23.736950 kubelet[3244]: E0123 17:57:23.736741 3244 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-17-161\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ip-172-31-17-161" Jan 23 17:57:23.751971 kubelet[3244]: I0123 17:57:23.751914 3244 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 23 17:57:24.073912 kubelet[3244]: I0123 17:57:24.073854 3244 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-17-161" Jan 23 17:57:24.077480 kubelet[3244]: E0123 17:57:24.077423 3244 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-17-161\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ip-172-31-17-161" Jan 23 17:57:25.933077 systemd[1]: Reload requested from client PID 3511 ('systemctl') (unit session-9.scope)... Jan 23 17:57:25.933101 systemd[1]: Reloading... Jan 23 17:57:26.121665 zram_generator::config[3558]: No configuration found. Jan 23 17:57:26.370361 kubelet[3244]: I0123 17:57:26.369911 3244 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-17-161" Jan 23 17:57:26.609881 systemd[1]: Reloading finished in 676 ms. Jan 23 17:57:26.673141 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 17:57:26.691149 systemd[1]: kubelet.service: Deactivated successfully. Jan 23 17:57:26.691690 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 17:57:26.691789 systemd[1]: kubelet.service: Consumed 2.488s CPU time, 127.7M memory peak. Jan 23 17:57:26.695252 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 17:57:27.030907 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 17:57:27.056835 (kubelet)[3615]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 23 17:57:27.154639 kubelet[3615]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 23 17:57:27.154639 kubelet[3615]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 23 17:57:27.154639 kubelet[3615]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
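The "no PriorityClass with name system-node-critical was found" errors are another bootstrap race rather than a misconfiguration: the static pods run regardless, and only their mirror-pod objects are rejected until the freshly started apiserver finishes creating its built-in priority classes. Once the control plane settles, the built-ins should be present (a sketch):

    kubectl get priorityclass system-node-critical system-cluster-critical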
Jan 23 17:57:27.155171 kubelet[3615]: I0123 17:57:27.154775 3615 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 23 17:57:27.171108 kubelet[3615]: I0123 17:57:27.171019 3615 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jan 23 17:57:27.171108 kubelet[3615]: I0123 17:57:27.171094 3615 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 23 17:57:27.171824 kubelet[3615]: I0123 17:57:27.171780 3615 server.go:954] "Client rotation is on, will bootstrap in background" Jan 23 17:57:27.182759 kubelet[3615]: I0123 17:57:27.181689 3615 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 23 17:57:27.188933 kubelet[3615]: I0123 17:57:27.188881 3615 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 23 17:57:27.189465 sudo[3629]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jan 23 17:57:27.190860 sudo[3629]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Jan 23 17:57:27.202356 kubelet[3615]: I0123 17:57:27.202305 3615 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 23 17:57:27.208139 kubelet[3615]: I0123 17:57:27.208082 3615 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 23 17:57:27.208673 kubelet[3615]: I0123 17:57:27.208595 3615 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 23 17:57:27.208990 kubelet[3615]: I0123 17:57:27.208671 3615 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-17-161","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 23 17:57:27.209157 kubelet[3615]: I0123 17:57:27.209001 3615 topology_manager.go:138] "Creating topology manager with none policy"
Jan 23 17:57:27.209157 kubelet[3615]: I0123 17:57:27.209022 3615 container_manager_linux.go:304] "Creating device plugin manager" Jan 23 17:57:27.209157 kubelet[3615]: I0123 17:57:27.209097 3615 state_mem.go:36] "Initialized new in-memory state store" Jan 23 17:57:27.209857 kubelet[3615]: I0123 17:57:27.209350 3615 kubelet.go:446] "Attempting to sync node with API server" Jan 23 17:57:27.209857 kubelet[3615]: I0123 17:57:27.209423 3615 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 23 17:57:27.209857 kubelet[3615]: I0123 17:57:27.209467 3615 kubelet.go:352] "Adding apiserver pod source" Jan 23 17:57:27.209857 kubelet[3615]: I0123 17:57:27.209503 3615 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 23 17:57:27.212716 kubelet[3615]: I0123 17:57:27.212664 3615 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Jan 23 17:57:27.213636 kubelet[3615]: I0123 17:57:27.213429 3615 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 23 17:57:27.216381 kubelet[3615]: I0123 17:57:27.216233 3615 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 23 17:57:27.216381 kubelet[3615]: I0123 17:57:27.216302 3615 server.go:1287] "Started kubelet" Jan 23 17:57:27.228034 kubelet[3615]: I0123 17:57:27.226284 3615 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 23 17:57:27.236916 kubelet[3615]: I0123 17:57:27.236744 3615 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jan 23 17:57:27.245117 kubelet[3615]: I0123 17:57:27.245015 3615 server.go:479] "Adding debug handlers to kubelet server" Jan 23 17:57:27.257967 kubelet[3615]: I0123 17:57:27.257748 3615 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 23 17:57:27.259060 kubelet[3615]: I0123 17:57:27.258950 3615 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 23 17:57:27.259411 kubelet[3615]: I0123 17:57:27.259368 3615 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 23 17:57:27.277140 kubelet[3615]: I0123 17:57:27.273775 3615 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 23 17:57:27.277140 kubelet[3615]: E0123 17:57:27.274210 3615 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-17-161\" not found" Jan 23 17:57:27.277140 kubelet[3615]: I0123 17:57:27.276315 3615 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 23 17:57:27.277140 kubelet[3615]: I0123 17:57:27.276560 3615 reconciler.go:26] "Reconciler: start to sync state" Jan 23 17:57:27.351834 kubelet[3615]: I0123 17:57:27.351756 3615 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 23 17:57:27.359969 kubelet[3615]: I0123 17:57:27.359721 3615 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 23 17:57:27.359969 kubelet[3615]: I0123 17:57:27.359770 3615 status_manager.go:227] "Starting to sync pod status with apiserver" Jan 23 17:57:27.359969 kubelet[3615]: I0123 17:57:27.359802 3615 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
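"Systemd watchdog is not enabled" repeats on every kubelet start in this log because the unit sets no WatchdogSec=, so the kubelet's sd_notify-based health checking stays off. If it were wanted, opting in would be a one-line drop-in on the kubelet unit (a sketch; the interval value is an assumption):

    [Service]
    WatchdogSec=60s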
Jan 23 17:57:27.359969 kubelet[3615]: I0123 17:57:27.359817 3615 kubelet.go:2382] "Starting kubelet main sync loop" Jan 23 17:57:27.359969 kubelet[3615]: E0123 17:57:27.359888 3615 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 23 17:57:27.375518 kubelet[3615]: E0123 17:57:27.374389 3615 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-17-161\" not found" Jan 23 17:57:27.383638 kubelet[3615]: I0123 17:57:27.383560 3615 factory.go:221] Registration of the containerd container factory successfully Jan 23 17:57:27.383638 kubelet[3615]: I0123 17:57:27.383598 3615 factory.go:221] Registration of the systemd container factory successfully Jan 23 17:57:27.383837 kubelet[3615]: I0123 17:57:27.383794 3615 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 23 17:57:27.401399 kubelet[3615]: E0123 17:57:27.401308 3615 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 23 17:57:27.461386 kubelet[3615]: E0123 17:57:27.461317 3615 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 23 17:57:27.534108 kubelet[3615]: I0123 17:57:27.532284 3615 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 23 17:57:27.534108 kubelet[3615]: I0123 17:57:27.532330 3615 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 23 17:57:27.534108 kubelet[3615]: I0123 17:57:27.532367 3615 state_mem.go:36] "Initialized new in-memory state store" Jan 23 17:57:27.534108 kubelet[3615]: I0123 17:57:27.533262 3615 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 23 17:57:27.534108 kubelet[3615]: I0123 17:57:27.533293 3615 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 23 17:57:27.534108 kubelet[3615]: I0123 17:57:27.533331 3615 policy_none.go:49] "None policy: Start" Jan 23 17:57:27.534108 kubelet[3615]: I0123 17:57:27.533350 3615 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 23 17:57:27.534108 kubelet[3615]: I0123 17:57:27.533394 3615 state_mem.go:35] "Initializing new in-memory state store" Jan 23 17:57:27.534559 kubelet[3615]: I0123 17:57:27.534457 3615 state_mem.go:75] "Updated machine memory state" Jan 23 17:57:27.553817 kubelet[3615]: I0123 17:57:27.553470 3615 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 23 17:57:27.557091 kubelet[3615]: I0123 17:57:27.554865 3615 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 23 17:57:27.557091 kubelet[3615]: I0123 17:57:27.554906 3615 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 23 17:57:27.560345 kubelet[3615]: I0123 17:57:27.560295 3615 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 23 17:57:27.570635 kubelet[3615]: E0123 17:57:27.569436 3615 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Jan 23 17:57:27.664911 kubelet[3615]: I0123 17:57:27.664354 3615 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-17-161" Jan 23 17:57:27.666056 kubelet[3615]: I0123 17:57:27.665350 3615 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-17-161" Jan 23 17:57:27.666056 kubelet[3615]: I0123 17:57:27.665401 3615 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-17-161" Jan 23 17:57:27.691483 kubelet[3615]: I0123 17:57:27.688823 3615 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/a9b954b228121f8dbf9c046a7717cf19-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-17-161\" (UID: \"a9b954b228121f8dbf9c046a7717cf19\") " pod="kube-system/kube-controller-manager-ip-172-31-17-161" Jan 23 17:57:27.691483 kubelet[3615]: I0123 17:57:27.688979 3615 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a9b954b228121f8dbf9c046a7717cf19-k8s-certs\") pod \"kube-controller-manager-ip-172-31-17-161\" (UID: \"a9b954b228121f8dbf9c046a7717cf19\") " pod="kube-system/kube-controller-manager-ip-172-31-17-161" Jan 23 17:57:27.691483 kubelet[3615]: I0123 17:57:27.689250 3615 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/66cd67e0ccff790e14018a73ec4c433f-kubeconfig\") pod \"kube-scheduler-ip-172-31-17-161\" (UID: \"66cd67e0ccff790e14018a73ec4c433f\") " pod="kube-system/kube-scheduler-ip-172-31-17-161" Jan 23 17:57:27.691483 kubelet[3615]: I0123 17:57:27.689337 3615 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8dcd17f8e688100e6584ea426bfdb135-k8s-certs\") pod \"kube-apiserver-ip-172-31-17-161\" (UID: \"8dcd17f8e688100e6584ea426bfdb135\") " pod="kube-system/kube-apiserver-ip-172-31-17-161" Jan 23 17:57:27.691483 kubelet[3615]: I0123 17:57:27.689396 3615 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8dcd17f8e688100e6584ea426bfdb135-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-17-161\" (UID: \"8dcd17f8e688100e6584ea426bfdb135\") " pod="kube-system/kube-apiserver-ip-172-31-17-161" Jan 23 17:57:27.691883 kubelet[3615]: I0123 17:57:27.689438 3615 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a9b954b228121f8dbf9c046a7717cf19-ca-certs\") pod \"kube-controller-manager-ip-172-31-17-161\" (UID: \"a9b954b228121f8dbf9c046a7717cf19\") " pod="kube-system/kube-controller-manager-ip-172-31-17-161" Jan 23 17:57:27.691883 kubelet[3615]: I0123 17:57:27.689476 3615 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a9b954b228121f8dbf9c046a7717cf19-kubeconfig\") pod \"kube-controller-manager-ip-172-31-17-161\" (UID: \"a9b954b228121f8dbf9c046a7717cf19\") " pod="kube-system/kube-controller-manager-ip-172-31-17-161" Jan 23 17:57:27.691883 kubelet[3615]: I0123 17:57:27.689512 3615 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a9b954b228121f8dbf9c046a7717cf19-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-17-161\" (UID: \"a9b954b228121f8dbf9c046a7717cf19\") " pod="kube-system/kube-controller-manager-ip-172-31-17-161"
Jan 23 17:57:27.691883 kubelet[3615]: I0123 17:57:27.689549 3615 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8dcd17f8e688100e6584ea426bfdb135-ca-certs\") pod \"kube-apiserver-ip-172-31-17-161\" (UID: \"8dcd17f8e688100e6584ea426bfdb135\") " pod="kube-system/kube-apiserver-ip-172-31-17-161" Jan 23 17:57:27.693495 kubelet[3615]: E0123 17:57:27.693421 3615 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ip-172-31-17-161\" already exists" pod="kube-system/kube-controller-manager-ip-172-31-17-161" Jan 23 17:57:27.706708 kubelet[3615]: I0123 17:57:27.706642 3615 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-17-161" Jan 23 17:57:27.740023 kubelet[3615]: I0123 17:57:27.739965 3615 kubelet_node_status.go:124] "Node was previously registered" node="ip-172-31-17-161" Jan 23 17:57:27.740171 kubelet[3615]: I0123 17:57:27.740084 3615 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-17-161" Jan 23 17:57:28.064449 sudo[3629]: pam_unix(sudo:session): session closed for user root Jan 23 17:57:28.227962 kubelet[3615]: I0123 17:57:28.227859 3615 apiserver.go:52] "Watching apiserver" Jan 23 17:57:28.276892 kubelet[3615]: I0123 17:57:28.276817 3615 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 23 17:57:28.509430 kubelet[3615]: I0123 17:57:28.509225 3615 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-17-161" podStartSLOduration=1.5092011379999999 podStartE2EDuration="1.509201138s" podCreationTimestamp="2026-01-23 17:57:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 17:57:28.506733446 +0000 UTC m=+1.443709352" watchObservedRunningTime="2026-01-23 17:57:28.509201138 +0000 UTC m=+1.446177020" Jan 23 17:57:28.510083 kubelet[3615]: I0123 17:57:28.509988 3615 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-17-161" podStartSLOduration=2.509962262 podStartE2EDuration="2.509962262s" podCreationTimestamp="2026-01-23 17:57:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 17:57:28.48822977 +0000 UTC m=+1.425205664" watchObservedRunningTime="2026-01-23 17:57:28.509962262 +0000 UTC m=+1.446938216" Jan 23 17:57:28.552994 kubelet[3615]: I0123 17:57:28.552910 3615 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-17-161" podStartSLOduration=1.552885543 podStartE2EDuration="1.552885543s" podCreationTimestamp="2026-01-23 17:57:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 17:57:28.531149498 +0000 UTC m=+1.468125380" watchObservedRunningTime="2026-01-23 17:57:28.552885543 +0000 UTC m=+1.489861413" Jan 23 17:57:31.225807 sudo[2396]: pam_unix(sudo:session):
session closed for user root Jan 23 17:57:31.304146 sshd[2395]: Connection closed by 68.220.241.50 port 60588 Jan 23 17:57:31.303262 sshd-session[2392]: pam_unix(sshd:session): session closed for user core Jan 23 17:57:31.309775 systemd-logind[1979]: Session 9 logged out. Waiting for processes to exit. Jan 23 17:57:31.310597 systemd[1]: sshd@8-172.31.17.161:22-68.220.241.50:60588.service: Deactivated successfully. Jan 23 17:57:31.315300 systemd[1]: session-9.scope: Deactivated successfully. Jan 23 17:57:31.316033 systemd[1]: session-9.scope: Consumed 15.654s CPU time, 263M memory peak. Jan 23 17:57:31.322280 systemd-logind[1979]: Removed session 9. Jan 23 17:57:32.304228 kubelet[3615]: I0123 17:57:32.304191 3615 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 23 17:57:32.305576 containerd[2009]: time="2026-01-23T17:57:32.305505569Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 23 17:57:32.306651 kubelet[3615]: I0123 17:57:32.305886 3615 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 23 17:57:32.995515 systemd[1]: Created slice kubepods-besteffort-pod616f5e85_dad5_421d_b130_deadcd7061da.slice - libcontainer container kubepods-besteffort-pod616f5e85_dad5_421d_b130_deadcd7061da.slice. Jan 23 17:57:33.025398 systemd[1]: Created slice kubepods-burstable-poddc9ec746_c5e8_4a11_9a02_9d7456ede611.slice - libcontainer container kubepods-burstable-poddc9ec746_c5e8_4a11_9a02_9d7456ede611.slice. Jan 23 17:57:33.029175 kubelet[3615]: I0123 17:57:33.026784 3615 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/616f5e85-dad5-421d-b130-deadcd7061da-xtables-lock\") pod \"kube-proxy-zljqw\" (UID: \"616f5e85-dad5-421d-b130-deadcd7061da\") " pod="kube-system/kube-proxy-zljqw" Jan 23 17:57:33.029175 kubelet[3615]: I0123 17:57:33.026845 3615 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/616f5e85-dad5-421d-b130-deadcd7061da-lib-modules\") pod \"kube-proxy-zljqw\" (UID: \"616f5e85-dad5-421d-b130-deadcd7061da\") " pod="kube-system/kube-proxy-zljqw" Jan 23 17:57:33.029175 kubelet[3615]: I0123 17:57:33.026884 3615 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/dc9ec746-c5e8-4a11-9a02-9d7456ede611-hostproc\") pod \"cilium-clg9v\" (UID: \"dc9ec746-c5e8-4a11-9a02-9d7456ede611\") " pod="kube-system/cilium-clg9v" Jan 23 17:57:33.029175 kubelet[3615]: I0123 17:57:33.026925 3615 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/dc9ec746-c5e8-4a11-9a02-9d7456ede611-cilium-config-path\") pod \"cilium-clg9v\" (UID: \"dc9ec746-c5e8-4a11-9a02-9d7456ede611\") " pod="kube-system/cilium-clg9v" Jan 23 17:57:33.029175 kubelet[3615]: I0123 17:57:33.026959 3615 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4r84h\" (UniqueName: \"kubernetes.io/projected/dc9ec746-c5e8-4a11-9a02-9d7456ede611-kube-api-access-4r84h\") pod \"cilium-clg9v\" (UID: \"dc9ec746-c5e8-4a11-9a02-9d7456ede611\") " pod="kube-system/cilium-clg9v" Jan 23 17:57:33.029175 kubelet[3615]: I0123 17:57:33.027001 3615 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/dc9ec746-c5e8-4a11-9a02-9d7456ede611-lib-modules\") pod \"cilium-clg9v\" (UID: \"dc9ec746-c5e8-4a11-9a02-9d7456ede611\") " pod="kube-system/cilium-clg9v" Jan 23 17:57:33.029580 kubelet[3615]: I0123 17:57:33.027036 3615 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/616f5e85-dad5-421d-b130-deadcd7061da-kube-proxy\") pod \"kube-proxy-zljqw\" (UID: \"616f5e85-dad5-421d-b130-deadcd7061da\") " pod="kube-system/kube-proxy-zljqw" Jan 23 17:57:33.029580 kubelet[3615]: I0123 17:57:33.027073 3615 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/dc9ec746-c5e8-4a11-9a02-9d7456ede611-bpf-maps\") pod \"cilium-clg9v\" (UID: \"dc9ec746-c5e8-4a11-9a02-9d7456ede611\") " pod="kube-system/cilium-clg9v" Jan 23 17:57:33.029580 kubelet[3615]: I0123 17:57:33.027127 3615 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/dc9ec746-c5e8-4a11-9a02-9d7456ede611-etc-cni-netd\") pod \"cilium-clg9v\" (UID: \"dc9ec746-c5e8-4a11-9a02-9d7456ede611\") " pod="kube-system/cilium-clg9v" Jan 23 17:57:33.029580 kubelet[3615]: I0123 17:57:33.027167 3615 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/dc9ec746-c5e8-4a11-9a02-9d7456ede611-clustermesh-secrets\") pod \"cilium-clg9v\" (UID: \"dc9ec746-c5e8-4a11-9a02-9d7456ede611\") " pod="kube-system/cilium-clg9v" Jan 23 17:57:33.029580 kubelet[3615]: I0123 17:57:33.027205 3615 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gghg4\" (UniqueName: \"kubernetes.io/projected/616f5e85-dad5-421d-b130-deadcd7061da-kube-api-access-gghg4\") pod \"kube-proxy-zljqw\" (UID: \"616f5e85-dad5-421d-b130-deadcd7061da\") " pod="kube-system/kube-proxy-zljqw" Jan 23 17:57:33.029580 kubelet[3615]: I0123 17:57:33.027248 3615 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/dc9ec746-c5e8-4a11-9a02-9d7456ede611-hubble-tls\") pod \"cilium-clg9v\" (UID: \"dc9ec746-c5e8-4a11-9a02-9d7456ede611\") " pod="kube-system/cilium-clg9v" Jan 23 17:57:33.029910 kubelet[3615]: I0123 17:57:33.027288 3615 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/dc9ec746-c5e8-4a11-9a02-9d7456ede611-cni-path\") pod \"cilium-clg9v\" (UID: \"dc9ec746-c5e8-4a11-9a02-9d7456ede611\") " pod="kube-system/cilium-clg9v" Jan 23 17:57:33.029910 kubelet[3615]: I0123 17:57:33.027335 3615 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/dc9ec746-c5e8-4a11-9a02-9d7456ede611-xtables-lock\") pod \"cilium-clg9v\" (UID: \"dc9ec746-c5e8-4a11-9a02-9d7456ede611\") " pod="kube-system/cilium-clg9v" Jan 23 17:57:33.029910 kubelet[3615]: I0123 17:57:33.027368 3615 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: 
\"kubernetes.io/host-path/dc9ec746-c5e8-4a11-9a02-9d7456ede611-cilium-cgroup\") pod \"cilium-clg9v\" (UID: \"dc9ec746-c5e8-4a11-9a02-9d7456ede611\") " pod="kube-system/cilium-clg9v" Jan 23 17:57:33.029910 kubelet[3615]: I0123 17:57:33.027402 3615 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/dc9ec746-c5e8-4a11-9a02-9d7456ede611-cilium-run\") pod \"cilium-clg9v\" (UID: \"dc9ec746-c5e8-4a11-9a02-9d7456ede611\") " pod="kube-system/cilium-clg9v" Jan 23 17:57:33.029910 kubelet[3615]: I0123 17:57:33.027441 3615 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/dc9ec746-c5e8-4a11-9a02-9d7456ede611-host-proc-sys-net\") pod \"cilium-clg9v\" (UID: \"dc9ec746-c5e8-4a11-9a02-9d7456ede611\") " pod="kube-system/cilium-clg9v" Jan 23 17:57:33.029910 kubelet[3615]: I0123 17:57:33.027481 3615 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/dc9ec746-c5e8-4a11-9a02-9d7456ede611-host-proc-sys-kernel\") pod \"cilium-clg9v\" (UID: \"dc9ec746-c5e8-4a11-9a02-9d7456ede611\") " pod="kube-system/cilium-clg9v" Jan 23 17:57:33.278082 kubelet[3615]: I0123 17:57:33.277927 3615 status_manager.go:890] "Failed to get status for pod" podUID="b384ef83-9e1d-4367-8ae2-52bd56f6de81" pod="kube-system/cilium-operator-6c4d7847fc-467sv" err="pods \"cilium-operator-6c4d7847fc-467sv\" is forbidden: User \"system:node:ip-172-31-17-161\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ip-172-31-17-161' and this object" Jan 23 17:57:33.287498 systemd[1]: Created slice kubepods-besteffort-podb384ef83_9e1d_4367_8ae2_52bd56f6de81.slice - libcontainer container kubepods-besteffort-podb384ef83_9e1d_4367_8ae2_52bd56f6de81.slice. 
Jan 23 17:57:33.312574 containerd[2009]: time="2026-01-23T17:57:33.312524598Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-zljqw,Uid:616f5e85-dad5-421d-b130-deadcd7061da,Namespace:kube-system,Attempt:0,}" Jan 23 17:57:33.332066 kubelet[3615]: I0123 17:57:33.331976 3615 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b384ef83-9e1d-4367-8ae2-52bd56f6de81-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-467sv\" (UID: \"b384ef83-9e1d-4367-8ae2-52bd56f6de81\") " pod="kube-system/cilium-operator-6c4d7847fc-467sv" Jan 23 17:57:33.332066 kubelet[3615]: I0123 17:57:33.332048 3615 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-strwt\" (UniqueName: \"kubernetes.io/projected/b384ef83-9e1d-4367-8ae2-52bd56f6de81-kube-api-access-strwt\") pod \"cilium-operator-6c4d7847fc-467sv\" (UID: \"b384ef83-9e1d-4367-8ae2-52bd56f6de81\") " pod="kube-system/cilium-operator-6c4d7847fc-467sv" Jan 23 17:57:33.344068 containerd[2009]: time="2026-01-23T17:57:33.343907346Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-clg9v,Uid:dc9ec746-c5e8-4a11-9a02-9d7456ede611,Namespace:kube-system,Attempt:0,}" Jan 23 17:57:33.372701 containerd[2009]: time="2026-01-23T17:57:33.372450834Z" level=info msg="connecting to shim b326bc721c6e8fa95942dc2bbea0933c1cd67260ecaf384cbe50143752b6f6a7" address="unix:///run/containerd/s/7dfe83b492549f8900009b312d1c777875fd59d177a040aee673cf09b830f7ad" namespace=k8s.io protocol=ttrpc version=3 Jan 23 17:57:33.396052 containerd[2009]: time="2026-01-23T17:57:33.395926423Z" level=info msg="connecting to shim e3c0d590b65170db6a16ff3dec31a348ed2e8bb416f61bd1653d109f38283caa" address="unix:///run/containerd/s/650cbeef16dda4f65e8099478081bc53c7a391339923470603f5365f925474aa" namespace=k8s.io protocol=ttrpc version=3 Jan 23 17:57:33.422925 systemd[1]: Started cri-containerd-b326bc721c6e8fa95942dc2bbea0933c1cd67260ecaf384cbe50143752b6f6a7.scope - libcontainer container b326bc721c6e8fa95942dc2bbea0933c1cd67260ecaf384cbe50143752b6f6a7. Jan 23 17:57:33.498011 systemd[1]: Started cri-containerd-e3c0d590b65170db6a16ff3dec31a348ed2e8bb416f61bd1653d109f38283caa.scope - libcontainer container e3c0d590b65170db6a16ff3dec31a348ed2e8bb416f61bd1653d109f38283caa. 
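
[Editor's note] Each containerd "connecting to shim" entry above pairs a sandbox/container ID with the per-task ttrpc socket its shim serves under /run/containerd/s/. A sketch extracting both fields; the regex is an assumption about the message layout shown here:

```python
import re

SHIM = re.compile(
    r'msg="connecting to shim (?P<id>[0-9a-f]+)" '
    r'address="(?P<addr>unix://[^"]+)"'
)

entry = ('time="2026-01-23T17:57:33.372450834Z" level=info '
         'msg="connecting to shim b326bc721c6e8fa95942dc2bbea0933c1cd67260ecaf384cbe50143752b6f6a7" '
         'address="unix:///run/containerd/s/7dfe83b492549f8900009b312d1c777875fd59d177a040aee673cf09b830f7ad"')
m = SHIM.search(entry)
print(m.group('id')[:12], '->', m.group('addr'))
```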
Jan 23 17:57:33.600590 containerd[2009]: time="2026-01-23T17:57:33.600521432Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-467sv,Uid:b384ef83-9e1d-4367-8ae2-52bd56f6de81,Namespace:kube-system,Attempt:0,}" Jan 23 17:57:33.634459 containerd[2009]: time="2026-01-23T17:57:33.634232840Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-clg9v,Uid:dc9ec746-c5e8-4a11-9a02-9d7456ede611,Namespace:kube-system,Attempt:0,} returns sandbox id \"e3c0d590b65170db6a16ff3dec31a348ed2e8bb416f61bd1653d109f38283caa\"" Jan 23 17:57:33.646823 containerd[2009]: time="2026-01-23T17:57:33.646539620Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jan 23 17:57:33.658145 containerd[2009]: time="2026-01-23T17:57:33.658087148Z" level=info msg="connecting to shim dcad3894f8eecbb874ca9d176159429905848bcfae41684df9317f02c1489680" address="unix:///run/containerd/s/e005ce83c52622954ea3c92e0b04f4f3273330923e53dd4f7b13e809bdb5003f" namespace=k8s.io protocol=ttrpc version=3 Jan 23 17:57:33.666339 containerd[2009]: time="2026-01-23T17:57:33.666260840Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-zljqw,Uid:616f5e85-dad5-421d-b130-deadcd7061da,Namespace:kube-system,Attempt:0,} returns sandbox id \"b326bc721c6e8fa95942dc2bbea0933c1cd67260ecaf384cbe50143752b6f6a7\"" Jan 23 17:57:33.675298 containerd[2009]: time="2026-01-23T17:57:33.675244268Z" level=info msg="CreateContainer within sandbox \"b326bc721c6e8fa95942dc2bbea0933c1cd67260ecaf384cbe50143752b6f6a7\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 23 17:57:33.699314 containerd[2009]: time="2026-01-23T17:57:33.699253184Z" level=info msg="Container 3fa9b58a956757cb73400b4b5555325274de6864f76d804a0abfd3762e178300: CDI devices from CRI Config.CDIDevices: []" Jan 23 17:57:33.726981 systemd[1]: Started cri-containerd-dcad3894f8eecbb874ca9d176159429905848bcfae41684df9317f02c1489680.scope - libcontainer container dcad3894f8eecbb874ca9d176159429905848bcfae41684df9317f02c1489680. Jan 23 17:57:33.728524 containerd[2009]: time="2026-01-23T17:57:33.728155604Z" level=info msg="CreateContainer within sandbox \"b326bc721c6e8fa95942dc2bbea0933c1cd67260ecaf384cbe50143752b6f6a7\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"3fa9b58a956757cb73400b4b5555325274de6864f76d804a0abfd3762e178300\"" Jan 23 17:57:33.732081 containerd[2009]: time="2026-01-23T17:57:33.732034448Z" level=info msg="StartContainer for \"3fa9b58a956757cb73400b4b5555325274de6864f76d804a0abfd3762e178300\"" Jan 23 17:57:33.737596 containerd[2009]: time="2026-01-23T17:57:33.737479328Z" level=info msg="connecting to shim 3fa9b58a956757cb73400b4b5555325274de6864f76d804a0abfd3762e178300" address="unix:///run/containerd/s/7dfe83b492549f8900009b312d1c777875fd59d177a040aee673cf09b830f7ad" protocol=ttrpc version=3 Jan 23 17:57:33.779138 systemd[1]: Started cri-containerd-3fa9b58a956757cb73400b4b5555325274de6864f76d804a0abfd3762e178300.scope - libcontainer container 3fa9b58a956757cb73400b4b5555325274de6864f76d804a0abfd3762e178300. 
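
[Editor's note] For kube-proxy the log walks through the CRI call order the kubelet drives for every pod: RunPodSandbox returns a sandbox ID, CreateContainer is issued against that sandbox, then StartContainer (whose "returns successfully" lands just below). A toy state machine capturing that ordering; the class is purely illustrative, the real API is the CRI RuntimeService gRPC interface:

```python
class PodLifecycle:
    """Toy model of the CRI call order visible in this journal."""
    def __init__(self):
        self.sandbox = None
        self.containers = {}               # container id -> state

    def run_pod_sandbox(self, sandbox_id: str):
        self.sandbox = sandbox_id          # "RunPodSandbox ... returns sandbox id"

    def create_container(self, cid: str):
        assert self.sandbox, "CreateContainer requires a sandbox"
        self.containers[cid] = "created"   # "CreateContainer within sandbox ..."

    def start_container(self, cid: str):
        assert self.containers[cid] == "created"
        self.containers[cid] = "running"   # "StartContainer ... returns successfully"

p = PodLifecycle()
p.run_pod_sandbox("b326bc721c6e8f")
p.create_container("3fa9b58a956757")
p.start_container("3fa9b58a956757")
print(p.containers)   # {'3fa9b58a956757': 'running'}
```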
Jan 23 17:57:33.835499 containerd[2009]: time="2026-01-23T17:57:33.835409313Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-467sv,Uid:b384ef83-9e1d-4367-8ae2-52bd56f6de81,Namespace:kube-system,Attempt:0,} returns sandbox id \"dcad3894f8eecbb874ca9d176159429905848bcfae41684df9317f02c1489680\"" Jan 23 17:57:33.909285 containerd[2009]: time="2026-01-23T17:57:33.908949141Z" level=info msg="StartContainer for \"3fa9b58a956757cb73400b4b5555325274de6864f76d804a0abfd3762e178300\" returns successfully" Jan 23 17:57:34.526499 kubelet[3615]: I0123 17:57:34.525820 3615 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-zljqw" podStartSLOduration=2.525580952 podStartE2EDuration="2.525580952s" podCreationTimestamp="2026-01-23 17:57:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 17:57:34.524486504 +0000 UTC m=+7.461462566" watchObservedRunningTime="2026-01-23 17:57:34.525580952 +0000 UTC m=+7.462556822" Jan 23 17:57:39.724314 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2115550.mount: Deactivated successfully. Jan 23 17:57:42.305923 containerd[2009]: time="2026-01-23T17:57:42.305847483Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 17:57:42.308759 containerd[2009]: time="2026-01-23T17:57:42.308697603Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710" Jan 23 17:57:42.310667 containerd[2009]: time="2026-01-23T17:57:42.310553499Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 17:57:42.315667 containerd[2009]: time="2026-01-23T17:57:42.315570231Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 8.668740715s" Jan 23 17:57:42.316026 containerd[2009]: time="2026-01-23T17:57:42.315668271Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Jan 23 17:57:42.318183 containerd[2009]: time="2026-01-23T17:57:42.318000531Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jan 23 17:57:42.320072 containerd[2009]: time="2026-01-23T17:57:42.319994703Z" level=info msg="CreateContainer within sandbox \"e3c0d590b65170db6a16ff3dec31a348ed2e8bb416f61bd1653d109f38283caa\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 23 17:57:42.342406 containerd[2009]: time="2026-01-23T17:57:42.342238899Z" level=info msg="Container e0ef5325a100f5ad723c2000ade225bf6f92306bdd8b34b7d94d17a57081fa2e: CDI devices from CRI Config.CDIDevices: []" Jan 23 17:57:42.347146 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount257703958.mount: Deactivated successfully. Jan 23 17:57:42.359384 containerd[2009]: time="2026-01-23T17:57:42.359321451Z" level=info msg="CreateContainer within sandbox \"e3c0d590b65170db6a16ff3dec31a348ed2e8bb416f61bd1653d109f38283caa\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"e0ef5325a100f5ad723c2000ade225bf6f92306bdd8b34b7d94d17a57081fa2e\"" Jan 23 17:57:42.362860 containerd[2009]: time="2026-01-23T17:57:42.361874883Z" level=info msg="StartContainer for \"e0ef5325a100f5ad723c2000ade225bf6f92306bdd8b34b7d94d17a57081fa2e\"" Jan 23 17:57:42.365086 containerd[2009]: time="2026-01-23T17:57:42.365030655Z" level=info msg="connecting to shim e0ef5325a100f5ad723c2000ade225bf6f92306bdd8b34b7d94d17a57081fa2e" address="unix:///run/containerd/s/650cbeef16dda4f65e8099478081bc53c7a391339923470603f5365f925474aa" protocol=ttrpc version=3 Jan 23 17:57:42.403911 systemd[1]: Started cri-containerd-e0ef5325a100f5ad723c2000ade225bf6f92306bdd8b34b7d94d17a57081fa2e.scope - libcontainer container e0ef5325a100f5ad723c2000ade225bf6f92306bdd8b34b7d94d17a57081fa2e. Jan 23 17:57:42.469444 containerd[2009]: time="2026-01-23T17:57:42.469290928Z" level=info msg="StartContainer for \"e0ef5325a100f5ad723c2000ade225bf6f92306bdd8b34b7d94d17a57081fa2e\" returns successfully" Jan 23 17:57:42.497856 systemd[1]: cri-containerd-e0ef5325a100f5ad723c2000ade225bf6f92306bdd8b34b7d94d17a57081fa2e.scope: Deactivated successfully. Jan 23 17:57:42.505475 containerd[2009]: time="2026-01-23T17:57:42.505359880Z" level=info msg="received container exit event container_id:\"e0ef5325a100f5ad723c2000ade225bf6f92306bdd8b34b7d94d17a57081fa2e\" id:\"e0ef5325a100f5ad723c2000ade225bf6f92306bdd8b34b7d94d17a57081fa2e\" pid:4026 exited_at:{seconds:1769191062 nanos:504195736}" Jan 23 17:57:43.338133 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e0ef5325a100f5ad723c2000ade225bf6f92306bdd8b34b7d94d17a57081fa2e-rootfs.mount: Deactivated successfully. 
Jan 23 17:57:43.552852 containerd[2009]: time="2026-01-23T17:57:43.552773801Z" level=info msg="CreateContainer within sandbox \"e3c0d590b65170db6a16ff3dec31a348ed2e8bb416f61bd1653d109f38283caa\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 23 17:57:43.576954 containerd[2009]: time="2026-01-23T17:57:43.576887669Z" level=info msg="Container 6393cdcdd6e80fd3acbf2fafad28e655de511deb9850e27b6f45b2d28eea50f2: CDI devices from CRI Config.CDIDevices: []" Jan 23 17:57:43.598718 containerd[2009]: time="2026-01-23T17:57:43.598512149Z" level=info msg="CreateContainer within sandbox \"e3c0d590b65170db6a16ff3dec31a348ed2e8bb416f61bd1653d109f38283caa\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"6393cdcdd6e80fd3acbf2fafad28e655de511deb9850e27b6f45b2d28eea50f2\"" Jan 23 17:57:43.600591 containerd[2009]: time="2026-01-23T17:57:43.600465833Z" level=info msg="StartContainer for \"6393cdcdd6e80fd3acbf2fafad28e655de511deb9850e27b6f45b2d28eea50f2\"" Jan 23 17:57:43.605307 containerd[2009]: time="2026-01-23T17:57:43.605232161Z" level=info msg="connecting to shim 6393cdcdd6e80fd3acbf2fafad28e655de511deb9850e27b6f45b2d28eea50f2" address="unix:///run/containerd/s/650cbeef16dda4f65e8099478081bc53c7a391339923470603f5365f925474aa" protocol=ttrpc version=3 Jan 23 17:57:43.659906 systemd[1]: Started cri-containerd-6393cdcdd6e80fd3acbf2fafad28e655de511deb9850e27b6f45b2d28eea50f2.scope - libcontainer container 6393cdcdd6e80fd3acbf2fafad28e655de511deb9850e27b6f45b2d28eea50f2. Jan 23 17:57:43.734067 containerd[2009]: time="2026-01-23T17:57:43.734002302Z" level=info msg="StartContainer for \"6393cdcdd6e80fd3acbf2fafad28e655de511deb9850e27b6f45b2d28eea50f2\" returns successfully" Jan 23 17:57:43.758255 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 23 17:57:43.758797 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 23 17:57:43.760121 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jan 23 17:57:43.764881 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 23 17:57:43.770245 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Jan 23 17:57:43.774653 systemd[1]: cri-containerd-6393cdcdd6e80fd3acbf2fafad28e655de511deb9850e27b6f45b2d28eea50f2.scope: Deactivated successfully. Jan 23 17:57:43.785693 containerd[2009]: time="2026-01-23T17:57:43.785113878Z" level=info msg="received container exit event container_id:\"6393cdcdd6e80fd3acbf2fafad28e655de511deb9850e27b6f45b2d28eea50f2\" id:\"6393cdcdd6e80fd3acbf2fafad28e655de511deb9850e27b6f45b2d28eea50f2\" pid:4072 exited_at:{seconds:1769191063 nanos:782778954}" Jan 23 17:57:43.814032 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 23 17:57:44.340424 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6393cdcdd6e80fd3acbf2fafad28e655de511deb9850e27b6f45b2d28eea50f2-rootfs.mount: Deactivated successfully. 
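
[Editor's note] The "received container exit event" messages carry a protobuf-style timestamp split into seconds and nanos. A sketch converting that pair back to wall-clock time, using the values from the pid-4072 event above; the parsing is illustrative and truncates to microsecond precision:

```python
import re
from datetime import datetime, timezone

event = ('received container exit event container_id:"6393cdcd..." '
         'pid:4072 exited_at:{seconds:1769191063 nanos:782778954}')
m = re.search(r'exited_at:\{seconds:(\d+) nanos:(\d+)\}', event)
secs, nanos = int(m.group(1)), int(m.group(2))
ts = datetime.fromtimestamp(secs + nanos / 1e9, tz=timezone.utc)
print(ts.isoformat())
# -> 2026-01-23T17:57:43.782779+00:00, consistent with the journal stamps
```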
Jan 23 17:57:44.564267 containerd[2009]: time="2026-01-23T17:57:44.564205698Z" level=info msg="CreateContainer within sandbox \"e3c0d590b65170db6a16ff3dec31a348ed2e8bb416f61bd1653d109f38283caa\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jan 23 17:57:44.593245 containerd[2009]: time="2026-01-23T17:57:44.589505634Z" level=info msg="Container fb9e517b99797d0af745226044444a677b3bd9739fc9029718d16f76fd6b7f9c: CDI devices from CRI Config.CDIDevices: []" Jan 23 17:57:44.598040 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3789545077.mount: Deactivated successfully. Jan 23 17:57:44.619006 containerd[2009]: time="2026-01-23T17:57:44.618942114Z" level=info msg="CreateContainer within sandbox \"e3c0d590b65170db6a16ff3dec31a348ed2e8bb416f61bd1653d109f38283caa\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"fb9e517b99797d0af745226044444a677b3bd9739fc9029718d16f76fd6b7f9c\"" Jan 23 17:57:44.620530 containerd[2009]: time="2026-01-23T17:57:44.620411610Z" level=info msg="StartContainer for \"fb9e517b99797d0af745226044444a677b3bd9739fc9029718d16f76fd6b7f9c\"" Jan 23 17:57:44.632287 containerd[2009]: time="2026-01-23T17:57:44.632205678Z" level=info msg="connecting to shim fb9e517b99797d0af745226044444a677b3bd9739fc9029718d16f76fd6b7f9c" address="unix:///run/containerd/s/650cbeef16dda4f65e8099478081bc53c7a391339923470603f5365f925474aa" protocol=ttrpc version=3 Jan 23 17:57:44.683021 systemd[1]: Started cri-containerd-fb9e517b99797d0af745226044444a677b3bd9739fc9029718d16f76fd6b7f9c.scope - libcontainer container fb9e517b99797d0af745226044444a677b3bd9739fc9029718d16f76fd6b7f9c. Jan 23 17:57:44.745653 containerd[2009]: time="2026-01-23T17:57:44.744525271Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 17:57:44.748298 containerd[2009]: time="2026-01-23T17:57:44.748248847Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306" Jan 23 17:57:44.750399 containerd[2009]: time="2026-01-23T17:57:44.750351331Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 17:57:44.754174 containerd[2009]: time="2026-01-23T17:57:44.754109011Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 2.436048456s" Jan 23 17:57:44.754365 containerd[2009]: time="2026-01-23T17:57:44.754336015Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Jan 23 17:57:44.759050 containerd[2009]: time="2026-01-23T17:57:44.758990155Z" level=info msg="CreateContainer within sandbox \"dcad3894f8eecbb874ca9d176159429905848bcfae41684df9317f02c1489680\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jan 23 17:57:44.784954 
containerd[2009]: time="2026-01-23T17:57:44.784336303Z" level=info msg="Container ccc00a0a80d9d8540af40c7c4ee66df8f4e4c05fd4fbd7173ef4afc28c519c14: CDI devices from CRI Config.CDIDevices: []" Jan 23 17:57:44.806695 containerd[2009]: time="2026-01-23T17:57:44.805096219Z" level=info msg="CreateContainer within sandbox \"dcad3894f8eecbb874ca9d176159429905848bcfae41684df9317f02c1489680\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"ccc00a0a80d9d8540af40c7c4ee66df8f4e4c05fd4fbd7173ef4afc28c519c14\"" Jan 23 17:57:44.812668 containerd[2009]: time="2026-01-23T17:57:44.812498827Z" level=info msg="StartContainer for \"ccc00a0a80d9d8540af40c7c4ee66df8f4e4c05fd4fbd7173ef4afc28c519c14\"" Jan 23 17:57:44.820945 containerd[2009]: time="2026-01-23T17:57:44.820881463Z" level=info msg="connecting to shim ccc00a0a80d9d8540af40c7c4ee66df8f4e4c05fd4fbd7173ef4afc28c519c14" address="unix:///run/containerd/s/e005ce83c52622954ea3c92e0b04f4f3273330923e53dd4f7b13e809bdb5003f" protocol=ttrpc version=3 Jan 23 17:57:44.849454 containerd[2009]: time="2026-01-23T17:57:44.848581795Z" level=info msg="StartContainer for \"fb9e517b99797d0af745226044444a677b3bd9739fc9029718d16f76fd6b7f9c\" returns successfully" Jan 23 17:57:44.854942 systemd[1]: cri-containerd-fb9e517b99797d0af745226044444a677b3bd9739fc9029718d16f76fd6b7f9c.scope: Deactivated successfully. Jan 23 17:57:44.863251 containerd[2009]: time="2026-01-23T17:57:44.863189456Z" level=info msg="received container exit event container_id:\"fb9e517b99797d0af745226044444a677b3bd9739fc9029718d16f76fd6b7f9c\" id:\"fb9e517b99797d0af745226044444a677b3bd9739fc9029718d16f76fd6b7f9c\" pid:4131 exited_at:{seconds:1769191064 nanos:861986564}" Jan 23 17:57:44.880013 systemd[1]: Started cri-containerd-ccc00a0a80d9d8540af40c7c4ee66df8f4e4c05fd4fbd7173ef4afc28c519c14.scope - libcontainer container ccc00a0a80d9d8540af40c7c4ee66df8f4e4c05fd4fbd7173ef4afc28c519c14. Jan 23 17:57:44.968453 containerd[2009]: time="2026-01-23T17:57:44.968391692Z" level=info msg="StartContainer for \"ccc00a0a80d9d8540af40c7c4ee66df8f4e4c05fd4fbd7173ef4afc28c519c14\" returns successfully" Jan 23 17:57:45.341518 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fb9e517b99797d0af745226044444a677b3bd9739fc9029718d16f76fd6b7f9c-rootfs.mount: Deactivated successfully. Jan 23 17:57:45.582393 containerd[2009]: time="2026-01-23T17:57:45.580964131Z" level=info msg="CreateContainer within sandbox \"e3c0d590b65170db6a16ff3dec31a348ed2e8bb416f61bd1653d109f38283caa\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jan 23 17:57:45.616290 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2935480393.mount: Deactivated successfully. 
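
[Editor's note] The \x2d sequences in unit names such as var-lib-containerd-tmpmounts-containerd\x2dmount2935480393.mount are systemd path escaping: '/' in a mount path becomes '-', so a literal '-' must be encoded as \x2d. A sketch of the reverse mapping, equivalent in spirit to `systemd-escape -u`; the helper itself is illustrative:

```python
def systemd_unescape(unit: str) -> str:
    name = unit.rsplit('.', 1)[0]        # strip the ".mount" suffix
    out, i = [], 0
    while i < len(name):
        if name.startswith('\\x', i):    # \xNN encodes a literal byte
            out.append(chr(int(name[i + 2:i + 4], 16)))
            i += 4
        elif name[i] == '-':             # remaining '-' separate path parts
            out.append('/')
            i += 1
        else:
            out.append(name[i])
            i += 1
    return '/' + ''.join(out)

print(systemd_unescape(r'var-lib-containerd-tmpmounts-containerd\x2dmount2935480393.mount'))
# -> /var/lib/containerd/tmpmounts/containerd-mount2935480393
```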
Jan 23 17:57:45.623008 containerd[2009]: time="2026-01-23T17:57:45.621780955Z" level=info msg="Container 1c794437bacb070db073f89d05b9cd1d995214f1ec5abf5b64ec77078694ef75: CDI devices from CRI Config.CDIDevices: []" Jan 23 17:57:45.643356 containerd[2009]: time="2026-01-23T17:57:45.643275139Z" level=info msg="CreateContainer within sandbox \"e3c0d590b65170db6a16ff3dec31a348ed2e8bb416f61bd1653d109f38283caa\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"1c794437bacb070db073f89d05b9cd1d995214f1ec5abf5b64ec77078694ef75\"" Jan 23 17:57:45.646641 containerd[2009]: time="2026-01-23T17:57:45.645262447Z" level=info msg="StartContainer for \"1c794437bacb070db073f89d05b9cd1d995214f1ec5abf5b64ec77078694ef75\"" Jan 23 17:57:45.650309 containerd[2009]: time="2026-01-23T17:57:45.650229331Z" level=info msg="connecting to shim 1c794437bacb070db073f89d05b9cd1d995214f1ec5abf5b64ec77078694ef75" address="unix:///run/containerd/s/650cbeef16dda4f65e8099478081bc53c7a391339923470603f5365f925474aa" protocol=ttrpc version=3 Jan 23 17:57:45.728664 systemd[1]: Started cri-containerd-1c794437bacb070db073f89d05b9cd1d995214f1ec5abf5b64ec77078694ef75.scope - libcontainer container 1c794437bacb070db073f89d05b9cd1d995214f1ec5abf5b64ec77078694ef75. Jan 23 17:57:45.875640 systemd[1]: cri-containerd-1c794437bacb070db073f89d05b9cd1d995214f1ec5abf5b64ec77078694ef75.scope: Deactivated successfully. Jan 23 17:57:45.880859 containerd[2009]: time="2026-01-23T17:57:45.879534429Z" level=info msg="received container exit event container_id:\"1c794437bacb070db073f89d05b9cd1d995214f1ec5abf5b64ec77078694ef75\" id:\"1c794437bacb070db073f89d05b9cd1d995214f1ec5abf5b64ec77078694ef75\" pid:4209 exited_at:{seconds:1769191065 nanos:879002265}" Jan 23 17:57:45.887895 containerd[2009]: time="2026-01-23T17:57:45.887790081Z" level=info msg="StartContainer for \"1c794437bacb070db073f89d05b9cd1d995214f1ec5abf5b64ec77078694ef75\" returns successfully" Jan 23 17:57:45.916864 kubelet[3615]: I0123 17:57:45.916741 3615 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-467sv" podStartSLOduration=1.998352515 podStartE2EDuration="12.916712253s" podCreationTimestamp="2026-01-23 17:57:33 +0000 UTC" firstStartedPulling="2026-01-23 17:57:33.837552633 +0000 UTC m=+6.774528503" lastFinishedPulling="2026-01-23 17:57:44.755912383 +0000 UTC m=+17.692888241" observedRunningTime="2026-01-23 17:57:45.69526916 +0000 UTC m=+18.632245042" watchObservedRunningTime="2026-01-23 17:57:45.916712253 +0000 UTC m=+18.853688231" Jan 23 17:57:46.338905 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1c794437bacb070db073f89d05b9cd1d995214f1ec5abf5b64ec77078694ef75-rootfs.mount: Deactivated successfully. 
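
[Editor's note] In the pod_startup_latency_tracker line above, podStartSLOduration (~1.998s) is podStartE2EDuration (~12.917s) minus the window spent pulling images (firstStartedPulling to lastFinishedPulling); pods whose pull fields are the Go zero time "0001-01-01 ...", like kube-proxy earlier, report SLO equal to E2E. A quick check against the cilium-operator values, with timestamp parsing (truncated to microseconds) as the only assumption:

```python
from datetime import datetime

fmt = '%Y-%m-%d %H:%M:%S.%f %z'
first = datetime.strptime('2026-01-23 17:57:33.837552633'[:26] + ' +0000', fmt)
last  = datetime.strptime('2026-01-23 17:57:44.755912383'[:26] + ' +0000', fmt)
pull = (last - first).total_seconds()
print(f'pull window: {pull:.3f}s')                    # ~10.918s
print(f'SLO ~= E2E - pull: {12.916712 - pull:.3f}s')  # ~1.998s, as logged
```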
Jan 23 17:57:46.599809 containerd[2009]: time="2026-01-23T17:57:46.599671052Z" level=info msg="CreateContainer within sandbox \"e3c0d590b65170db6a16ff3dec31a348ed2e8bb416f61bd1653d109f38283caa\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jan 23 17:57:46.634998 containerd[2009]: time="2026-01-23T17:57:46.634879160Z" level=info msg="Container adfa01ebf2b5f46696062ef14382a8f76df285c30aca6a77065229466768dcd8: CDI devices from CRI Config.CDIDevices: []" Jan 23 17:57:46.672687 containerd[2009]: time="2026-01-23T17:57:46.672270789Z" level=info msg="CreateContainer within sandbox \"e3c0d590b65170db6a16ff3dec31a348ed2e8bb416f61bd1653d109f38283caa\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"adfa01ebf2b5f46696062ef14382a8f76df285c30aca6a77065229466768dcd8\"" Jan 23 17:57:46.674394 containerd[2009]: time="2026-01-23T17:57:46.674156313Z" level=info msg="StartContainer for \"adfa01ebf2b5f46696062ef14382a8f76df285c30aca6a77065229466768dcd8\"" Jan 23 17:57:46.677500 containerd[2009]: time="2026-01-23T17:57:46.677412837Z" level=info msg="connecting to shim adfa01ebf2b5f46696062ef14382a8f76df285c30aca6a77065229466768dcd8" address="unix:///run/containerd/s/650cbeef16dda4f65e8099478081bc53c7a391339923470603f5365f925474aa" protocol=ttrpc version=3 Jan 23 17:57:46.726925 systemd[1]: Started cri-containerd-adfa01ebf2b5f46696062ef14382a8f76df285c30aca6a77065229466768dcd8.scope - libcontainer container adfa01ebf2b5f46696062ef14382a8f76df285c30aca6a77065229466768dcd8. Jan 23 17:57:46.823985 containerd[2009]: time="2026-01-23T17:57:46.823752141Z" level=info msg="StartContainer for \"adfa01ebf2b5f46696062ef14382a8f76df285c30aca6a77065229466768dcd8\" returns successfully" Jan 23 17:57:46.994853 kubelet[3615]: I0123 17:57:46.992795 3615 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jan 23 17:57:47.063036 systemd[1]: Created slice kubepods-burstable-podab029130_9db3_4ca3_b5e6_d86b36e9a12e.slice - libcontainer container kubepods-burstable-podab029130_9db3_4ca3_b5e6_d86b36e9a12e.slice. Jan 23 17:57:47.087630 systemd[1]: Created slice kubepods-burstable-poda6070544_7791_4900_b273_4d9ec5a6ecec.slice - libcontainer container kubepods-burstable-poda6070544_7791_4900_b273_4d9ec5a6ecec.slice. 
Jan 23 17:57:47.147275 kubelet[3615]: I0123 17:57:47.147038 3615 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-twwnv\" (UniqueName: \"kubernetes.io/projected/ab029130-9db3-4ca3-b5e6-d86b36e9a12e-kube-api-access-twwnv\") pod \"coredns-668d6bf9bc-zzctp\" (UID: \"ab029130-9db3-4ca3-b5e6-d86b36e9a12e\") " pod="kube-system/coredns-668d6bf9bc-zzctp" Jan 23 17:57:47.147275 kubelet[3615]: I0123 17:57:47.147113 3615 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a6070544-7791-4900-b273-4d9ec5a6ecec-config-volume\") pod \"coredns-668d6bf9bc-9wmjx\" (UID: \"a6070544-7791-4900-b273-4d9ec5a6ecec\") " pod="kube-system/coredns-668d6bf9bc-9wmjx" Jan 23 17:57:47.147275 kubelet[3615]: I0123 17:57:47.147156 3615 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ab029130-9db3-4ca3-b5e6-d86b36e9a12e-config-volume\") pod \"coredns-668d6bf9bc-zzctp\" (UID: \"ab029130-9db3-4ca3-b5e6-d86b36e9a12e\") " pod="kube-system/coredns-668d6bf9bc-zzctp" Jan 23 17:57:47.147275 kubelet[3615]: I0123 17:57:47.147201 3615 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lcf7k\" (UniqueName: \"kubernetes.io/projected/a6070544-7791-4900-b273-4d9ec5a6ecec-kube-api-access-lcf7k\") pod \"coredns-668d6bf9bc-9wmjx\" (UID: \"a6070544-7791-4900-b273-4d9ec5a6ecec\") " pod="kube-system/coredns-668d6bf9bc-9wmjx" Jan 23 17:57:47.384103 containerd[2009]: time="2026-01-23T17:57:47.383072984Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-zzctp,Uid:ab029130-9db3-4ca3-b5e6-d86b36e9a12e,Namespace:kube-system,Attempt:0,}" Jan 23 17:57:47.399839 containerd[2009]: time="2026-01-23T17:57:47.399791972Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-9wmjx,Uid:a6070544-7791-4900-b273-4d9ec5a6ecec,Namespace:kube-system,Attempt:0,}" Jan 23 17:57:50.117572 systemd-networkd[1830]: cilium_host: Link UP Jan 23 17:57:50.119410 (udev-worker)[4345]: Network interface NamePolicy= disabled on kernel command line. Jan 23 17:57:50.120353 (udev-worker)[4347]: Network interface NamePolicy= disabled on kernel command line. Jan 23 17:57:50.122259 systemd-networkd[1830]: cilium_net: Link UP Jan 23 17:57:50.122591 systemd-networkd[1830]: cilium_net: Gained carrier Jan 23 17:57:50.125007 systemd-networkd[1830]: cilium_host: Gained carrier Jan 23 17:57:50.307296 systemd-networkd[1830]: cilium_vxlan: Link UP Jan 23 17:57:50.307315 systemd-networkd[1830]: cilium_vxlan: Gained carrier Jan 23 17:57:50.864785 kernel: NET: Registered PF_ALG protocol family Jan 23 17:57:50.956371 systemd-networkd[1830]: cilium_net: Gained IPv6LL Jan 23 17:57:51.083874 systemd-networkd[1830]: cilium_host: Gained IPv6LL Jan 23 17:57:52.204397 (udev-worker)[4388]: Network interface NamePolicy= disabled on kernel command line. Jan 23 17:57:52.231562 systemd-networkd[1830]: lxc_health: Link UP Jan 23 17:57:52.242129 systemd-networkd[1830]: cilium_vxlan: Gained IPv6LL Jan 23 17:57:52.243822 systemd-networkd[1830]: lxc_health: Gained carrier Jan 23 17:57:52.525647 kernel: eth0: renamed from tmpe693b Jan 23 17:57:52.523275 (udev-worker)[4389]: Network interface NamePolicy= disabled on kernel command line. 
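
[Editor's note] The kernel "eth0: renamed from tmpe693b" line above (and "tmp413a3" just below) pairs up with the coredns sandbox IDs created later (e693b310..., 413a3326...): the CNI plugin appears to stage the container-side veth under a temporary name built from the first hex characters of the sandbox ID before renaming it to eth0 inside the pod namespace. A correlation sketch for log analysis; the prefix length is inferred from this journal, not from a documented contract:

```python
sandboxes = {
    'e693b310f27b87e32fa40559fe4ef2836c92836f8868f1530704554172843232': 'coredns-668d6bf9bc-zzctp',
    '413a33264c24e9dd27b35da51599e1bb9902ec441e09eab2096ee8e317519a84': 'coredns-668d6bf9bc-9wmjx',
}

def match_tmp_iface(tmp_name: str) -> str:
    prefix = tmp_name.removeprefix('tmp')   # "tmpe693b" -> "e693b"
    for sid, pod in sandboxes.items():
        if sid.startswith(prefix):
            return pod
    return '<unknown>'

print(match_tmp_iface('tmpe693b'))   # coredns-668d6bf9bc-zzctp
print(match_tmp_iface('tmp413a3'))   # coredns-668d6bf9bc-9wmjx
```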
Jan 23 17:57:52.531419 systemd-networkd[1830]: lxc687f4b965f15: Link UP Jan 23 17:57:52.532021 systemd-networkd[1830]: lxc7290fb55cdb5: Link UP Jan 23 17:57:52.538154 systemd-networkd[1830]: lxc687f4b965f15: Gained carrier Jan 23 17:57:52.540671 kernel: eth0: renamed from tmp413a3 Jan 23 17:57:52.553009 systemd-networkd[1830]: lxc7290fb55cdb5: Gained carrier Jan 23 17:57:53.389876 kubelet[3615]: I0123 17:57:53.389770 3615 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-clg9v" podStartSLOduration=12.714184027 podStartE2EDuration="21.389745794s" podCreationTimestamp="2026-01-23 17:57:32 +0000 UTC" firstStartedPulling="2026-01-23 17:57:33.641366408 +0000 UTC m=+6.578342266" lastFinishedPulling="2026-01-23 17:57:42.316928163 +0000 UTC m=+15.253904033" observedRunningTime="2026-01-23 17:57:47.655597257 +0000 UTC m=+20.592573139" watchObservedRunningTime="2026-01-23 17:57:53.389745794 +0000 UTC m=+26.326721664" Jan 23 17:57:53.453815 systemd-networkd[1830]: lxc_health: Gained IPv6LL Jan 23 17:57:53.964810 systemd-networkd[1830]: lxc7290fb55cdb5: Gained IPv6LL Jan 23 17:57:54.540694 systemd-networkd[1830]: lxc687f4b965f15: Gained IPv6LL Jan 23 17:57:57.207169 ntpd[2196]: Listen normally on 6 cilium_host 192.168.0.90:123 Jan 23 17:57:57.207256 ntpd[2196]: Listen normally on 7 cilium_net [fe80::5439:c6ff:fedd:846%4]:123 Jan 23 17:57:57.207302 ntpd[2196]: Listen normally on 8 cilium_host [fe80::c0b4:feff:fe1b:6ee1%5]:123 Jan 23 17:57:57.207345 ntpd[2196]: Listen normally on 9 cilium_vxlan [fe80::2c98:9cff:fe93:5072%6]:123 Jan 23 17:57:57.207388 ntpd[2196]: Listen normally on 10 lxc_health [fe80::b0cf:b3ff:fee2:37e7%8]:123 Jan 23 17:57:57.207431 ntpd[2196]: Listen normally on 11 lxc687f4b965f15 [fe80::38fb:21ff:fe6a:f30f%10]:123 Jan 23 17:57:57.207473 ntpd[2196]: Listen normally on 12 lxc7290fb55cdb5 [fe80::e8f6:dfff:fe40:15f5%12]:123 Jan 23 17:58:00.953482 containerd[2009]: time="2026-01-23T17:58:00.953335979Z" level=info msg="connecting to shim 413a33264c24e9dd27b35da51599e1bb9902ec441e09eab2096ee8e317519a84" address="unix:///run/containerd/s/6852c6700828776993c9b7ce2d50f2741357e07bc8c42eb86e399a152dccc4ee" namespace=k8s.io protocol=ttrpc version=3 Jan 23 17:58:00.963954 containerd[2009]: time="2026-01-23T17:58:00.963881064Z" level=info msg="connecting to shim e693b310f27b87e32fa40559fe4ef2836c92836f8868f1530704554172843232" address="unix:///run/containerd/s/572feb3fd6c9e3cc4503a0678ef6126db657eea2e85ea688b912ffbf547f920f" namespace=k8s.io protocol=ttrpc version=3 Jan 23 17:58:01.038932 systemd[1]: Started
cri-containerd-413a33264c24e9dd27b35da51599e1bb9902ec441e09eab2096ee8e317519a84.scope - libcontainer container 413a33264c24e9dd27b35da51599e1bb9902ec441e09eab2096ee8e317519a84. Jan 23 17:58:01.079957 systemd[1]: Started cri-containerd-e693b310f27b87e32fa40559fe4ef2836c92836f8868f1530704554172843232.scope - libcontainer container e693b310f27b87e32fa40559fe4ef2836c92836f8868f1530704554172843232. Jan 23 17:58:01.183551 containerd[2009]: time="2026-01-23T17:58:01.183480873Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-9wmjx,Uid:a6070544-7791-4900-b273-4d9ec5a6ecec,Namespace:kube-system,Attempt:0,} returns sandbox id \"413a33264c24e9dd27b35da51599e1bb9902ec441e09eab2096ee8e317519a84\"" Jan 23 17:58:01.192368 containerd[2009]: time="2026-01-23T17:58:01.192220401Z" level=info msg="CreateContainer within sandbox \"413a33264c24e9dd27b35da51599e1bb9902ec441e09eab2096ee8e317519a84\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 23 17:58:01.216168 containerd[2009]: time="2026-01-23T17:58:01.214677609Z" level=info msg="Container 3115e399e7b406dacb4ef8b064a27c220385b5ee8e9d18e3eb391c17734de8c8: CDI devices from CRI Config.CDIDevices: []" Jan 23 17:58:01.234433 containerd[2009]: time="2026-01-23T17:58:01.234371841Z" level=info msg="CreateContainer within sandbox \"413a33264c24e9dd27b35da51599e1bb9902ec441e09eab2096ee8e317519a84\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"3115e399e7b406dacb4ef8b064a27c220385b5ee8e9d18e3eb391c17734de8c8\"" Jan 23 17:58:01.240050 containerd[2009]: time="2026-01-23T17:58:01.240001893Z" level=info msg="StartContainer for \"3115e399e7b406dacb4ef8b064a27c220385b5ee8e9d18e3eb391c17734de8c8\"" Jan 23 17:58:01.248162 containerd[2009]: time="2026-01-23T17:58:01.248075289Z" level=info msg="connecting to shim 3115e399e7b406dacb4ef8b064a27c220385b5ee8e9d18e3eb391c17734de8c8" address="unix:///run/containerd/s/6852c6700828776993c9b7ce2d50f2741357e07bc8c42eb86e399a152dccc4ee" protocol=ttrpc version=3 Jan 23 17:58:01.272927 containerd[2009]: time="2026-01-23T17:58:01.272852937Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-zzctp,Uid:ab029130-9db3-4ca3-b5e6-d86b36e9a12e,Namespace:kube-system,Attempt:0,} returns sandbox id \"e693b310f27b87e32fa40559fe4ef2836c92836f8868f1530704554172843232\"" Jan 23 17:58:01.283202 containerd[2009]: time="2026-01-23T17:58:01.282176157Z" level=info msg="CreateContainer within sandbox \"e693b310f27b87e32fa40559fe4ef2836c92836f8868f1530704554172843232\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 23 17:58:01.308962 systemd[1]: Started cri-containerd-3115e399e7b406dacb4ef8b064a27c220385b5ee8e9d18e3eb391c17734de8c8.scope - libcontainer container 3115e399e7b406dacb4ef8b064a27c220385b5ee8e9d18e3eb391c17734de8c8. 
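
[Editor's note] Each "RunPodSandbox ... returns sandbox id" entry ties a pod to the sandbox ID that all later CreateContainer/StartContainer messages reference. A sketch building that mapping from raw journal text; the regex shape is an assumption based on the entries above, and journal.txt stands in for a hypothetical export of this log:

```python
import re

RETURNS = re.compile(
    r'RunPodSandbox for &PodSandboxMetadata\{Name:(?P<pod>[^,]+),Uid:(?P<uid>[^,]+),'
    r'Namespace:(?P<ns>[^,]+),[^}]*\} returns sandbox id \\"(?P<sid>[0-9a-f]+)\\"'
)

journal = open('journal.txt').read()   # hypothetical export of this log
for m in RETURNS.finditer(journal):
    print(f"{m.group('ns')}/{m.group('pod')} -> {m.group('sid')[:12]}")
# e.g. kube-system/coredns-668d6bf9bc-9wmjx -> 413a33264c24
```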
Jan 23 17:58:01.311653 containerd[2009]: time="2026-01-23T17:58:01.311349033Z" level=info msg="Container 158af80534bd2b01723ea58cc4437c4bab65b9e1a6780d2fb04bae92fb0fc3f6: CDI devices from CRI Config.CDIDevices: []" Jan 23 17:58:01.326796 containerd[2009]: time="2026-01-23T17:58:01.326718501Z" level=info msg="CreateContainer within sandbox \"e693b310f27b87e32fa40559fe4ef2836c92836f8868f1530704554172843232\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"158af80534bd2b01723ea58cc4437c4bab65b9e1a6780d2fb04bae92fb0fc3f6\"" Jan 23 17:58:01.328653 containerd[2009]: time="2026-01-23T17:58:01.328497369Z" level=info msg="StartContainer for \"158af80534bd2b01723ea58cc4437c4bab65b9e1a6780d2fb04bae92fb0fc3f6\"" Jan 23 17:58:01.330824 containerd[2009]: time="2026-01-23T17:58:01.330695949Z" level=info msg="connecting to shim 158af80534bd2b01723ea58cc4437c4bab65b9e1a6780d2fb04bae92fb0fc3f6" address="unix:///run/containerd/s/572feb3fd6c9e3cc4503a0678ef6126db657eea2e85ea688b912ffbf547f920f" protocol=ttrpc version=3 Jan 23 17:58:01.365068 systemd[1]: Started cri-containerd-158af80534bd2b01723ea58cc4437c4bab65b9e1a6780d2fb04bae92fb0fc3f6.scope - libcontainer container 158af80534bd2b01723ea58cc4437c4bab65b9e1a6780d2fb04bae92fb0fc3f6. Jan 23 17:58:01.421256 containerd[2009]: time="2026-01-23T17:58:01.420101914Z" level=info msg="StartContainer for \"3115e399e7b406dacb4ef8b064a27c220385b5ee8e9d18e3eb391c17734de8c8\" returns successfully" Jan 23 17:58:01.453657 containerd[2009]: time="2026-01-23T17:58:01.453212650Z" level=info msg="StartContainer for \"158af80534bd2b01723ea58cc4437c4bab65b9e1a6780d2fb04bae92fb0fc3f6\" returns successfully" Jan 23 17:58:01.690494 kubelet[3615]: I0123 17:58:01.690337 3615 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-zzctp" podStartSLOduration=28.690314651 podStartE2EDuration="28.690314651s" podCreationTimestamp="2026-01-23 17:57:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 17:58:01.689028215 +0000 UTC m=+34.626004097" watchObservedRunningTime="2026-01-23 17:58:01.690314651 +0000 UTC m=+34.627290557" Jan 23 17:58:13.343572 systemd[1]: Started sshd@9-172.31.17.161:22-68.220.241.50:44114.service - OpenSSH per-connection server daemon (68.220.241.50:44114). Jan 23 17:58:13.877523 sshd[4920]: Accepted publickey for core from 68.220.241.50 port 44114 ssh2: RSA SHA256:bT2W1VfOscVmSCRasYr+KxB4wnT28qHFQXmybiJGx88 Jan 23 17:58:13.879835 sshd-session[4920]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 17:58:13.888652 systemd-logind[1979]: New session 10 of user core. Jan 23 17:58:13.893911 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 23 17:58:14.376843 sshd[4923]: Connection closed by 68.220.241.50 port 44114 Jan 23 17:58:14.378940 sshd-session[4920]: pam_unix(sshd:session): session closed for user core Jan 23 17:58:14.386241 systemd[1]: sshd@9-172.31.17.161:22-68.220.241.50:44114.service: Deactivated successfully. Jan 23 17:58:14.387157 systemd-logind[1979]: Session 10 logged out. Waiting for processes to exit. Jan 23 17:58:14.390968 systemd[1]: session-10.scope: Deactivated successfully. Jan 23 17:58:14.396338 systemd-logind[1979]: Removed session 10. Jan 23 17:58:19.471041 systemd[1]: Started sshd@10-172.31.17.161:22-68.220.241.50:44130.service - OpenSSH per-connection server daemon (68.220.241.50:44130). 
Jan 23 17:58:19.984734 sshd[4936]: Accepted publickey for core from 68.220.241.50 port 44130 ssh2: RSA SHA256:bT2W1VfOscVmSCRasYr+KxB4wnT28qHFQXmybiJGx88 Jan 23 17:58:19.987009 sshd-session[4936]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 17:58:19.996687 systemd-logind[1979]: New session 11 of user core. Jan 23 17:58:20.005096 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 23 17:58:20.458371 sshd[4939]: Connection closed by 68.220.241.50 port 44130 Jan 23 17:58:20.458896 sshd-session[4936]: pam_unix(sshd:session): session closed for user core Jan 23 17:58:20.466401 systemd[1]: sshd@10-172.31.17.161:22-68.220.241.50:44130.service: Deactivated successfully. Jan 23 17:58:20.470665 systemd[1]: session-11.scope: Deactivated successfully. Jan 23 17:58:20.472963 systemd-logind[1979]: Session 11 logged out. Waiting for processes to exit. Jan 23 17:58:20.476740 systemd-logind[1979]: Removed session 11. Jan 23 17:58:25.562221 systemd[1]: Started sshd@11-172.31.17.161:22-68.220.241.50:38372.service - OpenSSH per-connection server daemon (68.220.241.50:38372). Jan 23 17:58:26.101403 sshd[4953]: Accepted publickey for core from 68.220.241.50 port 38372 ssh2: RSA SHA256:bT2W1VfOscVmSCRasYr+KxB4wnT28qHFQXmybiJGx88 Jan 23 17:58:26.103215 sshd-session[4953]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 17:58:26.111139 systemd-logind[1979]: New session 12 of user core. Jan 23 17:58:26.123842 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 23 17:58:26.578285 sshd[4956]: Connection closed by 68.220.241.50 port 38372 Jan 23 17:58:26.578162 sshd-session[4953]: pam_unix(sshd:session): session closed for user core Jan 23 17:58:26.586851 systemd-logind[1979]: Session 12 logged out. Waiting for processes to exit. Jan 23 17:58:26.587039 systemd[1]: sshd@11-172.31.17.161:22-68.220.241.50:38372.service: Deactivated successfully. Jan 23 17:58:26.592598 systemd[1]: session-12.scope: Deactivated successfully. Jan 23 17:58:26.597469 systemd-logind[1979]: Removed session 12. Jan 23 17:58:31.669557 systemd[1]: Started sshd@12-172.31.17.161:22-68.220.241.50:38376.service - OpenSSH per-connection server daemon (68.220.241.50:38376). Jan 23 17:58:32.183502 sshd[4972]: Accepted publickey for core from 68.220.241.50 port 38376 ssh2: RSA SHA256:bT2W1VfOscVmSCRasYr+KxB4wnT28qHFQXmybiJGx88 Jan 23 17:58:32.185968 sshd-session[4972]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 17:58:32.195014 systemd-logind[1979]: New session 13 of user core. Jan 23 17:58:32.210865 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 23 17:58:32.671705 sshd[4975]: Connection closed by 68.220.241.50 port 38376 Jan 23 17:58:32.672887 sshd-session[4972]: pam_unix(sshd:session): session closed for user core Jan 23 17:58:32.680178 systemd-logind[1979]: Session 13 logged out. Waiting for processes to exit. Jan 23 17:58:32.680694 systemd[1]: sshd@12-172.31.17.161:22-68.220.241.50:38376.service: Deactivated successfully. Jan 23 17:58:32.686088 systemd[1]: session-13.scope: Deactivated successfully. Jan 23 17:58:32.689585 systemd-logind[1979]: Removed session 13. Jan 23 17:58:37.768848 systemd[1]: Started sshd@13-172.31.17.161:22-68.220.241.50:44384.service - OpenSSH per-connection server daemon (68.220.241.50:44384). 
Jan 23 17:58:38.293022 sshd[4992]: Accepted publickey for core from 68.220.241.50 port 44384 ssh2: RSA SHA256:bT2W1VfOscVmSCRasYr+KxB4wnT28qHFQXmybiJGx88 Jan 23 17:58:38.296104 sshd-session[4992]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 17:58:38.303953 systemd-logind[1979]: New session 14 of user core. Jan 23 17:58:38.312875 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 23 17:58:38.760657 sshd[4995]: Connection closed by 68.220.241.50 port 44384 Jan 23 17:58:38.760438 sshd-session[4992]: pam_unix(sshd:session): session closed for user core Jan 23 17:58:38.768106 systemd[1]: sshd@13-172.31.17.161:22-68.220.241.50:44384.service: Deactivated successfully. Jan 23 17:58:38.772757 systemd[1]: session-14.scope: Deactivated successfully. Jan 23 17:58:38.776956 systemd-logind[1979]: Session 14 logged out. Waiting for processes to exit. Jan 23 17:58:38.780212 systemd-logind[1979]: Removed session 14. Jan 23 17:58:38.876150 systemd[1]: Started sshd@14-172.31.17.161:22-68.220.241.50:44392.service - OpenSSH per-connection server daemon (68.220.241.50:44392). Jan 23 17:58:39.433648 sshd[5008]: Accepted publickey for core from 68.220.241.50 port 44392 ssh2: RSA SHA256:bT2W1VfOscVmSCRasYr+KxB4wnT28qHFQXmybiJGx88 Jan 23 17:58:39.435871 sshd-session[5008]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 17:58:39.443735 systemd-logind[1979]: New session 15 of user core. Jan 23 17:58:39.453105 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 23 17:58:40.009344 sshd[5011]: Connection closed by 68.220.241.50 port 44392 Jan 23 17:58:40.010219 sshd-session[5008]: pam_unix(sshd:session): session closed for user core Jan 23 17:58:40.017314 systemd[1]: sshd@14-172.31.17.161:22-68.220.241.50:44392.service: Deactivated successfully. Jan 23 17:58:40.023248 systemd[1]: session-15.scope: Deactivated successfully. Jan 23 17:58:40.026117 systemd-logind[1979]: Session 15 logged out. Waiting for processes to exit. Jan 23 17:58:40.028720 systemd-logind[1979]: Removed session 15. Jan 23 17:58:40.102065 systemd[1]: Started sshd@15-172.31.17.161:22-68.220.241.50:44408.service - OpenSSH per-connection server daemon (68.220.241.50:44408). Jan 23 17:58:40.619883 sshd[5020]: Accepted publickey for core from 68.220.241.50 port 44408 ssh2: RSA SHA256:bT2W1VfOscVmSCRasYr+KxB4wnT28qHFQXmybiJGx88 Jan 23 17:58:40.621586 sshd-session[5020]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 17:58:40.630420 systemd-logind[1979]: New session 16 of user core. Jan 23 17:58:40.638871 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 23 17:58:41.091067 sshd[5023]: Connection closed by 68.220.241.50 port 44408 Jan 23 17:58:41.089833 sshd-session[5020]: pam_unix(sshd:session): session closed for user core Jan 23 17:58:41.097107 systemd[1]: sshd@15-172.31.17.161:22-68.220.241.50:44408.service: Deactivated successfully. Jan 23 17:58:41.101546 systemd[1]: session-16.scope: Deactivated successfully. Jan 23 17:58:41.104352 systemd-logind[1979]: Session 16 logged out. Waiting for processes to exit. Jan 23 17:58:41.107053 systemd-logind[1979]: Removed session 16. Jan 23 17:58:46.198167 systemd[1]: Started sshd@16-172.31.17.161:22-68.220.241.50:38060.service - OpenSSH per-connection server daemon (68.220.241.50:38060). 
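
The repeating pattern in the sshd entries, a Started sshd@N-...service unit, a pam_unix session open, a session-N.scope, then teardown in reverse order, is systemd socket activation spawning one sshd service instance per TCP connection while systemd-logind tracks each login session. A sketch of commands that would show the same state live on such a host (nothing beyond stock systemd assumed):

    loginctl list-sessions                 # sessions tracked by systemd-logind
    systemctl list-units 'sshd@*'          # one service instance per active SSH connection
    journalctl _COMM=sshd --since "1 hour ago"   # the raw sshd journal entries
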
Jan 23 17:58:46.749665 sshd[5034]: Accepted publickey for core from 68.220.241.50 port 38060 ssh2: RSA SHA256:bT2W1VfOscVmSCRasYr+KxB4wnT28qHFQXmybiJGx88 Jan 23 17:58:46.751755 sshd-session[5034]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 17:58:46.759699 systemd-logind[1979]: New session 17 of user core. Jan 23 17:58:46.769883 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 23 17:58:47.243678 sshd[5037]: Connection closed by 68.220.241.50 port 38060 Jan 23 17:58:47.243940 sshd-session[5034]: pam_unix(sshd:session): session closed for user core Jan 23 17:58:47.251652 systemd[1]: sshd@16-172.31.17.161:22-68.220.241.50:38060.service: Deactivated successfully. Jan 23 17:58:47.256205 systemd[1]: session-17.scope: Deactivated successfully. Jan 23 17:58:47.259248 systemd-logind[1979]: Session 17 logged out. Waiting for processes to exit. Jan 23 17:58:47.262802 systemd-logind[1979]: Removed session 17. Jan 23 17:58:52.334461 systemd[1]: Started sshd@17-172.31.17.161:22-68.220.241.50:38062.service - OpenSSH per-connection server daemon (68.220.241.50:38062). Jan 23 17:58:52.619109 update_engine[1980]: I20260123 17:58:52.616712 1980 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Jan 23 17:58:52.619109 update_engine[1980]: I20260123 17:58:52.616775 1980 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Jan 23 17:58:52.619109 update_engine[1980]: I20260123 17:58:52.617174 1980 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Jan 23 17:58:52.620221 update_engine[1980]: I20260123 17:58:52.620157 1980 omaha_request_params.cc:62] Current group set to stable Jan 23 17:58:52.620370 update_engine[1980]: I20260123 17:58:52.620321 1980 update_attempter.cc:499] Already updated boot flags. Skipping. Jan 23 17:58:52.620370 update_engine[1980]: I20260123 17:58:52.620352 1980 update_attempter.cc:643] Scheduling an action processor start. Jan 23 17:58:52.620474 update_engine[1980]: I20260123 17:58:52.620386 1980 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jan 23 17:58:52.620474 update_engine[1980]: I20260123 17:58:52.620458 1980 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Jan 23 17:58:52.620648 update_engine[1980]: I20260123 17:58:52.620576 1980 omaha_request_action.cc:271] Posting an Omaha request to disabled Jan 23 17:58:52.621551 update_engine[1980]: I20260123 17:58:52.621498 1980 omaha_request_action.cc:272] Request: Jan 23 17:58:52.621551 update_engine[1980]: Jan 23 17:58:52.621551 update_engine[1980]: Jan 23 17:58:52.621551 update_engine[1980]: Jan 23 17:58:52.621551 update_engine[1980]: Jan 23 17:58:52.621551 update_engine[1980]: Jan 23 17:58:52.621551 update_engine[1980]: Jan 23 17:58:52.621551 update_engine[1980]: Jan 23 17:58:52.621551 update_engine[1980]: Jan 23 17:58:52.622038 update_engine[1980]: I20260123 17:58:52.621542 1980 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 23 17:58:52.623005 locksmithd[2029]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Jan 23 17:58:52.624855 update_engine[1980]: I20260123 17:58:52.624791 1980 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 23 17:58:52.626007 update_engine[1980]: I20260123 17:58:52.625942 1980 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Jan 23 17:58:52.650311 update_engine[1980]: E20260123 17:58:52.650207 1980 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 23 17:58:52.650460 update_engine[1980]: I20260123 17:58:52.650352 1980 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Jan 23 17:58:52.851720 sshd[5049]: Accepted publickey for core from 68.220.241.50 port 38062 ssh2: RSA SHA256:bT2W1VfOscVmSCRasYr+KxB4wnT28qHFQXmybiJGx88 Jan 23 17:58:52.854067 sshd-session[5049]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 17:58:52.863793 systemd-logind[1979]: New session 18 of user core. Jan 23 17:58:52.872900 systemd[1]: Started session-18.scope - Session 18 of User core. Jan 23 17:58:53.322309 sshd[5052]: Connection closed by 68.220.241.50 port 38062 Jan 23 17:58:53.323968 sshd-session[5049]: pam_unix(sshd:session): session closed for user core Jan 23 17:58:53.331170 systemd[1]: sshd@17-172.31.17.161:22-68.220.241.50:38062.service: Deactivated successfully. Jan 23 17:58:53.335553 systemd[1]: session-18.scope: Deactivated successfully. Jan 23 17:58:53.338444 systemd-logind[1979]: Session 18 logged out. Waiting for processes to exit. Jan 23 17:58:53.342224 systemd-logind[1979]: Removed session 18. Jan 23 17:58:53.427801 systemd[1]: Started sshd@18-172.31.17.161:22-68.220.241.50:57862.service - OpenSSH per-connection server daemon (68.220.241.50:57862). Jan 23 17:58:53.989119 sshd[5063]: Accepted publickey for core from 68.220.241.50 port 57862 ssh2: RSA SHA256:bT2W1VfOscVmSCRasYr+KxB4wnT28qHFQXmybiJGx88 Jan 23 17:58:53.991726 sshd-session[5063]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 17:58:53.999913 systemd-logind[1979]: New session 19 of user core. Jan 23 17:58:54.008854 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 23 17:58:54.573285 sshd[5066]: Connection closed by 68.220.241.50 port 57862 Jan 23 17:58:54.573777 sshd-session[5063]: pam_unix(sshd:session): session closed for user core Jan 23 17:58:54.581249 systemd[1]: sshd@18-172.31.17.161:22-68.220.241.50:57862.service: Deactivated successfully. Jan 23 17:58:54.586580 systemd[1]: session-19.scope: Deactivated successfully. Jan 23 17:58:54.588483 systemd-logind[1979]: Session 19 logged out. Waiting for processes to exit. Jan 23 17:58:54.591628 systemd-logind[1979]: Removed session 19. Jan 23 17:58:54.663922 systemd[1]: Started sshd@19-172.31.17.161:22-68.220.241.50:57864.service - OpenSSH per-connection server daemon (68.220.241.50:57864). Jan 23 17:58:55.177171 sshd[5075]: Accepted publickey for core from 68.220.241.50 port 57864 ssh2: RSA SHA256:bT2W1VfOscVmSCRasYr+KxB4wnT28qHFQXmybiJGx88 Jan 23 17:58:55.179570 sshd-session[5075]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 17:58:55.187875 systemd-logind[1979]: New session 20 of user core. Jan 23 17:58:55.201901 systemd[1]: Started session-20.scope - Session 20 of User core. Jan 23 17:58:56.493039 sshd[5078]: Connection closed by 68.220.241.50 port 57864 Jan 23 17:58:56.493510 sshd-session[5075]: pam_unix(sshd:session): session closed for user core Jan 23 17:58:56.501048 systemd[1]: sshd@19-172.31.17.161:22-68.220.241.50:57864.service: Deactivated successfully. Jan 23 17:58:56.502858 systemd-logind[1979]: Session 20 logged out. Waiting for processes to exit. Jan 23 17:58:56.506414 systemd[1]: session-20.scope: Deactivated successfully. Jan 23 17:58:56.515355 systemd-logind[1979]: Removed session 20. 
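
The update_engine entries above (ending in "No HTTP response, retry 1") are the Flatcar updater performing its periodic Omaha check against a server deliberately set to the literal string "disabled": "Posting an Omaha request to disabled" is followed by a DNS failure ("Could not resolve host: disabled") and a retry, which is the usual way to switch updates off while leaving the daemon running. A sketch of the configuration that would produce this, assuming the stock /etc/flatcar/update.conf mechanism (consistent with "Current group set to stable" in the log):

    # /etc/flatcar/update.conf (assumed contents, not shown in the log)
    GROUP=stable
    SERVER=disabled

    update_engine_client -status   # query the updater's current state
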
Jan 23 17:58:56.599074 systemd[1]: Started sshd@20-172.31.17.161:22-68.220.241.50:57872.service - OpenSSH per-connection server daemon (68.220.241.50:57872). Jan 23 17:58:57.164027 sshd[5095]: Accepted publickey for core from 68.220.241.50 port 57872 ssh2: RSA SHA256:bT2W1VfOscVmSCRasYr+KxB4wnT28qHFQXmybiJGx88 Jan 23 17:58:57.166413 sshd-session[5095]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 17:58:57.175879 systemd-logind[1979]: New session 21 of user core. Jan 23 17:58:57.187907 systemd[1]: Started session-21.scope - Session 21 of User core. Jan 23 17:58:57.917854 sshd[5098]: Connection closed by 68.220.241.50 port 57872 Jan 23 17:58:57.918922 sshd-session[5095]: pam_unix(sshd:session): session closed for user core Jan 23 17:58:57.926790 systemd[1]: sshd@20-172.31.17.161:22-68.220.241.50:57872.service: Deactivated successfully. Jan 23 17:58:57.933207 systemd[1]: session-21.scope: Deactivated successfully. Jan 23 17:58:57.935993 systemd-logind[1979]: Session 21 logged out. Waiting for processes to exit. Jan 23 17:58:57.939567 systemd-logind[1979]: Removed session 21. Jan 23 17:58:58.003106 systemd[1]: Started sshd@21-172.31.17.161:22-68.220.241.50:57888.service - OpenSSH per-connection server daemon (68.220.241.50:57888). Jan 23 17:58:58.536797 sshd[5107]: Accepted publickey for core from 68.220.241.50 port 57888 ssh2: RSA SHA256:bT2W1VfOscVmSCRasYr+KxB4wnT28qHFQXmybiJGx88 Jan 23 17:58:58.538327 sshd-session[5107]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 17:58:58.548017 systemd-logind[1979]: New session 22 of user core. Jan 23 17:58:58.558898 systemd[1]: Started session-22.scope - Session 22 of User core. Jan 23 17:58:58.999330 sshd[5110]: Connection closed by 68.220.241.50 port 57888 Jan 23 17:58:58.998456 sshd-session[5107]: pam_unix(sshd:session): session closed for user core Jan 23 17:58:59.005936 systemd[1]: sshd@21-172.31.17.161:22-68.220.241.50:57888.service: Deactivated successfully. Jan 23 17:58:59.013224 systemd[1]: session-22.scope: Deactivated successfully. Jan 23 17:58:59.017456 systemd-logind[1979]: Session 22 logged out. Waiting for processes to exit. Jan 23 17:58:59.022423 systemd-logind[1979]: Removed session 22. Jan 23 17:59:02.614351 update_engine[1980]: I20260123 17:59:02.613631 1980 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 23 17:59:02.614351 update_engine[1980]: I20260123 17:59:02.613739 1980 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 23 17:59:02.614351 update_engine[1980]: I20260123 17:59:02.614215 1980 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jan 23 17:59:02.615556 update_engine[1980]: E20260123 17:59:02.615511 1980 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 23 17:59:02.616926 update_engine[1980]: I20260123 17:59:02.615733 1980 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Jan 23 17:59:04.106918 systemd[1]: Started sshd@22-172.31.17.161:22-68.220.241.50:55522.service - OpenSSH per-connection server daemon (68.220.241.50:55522). Jan 23 17:59:04.667688 sshd[5123]: Accepted publickey for core from 68.220.241.50 port 55522 ssh2: RSA SHA256:bT2W1VfOscVmSCRasYr+KxB4wnT28qHFQXmybiJGx88 Jan 23 17:59:04.670864 sshd-session[5123]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 17:59:04.681268 systemd-logind[1979]: New session 23 of user core. 
Jan 23 17:59:04.690864 systemd[1]: Started session-23.scope - Session 23 of User core. Jan 23 17:59:05.163628 sshd[5128]: Connection closed by 68.220.241.50 port 55522 Jan 23 17:59:05.162823 sshd-session[5123]: pam_unix(sshd:session): session closed for user core Jan 23 17:59:05.170304 systemd-logind[1979]: Session 23 logged out. Waiting for processes to exit. Jan 23 17:59:05.171772 systemd[1]: sshd@22-172.31.17.161:22-68.220.241.50:55522.service: Deactivated successfully. Jan 23 17:59:05.177283 systemd[1]: session-23.scope: Deactivated successfully. Jan 23 17:59:05.181980 systemd-logind[1979]: Removed session 23. Jan 23 17:59:10.250172 systemd[1]: Started sshd@23-172.31.17.161:22-68.220.241.50:55524.service - OpenSSH per-connection server daemon (68.220.241.50:55524). Jan 23 17:59:10.784658 sshd[5140]: Accepted publickey for core from 68.220.241.50 port 55524 ssh2: RSA SHA256:bT2W1VfOscVmSCRasYr+KxB4wnT28qHFQXmybiJGx88 Jan 23 17:59:10.786550 sshd-session[5140]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 17:59:10.795110 systemd-logind[1979]: New session 24 of user core. Jan 23 17:59:10.809858 systemd[1]: Started session-24.scope - Session 24 of User core. Jan 23 17:59:11.248632 sshd[5143]: Connection closed by 68.220.241.50 port 55524 Jan 23 17:59:11.249479 sshd-session[5140]: pam_unix(sshd:session): session closed for user core Jan 23 17:59:11.258434 systemd[1]: sshd@23-172.31.17.161:22-68.220.241.50:55524.service: Deactivated successfully. Jan 23 17:59:11.263312 systemd[1]: session-24.scope: Deactivated successfully. Jan 23 17:59:11.267835 systemd-logind[1979]: Session 24 logged out. Waiting for processes to exit. Jan 23 17:59:11.273078 systemd-logind[1979]: Removed session 24. Jan 23 17:59:12.614093 update_engine[1980]: I20260123 17:59:12.614006 1980 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 23 17:59:12.614598 update_engine[1980]: I20260123 17:59:12.614111 1980 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 23 17:59:12.614683 update_engine[1980]: I20260123 17:59:12.614656 1980 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jan 23 17:59:12.616122 update_engine[1980]: E20260123 17:59:12.616056 1980 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 23 17:59:12.616238 update_engine[1980]: I20260123 17:59:12.616181 1980 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Jan 23 17:59:16.362052 systemd[1]: Started sshd@24-172.31.17.161:22-68.220.241.50:33286.service - OpenSSH per-connection server daemon (68.220.241.50:33286). Jan 23 17:59:16.930173 sshd[5155]: Accepted publickey for core from 68.220.241.50 port 33286 ssh2: RSA SHA256:bT2W1VfOscVmSCRasYr+KxB4wnT28qHFQXmybiJGx88 Jan 23 17:59:16.932561 sshd-session[5155]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 17:59:16.940468 systemd-logind[1979]: New session 25 of user core. Jan 23 17:59:16.951856 systemd[1]: Started session-25.scope - Session 25 of User core. Jan 23 17:59:17.425356 sshd[5158]: Connection closed by 68.220.241.50 port 33286 Jan 23 17:59:17.425970 sshd-session[5155]: pam_unix(sshd:session): session closed for user core Jan 23 17:59:17.433412 systemd-logind[1979]: Session 25 logged out. Waiting for processes to exit. Jan 23 17:59:17.434982 systemd[1]: sshd@24-172.31.17.161:22-68.220.241.50:33286.service: Deactivated successfully. Jan 23 17:59:17.439821 systemd[1]: session-25.scope: Deactivated successfully. 
Jan 23 17:59:17.443919 systemd-logind[1979]: Removed session 25. Jan 23 17:59:17.512504 systemd[1]: Started sshd@25-172.31.17.161:22-68.220.241.50:33302.service - OpenSSH per-connection server daemon (68.220.241.50:33302). Jan 23 17:59:18.028664 sshd[5169]: Accepted publickey for core from 68.220.241.50 port 33302 ssh2: RSA SHA256:bT2W1VfOscVmSCRasYr+KxB4wnT28qHFQXmybiJGx88 Jan 23 17:59:18.032198 sshd-session[5169]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 17:59:18.039985 systemd-logind[1979]: New session 26 of user core. Jan 23 17:59:18.054899 systemd[1]: Started session-26.scope - Session 26 of User core. Jan 23 17:59:21.493657 kubelet[3615]: I0123 17:59:21.492965 3615 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-9wmjx" podStartSLOduration=108.492942196 podStartE2EDuration="1m48.492942196s" podCreationTimestamp="2026-01-23 17:57:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 17:58:01.768174936 +0000 UTC m=+34.705150830" watchObservedRunningTime="2026-01-23 17:59:21.492942196 +0000 UTC m=+114.429918054" Jan 23 17:59:21.538911 containerd[2009]: time="2026-01-23T17:59:21.538824748Z" level=info msg="StopContainer for \"ccc00a0a80d9d8540af40c7c4ee66df8f4e4c05fd4fbd7173ef4afc28c519c14\" with timeout 30 (s)" Jan 23 17:59:21.540895 containerd[2009]: time="2026-01-23T17:59:21.540805840Z" level=info msg="Stop container \"ccc00a0a80d9d8540af40c7c4ee66df8f4e4c05fd4fbd7173ef4afc28c519c14\" with signal terminated" Jan 23 17:59:21.565668 containerd[2009]: time="2026-01-23T17:59:21.565377652Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 23 17:59:21.569990 systemd[1]: cri-containerd-ccc00a0a80d9d8540af40c7c4ee66df8f4e4c05fd4fbd7173ef4afc28c519c14.scope: Deactivated successfully. Jan 23 17:59:21.576470 containerd[2009]: time="2026-01-23T17:59:21.576355816Z" level=info msg="received container exit event container_id:\"ccc00a0a80d9d8540af40c7c4ee66df8f4e4c05fd4fbd7173ef4afc28c519c14\" id:\"ccc00a0a80d9d8540af40c7c4ee66df8f4e4c05fd4fbd7173ef4afc28c519c14\" pid:4168 exited_at:{seconds:1769191161 nanos:575598352}" Jan 23 17:59:21.584197 containerd[2009]: time="2026-01-23T17:59:21.584148400Z" level=info msg="StopContainer for \"adfa01ebf2b5f46696062ef14382a8f76df285c30aca6a77065229466768dcd8\" with timeout 2 (s)" Jan 23 17:59:21.586110 containerd[2009]: time="2026-01-23T17:59:21.586009168Z" level=info msg="Stop container \"adfa01ebf2b5f46696062ef14382a8f76df285c30aca6a77065229466768dcd8\" with signal terminated" Jan 23 17:59:21.603244 systemd-networkd[1830]: lxc_health: Link DOWN Jan 23 17:59:21.603260 systemd-networkd[1830]: lxc_health: Lost carrier Jan 23 17:59:21.642209 systemd[1]: cri-containerd-adfa01ebf2b5f46696062ef14382a8f76df285c30aca6a77065229466768dcd8.scope: Deactivated successfully. Jan 23 17:59:21.642799 systemd[1]: cri-containerd-adfa01ebf2b5f46696062ef14382a8f76df285c30aca6a77065229466768dcd8.scope: Consumed 14.252s CPU time, 126.2M memory peak, 120K read from disk, 12.9M written to disk. 
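
The teardown above is the Cilium agent pod being stopped: removing /etc/cni/net.d/05-cilium.conf triggers containerd's "no network config found in /etc/cni/net.d" reload error, and the lxc_health carrier loss is Cilium's health-check veth disappearing. A sketch of how the resulting node state could be confirmed, using paths taken from the log:

    ls -l /etc/cni/net.d/                                  # CNI configs containerd watches
    ip link show lxc_health                                # absent once Cilium is torn down
    journalctl -u containerd --since "10 minutes ago" | grep -i cni   # the reload errors above
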
Jan 23 17:59:21.647187 containerd[2009]: time="2026-01-23T17:59:21.646924528Z" level=info msg="received container exit event container_id:\"adfa01ebf2b5f46696062ef14382a8f76df285c30aca6a77065229466768dcd8\" id:\"adfa01ebf2b5f46696062ef14382a8f76df285c30aca6a77065229466768dcd8\" pid:4248 exited_at:{seconds:1769191161 nanos:646526980}" Jan 23 17:59:21.660475 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ccc00a0a80d9d8540af40c7c4ee66df8f4e4c05fd4fbd7173ef4afc28c519c14-rootfs.mount: Deactivated successfully. Jan 23 17:59:21.686988 containerd[2009]: time="2026-01-23T17:59:21.686831968Z" level=info msg="StopContainer for \"ccc00a0a80d9d8540af40c7c4ee66df8f4e4c05fd4fbd7173ef4afc28c519c14\" returns successfully" Jan 23 17:59:21.689425 containerd[2009]: time="2026-01-23T17:59:21.688976320Z" level=info msg="StopPodSandbox for \"dcad3894f8eecbb874ca9d176159429905848bcfae41684df9317f02c1489680\"" Jan 23 17:59:21.689425 containerd[2009]: time="2026-01-23T17:59:21.689101096Z" level=info msg="Container to stop \"ccc00a0a80d9d8540af40c7c4ee66df8f4e4c05fd4fbd7173ef4afc28c519c14\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 23 17:59:21.707108 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-adfa01ebf2b5f46696062ef14382a8f76df285c30aca6a77065229466768dcd8-rootfs.mount: Deactivated successfully. Jan 23 17:59:21.711372 systemd[1]: cri-containerd-dcad3894f8eecbb874ca9d176159429905848bcfae41684df9317f02c1489680.scope: Deactivated successfully. Jan 23 17:59:21.716373 containerd[2009]: time="2026-01-23T17:59:21.716203961Z" level=info msg="received sandbox exit event container_id:\"dcad3894f8eecbb874ca9d176159429905848bcfae41684df9317f02c1489680\" id:\"dcad3894f8eecbb874ca9d176159429905848bcfae41684df9317f02c1489680\" exit_status:137 exited_at:{seconds:1769191161 nanos:711827813}" monitor_name=podsandbox Jan 23 17:59:21.729499 containerd[2009]: time="2026-01-23T17:59:21.729445337Z" level=info msg="StopContainer for \"adfa01ebf2b5f46696062ef14382a8f76df285c30aca6a77065229466768dcd8\" returns successfully" Jan 23 17:59:21.730898 containerd[2009]: time="2026-01-23T17:59:21.730390349Z" level=info msg="StopPodSandbox for \"e3c0d590b65170db6a16ff3dec31a348ed2e8bb416f61bd1653d109f38283caa\"" Jan 23 17:59:21.730898 containerd[2009]: time="2026-01-23T17:59:21.730487249Z" level=info msg="Container to stop \"e0ef5325a100f5ad723c2000ade225bf6f92306bdd8b34b7d94d17a57081fa2e\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 23 17:59:21.730898 containerd[2009]: time="2026-01-23T17:59:21.730514285Z" level=info msg="Container to stop \"6393cdcdd6e80fd3acbf2fafad28e655de511deb9850e27b6f45b2d28eea50f2\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 23 17:59:21.730898 containerd[2009]: time="2026-01-23T17:59:21.730535729Z" level=info msg="Container to stop \"fb9e517b99797d0af745226044444a677b3bd9739fc9029718d16f76fd6b7f9c\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 23 17:59:21.730898 containerd[2009]: time="2026-01-23T17:59:21.730556957Z" level=info msg="Container to stop \"1c794437bacb070db073f89d05b9cd1d995214f1ec5abf5b64ec77078694ef75\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 23 17:59:21.730898 containerd[2009]: time="2026-01-23T17:59:21.730585145Z" level=info msg="Container to stop \"adfa01ebf2b5f46696062ef14382a8f76df285c30aca6a77065229466768dcd8\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 23 17:59:21.751015 systemd[1]: cri-containerd-e3c0d590b65170db6a16ff3dec31a348ed2e8bb416f61bd1653d109f38283caa.scope: Deactivated successfully. Jan 23 17:59:21.755581 containerd[2009]: time="2026-01-23T17:59:21.755488253Z" level=info msg="received sandbox exit event container_id:\"e3c0d590b65170db6a16ff3dec31a348ed2e8bb416f61bd1653d109f38283caa\" id:\"e3c0d590b65170db6a16ff3dec31a348ed2e8bb416f61bd1653d109f38283caa\" exit_status:137 exited_at:{seconds:1769191161 nanos:755021057}" monitor_name=podsandbox Jan 23 17:59:21.780541 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-dcad3894f8eecbb874ca9d176159429905848bcfae41684df9317f02c1489680-rootfs.mount: Deactivated successfully. Jan 23 17:59:21.792906 containerd[2009]: time="2026-01-23T17:59:21.792721925Z" level=info msg="shim disconnected" id=dcad3894f8eecbb874ca9d176159429905848bcfae41684df9317f02c1489680 namespace=k8s.io Jan 23 17:59:21.792906 containerd[2009]: time="2026-01-23T17:59:21.792774317Z" level=warning msg="cleaning up after shim disconnected" id=dcad3894f8eecbb874ca9d176159429905848bcfae41684df9317f02c1489680 namespace=k8s.io Jan 23 17:59:21.792906 containerd[2009]: time="2026-01-23T17:59:21.792822257Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 23 17:59:21.822680 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e3c0d590b65170db6a16ff3dec31a348ed2e8bb416f61bd1653d109f38283caa-rootfs.mount: Deactivated successfully. Jan 23 17:59:21.831509 containerd[2009]: time="2026-01-23T17:59:21.830845937Z" level=info msg="TearDown network for sandbox \"dcad3894f8eecbb874ca9d176159429905848bcfae41684df9317f02c1489680\" successfully" Jan 23 17:59:21.831509 containerd[2009]: time="2026-01-23T17:59:21.830895041Z" level=info msg="StopPodSandbox for \"dcad3894f8eecbb874ca9d176159429905848bcfae41684df9317f02c1489680\" returns successfully" Jan 23 17:59:21.834683 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-dcad3894f8eecbb874ca9d176159429905848bcfae41684df9317f02c1489680-shm.mount: Deactivated successfully.
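
The "shim disconnected" / "cleaning up dead shim" sequence and the run-containerd-...-rootfs.mount deactivations above are the runtime-v2 shim of each stopped container exiting and its task mounts being cleaned up. The equivalent live state sits in containerd's k8s.io namespace (the namespace the CRI plugin uses by default) and can be listed with ctr; a sketch:

    ls /run/containerd/io.containerd.runtime.v2.task/k8s.io/   # per-task directories
    ctr -n k8s.io tasks ls                                     # running tasks
    ctr -n k8s.io containers ls                                # container records
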
Jan 23 17:59:21.837649 containerd[2009]: time="2026-01-23T17:59:21.837403193Z" level=info msg="received sandbox container exit event sandbox_id:\"dcad3894f8eecbb874ca9d176159429905848bcfae41684df9317f02c1489680\" exit_status:137 exited_at:{seconds:1769191161 nanos:711827813}" monitor_name=criService Jan 23 17:59:21.838136 containerd[2009]: time="2026-01-23T17:59:21.837776741Z" level=info msg="shim disconnected" id=e3c0d590b65170db6a16ff3dec31a348ed2e8bb416f61bd1653d109f38283caa namespace=k8s.io Jan 23 17:59:21.838331 containerd[2009]: time="2026-01-23T17:59:21.838133477Z" level=warning msg="cleaning up after shim disconnected" id=e3c0d590b65170db6a16ff3dec31a348ed2e8bb416f61bd1653d109f38283caa namespace=k8s.io Jan 23 17:59:21.838331 containerd[2009]: time="2026-01-23T17:59:21.838322357Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 23 17:59:21.874945 containerd[2009]: time="2026-01-23T17:59:21.874857065Z" level=info msg="received sandbox container exit event sandbox_id:\"e3c0d590b65170db6a16ff3dec31a348ed2e8bb416f61bd1653d109f38283caa\" exit_status:137 exited_at:{seconds:1769191161 nanos:755021057}" monitor_name=criService Jan 23 17:59:21.875812 containerd[2009]: time="2026-01-23T17:59:21.875647241Z" level=info msg="TearDown network for sandbox \"e3c0d590b65170db6a16ff3dec31a348ed2e8bb416f61bd1653d109f38283caa\" successfully" Jan 23 17:59:21.875812 containerd[2009]: time="2026-01-23T17:59:21.875687333Z" level=info msg="StopPodSandbox for \"e3c0d590b65170db6a16ff3dec31a348ed2e8bb416f61bd1653d109f38283caa\" returns successfully" Jan 23 17:59:21.905279 kubelet[3615]: I0123 17:59:21.905060 3615 scope.go:117] "RemoveContainer" containerID="ccc00a0a80d9d8540af40c7c4ee66df8f4e4c05fd4fbd7173ef4afc28c519c14" Jan 23 17:59:21.914002 containerd[2009]: time="2026-01-23T17:59:21.913891158Z" level=info msg="RemoveContainer for \"ccc00a0a80d9d8540af40c7c4ee66df8f4e4c05fd4fbd7173ef4afc28c519c14\"" Jan 23 17:59:21.926868 containerd[2009]: time="2026-01-23T17:59:21.926659458Z" level=info msg="RemoveContainer for \"ccc00a0a80d9d8540af40c7c4ee66df8f4e4c05fd4fbd7173ef4afc28c519c14\" returns successfully" Jan 23 17:59:21.928704 kubelet[3615]: I0123 17:59:21.928652 3615 scope.go:117] "RemoveContainer" containerID="ccc00a0a80d9d8540af40c7c4ee66df8f4e4c05fd4fbd7173ef4afc28c519c14" Jan 23 17:59:21.929395 containerd[2009]: time="2026-01-23T17:59:21.929319186Z" level=error msg="ContainerStatus for \"ccc00a0a80d9d8540af40c7c4ee66df8f4e4c05fd4fbd7173ef4afc28c519c14\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ccc00a0a80d9d8540af40c7c4ee66df8f4e4c05fd4fbd7173ef4afc28c519c14\": not found" Jan 23 17:59:21.929887 kubelet[3615]: E0123 17:59:21.929787 3615 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ccc00a0a80d9d8540af40c7c4ee66df8f4e4c05fd4fbd7173ef4afc28c519c14\": not found" containerID="ccc00a0a80d9d8540af40c7c4ee66df8f4e4c05fd4fbd7173ef4afc28c519c14" Jan 23 17:59:21.929977 kubelet[3615]: I0123 17:59:21.929836 3615 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ccc00a0a80d9d8540af40c7c4ee66df8f4e4c05fd4fbd7173ef4afc28c519c14"} err="failed to get container status \"ccc00a0a80d9d8540af40c7c4ee66df8f4e4c05fd4fbd7173ef4afc28c519c14\": rpc error: code = NotFound desc = an error occurred when try to find container \"ccc00a0a80d9d8540af40c7c4ee66df8f4e4c05fd4fbd7173ef4afc28c519c14\": not found"
Jan 23 17:59:21.929977 kubelet[3615]: I0123 17:59:21.929958 3615 scope.go:117] "RemoveContainer" containerID="adfa01ebf2b5f46696062ef14382a8f76df285c30aca6a77065229466768dcd8" Jan 23 17:59:21.937740 containerd[2009]: time="2026-01-23T17:59:21.937655394Z" level=info msg="RemoveContainer for \"adfa01ebf2b5f46696062ef14382a8f76df285c30aca6a77065229466768dcd8\"" Jan 23 17:59:21.953435 containerd[2009]: time="2026-01-23T17:59:21.953222526Z" level=info msg="RemoveContainer for \"adfa01ebf2b5f46696062ef14382a8f76df285c30aca6a77065229466768dcd8\" returns successfully" Jan 23 17:59:21.955102 kubelet[3615]: I0123 17:59:21.955039 3615 scope.go:117] "RemoveContainer" containerID="1c794437bacb070db073f89d05b9cd1d995214f1ec5abf5b64ec77078694ef75" Jan 23 17:59:21.961814 containerd[2009]: time="2026-01-23T17:59:21.961724622Z" level=info msg="RemoveContainer for \"1c794437bacb070db073f89d05b9cd1d995214f1ec5abf5b64ec77078694ef75\"" Jan 23 17:59:21.970449 containerd[2009]: time="2026-01-23T17:59:21.970336026Z" level=info msg="RemoveContainer for \"1c794437bacb070db073f89d05b9cd1d995214f1ec5abf5b64ec77078694ef75\" returns successfully" Jan 23 17:59:21.970948 kubelet[3615]: I0123 17:59:21.970909 3615 scope.go:117] "RemoveContainer" containerID="fb9e517b99797d0af745226044444a677b3bd9739fc9029718d16f76fd6b7f9c" Jan 23 17:59:21.975991 containerd[2009]: time="2026-01-23T17:59:21.975886386Z" level=info msg="RemoveContainer for \"fb9e517b99797d0af745226044444a677b3bd9739fc9029718d16f76fd6b7f9c\"" Jan 23 17:59:21.977228 kubelet[3615]: I0123 17:59:21.977176 3615 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/dc9ec746-c5e8-4a11-9a02-9d7456ede611-cilium-cgroup\") pod \"dc9ec746-c5e8-4a11-9a02-9d7456ede611\" (UID: \"dc9ec746-c5e8-4a11-9a02-9d7456ede611\") " Jan 23 17:59:21.977352 kubelet[3615]: I0123 17:59:21.977251 3615 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/dc9ec746-c5e8-4a11-9a02-9d7456ede611-clustermesh-secrets\") pod \"dc9ec746-c5e8-4a11-9a02-9d7456ede611\" (UID: \"dc9ec746-c5e8-4a11-9a02-9d7456ede611\") " Jan 23 17:59:21.977352 kubelet[3615]: I0123 17:59:21.977294 3615 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/dc9ec746-c5e8-4a11-9a02-9d7456ede611-host-proc-sys-net\") pod \"dc9ec746-c5e8-4a11-9a02-9d7456ede611\" (UID: \"dc9ec746-c5e8-4a11-9a02-9d7456ede611\") " Jan 23 17:59:21.977352 kubelet[3615]: I0123 17:59:21.977327 3615 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/dc9ec746-c5e8-4a11-9a02-9d7456ede611-host-proc-sys-kernel\") pod \"dc9ec746-c5e8-4a11-9a02-9d7456ede611\" (UID: \"dc9ec746-c5e8-4a11-9a02-9d7456ede611\") " Jan 23 17:59:21.977505 kubelet[3615]: I0123 17:59:21.977361 3615 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/dc9ec746-c5e8-4a11-9a02-9d7456ede611-hostproc\") pod \"dc9ec746-c5e8-4a11-9a02-9d7456ede611\" (UID: \"dc9ec746-c5e8-4a11-9a02-9d7456ede611\") " Jan 23 17:59:21.977505 kubelet[3615]: I0123 17:59:21.977391 3615 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/dc9ec746-c5e8-4a11-9a02-9d7456ede611-etc-cni-netd\") pod \"dc9ec746-c5e8-4a11-9a02-9d7456ede611\" (UID: \"dc9ec746-c5e8-4a11-9a02-9d7456ede611\") "
Jan 23 17:59:21.977505 kubelet[3615]: I0123 17:59:21.977426 3615 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b384ef83-9e1d-4367-8ae2-52bd56f6de81-cilium-config-path\") pod \"b384ef83-9e1d-4367-8ae2-52bd56f6de81\" (UID: \"b384ef83-9e1d-4367-8ae2-52bd56f6de81\") " Jan 23 17:59:21.977505 kubelet[3615]: I0123 17:59:21.977463 3615 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/dc9ec746-c5e8-4a11-9a02-9d7456ede611-cilium-config-path\") pod \"dc9ec746-c5e8-4a11-9a02-9d7456ede611\" (UID: \"dc9ec746-c5e8-4a11-9a02-9d7456ede611\") " Jan 23 17:59:21.977505 kubelet[3615]: I0123 17:59:21.977494 3615 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/dc9ec746-c5e8-4a11-9a02-9d7456ede611-bpf-maps\") pod \"dc9ec746-c5e8-4a11-9a02-9d7456ede611\" (UID: \"dc9ec746-c5e8-4a11-9a02-9d7456ede611\") " Jan 23 17:59:21.978117 kubelet[3615]: I0123 17:59:21.977524 3615 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/dc9ec746-c5e8-4a11-9a02-9d7456ede611-cilium-run\") pod \"dc9ec746-c5e8-4a11-9a02-9d7456ede611\" (UID: \"dc9ec746-c5e8-4a11-9a02-9d7456ede611\") " Jan 23 17:59:21.978117 kubelet[3615]: I0123 17:59:21.977564 3615 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4r84h\" (UniqueName: \"kubernetes.io/projected/dc9ec746-c5e8-4a11-9a02-9d7456ede611-kube-api-access-4r84h\") pod \"dc9ec746-c5e8-4a11-9a02-9d7456ede611\" (UID: \"dc9ec746-c5e8-4a11-9a02-9d7456ede611\") " Jan 23 17:59:21.978117 kubelet[3615]: I0123 17:59:21.977598 3615 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/dc9ec746-c5e8-4a11-9a02-9d7456ede611-lib-modules\") pod \"dc9ec746-c5e8-4a11-9a02-9d7456ede611\" (UID: \"dc9ec746-c5e8-4a11-9a02-9d7456ede611\") " Jan 23 17:59:21.978117 kubelet[3615]: I0123 17:59:21.977668 3615 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/dc9ec746-c5e8-4a11-9a02-9d7456ede611-hubble-tls\") pod \"dc9ec746-c5e8-4a11-9a02-9d7456ede611\" (UID: \"dc9ec746-c5e8-4a11-9a02-9d7456ede611\") " Jan 23 17:59:21.978117 kubelet[3615]: I0123 17:59:21.977703 3615 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/dc9ec746-c5e8-4a11-9a02-9d7456ede611-cni-path\") pod \"dc9ec746-c5e8-4a11-9a02-9d7456ede611\" (UID: \"dc9ec746-c5e8-4a11-9a02-9d7456ede611\") " Jan 23 17:59:21.978117 kubelet[3615]: I0123 17:59:21.977740 3615 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-strwt\" (UniqueName: \"kubernetes.io/projected/b384ef83-9e1d-4367-8ae2-52bd56f6de81-kube-api-access-strwt\") pod \"b384ef83-9e1d-4367-8ae2-52bd56f6de81\" (UID: \"b384ef83-9e1d-4367-8ae2-52bd56f6de81\") " Jan 23 17:59:21.978424 kubelet[3615]: I0123 17:59:21.977776 3615 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/dc9ec746-c5e8-4a11-9a02-9d7456ede611-xtables-lock\") pod \"dc9ec746-c5e8-4a11-9a02-9d7456ede611\" (UID: \"dc9ec746-c5e8-4a11-9a02-9d7456ede611\") "
Jan 23 17:59:21.978424 kubelet[3615]: I0123 17:59:21.977883 3615 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dc9ec746-c5e8-4a11-9a02-9d7456ede611-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "dc9ec746-c5e8-4a11-9a02-9d7456ede611" (UID: "dc9ec746-c5e8-4a11-9a02-9d7456ede611"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 23 17:59:21.978424 kubelet[3615]: I0123 17:59:21.977941 3615 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dc9ec746-c5e8-4a11-9a02-9d7456ede611-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "dc9ec746-c5e8-4a11-9a02-9d7456ede611" (UID: "dc9ec746-c5e8-4a11-9a02-9d7456ede611"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 23 17:59:21.979791 kubelet[3615]: I0123 17:59:21.979706 3615 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dc9ec746-c5e8-4a11-9a02-9d7456ede611-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "dc9ec746-c5e8-4a11-9a02-9d7456ede611" (UID: "dc9ec746-c5e8-4a11-9a02-9d7456ede611"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 23 17:59:21.980072 kubelet[3615]: I0123 17:59:21.979800 3615 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dc9ec746-c5e8-4a11-9a02-9d7456ede611-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "dc9ec746-c5e8-4a11-9a02-9d7456ede611" (UID: "dc9ec746-c5e8-4a11-9a02-9d7456ede611"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 23 17:59:21.980072 kubelet[3615]: I0123 17:59:21.979847 3615 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dc9ec746-c5e8-4a11-9a02-9d7456ede611-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "dc9ec746-c5e8-4a11-9a02-9d7456ede611" (UID: "dc9ec746-c5e8-4a11-9a02-9d7456ede611"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 23 17:59:21.980072 kubelet[3615]: I0123 17:59:21.979884 3615 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dc9ec746-c5e8-4a11-9a02-9d7456ede611-hostproc" (OuterVolumeSpecName: "hostproc") pod "dc9ec746-c5e8-4a11-9a02-9d7456ede611" (UID: "dc9ec746-c5e8-4a11-9a02-9d7456ede611"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 23 17:59:21.980072 kubelet[3615]: I0123 17:59:21.979917 3615 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dc9ec746-c5e8-4a11-9a02-9d7456ede611-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "dc9ec746-c5e8-4a11-9a02-9d7456ede611" (UID: "dc9ec746-c5e8-4a11-9a02-9d7456ede611"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 23 17:59:21.981735 kubelet[3615]: I0123 17:59:21.981672 3615 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dc9ec746-c5e8-4a11-9a02-9d7456ede611-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "dc9ec746-c5e8-4a11-9a02-9d7456ede611" (UID: "dc9ec746-c5e8-4a11-9a02-9d7456ede611"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 23 17:59:21.987572 containerd[2009]: time="2026-01-23T17:59:21.987354786Z" level=info msg="RemoveContainer for \"fb9e517b99797d0af745226044444a677b3bd9739fc9029718d16f76fd6b7f9c\" returns successfully" Jan 23 17:59:21.988792 kubelet[3615]: I0123 17:59:21.988727 3615 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dc9ec746-c5e8-4a11-9a02-9d7456ede611-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "dc9ec746-c5e8-4a11-9a02-9d7456ede611" (UID: "dc9ec746-c5e8-4a11-9a02-9d7456ede611"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 23 17:59:21.989309 kubelet[3615]: I0123 17:59:21.989276 3615 scope.go:117] "RemoveContainer" containerID="6393cdcdd6e80fd3acbf2fafad28e655de511deb9850e27b6f45b2d28eea50f2" Jan 23 17:59:21.991777 kubelet[3615]: I0123 17:59:21.991716 3615 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dc9ec746-c5e8-4a11-9a02-9d7456ede611-cni-path" (OuterVolumeSpecName: "cni-path") pod "dc9ec746-c5e8-4a11-9a02-9d7456ede611" (UID: "dc9ec746-c5e8-4a11-9a02-9d7456ede611"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 23 17:59:21.995345 containerd[2009]: time="2026-01-23T17:59:21.995268162Z" level=info msg="RemoveContainer for \"6393cdcdd6e80fd3acbf2fafad28e655de511deb9850e27b6f45b2d28eea50f2\"" Jan 23 17:59:21.997638 kubelet[3615]: I0123 17:59:21.997435 3615 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc9ec746-c5e8-4a11-9a02-9d7456ede611-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "dc9ec746-c5e8-4a11-9a02-9d7456ede611" (UID: "dc9ec746-c5e8-4a11-9a02-9d7456ede611"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 23 17:59:22.005125 kubelet[3615]: I0123 17:59:22.002939 3615 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dc9ec746-c5e8-4a11-9a02-9d7456ede611-kube-api-access-4r84h" (OuterVolumeSpecName: "kube-api-access-4r84h") pod "dc9ec746-c5e8-4a11-9a02-9d7456ede611" (UID: "dc9ec746-c5e8-4a11-9a02-9d7456ede611"). InnerVolumeSpecName "kube-api-access-4r84h". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 23 17:59:22.005125 kubelet[3615]: I0123 17:59:22.004329 3615 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dc9ec746-c5e8-4a11-9a02-9d7456ede611-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "dc9ec746-c5e8-4a11-9a02-9d7456ede611" (UID: "dc9ec746-c5e8-4a11-9a02-9d7456ede611"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 23 17:59:22.007626 kubelet[3615]: I0123 17:59:22.006264 3615 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b384ef83-9e1d-4367-8ae2-52bd56f6de81-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "b384ef83-9e1d-4367-8ae2-52bd56f6de81" (UID: "b384ef83-9e1d-4367-8ae2-52bd56f6de81"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 23 17:59:22.009247 kubelet[3615]: I0123 17:59:22.008696 3615 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dc9ec746-c5e8-4a11-9a02-9d7456ede611-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "dc9ec746-c5e8-4a11-9a02-9d7456ede611" (UID: "dc9ec746-c5e8-4a11-9a02-9d7456ede611"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 23 17:59:22.011523 containerd[2009]: time="2026-01-23T17:59:22.011351918Z" level=info msg="RemoveContainer for \"6393cdcdd6e80fd3acbf2fafad28e655de511deb9850e27b6f45b2d28eea50f2\" returns successfully" Jan 23 17:59:22.012128 kubelet[3615]: I0123 17:59:22.011941 3615 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b384ef83-9e1d-4367-8ae2-52bd56f6de81-kube-api-access-strwt" (OuterVolumeSpecName: "kube-api-access-strwt") pod "b384ef83-9e1d-4367-8ae2-52bd56f6de81" (UID: "b384ef83-9e1d-4367-8ae2-52bd56f6de81"). InnerVolumeSpecName "kube-api-access-strwt". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 23 17:59:22.013363 kubelet[3615]: I0123 17:59:22.013117 3615 scope.go:117] "RemoveContainer" containerID="e0ef5325a100f5ad723c2000ade225bf6f92306bdd8b34b7d94d17a57081fa2e" Jan 23 17:59:22.017930 containerd[2009]: time="2026-01-23T17:59:22.016889234Z" level=info msg="RemoveContainer for \"e0ef5325a100f5ad723c2000ade225bf6f92306bdd8b34b7d94d17a57081fa2e\"" Jan 23 17:59:22.030329 containerd[2009]: time="2026-01-23T17:59:22.030249062Z" level=info msg="RemoveContainer for \"e0ef5325a100f5ad723c2000ade225bf6f92306bdd8b34b7d94d17a57081fa2e\" returns successfully" Jan 23 17:59:22.033143 kubelet[3615]: I0123 17:59:22.033085 3615 scope.go:117] "RemoveContainer" containerID="adfa01ebf2b5f46696062ef14382a8f76df285c30aca6a77065229466768dcd8" Jan 23 17:59:22.039594 containerd[2009]: time="2026-01-23T17:59:22.039493466Z" level=error msg="ContainerStatus for \"adfa01ebf2b5f46696062ef14382a8f76df285c30aca6a77065229466768dcd8\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"adfa01ebf2b5f46696062ef14382a8f76df285c30aca6a77065229466768dcd8\": not found" Jan 23 17:59:22.041821 kubelet[3615]: E0123 17:59:22.041724 3615 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"adfa01ebf2b5f46696062ef14382a8f76df285c30aca6a77065229466768dcd8\": not found" containerID="adfa01ebf2b5f46696062ef14382a8f76df285c30aca6a77065229466768dcd8" Jan 23 17:59:22.041821 kubelet[3615]: I0123 17:59:22.041804 3615 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"adfa01ebf2b5f46696062ef14382a8f76df285c30aca6a77065229466768dcd8"} err="failed to get container status \"adfa01ebf2b5f46696062ef14382a8f76df285c30aca6a77065229466768dcd8\": rpc error: code = NotFound desc = an error occurred when try to find container \"adfa01ebf2b5f46696062ef14382a8f76df285c30aca6a77065229466768dcd8\": not found" Jan 23 17:59:22.042223 kubelet[3615]: I0123 17:59:22.041848 3615 scope.go:117] "RemoveContainer" containerID="1c794437bacb070db073f89d05b9cd1d995214f1ec5abf5b64ec77078694ef75" Jan 23 17:59:22.045561 containerd[2009]: time="2026-01-23T17:59:22.045244598Z" level=error msg="ContainerStatus for \"1c794437bacb070db073f89d05b9cd1d995214f1ec5abf5b64ec77078694ef75\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"1c794437bacb070db073f89d05b9cd1d995214f1ec5abf5b64ec77078694ef75\": not found"
Jan 23 17:59:22.046728 kubelet[3615]: E0123 17:59:22.046657 3615 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"1c794437bacb070db073f89d05b9cd1d995214f1ec5abf5b64ec77078694ef75\": not found" containerID="1c794437bacb070db073f89d05b9cd1d995214f1ec5abf5b64ec77078694ef75" Jan 23 17:59:22.046894 kubelet[3615]: I0123 17:59:22.046720 3615 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"1c794437bacb070db073f89d05b9cd1d995214f1ec5abf5b64ec77078694ef75"} err="failed to get container status \"1c794437bacb070db073f89d05b9cd1d995214f1ec5abf5b64ec77078694ef75\": rpc error: code = NotFound desc = an error occurred when try to find container \"1c794437bacb070db073f89d05b9cd1d995214f1ec5abf5b64ec77078694ef75\": not found" Jan 23 17:59:22.046894 kubelet[3615]: I0123 17:59:22.046776 3615 scope.go:117] "RemoveContainer" containerID="fb9e517b99797d0af745226044444a677b3bd9739fc9029718d16f76fd6b7f9c" Jan 23 17:59:22.047941 containerd[2009]: time="2026-01-23T17:59:22.047560250Z" level=error msg="ContainerStatus for \"fb9e517b99797d0af745226044444a677b3bd9739fc9029718d16f76fd6b7f9c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"fb9e517b99797d0af745226044444a677b3bd9739fc9029718d16f76fd6b7f9c\": not found" Jan 23 17:59:22.049058 kubelet[3615]: E0123 17:59:22.048971 3615 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"fb9e517b99797d0af745226044444a677b3bd9739fc9029718d16f76fd6b7f9c\": not found" containerID="fb9e517b99797d0af745226044444a677b3bd9739fc9029718d16f76fd6b7f9c" Jan 23 17:59:22.049630 kubelet[3615]: I0123 17:59:22.049527 3615 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"fb9e517b99797d0af745226044444a677b3bd9739fc9029718d16f76fd6b7f9c"} err="failed to get container status \"fb9e517b99797d0af745226044444a677b3bd9739fc9029718d16f76fd6b7f9c\": rpc error: code = NotFound desc = an error occurred when try to find container \"fb9e517b99797d0af745226044444a677b3bd9739fc9029718d16f76fd6b7f9c\": not found" Jan 23 17:59:22.050496 kubelet[3615]: I0123 17:59:22.050395 3615 scope.go:117] "RemoveContainer" containerID="6393cdcdd6e80fd3acbf2fafad28e655de511deb9850e27b6f45b2d28eea50f2" Jan 23 17:59:22.051203 containerd[2009]: time="2026-01-23T17:59:22.051135590Z" level=error msg="ContainerStatus for \"6393cdcdd6e80fd3acbf2fafad28e655de511deb9850e27b6f45b2d28eea50f2\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"6393cdcdd6e80fd3acbf2fafad28e655de511deb9850e27b6f45b2d28eea50f2\": not found" Jan 23 17:59:22.051463 kubelet[3615]: E0123 17:59:22.051416 3615 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"6393cdcdd6e80fd3acbf2fafad28e655de511deb9850e27b6f45b2d28eea50f2\": not found" containerID="6393cdcdd6e80fd3acbf2fafad28e655de511deb9850e27b6f45b2d28eea50f2" Jan 23 17:59:22.051549 kubelet[3615]: I0123 17:59:22.051470 3615 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"6393cdcdd6e80fd3acbf2fafad28e655de511deb9850e27b6f45b2d28eea50f2"} err="failed to get container status \"6393cdcdd6e80fd3acbf2fafad28e655de511deb9850e27b6f45b2d28eea50f2\": rpc error: code = NotFound desc = an error occurred when try to find container \"6393cdcdd6e80fd3acbf2fafad28e655de511deb9850e27b6f45b2d28eea50f2\": not found"
Jan 23 17:59:22.051549 kubelet[3615]: I0123 17:59:22.051506 3615 scope.go:117] "RemoveContainer" containerID="e0ef5325a100f5ad723c2000ade225bf6f92306bdd8b34b7d94d17a57081fa2e" Jan 23 17:59:22.051942 containerd[2009]: time="2026-01-23T17:59:22.051856034Z" level=error msg="ContainerStatus for \"e0ef5325a100f5ad723c2000ade225bf6f92306bdd8b34b7d94d17a57081fa2e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e0ef5325a100f5ad723c2000ade225bf6f92306bdd8b34b7d94d17a57081fa2e\": not found" Jan 23 17:59:22.052411 kubelet[3615]: E0123 17:59:22.052370 3615 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e0ef5325a100f5ad723c2000ade225bf6f92306bdd8b34b7d94d17a57081fa2e\": not found" containerID="e0ef5325a100f5ad723c2000ade225bf6f92306bdd8b34b7d94d17a57081fa2e" Jan 23 17:59:22.052498 kubelet[3615]: I0123 17:59:22.052421 3615 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e0ef5325a100f5ad723c2000ade225bf6f92306bdd8b34b7d94d17a57081fa2e"} err="failed to get container status \"e0ef5325a100f5ad723c2000ade225bf6f92306bdd8b34b7d94d17a57081fa2e\": rpc error: code = NotFound desc = an error occurred when try to find container \"e0ef5325a100f5ad723c2000ade225bf6f92306bdd8b34b7d94d17a57081fa2e\": not found" Jan 23 17:59:22.078566 kubelet[3615]: I0123 17:59:22.078505 3615 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-4r84h\" (UniqueName: \"kubernetes.io/projected/dc9ec746-c5e8-4a11-9a02-9d7456ede611-kube-api-access-4r84h\") on node \"ip-172-31-17-161\" DevicePath \"\"" Jan 23 17:59:22.078566 kubelet[3615]: I0123 17:59:22.078561 3615 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/dc9ec746-c5e8-4a11-9a02-9d7456ede611-lib-modules\") on node \"ip-172-31-17-161\" DevicePath \"\"" Jan 23 17:59:22.078803 kubelet[3615]: I0123 17:59:22.078585 3615 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/dc9ec746-c5e8-4a11-9a02-9d7456ede611-hubble-tls\") on node \"ip-172-31-17-161\" DevicePath \"\"" Jan 23 17:59:22.078803 kubelet[3615]: I0123 17:59:22.078632 3615 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/dc9ec746-c5e8-4a11-9a02-9d7456ede611-cni-path\") on node \"ip-172-31-17-161\" DevicePath \"\"" Jan 23 17:59:22.078803 kubelet[3615]: I0123 17:59:22.078657 3615 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-strwt\" (UniqueName: \"kubernetes.io/projected/b384ef83-9e1d-4367-8ae2-52bd56f6de81-kube-api-access-strwt\") on node \"ip-172-31-17-161\" DevicePath \"\"" Jan 23 17:59:22.078803 kubelet[3615]: I0123 17:59:22.078685 3615 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/dc9ec746-c5e8-4a11-9a02-9d7456ede611-xtables-lock\") on node \"ip-172-31-17-161\" DevicePath \"\"" Jan 23 17:59:22.078803 kubelet[3615]: I0123 17:59:22.078706 3615 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/dc9ec746-c5e8-4a11-9a02-9d7456ede611-cilium-cgroup\") on node \"ip-172-31-17-161\" DevicePath \"\""
Jan 23 17:59:22.078803 kubelet[3615]: I0123 17:59:22.078726 3615 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/dc9ec746-c5e8-4a11-9a02-9d7456ede611-clustermesh-secrets\") on node \"ip-172-31-17-161\" DevicePath \"\"" Jan 23 17:59:22.078803 kubelet[3615]: I0123 17:59:22.078745 3615 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/dc9ec746-c5e8-4a11-9a02-9d7456ede611-host-proc-sys-net\") on node \"ip-172-31-17-161\" DevicePath \"\"" Jan 23 17:59:22.078803 kubelet[3615]: I0123 17:59:22.078766 3615 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/dc9ec746-c5e8-4a11-9a02-9d7456ede611-host-proc-sys-kernel\") on node \"ip-172-31-17-161\" DevicePath \"\"" Jan 23 17:59:22.079162 kubelet[3615]: I0123 17:59:22.078786 3615 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/dc9ec746-c5e8-4a11-9a02-9d7456ede611-hostproc\") on node \"ip-172-31-17-161\" DevicePath \"\"" Jan 23 17:59:22.079162 kubelet[3615]: I0123 17:59:22.078805 3615 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/dc9ec746-c5e8-4a11-9a02-9d7456ede611-etc-cni-netd\") on node \"ip-172-31-17-161\" DevicePath \"\"" Jan 23 17:59:22.079162 kubelet[3615]: I0123 17:59:22.078824 3615 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b384ef83-9e1d-4367-8ae2-52bd56f6de81-cilium-config-path\") on node \"ip-172-31-17-161\" DevicePath \"\"" Jan 23 17:59:22.079162 kubelet[3615]: I0123 17:59:22.078844 3615 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/dc9ec746-c5e8-4a11-9a02-9d7456ede611-cilium-config-path\") on node \"ip-172-31-17-161\" DevicePath \"\"" Jan 23 17:59:22.079162 kubelet[3615]: I0123 17:59:22.078864 3615 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/dc9ec746-c5e8-4a11-9a02-9d7456ede611-bpf-maps\") on node \"ip-172-31-17-161\" DevicePath \"\"" Jan 23 17:59:22.079162 kubelet[3615]: I0123 17:59:22.078885 3615 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/dc9ec746-c5e8-4a11-9a02-9d7456ede611-cilium-run\") on node \"ip-172-31-17-161\" DevicePath \"\"" Jan 23 17:59:22.216309 systemd[1]: Removed slice kubepods-besteffort-podb384ef83_9e1d_4367_8ae2_52bd56f6de81.slice - libcontainer container kubepods-besteffort-podb384ef83_9e1d_4367_8ae2_52bd56f6de81.slice. Jan 23 17:59:22.232380 systemd[1]: Removed slice kubepods-burstable-poddc9ec746_c5e8_4a11_9a02_9d7456ede611.slice - libcontainer container kubepods-burstable-poddc9ec746_c5e8_4a11_9a02_9d7456ede611.slice. Jan 23 17:59:22.232632 systemd[1]: kubepods-burstable-poddc9ec746_c5e8_4a11_9a02_9d7456ede611.slice: Consumed 14.458s CPU time, 126.6M memory peak, 120K read from disk, 12.9M written to disk.
Jan 23 17:59:22.620684 kubelet[3615]: E0123 17:59:22.612552 3615 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jan 23 17:59:22.621218 update_engine[1980]: I20260123 17:59:22.619787 1980 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Jan 23 17:59:22.621218 update_engine[1980]: I20260123 17:59:22.619889 1980 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Jan 23 17:59:22.621218 update_engine[1980]: I20260123 17:59:22.620432 1980 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Jan 23 17:59:22.628722 update_engine[1980]: E20260123 17:59:22.627921 1980 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Jan 23 17:59:22.628722 update_engine[1980]: I20260123 17:59:22.628036 1980 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Jan 23 17:59:22.628722 update_engine[1980]: I20260123 17:59:22.628053 1980 omaha_request_action.cc:617] Omaha request response:
Jan 23 17:59:22.628722 update_engine[1980]: E20260123 17:59:22.628165 1980 omaha_request_action.cc:636] Omaha request network transfer failed.
Jan 23 17:59:22.628722 update_engine[1980]: I20260123 17:59:22.628196 1980 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing.
Jan 23 17:59:22.628722 update_engine[1980]: I20260123 17:59:22.628211 1980 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Jan 23 17:59:22.628722 update_engine[1980]: I20260123 17:59:22.628224 1980 update_attempter.cc:306] Processing Done.
Jan 23 17:59:22.628722 update_engine[1980]: E20260123 17:59:22.628250 1980 update_attempter.cc:619] Update failed.
Jan 23 17:59:22.628722 update_engine[1980]: I20260123 17:59:22.628263 1980 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse
Jan 23 17:59:22.628722 update_engine[1980]: I20260123 17:59:22.628276 1980 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse)
Jan 23 17:59:22.628722 update_engine[1980]: I20260123 17:59:22.628290 1980 payload_state.cc:103] Ignoring failures until we get a valid Omaha response.
Jan 23 17:59:22.628722 update_engine[1980]: I20260123 17:59:22.628398 1980 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Jan 23 17:59:22.628722 update_engine[1980]: I20260123 17:59:22.628440 1980 omaha_request_action.cc:271] Posting an Omaha request to disabled
Jan 23 17:59:22.628722 update_engine[1980]: I20260123 17:59:22.628456 1980 omaha_request_action.cc:272] Request:
Jan 23 17:59:22.628722 update_engine[1980]:
Jan 23 17:59:22.628722 update_engine[1980]:
Jan 23 17:59:22.629505 update_engine[1980]:
Jan 23 17:59:22.629505 update_engine[1980]:
Jan 23 17:59:22.629505 update_engine[1980]:
Jan 23 17:59:22.629505 update_engine[1980]:
Jan 23 17:59:22.629505 update_engine[1980]: I20260123 17:59:22.628471 1980 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Jan 23 17:59:22.629505 update_engine[1980]: I20260123 17:59:22.628510 1980 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Jan 23 17:59:22.629505 update_engine[1980]: I20260123 17:59:22.629104 1980 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Jan 23 17:59:22.629849 locksmithd[2029]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0
Jan 23 17:59:22.630296 update_engine[1980]: E20260123 17:59:22.630163 1980 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Jan 23 17:59:22.630296 update_engine[1980]: I20260123 17:59:22.630270 1980 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Jan 23 17:59:22.630296 update_engine[1980]: I20260123 17:59:22.630288 1980 omaha_request_action.cc:617] Omaha request response:
Jan 23 17:59:22.630443 update_engine[1980]: I20260123 17:59:22.630306 1980 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Jan 23 17:59:22.630443 update_engine[1980]: I20260123 17:59:22.630318 1980 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Jan 23 17:59:22.630443 update_engine[1980]: I20260123 17:59:22.630331 1980 update_attempter.cc:306] Processing Done.
Jan 23 17:59:22.630443 update_engine[1980]: I20260123 17:59:22.630345 1980 update_attempter.cc:310] Error event sent.
Jan 23 17:59:22.630443 update_engine[1980]: I20260123 17:59:22.630364 1980 update_check_scheduler.cc:74] Next update check in 48m29s
Jan 23 17:59:22.630951 locksmithd[2029]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0
Jan 23 17:59:22.656305 systemd[1]: var-lib-kubelet-pods-b384ef83\x2d9e1d\x2d4367\x2d8ae2\x2d52bd56f6de81-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dstrwt.mount: Deactivated successfully.
Jan 23 17:59:22.656473 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e3c0d590b65170db6a16ff3dec31a348ed2e8bb416f61bd1653d109f38283caa-shm.mount: Deactivated successfully.
Jan 23 17:59:22.657139 systemd[1]: var-lib-kubelet-pods-dc9ec746\x2dc5e8\x2d4a11\x2d9a02\x2d9d7456ede611-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d4r84h.mount: Deactivated successfully.
Jan 23 17:59:22.657302 systemd[1]: var-lib-kubelet-pods-dc9ec746\x2dc5e8\x2d4a11\x2d9a02\x2d9d7456ede611-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Jan 23 17:59:22.657432 systemd[1]: var-lib-kubelet-pods-dc9ec746\x2dc5e8\x2d4a11\x2d9a02\x2d9d7456ede611-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Jan 23 17:59:23.365405 kubelet[3615]: I0123 17:59:23.365339 3615 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b384ef83-9e1d-4367-8ae2-52bd56f6de81" path="/var/lib/kubelet/pods/b384ef83-9e1d-4367-8ae2-52bd56f6de81/volumes"
Jan 23 17:59:23.366763 kubelet[3615]: I0123 17:59:23.366691 3615 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dc9ec746-c5e8-4a11-9a02-9d7456ede611" path="/var/lib/kubelet/pods/dc9ec746-c5e8-4a11-9a02-9d7456ede611/volumes"
Jan 23 17:59:23.482423 sshd[5172]: Connection closed by 68.220.241.50 port 33302
Jan 23 17:59:23.483402 sshd-session[5169]: pam_unix(sshd:session): session closed for user core
Jan 23 17:59:23.495036 systemd-logind[1979]: Session 26 logged out. Waiting for processes to exit.
Jan 23 17:59:23.495773 systemd[1]: sshd@25-172.31.17.161:22-68.220.241.50:33302.service: Deactivated successfully.
Jan 23 17:59:23.499734 systemd[1]: session-26.scope: Deactivated successfully.
Jan 23 17:59:23.500468 systemd[1]: session-26.scope: Consumed 2.525s CPU time, 24.1M memory peak.
Jan 23 17:59:23.504990 systemd-logind[1979]: Removed session 26.
Jan 23 17:59:23.576384 systemd[1]: Started sshd@26-172.31.17.161:22-68.220.241.50:55998.service - OpenSSH per-connection server daemon (68.220.241.50:55998).
Jan 23 17:59:24.092085 sshd[5323]: Accepted publickey for core from 68.220.241.50 port 55998 ssh2: RSA SHA256:bT2W1VfOscVmSCRasYr+KxB4wnT28qHFQXmybiJGx88
Jan 23 17:59:24.094926 sshd-session[5323]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 17:59:24.104723 systemd-logind[1979]: New session 27 of user core.
Jan 23 17:59:24.112888 systemd[1]: Started session-27.scope - Session 27 of User core.
Jan 23 17:59:24.207120 ntpd[2196]: Deleting 10 lxc_health, [fe80::b0cf:b3ff:fee2:37e7%8]:123, stats: received=0, sent=0, dropped=0, active_time=87 secs
Jan 23 17:59:24.207821 ntpd[2196]: 23 Jan 17:59:24 ntpd[2196]: Deleting 10 lxc_health, [fe80::b0cf:b3ff:fee2:37e7%8]:123, stats: received=0, sent=0, dropped=0, active_time=87 secs
Jan 23 17:59:26.121946 kubelet[3615]: I0123 17:59:26.121881 3615 memory_manager.go:355] "RemoveStaleState removing state" podUID="dc9ec746-c5e8-4a11-9a02-9d7456ede611" containerName="cilium-agent"
Jan 23 17:59:26.121946 kubelet[3615]: I0123 17:59:26.121933 3615 memory_manager.go:355] "RemoveStaleState removing state" podUID="b384ef83-9e1d-4367-8ae2-52bd56f6de81" containerName="cilium-operator"
Jan 23 17:59:26.138195 kubelet[3615]: I0123 17:59:26.138081 3615 status_manager.go:890] "Failed to get status for pod" podUID="6e014047-07c7-4901-9d92-7929cdd7983c" pod="kube-system/cilium-xb2rj" err="pods \"cilium-xb2rj\" is forbidden: User \"system:node:ip-172-31-17-161\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ip-172-31-17-161' and this object"
Jan 23 17:59:26.138324 kubelet[3615]: W0123 17:59:26.138287 3615 reflector.go:569] object-"kube-system"/"cilium-clustermesh": failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:ip-172-31-17-161" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-17-161' and this object
Jan 23 17:59:26.138378 kubelet[3615]: E0123 17:59:26.138332 3615 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"cilium-clustermesh\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"cilium-clustermesh\" is forbidden: User \"system:node:ip-172-31-17-161\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ip-172-31-17-161' and this object" logger="UnhandledError"
Jan 23 17:59:26.139970 kubelet[3615]: W0123 17:59:26.139908 3615 reflector.go:569] object-"kube-system"/"cilium-config": failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:ip-172-31-17-161" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-17-161' and this object
Jan 23 17:59:26.140096 kubelet[3615]: E0123 17:59:26.139983 3615 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"cilium-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"cilium-config\" is forbidden: User \"system:node:ip-172-31-17-161\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ip-172-31-17-161' and this object" logger="UnhandledError"
Jan 23 17:59:26.140096 kubelet[3615]: W0123 17:59:26.139914 3615 reflector.go:569] object-"kube-system"/"hubble-server-certs": failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:ip-172-31-17-161" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-17-161' and this object
Jan 23 17:59:26.140096 kubelet[3615]: E0123 17:59:26.140030 3615 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"hubble-server-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"hubble-server-certs\" is forbidden: User \"system:node:ip-172-31-17-161\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ip-172-31-17-161' and this object" logger="UnhandledError"
Jan 23 17:59:26.140990 kubelet[3615]: W0123 17:59:26.140853 3615 reflector.go:569] object-"kube-system"/"cilium-ipsec-keys": failed to list *v1.Secret: secrets "cilium-ipsec-keys" is forbidden: User "system:node:ip-172-31-17-161" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-17-161' and this object
Jan 23 17:59:26.140990 kubelet[3615]: E0123 17:59:26.140941 3615 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"cilium-ipsec-keys\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"cilium-ipsec-keys\" is forbidden: User \"system:node:ip-172-31-17-161\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ip-172-31-17-161' and this object" logger="UnhandledError"
Jan 23 17:59:26.146119 sshd[5326]: Connection closed by 68.220.241.50 port 55998
Jan 23 17:59:26.147730 sshd-session[5323]: pam_unix(sshd:session): session closed for user core
Jan 23 17:59:26.148478 systemd[1]: Created slice kubepods-burstable-pod6e014047_07c7_4901_9d92_7929cdd7983c.slice - libcontainer container kubepods-burstable-pod6e014047_07c7_4901_9d92_7929cdd7983c.slice.
Jan 23 17:59:26.166355 systemd[1]: sshd@26-172.31.17.161:22-68.220.241.50:55998.service: Deactivated successfully.
Jan 23 17:59:26.175072 systemd[1]: session-27.scope: Deactivated successfully.
Jan 23 17:59:26.176585 systemd[1]: session-27.scope: Consumed 1.593s CPU time, 23.9M memory peak.
Jan 23 17:59:26.178697 systemd-logind[1979]: Session 27 logged out. Waiting for processes to exit.
Jan 23 17:59:26.188696 systemd-logind[1979]: Removed session 27.
Jan 23 17:59:26.204252 kubelet[3615]: I0123 17:59:26.204180 3615 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/6e014047-07c7-4901-9d92-7929cdd7983c-bpf-maps\") pod \"cilium-xb2rj\" (UID: \"6e014047-07c7-4901-9d92-7929cdd7983c\") " pod="kube-system/cilium-xb2rj"
Jan 23 17:59:26.204252 kubelet[3615]: I0123 17:59:26.204252 3615 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6e014047-07c7-4901-9d92-7929cdd7983c-etc-cni-netd\") pod \"cilium-xb2rj\" (UID: \"6e014047-07c7-4901-9d92-7929cdd7983c\") " pod="kube-system/cilium-xb2rj"
Jan 23 17:59:26.204454 kubelet[3615]: I0123 17:59:26.204292 3615 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6e014047-07c7-4901-9d92-7929cdd7983c-cilium-config-path\") pod \"cilium-xb2rj\" (UID: \"6e014047-07c7-4901-9d92-7929cdd7983c\") " pod="kube-system/cilium-xb2rj"
Jan 23 17:59:26.204454 kubelet[3615]: I0123 17:59:26.204328 3615 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/6e014047-07c7-4901-9d92-7929cdd7983c-host-proc-sys-net\") pod \"cilium-xb2rj\" (UID: \"6e014047-07c7-4901-9d92-7929cdd7983c\") " pod="kube-system/cilium-xb2rj"
Jan 23 17:59:26.204454 kubelet[3615]: I0123 17:59:26.204362 3615 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n7d8k\" (UniqueName: \"kubernetes.io/projected/6e014047-07c7-4901-9d92-7929cdd7983c-kube-api-access-n7d8k\") pod \"cilium-xb2rj\" (UID: \"6e014047-07c7-4901-9d92-7929cdd7983c\") " pod="kube-system/cilium-xb2rj"
Jan 23 17:59:26.204454 kubelet[3615]: I0123 17:59:26.204404 3615 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6e014047-07c7-4901-9d92-7929cdd7983c-xtables-lock\") pod \"cilium-xb2rj\" (UID: \"6e014047-07c7-4901-9d92-7929cdd7983c\") " pod="kube-system/cilium-xb2rj"
Jan 23 17:59:26.204454 kubelet[3615]: I0123 17:59:26.204438 3615 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/6e014047-07c7-4901-9d92-7929cdd7983c-cilium-run\") pod \"cilium-xb2rj\" (UID: \"6e014047-07c7-4901-9d92-7929cdd7983c\") " pod="kube-system/cilium-xb2rj"
Jan 23 17:59:26.204816 kubelet[3615]: I0123 17:59:26.204479 3615 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/6e014047-07c7-4901-9d92-7929cdd7983c-hostproc\") pod \"cilium-xb2rj\" (UID: \"6e014047-07c7-4901-9d92-7929cdd7983c\") " pod="kube-system/cilium-xb2rj"
Jan 23 17:59:26.204816 kubelet[3615]: I0123 17:59:26.204515 3615 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/6e014047-07c7-4901-9d92-7929cdd7983c-cilium-ipsec-secrets\") pod \"cilium-xb2rj\" (UID: \"6e014047-07c7-4901-9d92-7929cdd7983c\") " pod="kube-system/cilium-xb2rj"
Jan 23 17:59:26.204816 kubelet[3615]: I0123 17:59:26.204567 3615 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/6e014047-07c7-4901-9d92-7929cdd7983c-hubble-tls\") pod \"cilium-xb2rj\" (UID: \"6e014047-07c7-4901-9d92-7929cdd7983c\") " pod="kube-system/cilium-xb2rj"
Jan 23 17:59:26.204816 kubelet[3615]: I0123 17:59:26.204626 3615 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/6e014047-07c7-4901-9d92-7929cdd7983c-cilium-cgroup\") pod \"cilium-xb2rj\" (UID: \"6e014047-07c7-4901-9d92-7929cdd7983c\") " pod="kube-system/cilium-xb2rj"
Jan 23 17:59:26.204816 kubelet[3615]: I0123 17:59:26.204672 3615 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/6e014047-07c7-4901-9d92-7929cdd7983c-cni-path\") pod \"cilium-xb2rj\" (UID: \"6e014047-07c7-4901-9d92-7929cdd7983c\") " pod="kube-system/cilium-xb2rj"
Jan 23 17:59:26.204816 kubelet[3615]: I0123 17:59:26.204706 3615 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/6e014047-07c7-4901-9d92-7929cdd7983c-clustermesh-secrets\") pod \"cilium-xb2rj\" (UID: \"6e014047-07c7-4901-9d92-7929cdd7983c\") " pod="kube-system/cilium-xb2rj"
Jan 23 17:59:26.205142 kubelet[3615]: I0123 17:59:26.204738 3615 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/6e014047-07c7-4901-9d92-7929cdd7983c-host-proc-sys-kernel\") pod \"cilium-xb2rj\" (UID: \"6e014047-07c7-4901-9d92-7929cdd7983c\") " pod="kube-system/cilium-xb2rj"
Jan 23 17:59:26.205142 kubelet[3615]: I0123 17:59:26.204778 3615 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6e014047-07c7-4901-9d92-7929cdd7983c-lib-modules\") pod \"cilium-xb2rj\" (UID: \"6e014047-07c7-4901-9d92-7929cdd7983c\") " pod="kube-system/cilium-xb2rj"
Jan 23 17:59:26.242088 systemd[1]: Started sshd@27-172.31.17.161:22-68.220.241.50:56014.service - OpenSSH per-connection server daemon (68.220.241.50:56014).
Jan 23 17:59:26.758860 sshd[5336]: Accepted publickey for core from 68.220.241.50 port 56014 ssh2: RSA SHA256:bT2W1VfOscVmSCRasYr+KxB4wnT28qHFQXmybiJGx88
Jan 23 17:59:26.761087 sshd-session[5336]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 17:59:26.771709 systemd-logind[1979]: New session 28 of user core.
Jan 23 17:59:26.773890 systemd[1]: Started session-28.scope - Session 28 of User core.
Jan 23 17:59:27.106093 sshd[5340]: Connection closed by 68.220.241.50 port 56014
Jan 23 17:59:27.106920 sshd-session[5336]: pam_unix(sshd:session): session closed for user core
Jan 23 17:59:27.116034 systemd[1]: sshd@27-172.31.17.161:22-68.220.241.50:56014.service: Deactivated successfully.
Jan 23 17:59:27.120491 systemd[1]: session-28.scope: Deactivated successfully.
Jan 23 17:59:27.123027 systemd-logind[1979]: Session 28 logged out. Waiting for processes to exit.
Jan 23 17:59:27.127052 systemd-logind[1979]: Removed session 28.
Jan 23 17:59:27.211839 systemd[1]: Started sshd@28-172.31.17.161:22-68.220.241.50:56018.service - OpenSSH per-connection server daemon (68.220.241.50:56018).
Jan 23 17:59:27.306284 kubelet[3615]: E0123 17:59:27.306218 3615 secret.go:189] Couldn't get secret kube-system/cilium-ipsec-keys: failed to sync secret cache: timed out waiting for the condition
Jan 23 17:59:27.307091 kubelet[3615]: E0123 17:59:27.306353 3615 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6e014047-07c7-4901-9d92-7929cdd7983c-cilium-ipsec-secrets podName:6e014047-07c7-4901-9d92-7929cdd7983c nodeName:}" failed. No retries permitted until 2026-01-23 17:59:27.806319076 +0000 UTC m=+120.743294946 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cilium-ipsec-secrets" (UniqueName: "kubernetes.io/secret/6e014047-07c7-4901-9d92-7929cdd7983c-cilium-ipsec-secrets") pod "cilium-xb2rj" (UID: "6e014047-07c7-4901-9d92-7929cdd7983c") : failed to sync secret cache: timed out waiting for the condition
Jan 23 17:59:27.404654 containerd[2009]: time="2026-01-23T17:59:27.404416653Z" level=info msg="StopPodSandbox for \"e3c0d590b65170db6a16ff3dec31a348ed2e8bb416f61bd1653d109f38283caa\""
Jan 23 17:59:27.405653 containerd[2009]: time="2026-01-23T17:59:27.405368769Z" level=info msg="TearDown network for sandbox \"e3c0d590b65170db6a16ff3dec31a348ed2e8bb416f61bd1653d109f38283caa\" successfully"
Jan 23 17:59:27.405653 containerd[2009]: time="2026-01-23T17:59:27.405413841Z" level=info msg="StopPodSandbox for \"e3c0d590b65170db6a16ff3dec31a348ed2e8bb416f61bd1653d109f38283caa\" returns successfully"
Jan 23 17:59:27.406669 containerd[2009]: time="2026-01-23T17:59:27.406585689Z" level=info msg="RemovePodSandbox for \"e3c0d590b65170db6a16ff3dec31a348ed2e8bb416f61bd1653d109f38283caa\""
Jan 23 17:59:27.406795 containerd[2009]: time="2026-01-23T17:59:27.406680141Z" level=info msg="Forcibly stopping sandbox \"e3c0d590b65170db6a16ff3dec31a348ed2e8bb416f61bd1653d109f38283caa\""
Jan 23 17:59:27.406848 containerd[2009]: time="2026-01-23T17:59:27.406820637Z" level=info msg="TearDown network for sandbox \"e3c0d590b65170db6a16ff3dec31a348ed2e8bb416f61bd1653d109f38283caa\" successfully"
Jan 23 17:59:27.409433 containerd[2009]: time="2026-01-23T17:59:27.409378077Z" level=info msg="Ensure that sandbox e3c0d590b65170db6a16ff3dec31a348ed2e8bb416f61bd1653d109f38283caa in task-service has been cleanup successfully"
Jan 23 17:59:27.416227 containerd[2009]: time="2026-01-23T17:59:27.416149665Z" level=info msg="RemovePodSandbox \"e3c0d590b65170db6a16ff3dec31a348ed2e8bb416f61bd1653d109f38283caa\" returns successfully"
Jan 23 17:59:27.416943 containerd[2009]: time="2026-01-23T17:59:27.416842185Z" level=info msg="StopPodSandbox for \"dcad3894f8eecbb874ca9d176159429905848bcfae41684df9317f02c1489680\""
Jan 23 17:59:27.417357 containerd[2009]: time="2026-01-23T17:59:27.417321969Z" level=info msg="TearDown network for sandbox \"dcad3894f8eecbb874ca9d176159429905848bcfae41684df9317f02c1489680\" successfully"
Jan 23 17:59:27.417468 containerd[2009]: time="2026-01-23T17:59:27.417442881Z" level=info msg="StopPodSandbox for \"dcad3894f8eecbb874ca9d176159429905848bcfae41684df9317f02c1489680\" returns successfully"
Jan 23 17:59:27.418111 containerd[2009]: time="2026-01-23T17:59:27.418054653Z" level=info msg="RemovePodSandbox for \"dcad3894f8eecbb874ca9d176159429905848bcfae41684df9317f02c1489680\""
Jan 23 17:59:27.418226 containerd[2009]: time="2026-01-23T17:59:27.418111509Z" level=info msg="Forcibly stopping sandbox \"dcad3894f8eecbb874ca9d176159429905848bcfae41684df9317f02c1489680\""
Jan 23 17:59:27.418276 containerd[2009]: time="2026-01-23T17:59:27.418244553Z" level=info msg="TearDown network for sandbox \"dcad3894f8eecbb874ca9d176159429905848bcfae41684df9317f02c1489680\" successfully"
Jan 23 17:59:27.420242 containerd[2009]: time="2026-01-23T17:59:27.420193653Z" level=info msg="Ensure that sandbox dcad3894f8eecbb874ca9d176159429905848bcfae41684df9317f02c1489680 in task-service has been cleanup successfully"
Jan 23 17:59:27.428507 containerd[2009]: time="2026-01-23T17:59:27.428388009Z" level=info msg="RemovePodSandbox \"dcad3894f8eecbb874ca9d176159429905848bcfae41684df9317f02c1489680\" returns successfully"
Jan 23 17:59:27.614459 kubelet[3615]: E0123 17:59:27.614417 3615 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jan 23 17:59:27.771948 sshd[5349]: Accepted publickey for core from 68.220.241.50 port 56018 ssh2: RSA SHA256:bT2W1VfOscVmSCRasYr+KxB4wnT28qHFQXmybiJGx88
Jan 23 17:59:27.774805 sshd-session[5349]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 17:59:27.782495 systemd-logind[1979]: New session 29 of user core.
Jan 23 17:59:27.790888 systemd[1]: Started session-29.scope - Session 29 of User core.
Jan 23 17:59:27.962262 containerd[2009]: time="2026-01-23T17:59:27.962206128Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-xb2rj,Uid:6e014047-07c7-4901-9d92-7929cdd7983c,Namespace:kube-system,Attempt:0,}"
Jan 23 17:59:27.996656 containerd[2009]: time="2026-01-23T17:59:27.996234552Z" level=info msg="connecting to shim 3e7fc5afd1cab5a423970125a7a992fc3450ed6ef29918662dfb8847217ae6d9" address="unix:///run/containerd/s/023f5d2d155e66f62150e77bbef0aacdc4a5fe68cd416b51b0388db6e0854b0c" namespace=k8s.io protocol=ttrpc version=3
Jan 23 17:59:28.050081 systemd[1]: Started cri-containerd-3e7fc5afd1cab5a423970125a7a992fc3450ed6ef29918662dfb8847217ae6d9.scope - libcontainer container 3e7fc5afd1cab5a423970125a7a992fc3450ed6ef29918662dfb8847217ae6d9.
Jan 23 17:59:28.127741 containerd[2009]: time="2026-01-23T17:59:28.127585136Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-xb2rj,Uid:6e014047-07c7-4901-9d92-7929cdd7983c,Namespace:kube-system,Attempt:0,} returns sandbox id \"3e7fc5afd1cab5a423970125a7a992fc3450ed6ef29918662dfb8847217ae6d9\""
Jan 23 17:59:28.136668 containerd[2009]: time="2026-01-23T17:59:28.135986229Z" level=info msg="CreateContainer within sandbox \"3e7fc5afd1cab5a423970125a7a992fc3450ed6ef29918662dfb8847217ae6d9\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Jan 23 17:59:28.163737 containerd[2009]: time="2026-01-23T17:59:28.163688985Z" level=info msg="Container 4237036557c0b57dfabb9b0f9d1408aed7ebbdf36704ead36a5dbd2e802eb290: CDI devices from CRI Config.CDIDevices: []"
Jan 23 17:59:28.192852 containerd[2009]: time="2026-01-23T17:59:28.192667257Z" level=info msg="CreateContainer within sandbox \"3e7fc5afd1cab5a423970125a7a992fc3450ed6ef29918662dfb8847217ae6d9\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"4237036557c0b57dfabb9b0f9d1408aed7ebbdf36704ead36a5dbd2e802eb290\""
Jan 23 17:59:28.196320 containerd[2009]: time="2026-01-23T17:59:28.195484893Z" level=info msg="StartContainer for \"4237036557c0b57dfabb9b0f9d1408aed7ebbdf36704ead36a5dbd2e802eb290\""
Jan 23 17:59:28.202198 containerd[2009]: time="2026-01-23T17:59:28.202131669Z" level=info msg="connecting to shim 4237036557c0b57dfabb9b0f9d1408aed7ebbdf36704ead36a5dbd2e802eb290" address="unix:///run/containerd/s/023f5d2d155e66f62150e77bbef0aacdc4a5fe68cd416b51b0388db6e0854b0c" protocol=ttrpc version=3
Jan 23 17:59:28.274474 systemd[1]: Started cri-containerd-4237036557c0b57dfabb9b0f9d1408aed7ebbdf36704ead36a5dbd2e802eb290.scope - libcontainer container 4237036557c0b57dfabb9b0f9d1408aed7ebbdf36704ead36a5dbd2e802eb290.
Jan 23 17:59:28.338116 containerd[2009]: time="2026-01-23T17:59:28.337965466Z" level=info msg="StartContainer for \"4237036557c0b57dfabb9b0f9d1408aed7ebbdf36704ead36a5dbd2e802eb290\" returns successfully"
Jan 23 17:59:28.354304 systemd[1]: cri-containerd-4237036557c0b57dfabb9b0f9d1408aed7ebbdf36704ead36a5dbd2e802eb290.scope: Deactivated successfully.
Jan 23 17:59:28.361508 containerd[2009]: time="2026-01-23T17:59:28.361345918Z" level=info msg="received container exit event container_id:\"4237036557c0b57dfabb9b0f9d1408aed7ebbdf36704ead36a5dbd2e802eb290\" id:\"4237036557c0b57dfabb9b0f9d1408aed7ebbdf36704ead36a5dbd2e802eb290\" pid:5419 exited_at:{seconds:1769191168 nanos:360835114}"
Jan 23 17:59:28.951946 containerd[2009]: time="2026-01-23T17:59:28.951691369Z" level=info msg="CreateContainer within sandbox \"3e7fc5afd1cab5a423970125a7a992fc3450ed6ef29918662dfb8847217ae6d9\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Jan 23 17:59:28.976668 containerd[2009]: time="2026-01-23T17:59:28.976271605Z" level=info msg="Container dd04670a12c35552bed7d6e421ab6d944588774429d8cc571536d605cb8c781c: CDI devices from CRI Config.CDIDevices: []"
Jan 23 17:59:28.993086 containerd[2009]: time="2026-01-23T17:59:28.992983561Z" level=info msg="CreateContainer within sandbox \"3e7fc5afd1cab5a423970125a7a992fc3450ed6ef29918662dfb8847217ae6d9\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"dd04670a12c35552bed7d6e421ab6d944588774429d8cc571536d605cb8c781c\""
Jan 23 17:59:28.995323 containerd[2009]: time="2026-01-23T17:59:28.995215285Z" level=info msg="StartContainer for \"dd04670a12c35552bed7d6e421ab6d944588774429d8cc571536d605cb8c781c\""
Jan 23 17:59:28.997210 containerd[2009]: time="2026-01-23T17:59:28.997147069Z" level=info msg="connecting to shim dd04670a12c35552bed7d6e421ab6d944588774429d8cc571536d605cb8c781c" address="unix:///run/containerd/s/023f5d2d155e66f62150e77bbef0aacdc4a5fe68cd416b51b0388db6e0854b0c" protocol=ttrpc version=3
Jan 23 17:59:29.065722 systemd[1]: Started cri-containerd-dd04670a12c35552bed7d6e421ab6d944588774429d8cc571536d605cb8c781c.scope - libcontainer container dd04670a12c35552bed7d6e421ab6d944588774429d8cc571536d605cb8c781c.
Jan 23 17:59:29.258171 containerd[2009]: time="2026-01-23T17:59:29.256876018Z" level=info msg="StartContainer for \"dd04670a12c35552bed7d6e421ab6d944588774429d8cc571536d605cb8c781c\" returns successfully"
Jan 23 17:59:29.273394 systemd[1]: cri-containerd-dd04670a12c35552bed7d6e421ab6d944588774429d8cc571536d605cb8c781c.scope: Deactivated successfully.
Jan 23 17:59:29.278097 containerd[2009]: time="2026-01-23T17:59:29.278010214Z" level=info msg="received container exit event container_id:\"dd04670a12c35552bed7d6e421ab6d944588774429d8cc571536d605cb8c781c\" id:\"dd04670a12c35552bed7d6e421ab6d944588774429d8cc571536d605cb8c781c\" pid:5465 exited_at:{seconds:1769191169 nanos:277364590}"
Jan 23 17:59:29.315287 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-dd04670a12c35552bed7d6e421ab6d944588774429d8cc571536d605cb8c781c-rootfs.mount: Deactivated successfully.
Jan 23 17:59:29.958899 containerd[2009]: time="2026-01-23T17:59:29.957200138Z" level=info msg="CreateContainer within sandbox \"3e7fc5afd1cab5a423970125a7a992fc3450ed6ef29918662dfb8847217ae6d9\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jan 23 17:59:29.980979 containerd[2009]: time="2026-01-23T17:59:29.980924066Z" level=info msg="Container 25fd889c9ccda31ace3e65f7513974d60746e4a0a4a113695ac8a1e7a3de7443: CDI devices from CRI Config.CDIDevices: []"
Jan 23 17:59:30.006067 containerd[2009]: time="2026-01-23T17:59:30.006015634Z" level=info msg="CreateContainer within sandbox \"3e7fc5afd1cab5a423970125a7a992fc3450ed6ef29918662dfb8847217ae6d9\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"25fd889c9ccda31ace3e65f7513974d60746e4a0a4a113695ac8a1e7a3de7443\""
Jan 23 17:59:30.008408 containerd[2009]: time="2026-01-23T17:59:30.008335258Z" level=info msg="StartContainer for \"25fd889c9ccda31ace3e65f7513974d60746e4a0a4a113695ac8a1e7a3de7443\""
Jan 23 17:59:30.013193 containerd[2009]: time="2026-01-23T17:59:30.013131514Z" level=info msg="connecting to shim 25fd889c9ccda31ace3e65f7513974d60746e4a0a4a113695ac8a1e7a3de7443" address="unix:///run/containerd/s/023f5d2d155e66f62150e77bbef0aacdc4a5fe68cd416b51b0388db6e0854b0c" protocol=ttrpc version=3
Jan 23 17:59:30.066057 systemd[1]: Started cri-containerd-25fd889c9ccda31ace3e65f7513974d60746e4a0a4a113695ac8a1e7a3de7443.scope - libcontainer container 25fd889c9ccda31ace3e65f7513974d60746e4a0a4a113695ac8a1e7a3de7443.
Jan 23 17:59:30.172202 containerd[2009]: time="2026-01-23T17:59:30.172137971Z" level=info msg="StartContainer for \"25fd889c9ccda31ace3e65f7513974d60746e4a0a4a113695ac8a1e7a3de7443\" returns successfully"
Jan 23 17:59:30.178436 systemd[1]: cri-containerd-25fd889c9ccda31ace3e65f7513974d60746e4a0a4a113695ac8a1e7a3de7443.scope: Deactivated successfully.
Jan 23 17:59:30.183314 containerd[2009]: time="2026-01-23T17:59:30.183257771Z" level=info msg="received container exit event container_id:\"25fd889c9ccda31ace3e65f7513974d60746e4a0a4a113695ac8a1e7a3de7443\" id:\"25fd889c9ccda31ace3e65f7513974d60746e4a0a4a113695ac8a1e7a3de7443\" pid:5510 exited_at:{seconds:1769191170 nanos:182891903}"
Jan 23 17:59:30.229472 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-25fd889c9ccda31ace3e65f7513974d60746e4a0a4a113695ac8a1e7a3de7443-rootfs.mount: Deactivated successfully.
Jan 23 17:59:30.513922 kubelet[3615]: I0123 17:59:30.513400 3615 setters.go:602] "Node became not ready" node="ip-172-31-17-161" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T17:59:30Z","lastTransitionTime":"2026-01-23T17:59:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Jan 23 17:59:30.971021 containerd[2009]: time="2026-01-23T17:59:30.970960635Z" level=info msg="CreateContainer within sandbox \"3e7fc5afd1cab5a423970125a7a992fc3450ed6ef29918662dfb8847217ae6d9\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jan 23 17:59:30.994629 containerd[2009]: time="2026-01-23T17:59:30.994499343Z" level=info msg="Container 87361f00890002e3c5e715f299b3aa4727ec149e55c4214a9aa0b3f0f982d7d2: CDI devices from CRI Config.CDIDevices: []"
Jan 23 17:59:31.021176 containerd[2009]: time="2026-01-23T17:59:31.021088979Z" level=info msg="CreateContainer within sandbox \"3e7fc5afd1cab5a423970125a7a992fc3450ed6ef29918662dfb8847217ae6d9\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"87361f00890002e3c5e715f299b3aa4727ec149e55c4214a9aa0b3f0f982d7d2\""
Jan 23 17:59:31.022393 containerd[2009]: time="2026-01-23T17:59:31.022349603Z" level=info msg="StartContainer for \"87361f00890002e3c5e715f299b3aa4727ec149e55c4214a9aa0b3f0f982d7d2\""
Jan 23 17:59:31.024869 containerd[2009]: time="2026-01-23T17:59:31.024816791Z" level=info msg="connecting to shim 87361f00890002e3c5e715f299b3aa4727ec149e55c4214a9aa0b3f0f982d7d2" address="unix:///run/containerd/s/023f5d2d155e66f62150e77bbef0aacdc4a5fe68cd416b51b0388db6e0854b0c" protocol=ttrpc version=3
Jan 23 17:59:31.076222 systemd[1]: Started cri-containerd-87361f00890002e3c5e715f299b3aa4727ec149e55c4214a9aa0b3f0f982d7d2.scope - libcontainer container 87361f00890002e3c5e715f299b3aa4727ec149e55c4214a9aa0b3f0f982d7d2.
Jan 23 17:59:31.131977 systemd[1]: cri-containerd-87361f00890002e3c5e715f299b3aa4727ec149e55c4214a9aa0b3f0f982d7d2.scope: Deactivated successfully.
Jan 23 17:59:31.136144 containerd[2009]: time="2026-01-23T17:59:31.135933707Z" level=info msg="received container exit event container_id:\"87361f00890002e3c5e715f299b3aa4727ec149e55c4214a9aa0b3f0f982d7d2\" id:\"87361f00890002e3c5e715f299b3aa4727ec149e55c4214a9aa0b3f0f982d7d2\" pid:5551 exited_at:{seconds:1769191171 nanos:134314211}"
Jan 23 17:59:31.152407 containerd[2009]: time="2026-01-23T17:59:31.152349683Z" level=info msg="StartContainer for \"87361f00890002e3c5e715f299b3aa4727ec149e55c4214a9aa0b3f0f982d7d2\" returns successfully"
Jan 23 17:59:31.179466 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-87361f00890002e3c5e715f299b3aa4727ec149e55c4214a9aa0b3f0f982d7d2-rootfs.mount: Deactivated successfully.
Jan 23 17:59:31.976484 containerd[2009]: time="2026-01-23T17:59:31.976412800Z" level=info msg="CreateContainer within sandbox \"3e7fc5afd1cab5a423970125a7a992fc3450ed6ef29918662dfb8847217ae6d9\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jan 23 17:59:32.004632 containerd[2009]: time="2026-01-23T17:59:32.004418220Z" level=info msg="Container 578f33e3a5329a49bf05dce6882436a90ce09d777767509a1f4ac3243f47642c: CDI devices from CRI Config.CDIDevices: []"
Jan 23 17:59:32.026623 containerd[2009]: time="2026-01-23T17:59:32.026486664Z" level=info msg="CreateContainer within sandbox \"3e7fc5afd1cab5a423970125a7a992fc3450ed6ef29918662dfb8847217ae6d9\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"578f33e3a5329a49bf05dce6882436a90ce09d777767509a1f4ac3243f47642c\""
Jan 23 17:59:32.028444 containerd[2009]: time="2026-01-23T17:59:32.028357620Z" level=info msg="StartContainer for \"578f33e3a5329a49bf05dce6882436a90ce09d777767509a1f4ac3243f47642c\""
Jan 23 17:59:32.030315 containerd[2009]: time="2026-01-23T17:59:32.030247908Z" level=info msg="connecting to shim 578f33e3a5329a49bf05dce6882436a90ce09d777767509a1f4ac3243f47642c" address="unix:///run/containerd/s/023f5d2d155e66f62150e77bbef0aacdc4a5fe68cd416b51b0388db6e0854b0c" protocol=ttrpc version=3
Jan 23 17:59:32.072159 systemd[1]: Started cri-containerd-578f33e3a5329a49bf05dce6882436a90ce09d777767509a1f4ac3243f47642c.scope - libcontainer container 578f33e3a5329a49bf05dce6882436a90ce09d777767509a1f4ac3243f47642c.
Jan 23 17:59:32.158998 containerd[2009]: time="2026-01-23T17:59:32.158935512Z" level=info msg="StartContainer for \"578f33e3a5329a49bf05dce6882436a90ce09d777767509a1f4ac3243f47642c\" returns successfully"
Jan 23 17:59:33.033956 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce))
Jan 23 17:59:34.951827 kubelet[3615]: E0123 17:59:34.951644 3615 upgradeaware.go:441] Error proxying data from backend to client: writeto tcp 127.0.0.1:52570->127.0.0.1:44219: read tcp 127.0.0.1:52570->127.0.0.1:44219: read: connection reset by peer
Jan 23 17:59:37.320137 systemd-networkd[1830]: lxc_health: Link UP
Jan 23 17:59:37.337876 (udev-worker)[6133]: Network interface NamePolicy= disabled on kernel command line.
Jan 23 17:59:37.370508 systemd-networkd[1830]: lxc_health: Gained carrier
Jan 23 17:59:38.001470 kubelet[3615]: I0123 17:59:38.001347 3615 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-xb2rj" podStartSLOduration=12.00132375 podStartE2EDuration="12.00132375s" podCreationTimestamp="2026-01-23 17:59:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 17:59:33.038593477 +0000 UTC m=+125.975569359" watchObservedRunningTime="2026-01-23 17:59:38.00132375 +0000 UTC m=+130.938299620"
Jan 23 17:59:39.051854 systemd-networkd[1830]: lxc_health: Gained IPv6LL
Jan 23 17:59:41.207144 ntpd[2196]: Listen normally on 13 lxc_health [fe80::7869:45ff:fe8f:a72a%14]:123
Jan 23 17:59:41.207768 ntpd[2196]: 23 Jan 17:59:41 ntpd[2196]: Listen normally on 13 lxc_health [fe80::7869:45ff:fe8f:a72a%14]:123
Jan 23 17:59:43.951943 kubelet[3615]: E0123 17:59:43.951738 3615 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:57020->127.0.0.1:44219: write tcp 127.0.0.1:57020->127.0.0.1:44219: write: broken pipe
Jan 23 17:59:44.039052 sshd[5354]: Connection closed by 68.220.241.50 port 56018
Jan 23 17:59:44.039993 sshd-session[5349]: pam_unix(sshd:session): session closed for user core
Jan 23 17:59:44.055145 systemd[1]: sshd@28-172.31.17.161:22-68.220.241.50:56018.service: Deactivated successfully.
Jan 23 17:59:44.061435 systemd[1]: session-29.scope: Deactivated successfully.
Jan 23 17:59:44.065738 systemd-logind[1979]: Session 29 logged out. Waiting for processes to exit.
Jan 23 17:59:44.068863 systemd-logind[1979]: Removed session 29.
Jan 23 17:59:58.151508 systemd[1]: cri-containerd-53e5f09e4a22f44da53c0ba67298ad88e77d99595fb356b14320e44bd1211a63.scope: Deactivated successfully.
Jan 23 17:59:58.152103 systemd[1]: cri-containerd-53e5f09e4a22f44da53c0ba67298ad88e77d99595fb356b14320e44bd1211a63.scope: Consumed 5.053s CPU time, 51.8M memory peak.
Jan 23 17:59:58.158320 containerd[2009]: time="2026-01-23T17:59:58.158187062Z" level=info msg="received container exit event container_id:\"53e5f09e4a22f44da53c0ba67298ad88e77d99595fb356b14320e44bd1211a63\" id:\"53e5f09e4a22f44da53c0ba67298ad88e77d99595fb356b14320e44bd1211a63\" pid:3430 exit_status:1 exited_at:{seconds:1769191198 nanos:157388054}"
Jan 23 17:59:58.199207 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-53e5f09e4a22f44da53c0ba67298ad88e77d99595fb356b14320e44bd1211a63-rootfs.mount: Deactivated successfully.
Jan 23 17:59:59.089631 kubelet[3615]: I0123 17:59:59.089537 3615 scope.go:117] "RemoveContainer" containerID="53e5f09e4a22f44da53c0ba67298ad88e77d99595fb356b14320e44bd1211a63"
Jan 23 17:59:59.094558 containerd[2009]: time="2026-01-23T17:59:59.094128446Z" level=info msg="CreateContainer within sandbox \"b0762eec3c636c3447bbc62dc816f02f3b2effc8e882b33152a195d9c01198b1\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Jan 23 17:59:59.109113 containerd[2009]: time="2026-01-23T17:59:59.109060886Z" level=info msg="Container 477eedccb6747c3abf732da940047510c50ca26ef81f74030c074c5af3050e58: CDI devices from CRI Config.CDIDevices: []"
Jan 23 17:59:59.121420 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1385581772.mount: Deactivated successfully.
Jan 23 17:59:59.133184 containerd[2009]: time="2026-01-23T17:59:59.133019486Z" level=info msg="CreateContainer within sandbox \"b0762eec3c636c3447bbc62dc816f02f3b2effc8e882b33152a195d9c01198b1\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"477eedccb6747c3abf732da940047510c50ca26ef81f74030c074c5af3050e58\""
Jan 23 17:59:59.134377 containerd[2009]: time="2026-01-23T17:59:59.134310314Z" level=info msg="StartContainer for \"477eedccb6747c3abf732da940047510c50ca26ef81f74030c074c5af3050e58\""
Jan 23 17:59:59.137253 containerd[2009]: time="2026-01-23T17:59:59.137146526Z" level=info msg="connecting to shim 477eedccb6747c3abf732da940047510c50ca26ef81f74030c074c5af3050e58" address="unix:///run/containerd/s/4f0d9d6e3aa73bfa26451fd08e64d2465719da5e2cc788fbaac4fda8945419f1" protocol=ttrpc version=3
Jan 23 17:59:59.172908 systemd[1]: Started cri-containerd-477eedccb6747c3abf732da940047510c50ca26ef81f74030c074c5af3050e58.scope - libcontainer container 477eedccb6747c3abf732da940047510c50ca26ef81f74030c074c5af3050e58.
Jan 23 17:59:59.255055 containerd[2009]: time="2026-01-23T17:59:59.254904519Z" level=info msg="StartContainer for \"477eedccb6747c3abf732da940047510c50ca26ef81f74030c074c5af3050e58\" returns successfully"
Jan 23 18:00:00.524461 kubelet[3615]: E0123 18:00:00.524402 3615 controller.go:195] "Failed to update lease" err="the server was unable to return a response in the time allotted, but may still be processing the request (put leases.coordination.k8s.io ip-172-31-17-161)"
Jan 23 18:00:03.519379 systemd[1]: cri-containerd-b8283f05825578b17b7efafdf8649bdb9a8657cf1fa265460c5b96befde3ab8b.scope: Deactivated successfully.
Jan 23 18:00:03.520737 systemd[1]: cri-containerd-b8283f05825578b17b7efafdf8649bdb9a8657cf1fa265460c5b96befde3ab8b.scope: Consumed 3.698s CPU time, 20.1M memory peak.
Jan 23 18:00:03.525581 containerd[2009]: time="2026-01-23T18:00:03.525385436Z" level=info msg="received container exit event container_id:\"b8283f05825578b17b7efafdf8649bdb9a8657cf1fa265460c5b96befde3ab8b\" id:\"b8283f05825578b17b7efafdf8649bdb9a8657cf1fa265460c5b96befde3ab8b\" pid:3465 exit_status:1 exited_at:{seconds:1769191203 nanos:524598080}"
Jan 23 18:00:03.572358 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b8283f05825578b17b7efafdf8649bdb9a8657cf1fa265460c5b96befde3ab8b-rootfs.mount: Deactivated successfully.
Jan 23 18:00:04.112125 kubelet[3615]: I0123 18:00:04.112068 3615 scope.go:117] "RemoveContainer" containerID="b8283f05825578b17b7efafdf8649bdb9a8657cf1fa265460c5b96befde3ab8b"
Jan 23 18:00:04.115652 containerd[2009]: time="2026-01-23T18:00:04.115515811Z" level=info msg="CreateContainer within sandbox \"f0e325d1cd615b7dad7f64d72b5ccbdf9b92c5f9fac6d5be22a4b1aa95cff1e3\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Jan 23 18:00:04.146291 containerd[2009]: time="2026-01-23T18:00:04.146208415Z" level=info msg="Container 4eb36f3084012c59816abe37ecddf6c3af15c5e128c49d22daa5359b82214d4c: CDI devices from CRI Config.CDIDevices: []"
Jan 23 18:00:04.163293 containerd[2009]: time="2026-01-23T18:00:04.163216231Z" level=info msg="CreateContainer within sandbox \"f0e325d1cd615b7dad7f64d72b5ccbdf9b92c5f9fac6d5be22a4b1aa95cff1e3\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"4eb36f3084012c59816abe37ecddf6c3af15c5e128c49d22daa5359b82214d4c\""
Jan 23 18:00:04.164257 containerd[2009]: time="2026-01-23T18:00:04.164210191Z" level=info msg="StartContainer for \"4eb36f3084012c59816abe37ecddf6c3af15c5e128c49d22daa5359b82214d4c\""
Jan 23 18:00:04.166281 containerd[2009]: time="2026-01-23T18:00:04.166218631Z" level=info msg="connecting to shim 4eb36f3084012c59816abe37ecddf6c3af15c5e128c49d22daa5359b82214d4c" address="unix:///run/containerd/s/69a963a6b4d9a6b1e7a5c77872fa5d0f9a1da601d04c2388778d4dd534de72ba" protocol=ttrpc version=3
Jan 23 18:00:04.206253 systemd[1]: Started cri-containerd-4eb36f3084012c59816abe37ecddf6c3af15c5e128c49d22daa5359b82214d4c.scope - libcontainer container 4eb36f3084012c59816abe37ecddf6c3af15c5e128c49d22daa5359b82214d4c.
Jan 23 18:00:04.288020 containerd[2009]: time="2026-01-23T18:00:04.287929100Z" level=info msg="StartContainer for \"4eb36f3084012c59816abe37ecddf6c3af15c5e128c49d22daa5359b82214d4c\" returns successfully"
Jan 23 18:00:10.525155 kubelet[3615]: E0123 18:00:10.525086 3615 controller.go:195] "Failed to update lease" err="Put \"https://172.31.17.161:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-17-161?timeout=10s\": context deadline exceeded"
Jan 23 18:00:20.526512 kubelet[3615]: E0123 18:00:20.526170 3615 controller.go:195] "Failed to update lease" err="Put \"https://172.31.17.161:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-17-161?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"