Jan 16 23:59:51.242630 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083]
Jan 16 23:59:51.242676 kernel: Linux version 6.6.119-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Fri Jan 16 22:28:08 -00 2026
Jan 16 23:59:51.242702 kernel: KASLR disabled due to lack of seed
Jan 16 23:59:51.242719 kernel: efi: EFI v2.7 by EDK II
Jan 16 23:59:51.242736 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7b001a98 MEMRESERVE=0x7852ee18
Jan 16 23:59:51.242752 kernel: ACPI: Early table checksum verification disabled
Jan 16 23:59:51.242770 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON)
Jan 16 23:59:51.242786 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001 01000013)
Jan 16 23:59:51.242802 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001)
Jan 16 23:59:51.242818 kernel: ACPI: DSDT 0x0000000078640000 0013D2 (v02 AMAZON AMZNDSDT 00000001 AMZN 00000001)
Jan 16 23:59:51.242839 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Jan 16 23:59:51.242856 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001)
Jan 16 23:59:51.242872 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001)
Jan 16 23:59:51.242888 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001)
Jan 16 23:59:51.242907 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Jan 16 23:59:51.242929 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001)
Jan 16 23:59:51.242946 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001)
Jan 16 23:59:51.245050 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200
Jan 16 23:59:51.245075 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200')
Jan 16 23:59:51.245095 kernel: printk: bootconsole [uart0] enabled
Jan 16 23:59:51.245113 kernel: NUMA: Failed to initialise from firmware
Jan 16 23:59:51.245132 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff]
Jan 16 23:59:51.245150 kernel: NUMA: NODE_DATA [mem 0x4b583f800-0x4b5844fff]
Jan 16 23:59:51.245167 kernel: Zone ranges:
Jan 16 23:59:51.245186 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff]
Jan 16 23:59:51.245204 kernel: DMA32 empty
Jan 16 23:59:51.245234 kernel: Normal [mem 0x0000000100000000-0x00000004b5ffffff]
Jan 16 23:59:51.245252 kernel: Movable zone start for each node
Jan 16 23:59:51.245270 kernel: Early memory node ranges
Jan 16 23:59:51.245289 kernel: node 0: [mem 0x0000000040000000-0x000000007862ffff]
Jan 16 23:59:51.245307 kernel: node 0: [mem 0x0000000078630000-0x000000007863ffff]
Jan 16 23:59:51.245325 kernel: node 0: [mem 0x0000000078640000-0x00000000786effff]
Jan 16 23:59:51.245342 kernel: node 0: [mem 0x00000000786f0000-0x000000007872ffff]
Jan 16 23:59:51.245359 kernel: node 0: [mem 0x0000000078730000-0x000000007bbfffff]
Jan 16 23:59:51.245379 kernel: node 0: [mem 0x000000007bc00000-0x000000007bfdffff]
Jan 16 23:59:51.245398 kernel: node 0: [mem 0x000000007bfe0000-0x000000007fffffff]
Jan 16 23:59:51.245416 kernel: node 0: [mem 0x0000000400000000-0x00000004b5ffffff]
Jan 16 23:59:51.245432 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000004b5ffffff]
Jan 16 23:59:51.245455 kernel: On node 0, zone Normal: 8192 pages in unavailable ranges
Jan 16 23:59:51.245473 kernel: psci: probing for conduit method from ACPI.
Jan 16 23:59:51.245498 kernel: psci: PSCIv1.0 detected in firmware.
Jan 16 23:59:51.245516 kernel: psci: Using standard PSCI v0.2 function IDs
Jan 16 23:59:51.245534 kernel: psci: Trusted OS migration not required
Jan 16 23:59:51.245557 kernel: psci: SMC Calling Convention v1.1
Jan 16 23:59:51.245575 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000001)
Jan 16 23:59:51.245593 kernel: percpu: Embedded 30 pages/cpu s85672 r8192 d29016 u122880
Jan 16 23:59:51.245611 kernel: pcpu-alloc: s85672 r8192 d29016 u122880 alloc=30*4096
Jan 16 23:59:51.245631 kernel: pcpu-alloc: [0] 0 [0] 1
Jan 16 23:59:51.245650 kernel: Detected PIPT I-cache on CPU0
Jan 16 23:59:51.245668 kernel: CPU features: detected: GIC system register CPU interface
Jan 16 23:59:51.245686 kernel: CPU features: detected: Spectre-v2
Jan 16 23:59:51.245705 kernel: CPU features: detected: Spectre-v3a
Jan 16 23:59:51.245724 kernel: CPU features: detected: Spectre-BHB
Jan 16 23:59:51.245743 kernel: CPU features: detected: ARM erratum 1742098
Jan 16 23:59:51.245766 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923
Jan 16 23:59:51.245785 kernel: alternatives: applying boot alternatives
Jan 16 23:59:51.245806 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=d499dc3f7d5d4118d4e4300ad00f17ad72271d2a2f6bb9119457036ac5212c83
Jan 16 23:59:51.245825 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 16 23:59:51.245845 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 16 23:59:51.245864 kernel: Fallback order for Node 0: 0
Jan 16 23:59:51.245882 kernel: Built 1 zonelists, mobility grouping on. Total pages: 991872
Jan 16 23:59:51.245899 kernel: Policy zone: Normal
Jan 16 23:59:51.245917 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 16 23:59:51.245935 kernel: software IO TLB: area num 2.
Jan 16 23:59:51.245984 kernel: software IO TLB: mapped [mem 0x000000007c000000-0x0000000080000000] (64MB)
Jan 16 23:59:51.246022 kernel: Memory: 3820096K/4030464K available (10304K kernel code, 2180K rwdata, 8112K rodata, 39424K init, 897K bss, 210368K reserved, 0K cma-reserved)
Jan 16 23:59:51.246041 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jan 16 23:59:51.246059 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 16 23:59:51.246078 kernel: rcu: RCU event tracing is enabled.
Jan 16 23:59:51.246097 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jan 16 23:59:51.246115 kernel: Trampoline variant of Tasks RCU enabled.
Jan 16 23:59:51.246133 kernel: Tracing variant of Tasks RCU enabled.
Jan 16 23:59:51.246152 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
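The kernel command line above carries Flatcar's usr-partition and dm-verity parameters (mount.usr, verity.usr, verity.usrhash). A minimal sketch, assuming a hypothetical parse_cmdline helper, of reading such parameters back at runtime; bare flags without '=' map to True, and quoted values (which kernel command lines may contain) are deliberately not handled:

```python
# Minimal sketch (not Flatcar's own tooling): split a kernel command line
# like the one logged above into key/value pairs.
def parse_cmdline(cmdline: str) -> dict:
    args = {}
    for token in cmdline.split():
        key, sep, value = token.partition("=")
        args[key] = value if sep else True  # bare flags like 'earlycon' -> True
    return args

with open("/proc/cmdline") as f:
    args = parse_cmdline(f.read())

print(args.get("verity.usrhash"))  # dm-verity root hash for the usr partition
print(args.get("flatcar.oem.id"))  # 'ec2' on this instance
```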
Jan 16 23:59:51.246169 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jan 16 23:59:51.246187 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Jan 16 23:59:51.246205 kernel: GICv3: 96 SPIs implemented
Jan 16 23:59:51.246227 kernel: GICv3: 0 Extended SPIs implemented
Jan 16 23:59:51.246245 kernel: Root IRQ handler: gic_handle_irq
Jan 16 23:59:51.246262 kernel: GICv3: GICv3 features: 16 PPIs
Jan 16 23:59:51.246280 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000
Jan 16 23:59:51.246298 kernel: ITS [mem 0x10080000-0x1009ffff]
Jan 16 23:59:51.246316 kernel: ITS@0x0000000010080000: allocated 8192 Devices @4000b0000 (indirect, esz 8, psz 64K, shr 1)
Jan 16 23:59:51.246335 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @4000c0000 (flat, esz 8, psz 64K, shr 1)
Jan 16 23:59:51.246353 kernel: GICv3: using LPI property table @0x00000004000d0000
Jan 16 23:59:51.246370 kernel: ITS: Using hypervisor restricted LPI range [128]
Jan 16 23:59:51.246388 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000004000e0000
Jan 16 23:59:51.246406 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 16 23:59:51.246424 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt).
Jan 16 23:59:51.246446 kernel: clocksource: arch_sys_counter: mask: 0x1ffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns
Jan 16 23:59:51.246464 kernel: sched_clock: 57 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns
Jan 16 23:59:51.246482 kernel: Console: colour dummy device 80x25
Jan 16 23:59:51.246501 kernel: printk: console [tty1] enabled
Jan 16 23:59:51.246519 kernel: ACPI: Core revision 20230628
Jan 16 23:59:51.246538 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 166.66 BogoMIPS (lpj=83333)
Jan 16 23:59:51.246556 kernel: pid_max: default: 32768 minimum: 301
Jan 16 23:59:51.246574 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 16 23:59:51.246593 kernel: landlock: Up and running.
Jan 16 23:59:51.246616 kernel: SELinux: Initializing.
Jan 16 23:59:51.246634 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 16 23:59:51.246652 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 16 23:59:51.246671 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 16 23:59:51.246690 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 16 23:59:51.246708 kernel: rcu: Hierarchical SRCU implementation.
Jan 16 23:59:51.246726 kernel: rcu: Max phase no-delay instances is 400.
Jan 16 23:59:51.246744 kernel: Platform MSI: ITS@0x10080000 domain created
Jan 16 23:59:51.246762 kernel: PCI/MSI: ITS@0x10080000 domain created
Jan 16 23:59:51.246785 kernel: Remapping and enabling EFI services.
Jan 16 23:59:51.246803 kernel: smp: Bringing up secondary CPUs ...
Jan 16 23:59:51.246821 kernel: Detected PIPT I-cache on CPU1
Jan 16 23:59:51.246839 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000
Jan 16 23:59:51.246857 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000004000f0000
Jan 16 23:59:51.246875 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083]
Jan 16 23:59:51.246893 kernel: smp: Brought up 1 node, 2 CPUs
Jan 16 23:59:51.246911 kernel: SMP: Total of 2 processors activated.
Jan 16 23:59:51.246929 kernel: CPU features: detected: 32-bit EL0 Support
Jan 16 23:59:51.248806 kernel: CPU features: detected: 32-bit EL1 Support
Jan 16 23:59:51.249208 kernel: CPU features: detected: CRC32 instructions
Jan 16 23:59:51.249584 kernel: CPU: All CPU(s) started at EL1
Jan 16 23:59:51.249675 kernel: alternatives: applying system-wide alternatives
Jan 16 23:59:51.249703 kernel: devtmpfs: initialized
Jan 16 23:59:51.249723 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 16 23:59:51.249743 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jan 16 23:59:51.249762 kernel: pinctrl core: initialized pinctrl subsystem
Jan 16 23:59:51.249781 kernel: SMBIOS 3.0.0 present.
Jan 16 23:59:51.249805 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018
Jan 16 23:59:51.249825 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 16 23:59:51.249844 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Jan 16 23:59:51.249864 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jan 16 23:59:51.249883 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jan 16 23:59:51.249903 kernel: audit: initializing netlink subsys (disabled)
Jan 16 23:59:51.249922 kernel: audit: type=2000 audit(0.284:1): state=initialized audit_enabled=0 res=1
Jan 16 23:59:51.249942 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 16 23:59:51.252042 kernel: cpuidle: using governor menu
Jan 16 23:59:51.252067 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Jan 16 23:59:51.252088 kernel: ASID allocator initialised with 65536 entries
Jan 16 23:59:51.252108 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 16 23:59:51.252127 kernel: Serial: AMBA PL011 UART driver
Jan 16 23:59:51.252147 kernel: Modules: 17488 pages in range for non-PLT usage
Jan 16 23:59:51.252166 kernel: Modules: 509008 pages in range for PLT usage
Jan 16 23:59:51.252185 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 16 23:59:51.252204 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Jan 16 23:59:51.252234 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Jan 16 23:59:51.252254 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Jan 16 23:59:51.252273 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 16 23:59:51.252292 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Jan 16 23:59:51.252311 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Jan 16 23:59:51.252330 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Jan 16 23:59:51.252349 kernel: ACPI: Added _OSI(Module Device)
Jan 16 23:59:51.252368 kernel: ACPI: Added _OSI(Processor Device)
Jan 16 23:59:51.252387 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 16 23:59:51.252411 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 16 23:59:51.252430 kernel: ACPI: Interpreter enabled
Jan 16 23:59:51.252449 kernel: ACPI: Using GIC for interrupt routing
Jan 16 23:59:51.252468 kernel: ACPI: MCFG table detected, 1 entries
Jan 16 23:59:51.252488 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00])
Jan 16 23:59:51.252811 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jan 16 23:59:51.253090 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Jan 16 23:59:51.253300 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Jan 16 23:59:51.253509 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x200fffff] reserved by PNP0C02:00
Jan 16 23:59:51.253711 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x200fffff] for [bus 00]
Jan 16 23:59:51.253737 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io 0x0000-0xffff window]
Jan 16 23:59:51.253757 kernel: acpiphp: Slot [1] registered
Jan 16 23:59:51.253776 kernel: acpiphp: Slot [2] registered
Jan 16 23:59:51.253796 kernel: acpiphp: Slot [3] registered
Jan 16 23:59:51.253815 kernel: acpiphp: Slot [4] registered
Jan 16 23:59:51.253834 kernel: acpiphp: Slot [5] registered
Jan 16 23:59:51.253859 kernel: acpiphp: Slot [6] registered
Jan 16 23:59:51.253878 kernel: acpiphp: Slot [7] registered
Jan 16 23:59:51.253897 kernel: acpiphp: Slot [8] registered
Jan 16 23:59:51.253916 kernel: acpiphp: Slot [9] registered
Jan 16 23:59:51.253934 kernel: acpiphp: Slot [10] registered
Jan 16 23:59:51.255714 kernel: acpiphp: Slot [11] registered
Jan 16 23:59:51.255759 kernel: acpiphp: Slot [12] registered
Jan 16 23:59:51.255779 kernel: acpiphp: Slot [13] registered
Jan 16 23:59:51.255800 kernel: acpiphp: Slot [14] registered
Jan 16 23:59:51.255820 kernel: acpiphp: Slot [15] registered
Jan 16 23:59:51.255849 kernel: acpiphp: Slot [16] registered
Jan 16 23:59:51.255869 kernel: acpiphp: Slot [17] registered
Jan 16 23:59:51.255889 kernel: acpiphp: Slot [18] registered
Jan 16 23:59:51.255908 kernel: acpiphp: Slot [19] registered
Jan 16 23:59:51.255928 kernel: acpiphp: Slot [20] registered
Jan 16 23:59:51.255948 kernel: acpiphp: Slot [21] registered
Jan 16 23:59:51.256139 kernel: acpiphp: Slot [22] registered
Jan 16 23:59:51.256160 kernel: acpiphp: Slot [23] registered
Jan 16 23:59:51.256180 kernel: acpiphp: Slot [24] registered
Jan 16 23:59:51.256206 kernel: acpiphp: Slot [25] registered
Jan 16 23:59:51.256227 kernel: acpiphp: Slot [26] registered
Jan 16 23:59:51.256247 kernel: acpiphp: Slot [27] registered
Jan 16 23:59:51.256267 kernel: acpiphp: Slot [28] registered
Jan 16 23:59:51.256286 kernel: acpiphp: Slot [29] registered
Jan 16 23:59:51.256305 kernel: acpiphp: Slot [30] registered
Jan 16 23:59:51.256326 kernel: acpiphp: Slot [31] registered
Jan 16 23:59:51.256346 kernel: PCI host bridge to bus 0000:00
Jan 16 23:59:51.256604 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window]
Jan 16 23:59:51.256806 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Jan 16 23:59:51.257032 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window]
Jan 16 23:59:51.257222 kernel: pci_bus 0000:00: root bus resource [bus 00]
Jan 16 23:59:51.257461 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000
Jan 16 23:59:51.257694 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003
Jan 16 23:59:51.257909 kernel: pci 0000:00:01.0: reg 0x10: [mem 0x80118000-0x80118fff]
Jan 16 23:59:51.258172 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
Jan 16 23:59:51.258387 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80114000-0x80117fff]
Jan 16 23:59:51.258599 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold
Jan 16 23:59:51.258844 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
Jan 16 23:59:51.259122 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80110000-0x80113fff]
Jan 16 23:59:51.259365 kernel: pci 0000:00:05.0: reg 0x18: [mem 0x80000000-0x800fffff pref]
Jan 16 23:59:51.259581 kernel: pci 0000:00:05.0: reg 0x20: [mem 0x80100000-0x8010ffff]
Jan 16 23:59:51.259801 kernel: pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold
Jan 16 23:59:51.260022 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window]
Jan 16 23:59:51.260222 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Jan 16 23:59:51.260412 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window]
Jan 16 23:59:51.260438 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Jan 16 23:59:51.260459 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Jan 16 23:59:51.260479 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Jan 16 23:59:51.260498 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Jan 16 23:59:51.260524 kernel: iommu: Default domain type: Translated
Jan 16 23:59:51.260544 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Jan 16 23:59:51.260564 kernel: efivars: Registered efivars operations
Jan 16 23:59:51.260582 kernel: vgaarb: loaded
Jan 16 23:59:51.260601 kernel: clocksource: Switched to clocksource arch_sys_counter
Jan 16 23:59:51.260621 kernel: VFS: Disk quotas dquot_6.6.0
Jan 16 23:59:51.260640 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 16 23:59:51.260659 kernel: pnp: PnP ACPI init
Jan 16 23:59:51.260881 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved
Jan 16 23:59:51.260915 kernel: pnp: PnP ACPI: found 1 devices
Jan 16 23:59:51.260935 kernel: NET: Registered PF_INET protocol family
Jan 16 23:59:51.260971 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 16 23:59:51.261018 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jan 16 23:59:51.261038 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 16 23:59:51.261058 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 16 23:59:51.261078 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jan 16 23:59:51.261097 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jan 16 23:59:51.261123 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 16 23:59:51.261143 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 16 23:59:51.261162 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 16 23:59:51.261181 kernel: PCI: CLS 0 bytes, default 64
Jan 16 23:59:51.261202 kernel: kvm [1]: HYP mode not available
Jan 16 23:59:51.261223 kernel: Initialise system trusted keyrings
Jan 16 23:59:51.261244 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jan 16 23:59:51.261264 kernel: Key type asymmetric registered
Jan 16 23:59:51.261285 kernel: Asymmetric key parser 'x509' registered
Jan 16 23:59:51.261309 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jan 16 23:59:51.261330 kernel: io scheduler mq-deadline registered
Jan 16 23:59:51.261350 kernel: io scheduler kyber registered
Jan 16 23:59:51.261369 kernel: io scheduler bfq registered
Jan 16 23:59:51.261629 kernel: pl061_gpio ARMH0061:00: PL061 GPIO chip registered
Jan 16 23:59:51.261660 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Jan 16 23:59:51.261680 kernel: ACPI: button: Power Button [PWRB]
Jan 16 23:59:51.261700 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input1
Jan 16 23:59:51.261719 kernel: ACPI: button: Sleep Button [SLPB]
Jan 16 23:59:51.261745 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 16 23:59:51.261765 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37
Jan 16 23:59:51.263071 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012)
Jan 16 23:59:51.263114 kernel: printk: console [ttyS0] disabled
Jan 16 23:59:51.263134 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A
Jan 16 23:59:51.263154 kernel: printk: console [ttyS0] enabled
Jan 16 23:59:51.263173 kernel: printk: bootconsole [uart0] disabled
Jan 16 23:59:51.263192 kernel: thunder_xcv, ver 1.0
Jan 16 23:59:51.263211 kernel: thunder_bgx, ver 1.0
Jan 16 23:59:51.263255 kernel: nicpf, ver 1.0
Jan 16 23:59:51.263281 kernel: nicvf, ver 1.0
Jan 16 23:59:51.263519 kernel: rtc-efi rtc-efi.0: registered as rtc0
Jan 16 23:59:51.263720 kernel: rtc-efi rtc-efi.0: setting system clock to 2026-01-16T23:59:50 UTC (1768607990)
Jan 16 23:59:51.263747 kernel: hid: raw HID events driver (C) Jiri Kosina
Jan 16 23:59:51.263767 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 counters available
Jan 16 23:59:51.263786 kernel: watchdog: Delayed init of the lockup detector failed: -19
Jan 16 23:59:51.263805 kernel: watchdog: Hard watchdog permanently disabled
Jan 16 23:59:51.263831 kernel: NET: Registered PF_INET6 protocol family
Jan 16 23:59:51.263850 kernel: Segment Routing with IPv6
Jan 16 23:59:51.263869 kernel: In-situ OAM (IOAM) with IPv6
Jan 16 23:59:51.263888 kernel: NET: Registered PF_PACKET protocol family
Jan 16 23:59:51.263907 kernel: Key type dns_resolver registered
Jan 16 23:59:51.263925 kernel: registered taskstats version 1
Jan 16 23:59:51.263944 kernel: Loading compiled-in X.509 certificates
Jan 16 23:59:51.265007 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.119-flatcar: 0aabad27df82424bfffc9b1a502a9ae84b35bad4'
Jan 16 23:59:51.265033 kernel: Key type .fscrypt registered
Jan 16 23:59:51.265060 kernel: Key type fscrypt-provisioning registered
Jan 16 23:59:51.265080 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 16 23:59:51.265100 kernel: ima: Allocated hash algorithm: sha1
Jan 16 23:59:51.265119 kernel: ima: No architecture policies found
Jan 16 23:59:51.265138 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Jan 16 23:59:51.265158 kernel: clk: Disabling unused clocks
Jan 16 23:59:51.265179 kernel: Freeing unused kernel memory: 39424K
Jan 16 23:59:51.265201 kernel: Run /init as init process
Jan 16 23:59:51.265221 kernel: with arguments:
Jan 16 23:59:51.265246 kernel: /init
Jan 16 23:59:51.265266 kernel: with environment:
Jan 16 23:59:51.265284 kernel: HOME=/
Jan 16 23:59:51.265303 kernel: TERM=linux
Jan 16 23:59:51.265328 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 16 23:59:51.265353 systemd[1]: Detected virtualization amazon.
Jan 16 23:59:51.265375 systemd[1]: Detected architecture arm64.
Jan 16 23:59:51.265395 systemd[1]: Running in initrd.
Jan 16 23:59:51.265421 systemd[1]: No hostname configured, using default hostname.
Jan 16 23:59:51.265442 systemd[1]: Hostname set to <localhost>.
Jan 16 23:59:51.265463 systemd[1]: Initializing machine ID from VM UUID.
Jan 16 23:59:51.265484 systemd[1]: Queued start job for default target initrd.target.
Jan 16 23:59:51.265505 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 16 23:59:51.265526 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 16 23:59:51.265549 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 16 23:59:51.265571 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 16 23:59:51.265597 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 16 23:59:51.265620 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 16 23:59:51.265644 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 16 23:59:51.265667 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 16 23:59:51.265703 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 16 23:59:51.265746 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 16 23:59:51.265805 systemd[1]: Reached target paths.target - Path Units.
Jan 16 23:59:51.265854 systemd[1]: Reached target slices.target - Slice Units.
Jan 16 23:59:51.265880 systemd[1]: Reached target swap.target - Swaps.
Jan 16 23:59:51.265903 systemd[1]: Reached target timers.target - Timer Units.
Jan 16 23:59:51.265924 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 16 23:59:51.265946 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 16 23:59:51.268010 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 16 23:59:51.268044 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 16 23:59:51.268066 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 16 23:59:51.268096 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 16 23:59:51.268117 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 16 23:59:51.268138 systemd[1]: Reached target sockets.target - Socket Units.
Jan 16 23:59:51.268159 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 16 23:59:51.268180 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 16 23:59:51.268201 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 16 23:59:51.268222 systemd[1]: Starting systemd-fsck-usr.service...
Jan 16 23:59:51.268243 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 16 23:59:51.268264 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 16 23:59:51.268290 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 16 23:59:51.268311 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 16 23:59:51.268331 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 16 23:59:51.268352 systemd[1]: Finished systemd-fsck-usr.service.
Jan 16 23:59:51.268424 systemd-journald[250]: Collecting audit messages is disabled.
Jan 16 23:59:51.268476 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 16 23:59:51.268497 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 16 23:59:51.268518 systemd-journald[250]: Journal started
Jan 16 23:59:51.268560 systemd-journald[250]: Runtime Journal (/run/log/journal/ec22407b8c5c002a1a5452a5338a934a) is 8.0M, max 75.3M, 67.3M free.
Jan 16 23:59:51.221187 systemd-modules-load[252]: Inserted module 'overlay'
Jan 16 23:59:51.274021 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 16 23:59:51.280012 kernel: Bridge firewalling registered
Jan 16 23:59:51.280555 systemd-modules-load[252]: Inserted module 'br_netfilter'
Jan 16 23:59:51.286360 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 16 23:59:51.292670 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 16 23:59:51.298223 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 16 23:59:51.313407 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 16 23:59:51.322080 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 16 23:59:51.325780 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 16 23:59:51.331739 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 16 23:59:51.378373 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 16 23:59:51.383075 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 16 23:59:51.389390 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 16 23:59:51.398344 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 16 23:59:51.406166 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 16 23:59:51.438237 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 16 23:59:51.459897 dracut-cmdline[286]: dracut-dracut-053
Jan 16 23:59:51.468627 dracut-cmdline[286]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=d499dc3f7d5d4118d4e4300ad00f17ad72271d2a2f6bb9119457036ac5212c83
Jan 16 23:59:51.524501 systemd-resolved[289]: Positive Trust Anchors:
Jan 16 23:59:51.524536 systemd-resolved[289]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 16 23:59:51.524599 systemd-resolved[289]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 16 23:59:51.629979 kernel: SCSI subsystem initialized
Jan 16 23:59:51.635994 kernel: Loading iSCSI transport class v2.0-870.
Jan 16 23:59:51.648996 kernel: iscsi: registered transport (tcp)
Jan 16 23:59:51.671496 kernel: iscsi: registered transport (qla4xxx)
Jan 16 23:59:51.671569 kernel: QLogic iSCSI HBA Driver
Jan 16 23:59:51.762000 kernel: random: crng init done
Jan 16 23:59:51.762546 systemd-resolved[289]: Defaulting to hostname 'linux'.
Jan 16 23:59:51.766869 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 16 23:59:51.771372 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 16 23:59:51.797174 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 16 23:59:51.808273 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 16 23:59:51.852070 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 16 23:59:51.852147 kernel: device-mapper: uevent: version 1.0.3
Jan 16 23:59:51.852175 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jan 16 23:59:51.920003 kernel: raid6: neonx8 gen() 6703 MB/s
Jan 16 23:59:51.936988 kernel: raid6: neonx4 gen() 6533 MB/s
Jan 16 23:59:51.953992 kernel: raid6: neonx2 gen() 5447 MB/s
Jan 16 23:59:51.970992 kernel: raid6: neonx1 gen() 3941 MB/s
Jan 16 23:59:51.987992 kernel: raid6: int64x8 gen() 3795 MB/s
Jan 16 23:59:52.004991 kernel: raid6: int64x4 gen() 3717 MB/s
Jan 16 23:59:52.021991 kernel: raid6: int64x2 gen() 3596 MB/s
Jan 16 23:59:52.040051 kernel: raid6: int64x1 gen() 2771 MB/s
Jan 16 23:59:52.040093 kernel: raid6: using algorithm neonx8 gen() 6703 MB/s
Jan 16 23:59:52.059031 kernel: raid6: .... xor() 4925 MB/s, rmw enabled
Jan 16 23:59:52.059088 kernel: raid6: using neon recovery algorithm
Jan 16 23:59:52.066993 kernel: xor: measuring software checksum speed
Jan 16 23:59:52.069291 kernel: 8regs : 10264 MB/sec
Jan 16 23:59:52.069323 kernel: 32regs : 11910 MB/sec
Jan 16 23:59:52.070597 kernel: arm64_neon : 9566 MB/sec
Jan 16 23:59:52.070640 kernel: xor: using function: 32regs (11910 MB/sec)
Jan 16 23:59:52.155014 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 16 23:59:52.174444 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 16 23:59:52.190274 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 16 23:59:52.227309 systemd-udevd[472]: Using default interface naming scheme 'v255'.
Jan 16 23:59:52.235315 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 16 23:59:52.257265 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 16 23:59:52.297746 dracut-pre-trigger[486]: rd.md=0: removing MD RAID activation
Jan 16 23:59:52.354277 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 16 23:59:52.370339 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 16 23:59:52.488630 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 16 23:59:52.502262 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 16 23:59:52.545068 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 16 23:59:52.550349 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 16 23:59:52.550514 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 16 23:59:52.567892 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 16 23:59:52.584330 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 16 23:59:52.630000 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 16 23:59:52.692152 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Jan 16 23:59:52.692224 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012)
Jan 16 23:59:52.697633 kernel: ena 0000:00:05.0: ENA device version: 0.10
Jan 16 23:59:52.697990 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
Jan 16 23:59:52.697107 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 16 23:59:52.697354 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 16 23:59:52.701615 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 16 23:59:52.704299 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 16 23:59:52.704571 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 16 23:59:52.727424 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80110000, mac addr 06:7d:df:d6:e8:0d
Jan 16 23:59:52.707575 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 16 23:59:52.725042 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 16 23:59:52.740672 (udev-worker)[545]: Network interface NamePolicy= disabled on kernel command line.
Jan 16 23:59:52.770751 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35
Jan 16 23:59:52.770813 kernel: nvme nvme0: pci function 0000:00:04.0
Jan 16 23:59:52.776810 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 16 23:59:52.787431 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 16 23:59:52.796098 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Jan 16 23:59:52.814673 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jan 16 23:59:52.814756 kernel: GPT:9289727 != 33554431
Jan 16 23:59:52.814784 kernel: GPT:Alternate GPT header not at the end of the disk.
Jan 16 23:59:52.815679 kernel: GPT:9289727 != 33554431
Jan 16 23:59:52.816845 kernel: GPT: Use GNU Parted to correct GPT errors.
Jan 16 23:59:52.817977 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jan 16 23:59:52.820935 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 16 23:59:52.933999 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/nvme0n1p6 scanned by (udev-worker) (519)
Jan 16 23:59:52.956990 kernel: BTRFS: device fsid 257557f7-4bf9-4b29-86df-93ad67770d31 devid 1 transid 37 /dev/nvme0n1p3 scanned by (udev-worker) (530)
Jan 16 23:59:53.033193 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM.
Jan 16 23:59:53.044457 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A.
Jan 16 23:59:53.044626 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A.
Jan 16 23:59:53.057209 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT.
Jan 16 23:59:53.079248 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Jan 16 23:59:53.093334 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
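The GPT warnings above mean the backup GPT header sits at LBA 9289727 while the device's last LBA is 33554431 (33554432 sectors x 512 bytes = 16 GiB): the EBS volume is larger than the disk image it was created from. Flatcar's first-boot disk-uuid step, logged next, rewrites the headers itself. A hedged sketch of the equivalent manual repair, assuming sgdisk is installed and the device name matches the log; this is an illustration, not the distribution's own mechanism:

```python
import subprocess

# Hypothetical manual fix: 'sgdisk -e' moves the backup GPT header and
# partition table to the actual end of the device, clearing the
# "Alternate GPT header not at the end of the disk" complaint.
subprocess.run(["sgdisk", "-e", "/dev/nvme0n1"], check=True)
```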
Jan 16 23:59:53.107726 disk-uuid[665]: Primary Header is updated.
Jan 16 23:59:53.107726 disk-uuid[665]: Secondary Entries is updated.
Jan 16 23:59:53.107726 disk-uuid[665]: Secondary Header is updated.
Jan 16 23:59:53.119097 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jan 16 23:59:53.126067 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jan 16 23:59:54.142086 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jan 16 23:59:54.142192 disk-uuid[666]: The operation has completed successfully.
Jan 16 23:59:54.327277 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 16 23:59:54.330129 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 16 23:59:54.379278 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 16 23:59:54.401825 sh[1010]: Success
Jan 16 23:59:54.431264 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Jan 16 23:59:54.531569 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 16 23:59:54.546203 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 16 23:59:54.552974 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 16 23:59:54.590105 kernel: BTRFS info (device dm-0): first mount of filesystem 257557f7-4bf9-4b29-86df-93ad67770d31
Jan 16 23:59:54.590184 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Jan 16 23:59:54.590213 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jan 16 23:59:54.592098 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 16 23:59:54.594693 kernel: BTRFS info (device dm-0): using free space tree
Jan 16 23:59:54.675003 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Jan 16 23:59:54.700640 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 16 23:59:54.701166 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 16 23:59:54.715384 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 16 23:59:54.721679 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 16 23:59:54.748118 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 629d412e-8b84-495a-b9b7-c361e81b0700
Jan 16 23:59:54.748191 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Jan 16 23:59:54.749572 kernel: BTRFS info (device nvme0n1p6): using free space tree
Jan 16 23:59:54.765004 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Jan 16 23:59:54.785648 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jan 16 23:59:54.788717 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 629d412e-8b84-495a-b9b7-c361e81b0700
Jan 16 23:59:54.801062 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 16 23:59:54.812551 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 16 23:59:54.912049 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 16 23:59:54.927270 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 16 23:59:54.990643 systemd-networkd[1202]: lo: Link UP
Jan 16 23:59:54.992562 systemd-networkd[1202]: lo: Gained carrier
Jan 16 23:59:54.997091 systemd-networkd[1202]: Enumeration completed
Jan 16 23:59:54.997410 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 16 23:59:55.002675 systemd[1]: Reached target network.target - Network.
Jan 16 23:59:55.009065 systemd-networkd[1202]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 16 23:59:55.009085 systemd-networkd[1202]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 16 23:59:55.018920 systemd-networkd[1202]: eth0: Link UP
Jan 16 23:59:55.018941 systemd-networkd[1202]: eth0: Gained carrier
Jan 16 23:59:55.018979 systemd-networkd[1202]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 16 23:59:55.043041 systemd-networkd[1202]: eth0: DHCPv4 address 172.31.23.5/20, gateway 172.31.16.1 acquired from 172.31.16.1
Jan 16 23:59:55.271905 ignition[1122]: Ignition 2.19.0
Jan 16 23:59:55.271933 ignition[1122]: Stage: fetch-offline
Jan 16 23:59:55.276168 ignition[1122]: no configs at "/usr/lib/ignition/base.d"
Jan 16 23:59:55.276205 ignition[1122]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 16 23:59:55.281543 ignition[1122]: Ignition finished successfully
Jan 16 23:59:55.285486 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 16 23:59:55.301699 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jan 16 23:59:55.327806 ignition[1211]: Ignition 2.19.0
Jan 16 23:59:55.327826 ignition[1211]: Stage: fetch
Jan 16 23:59:55.328449 ignition[1211]: no configs at "/usr/lib/ignition/base.d"
Jan 16 23:59:55.328474 ignition[1211]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 16 23:59:55.328626 ignition[1211]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 16 23:59:55.369128 ignition[1211]: PUT result: OK
Jan 16 23:59:55.374641 ignition[1211]: parsed url from cmdline: ""
Jan 16 23:59:55.374666 ignition[1211]: no config URL provided
Jan 16 23:59:55.374682 ignition[1211]: reading system config file "/usr/lib/ignition/user.ign"
Jan 16 23:59:55.374708 ignition[1211]: no config at "/usr/lib/ignition/user.ign"
Jan 16 23:59:55.374743 ignition[1211]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 16 23:59:55.379139 ignition[1211]: PUT result: OK
Jan 16 23:59:55.379358 ignition[1211]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1
Jan 16 23:59:55.383883 ignition[1211]: GET result: OK
Jan 16 23:59:55.384355 ignition[1211]: parsing config with SHA512: 8f22d9e152e1781facd68d891d19b25500c01cb08153094cd715d94c0c827d46b9607643bdd7da810ac046a5e4804a1e8591d627fbbf100e89a5d843e00bfb4c
Jan 16 23:59:55.399003 unknown[1211]: fetched base config from "system"
Jan 16 23:59:55.401241 unknown[1211]: fetched base config from "system"
Jan 16 23:59:55.401257 unknown[1211]: fetched user config from "aws"
Jan 16 23:59:55.402142 ignition[1211]: fetch: fetch complete
Jan 16 23:59:55.402155 ignition[1211]: fetch: fetch passed
Jan 16 23:59:55.414155 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jan 16 23:59:55.402247 ignition[1211]: Ignition finished successfully
Jan 16 23:59:55.429325 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
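The PUT/GET pair in the fetch stage above is the AWS IMDSv2 exchange: Ignition first obtains a session token with an HTTP PUT, then fetches user-data with that token in a header. A minimal stdlib-only sketch of the same exchange, for illustration (it only works from inside an EC2 instance; the 21600-second TTL is an arbitrary choice, not what Ignition uses):

```python
import urllib.request

# IMDSv2 step 1: request a session token via PUT, with a TTL header.
token_req = urllib.request.Request(
    "http://169.254.169.254/latest/api/token",
    method="PUT",
    headers={"X-aws-ec2-metadata-token-ttl-seconds": "21600"},
)
token = urllib.request.urlopen(token_req, timeout=2).read().decode()

# IMDSv2 step 2: fetch user-data (the same path Ignition logs above),
# presenting the token in the X-aws-ec2-metadata-token header.
data_req = urllib.request.Request(
    "http://169.254.169.254/2019-10-01/user-data",
    headers={"X-aws-ec2-metadata-token": token},
)
user_data = urllib.request.urlopen(data_req, timeout=2).read()
print(user_data[:80])
```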
Jan 16 23:59:55.457550 ignition[1218]: Ignition 2.19.0
Jan 16 23:59:55.457579 ignition[1218]: Stage: kargs
Jan 16 23:59:55.460889 ignition[1218]: no configs at "/usr/lib/ignition/base.d"
Jan 16 23:59:55.460928 ignition[1218]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 16 23:59:55.461386 ignition[1218]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 16 23:59:55.465720 ignition[1218]: PUT result: OK
Jan 16 23:59:55.472671 ignition[1218]: kargs: kargs passed
Jan 16 23:59:55.473005 ignition[1218]: Ignition finished successfully
Jan 16 23:59:55.481692 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 16 23:59:55.496899 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 16 23:59:55.519911 ignition[1224]: Ignition 2.19.0
Jan 16 23:59:55.519932 ignition[1224]: Stage: disks
Jan 16 23:59:55.520587 ignition[1224]: no configs at "/usr/lib/ignition/base.d"
Jan 16 23:59:55.520613 ignition[1224]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 16 23:59:55.520764 ignition[1224]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 16 23:59:55.523687 ignition[1224]: PUT result: OK
Jan 16 23:59:55.532828 ignition[1224]: disks: disks passed
Jan 16 23:59:55.532923 ignition[1224]: Ignition finished successfully
Jan 16 23:59:55.541036 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 16 23:59:55.543854 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 16 23:59:55.548310 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 16 23:59:55.550701 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 16 23:59:55.558396 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 16 23:59:55.563008 systemd[1]: Reached target basic.target - Basic System.
Jan 16 23:59:55.577291 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 16 23:59:55.616263 systemd-fsck[1232]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Jan 16 23:59:55.622749 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 16 23:59:55.636202 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 16 23:59:55.716001 kernel: EXT4-fs (nvme0n1p9): mounted filesystem b70ce012-b356-4603-a688-ee0b3b7de551 r/w with ordered data mode. Quota mode: none.
Jan 16 23:59:55.717630 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 16 23:59:55.722295 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 16 23:59:55.737134 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 16 23:59:55.744230 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 16 23:59:55.750588 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jan 16 23:59:55.750677 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 16 23:59:55.775544 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/nvme0n1p6 scanned by mount (1251)
Jan 16 23:59:55.775590 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 629d412e-8b84-495a-b9b7-c361e81b0700
Jan 16 23:59:55.775618 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Jan 16 23:59:55.776173 kernel: BTRFS info (device nvme0n1p6): using free space tree
Jan 16 23:59:55.750728 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 16 23:59:55.790471 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 16 23:59:55.801123 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Jan 16 23:59:55.801446 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 16 23:59:55.808584 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 16 23:59:56.145158 systemd-networkd[1202]: eth0: Gained IPv6LL
Jan 16 23:59:56.186230 initrd-setup-root[1275]: cut: /sysroot/etc/passwd: No such file or directory
Jan 16 23:59:56.206603 initrd-setup-root[1282]: cut: /sysroot/etc/group: No such file or directory
Jan 16 23:59:56.215867 initrd-setup-root[1289]: cut: /sysroot/etc/shadow: No such file or directory
Jan 16 23:59:56.225617 initrd-setup-root[1296]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 16 23:59:56.641355 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 16 23:59:56.654321 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 16 23:59:56.661187 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 16 23:59:56.679541 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 16 23:59:56.685316 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 629d412e-8b84-495a-b9b7-c361e81b0700
Jan 16 23:59:56.729389 ignition[1363]: INFO : Ignition 2.19.0
Jan 16 23:59:56.729389 ignition[1363]: INFO : Stage: mount
Jan 16 23:59:56.734403 ignition[1363]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 16 23:59:56.734403 ignition[1363]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 16 23:59:56.734403 ignition[1363]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 16 23:59:56.734403 ignition[1363]: INFO : PUT result: OK
Jan 16 23:59:56.730556 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 16 23:59:56.753567 ignition[1363]: INFO : mount: mount passed
Jan 16 23:59:56.755635 ignition[1363]: INFO : Ignition finished successfully
Jan 16 23:59:56.756224 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 16 23:59:56.773148 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 16 23:59:56.789248 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 16 23:59:56.826646 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 scanned by mount (1375)
Jan 16 23:59:56.826710 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 629d412e-8b84-495a-b9b7-c361e81b0700
Jan 16 23:59:56.826738 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Jan 16 23:59:56.829659 kernel: BTRFS info (device nvme0n1p6): using free space tree
Jan 16 23:59:56.835001 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Jan 16 23:59:56.838290 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 16 23:59:56.879996 ignition[1392]: INFO : Ignition 2.19.0 Jan 16 23:59:56.882105 ignition[1392]: INFO : Stage: files Jan 16 23:59:56.882105 ignition[1392]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 16 23:59:56.882105 ignition[1392]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 16 23:59:56.882105 ignition[1392]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 16 23:59:56.892177 ignition[1392]: INFO : PUT result: OK Jan 16 23:59:56.897607 ignition[1392]: DEBUG : files: compiled without relabeling support, skipping Jan 16 23:59:56.900458 ignition[1392]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 16 23:59:56.900458 ignition[1392]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 16 23:59:56.920741 ignition[1392]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 16 23:59:56.924886 ignition[1392]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 16 23:59:56.924886 ignition[1392]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 16 23:59:56.921610 unknown[1392]: wrote ssh authorized keys file for user: core Jan 16 23:59:56.933587 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Jan 16 23:59:56.933587 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Jan 16 23:59:56.933587 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz" Jan 16 23:59:56.933587 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz: attempt #1 Jan 16 23:59:57.020179 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Jan 16 23:59:57.195747 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz" Jan 16 23:59:57.195747 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Jan 16 23:59:57.195747 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1 Jan 16 23:59:57.273109 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK Jan 16 23:59:57.390101 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Jan 16 23:59:57.395693 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/install.sh" Jan 16 23:59:57.395693 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/install.sh" Jan 16 23:59:57.395693 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 16 23:59:57.395693 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 16 23:59:57.395693 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 16 
23:59:57.395693 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 16 23:59:57.395693 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 16 23:59:57.395693 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 16 23:59:57.395693 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 16 23:59:57.395693 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 16 23:59:57.395693 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Jan 16 23:59:57.395693 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Jan 16 23:59:57.395693 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Jan 16 23:59:57.395693 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-arm64.raw: attempt #1 Jan 16 23:59:57.697138 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET result: OK Jan 16 23:59:58.032738 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Jan 16 23:59:58.032738 ignition[1392]: INFO : files: op(d): [started] processing unit "containerd.service" Jan 16 23:59:58.040872 ignition[1392]: INFO : files: op(d): op(e): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Jan 16 23:59:58.040872 ignition[1392]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Jan 16 23:59:58.040872 ignition[1392]: INFO : files: op(d): [finished] processing unit "containerd.service" Jan 16 23:59:58.040872 ignition[1392]: INFO : files: op(f): [started] processing unit "prepare-helm.service" Jan 16 23:59:58.040872 ignition[1392]: INFO : files: op(f): op(10): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 16 23:59:58.040872 ignition[1392]: INFO : files: op(f): op(10): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 16 23:59:58.040872 ignition[1392]: INFO : files: op(f): [finished] processing unit "prepare-helm.service" Jan 16 23:59:58.040872 ignition[1392]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Jan 16 23:59:58.040872 ignition[1392]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Jan 16 23:59:58.040872 ignition[1392]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 16 23:59:58.040872 ignition[1392]: INFO : files: createResultFile: createFiles: op(12): 
[finished] writing file "/sysroot/etc/.ignition-result.json" Jan 16 23:59:58.040872 ignition[1392]: INFO : files: files passed Jan 16 23:59:58.040872 ignition[1392]: INFO : Ignition finished successfully Jan 16 23:59:58.087929 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 16 23:59:58.098410 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 16 23:59:58.112335 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 16 23:59:58.126552 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 16 23:59:58.126748 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 16 23:59:58.145184 initrd-setup-root-after-ignition[1420]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 16 23:59:58.145184 initrd-setup-root-after-ignition[1420]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 16 23:59:58.154297 initrd-setup-root-after-ignition[1424]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 16 23:59:58.161057 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 16 23:59:58.164861 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 16 23:59:58.180341 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 16 23:59:58.234394 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 16 23:59:58.235427 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 16 23:59:58.243348 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 16 23:59:58.246007 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 16 23:59:58.248729 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 16 23:59:58.265365 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 16 23:59:58.292055 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 16 23:59:58.304349 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 16 23:59:58.333810 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 16 23:59:58.340139 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 16 23:59:58.343444 systemd[1]: Stopped target timers.target - Timer Units. Jan 16 23:59:58.349734 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 16 23:59:58.349999 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 16 23:59:58.351173 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 16 23:59:58.351576 systemd[1]: Stopped target basic.target - Basic System. Jan 16 23:59:58.351971 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 16 23:59:58.352357 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 16 23:59:58.352753 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 16 23:59:58.353149 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 16 23:59:58.353503 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 16 23:59:58.353888 systemd[1]: Stopped target sysinit.target - System Initialization. 
Jan 16 23:59:58.354615 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 16 23:59:58.355000 systemd[1]: Stopped target swap.target - Swaps. Jan 16 23:59:58.355901 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 16 23:59:58.360113 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 16 23:59:58.362702 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 16 23:59:58.363189 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 16 23:59:58.365198 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 16 23:59:58.381878 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 16 23:59:58.382205 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 16 23:59:58.382494 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 16 23:59:58.391915 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 16 23:59:58.392568 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 16 23:59:58.397425 systemd[1]: ignition-files.service: Deactivated successfully. Jan 16 23:59:58.397730 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 16 23:59:58.442684 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 16 23:59:58.458136 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 16 23:59:58.469624 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 16 23:59:58.470201 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 16 23:59:58.479754 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 16 23:59:58.480018 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 16 23:59:58.492932 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 16 23:59:58.493214 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 16 23:59:58.523228 ignition[1444]: INFO : Ignition 2.19.0 Jan 16 23:59:58.525419 ignition[1444]: INFO : Stage: umount Jan 16 23:59:58.527145 ignition[1444]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 16 23:59:58.530211 ignition[1444]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 16 23:59:58.530211 ignition[1444]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 16 23:59:58.533131 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 16 23:59:58.539986 ignition[1444]: INFO : PUT result: OK Jan 16 23:59:58.547646 ignition[1444]: INFO : umount: umount passed Jan 16 23:59:58.553109 ignition[1444]: INFO : Ignition finished successfully Jan 16 23:59:58.549695 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 16 23:59:58.549890 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 16 23:59:58.558608 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 16 23:59:58.558814 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 16 23:59:58.563047 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 16 23:59:58.563223 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 16 23:59:58.566300 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 16 23:59:58.566394 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 16 23:59:58.569339 systemd[1]: ignition-fetch.service: Deactivated successfully. 
Jan 16 23:59:58.569422 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 16 23:59:58.573538 systemd[1]: Stopped target network.target - Network. Jan 16 23:59:58.577317 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 16 23:59:58.578089 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 16 23:59:58.582534 systemd[1]: Stopped target paths.target - Path Units. Jan 16 23:59:58.586168 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 16 23:59:58.590343 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 16 23:59:58.598101 systemd[1]: Stopped target slices.target - Slice Units. Jan 16 23:59:58.600392 systemd[1]: Stopped target sockets.target - Socket Units. Jan 16 23:59:58.602699 systemd[1]: iscsid.socket: Deactivated successfully. Jan 16 23:59:58.602781 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 16 23:59:58.605109 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 16 23:59:58.605182 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 16 23:59:58.607782 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 16 23:59:58.607867 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 16 23:59:58.614419 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 16 23:59:58.614497 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 16 23:59:58.617007 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 16 23:59:58.617090 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 16 23:59:58.620144 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 16 23:59:58.622493 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 16 23:59:58.632192 systemd-networkd[1202]: eth0: DHCPv6 lease lost Jan 16 23:59:58.635067 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 16 23:59:58.635832 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 16 23:59:58.658164 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 16 23:59:58.658380 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 16 23:59:58.665403 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 16 23:59:58.665491 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 16 23:59:58.700339 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 16 23:59:58.714107 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 16 23:59:58.714246 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 16 23:59:58.719150 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 16 23:59:58.719275 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 16 23:59:58.722205 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 16 23:59:58.722287 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 16 23:59:58.725098 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 16 23:59:58.725182 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 16 23:59:58.728546 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... 
Jan 16 23:59:58.770591 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 16 23:59:58.775433 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 16 23:59:58.782398 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 16 23:59:58.782550 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 16 23:59:58.788510 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 16 23:59:58.788592 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 16 23:59:58.791475 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 16 23:59:58.791571 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 16 23:59:58.797065 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 16 23:59:58.797159 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 16 23:59:58.817176 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 16 23:59:58.817285 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 16 23:59:58.832276 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 16 23:59:58.834922 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 16 23:59:58.835066 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 16 23:59:58.838315 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jan 16 23:59:58.838403 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 16 23:59:58.841541 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 16 23:59:58.841616 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 16 23:59:58.845722 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 16 23:59:58.845801 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 16 23:59:58.849352 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 16 23:59:58.849530 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 16 23:59:58.903637 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 16 23:59:58.904047 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 16 23:59:58.912876 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 16 23:59:58.921254 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 16 23:59:58.974053 systemd[1]: Switching root. Jan 16 23:59:59.003966 systemd-journald[250]: Journal stopped Jan 17 00:00:01.570733 systemd-journald[250]: Received SIGTERM from PID 1 (systemd). 
Jan 17 00:00:01.570854 kernel: SELinux: policy capability network_peer_controls=1 Jan 17 00:00:01.570899 kernel: SELinux: policy capability open_perms=1 Jan 17 00:00:01.570929 kernel: SELinux: policy capability extended_socket_class=1 Jan 17 00:00:01.570982 kernel: SELinux: policy capability always_check_network=0 Jan 17 00:00:01.571040 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 17 00:00:01.571078 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 17 00:00:01.571110 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 17 00:00:01.571141 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 17 00:00:01.571171 kernel: audit: type=1403 audit(1768607999.772:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 17 00:00:01.571232 systemd[1]: Successfully loaded SELinux policy in 76.711ms. Jan 17 00:00:01.571287 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 22.872ms. Jan 17 00:00:01.571329 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 17 00:00:01.571360 systemd[1]: Detected virtualization amazon. Jan 17 00:00:01.571396 systemd[1]: Detected architecture arm64. Jan 17 00:00:01.571426 systemd[1]: Detected first boot. Jan 17 00:00:01.571459 systemd[1]: Initializing machine ID from VM UUID. Jan 17 00:00:01.571492 zram_generator::config[1505]: No configuration found. Jan 17 00:00:01.571527 systemd[1]: Populated /etc with preset unit settings. Jan 17 00:00:01.571558 systemd[1]: Queued start job for default target multi-user.target. Jan 17 00:00:01.571590 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. Jan 17 00:00:01.571624 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 17 00:00:01.571657 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 17 00:00:01.571692 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 17 00:00:01.571726 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 17 00:00:01.571762 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 17 00:00:01.571792 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 17 00:00:01.571822 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 17 00:00:01.571852 systemd[1]: Created slice user.slice - User and Session Slice. Jan 17 00:00:01.571881 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 17 00:00:01.571911 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 17 00:00:01.571945 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 17 00:00:01.572043 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 17 00:00:01.572077 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 17 00:00:01.572108 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... 
Jan 17 00:00:01.572141 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 17 00:00:01.572173 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 17 00:00:01.572214 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 17 00:00:01.572248 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 17 00:00:01.572281 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 17 00:00:01.572316 systemd[1]: Reached target slices.target - Slice Units. Jan 17 00:00:01.572349 systemd[1]: Reached target swap.target - Swaps. Jan 17 00:00:01.572381 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 17 00:00:01.572412 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 17 00:00:01.572444 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 17 00:00:01.572474 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jan 17 00:00:01.572505 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 17 00:00:01.572538 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 17 00:00:01.572571 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 17 00:00:01.572605 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 17 00:00:01.572635 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 17 00:00:01.572668 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 17 00:00:01.572698 systemd[1]: Mounting media.mount - External Media Directory... Jan 17 00:00:01.572727 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 17 00:00:01.572756 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 17 00:00:01.572786 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 17 00:00:01.572815 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 17 00:00:01.572849 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 17 00:00:01.572879 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 17 00:00:01.572911 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 17 00:00:01.572966 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 17 00:00:01.573026 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 17 00:00:01.573063 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 17 00:00:01.573094 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 17 00:00:01.573126 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 17 00:00:01.573159 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 17 00:00:01.575657 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Jan 17 00:00:01.575697 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.) Jan 17 00:00:01.575729 systemd[1]: Starting systemd-journald.service - Journal Service... 
Jan 17 00:00:01.575758 kernel: fuse: init (API version 7.39) Jan 17 00:00:01.575787 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 17 00:00:01.575817 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 17 00:00:01.575845 kernel: loop: module loaded Jan 17 00:00:01.575874 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 17 00:00:01.575912 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 17 00:00:01.575943 kernel: ACPI: bus type drm_connector registered Jan 17 00:00:01.576005 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 17 00:00:01.576038 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 17 00:00:01.576067 systemd[1]: Mounted media.mount - External Media Directory. Jan 17 00:00:01.576097 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 17 00:00:01.576129 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 17 00:00:01.576159 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 17 00:00:01.576188 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 17 00:00:01.576218 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 17 00:00:01.576254 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 17 00:00:01.576283 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 17 00:00:01.576313 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 17 00:00:01.576342 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 17 00:00:01.576373 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 17 00:00:01.576403 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 17 00:00:01.576434 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 17 00:00:01.576468 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 17 00:00:01.576498 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 17 00:00:01.576530 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 17 00:00:01.576562 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 17 00:00:01.576593 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 17 00:00:01.576623 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 17 00:00:01.576658 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 17 00:00:01.576688 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 17 00:00:01.576764 systemd-journald[1604]: Collecting audit messages is disabled. Jan 17 00:00:01.576826 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 17 00:00:01.576859 systemd-journald[1604]: Journal started Jan 17 00:00:01.576911 systemd-journald[1604]: Runtime Journal (/run/log/journal/ec22407b8c5c002a1a5452a5338a934a) is 8.0M, max 75.3M, 67.3M free. Jan 17 00:00:01.592034 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 17 00:00:01.609049 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... 
Jan 17 00:00:01.609139 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 17 00:00:01.635273 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 17 00:00:01.635366 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 17 00:00:01.653541 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 17 00:00:01.653628 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 17 00:00:01.663991 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 17 00:00:01.692092 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 17 00:00:01.698617 systemd[1]: Started systemd-journald.service - Journal Service. Jan 17 00:00:01.706608 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 17 00:00:01.709635 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 17 00:00:01.713087 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 17 00:00:01.752904 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 17 00:00:01.765387 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 17 00:00:01.803850 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 17 00:00:01.824205 systemd-journald[1604]: Time spent on flushing to /var/log/journal/ec22407b8c5c002a1a5452a5338a934a is 93.569ms for 895 entries. Jan 17 00:00:01.824205 systemd-journald[1604]: System Journal (/var/log/journal/ec22407b8c5c002a1a5452a5338a934a) is 8.0M, max 195.6M, 187.6M free. Jan 17 00:00:01.929388 systemd-journald[1604]: Received client request to flush runtime journal. Jan 17 00:00:01.864792 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 17 00:00:01.883308 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jan 17 00:00:01.889922 systemd-tmpfiles[1638]: ACLs are not supported, ignoring. Jan 17 00:00:01.889946 systemd-tmpfiles[1638]: ACLs are not supported, ignoring. Jan 17 00:00:01.903118 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 17 00:00:01.919454 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 17 00:00:01.936804 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 17 00:00:01.951305 udevadm[1668]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Jan 17 00:00:01.996893 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 17 00:00:02.010454 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 17 00:00:02.042274 systemd-tmpfiles[1678]: ACLs are not supported, ignoring. Jan 17 00:00:02.042825 systemd-tmpfiles[1678]: ACLs are not supported, ignoring. Jan 17 00:00:02.055774 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 17 00:00:02.668019 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. 
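The journald lines above show the 8.0M runtime journal in /run/log/journal being flushed into the persistent journal under /var/log/journal (895 entries in roughly 94 ms). Both stores can be queried programmatically rather than by scraping console output; a minimal sketch using the python-systemd bindings, assuming the `systemd` Python package is installed:

```python
#!/usr/bin/env python3
"""Sketch: query the journal programmatically, the way journalctl does.
Assumes the python-systemd bindings (the `systemd` package) are installed."""
from systemd import journal

reader = journal.Reader()            # opens runtime and persistent journals
reader.this_boot()                   # equivalent filter to `journalctl -b`
reader.log_level(journal.LOG_INFO)   # INFO and more severe
reader.add_match(_SYSTEMD_UNIT="systemd-journald.service")

for entry in reader:
    # entries are dicts of journal fields (MESSAGE, _PID, _COMM, ...)
    print(entry.get("__REALTIME_TIMESTAMP"), entry.get("MESSAGE"))
```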
Jan 17 00:00:02.677407 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 17 00:00:02.742570 systemd-udevd[1684]: Using default interface naming scheme 'v255'. Jan 17 00:00:02.785173 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 17 00:00:02.808346 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 17 00:00:02.833243 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 17 00:00:02.937114 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0. Jan 17 00:00:02.978674 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 17 00:00:02.978724 (udev-worker)[1691]: Network interface NamePolicy= disabled on kernel command line. Jan 17 00:00:03.105014 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 37 scanned by (udev-worker) (1687) Jan 17 00:00:03.187827 systemd-networkd[1690]: lo: Link UP Jan 17 00:00:03.187853 systemd-networkd[1690]: lo: Gained carrier Jan 17 00:00:03.190590 systemd-networkd[1690]: Enumeration completed Jan 17 00:00:03.190814 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 17 00:00:03.195362 systemd-networkd[1690]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 17 00:00:03.195369 systemd-networkd[1690]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 17 00:00:03.198662 systemd-networkd[1690]: eth0: Link UP Jan 17 00:00:03.198989 systemd-networkd[1690]: eth0: Gained carrier Jan 17 00:00:03.199025 systemd-networkd[1690]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 17 00:00:03.207634 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 17 00:00:03.218191 systemd-networkd[1690]: eth0: DHCPv4 address 172.31.23.5/20, gateway 172.31.16.1 acquired from 172.31.16.1 Jan 17 00:00:03.367594 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 17 00:00:03.457985 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 17 00:00:03.477523 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Jan 17 00:00:03.488507 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 17 00:00:03.509633 lvm[1809]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 17 00:00:03.554068 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 00:00:03.559385 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 17 00:00:03.562704 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 17 00:00:03.572420 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 17 00:00:03.587147 lvm[1816]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 17 00:00:03.626329 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 17 00:00:03.632713 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 17 00:00:03.635656 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). 
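systemd-networkd matched eth0 (the name is stable here because the kernel command line carries net.ifnames=0) against zz-default.network and obtained 172.31.23.5/20 from the EC2 DHCP server at 172.31.16.1. One way to confirm the lease actually landed on the interface is to parse iproute2's JSON output; a small sketch, assuming an iproute2 recent enough to support the -j flag:

```python
#!/usr/bin/env python3
"""Sketch: confirm the DHCPv4 address systemd-networkd reported above by
parsing iproute2's JSON output (`ip -j`)."""
import json
import subprocess

out = subprocess.run(
    ["ip", "-j", "-4", "addr", "show", "dev", "eth0"],
    capture_output=True, text=True, check=True,
).stdout

for iface in json.loads(out):
    for addr in iface.get("addr_info", []):
        # on this instance, expect something like 172.31.23.5/20
        print(f'{iface["ifname"]}: {addr["local"]}/{addr["prefixlen"]}')
```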
Jan 17 00:00:03.635714 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 17 00:00:03.638244 systemd[1]: Reached target machines.target - Containers. Jan 17 00:00:03.642668 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jan 17 00:00:03.651407 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 17 00:00:03.658312 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 17 00:00:03.663661 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 17 00:00:03.668336 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 17 00:00:03.687234 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jan 17 00:00:03.709260 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 17 00:00:03.713925 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 17 00:00:03.729571 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 17 00:00:03.751833 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 17 00:00:03.759606 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jan 17 00:00:03.766540 kernel: loop0: detected capacity change from 0 to 207008 Jan 17 00:00:03.798027 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 17 00:00:03.840074 kernel: loop1: detected capacity change from 0 to 114432 Jan 17 00:00:03.949383 kernel: loop2: detected capacity change from 0 to 114328 Jan 17 00:00:04.021044 kernel: loop3: detected capacity change from 0 to 52536 Jan 17 00:00:04.112745 kernel: loop4: detected capacity change from 0 to 207008 Jan 17 00:00:04.151935 kernel: loop5: detected capacity change from 0 to 114432 Jan 17 00:00:04.166044 kernel: loop6: detected capacity change from 0 to 114328 Jan 17 00:00:04.186017 kernel: loop7: detected capacity change from 0 to 52536 Jan 17 00:00:04.197910 (sd-merge)[1839]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'. Jan 17 00:00:04.198973 (sd-merge)[1839]: Merged extensions into '/usr'. Jan 17 00:00:04.210088 systemd[1]: Reloading requested from client PID 1825 ('systemd-sysext') (unit systemd-sysext.service)... Jan 17 00:00:04.210120 systemd[1]: Reloading... Jan 17 00:00:04.340635 ldconfig[1821]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 17 00:00:04.339998 systemd-networkd[1690]: eth0: Gained IPv6LL Jan 17 00:00:04.369557 zram_generator::config[1867]: No configuration found. Jan 17 00:00:04.628863 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 17 00:00:04.787125 systemd[1]: Reloading finished in 576 ms. Jan 17 00:00:04.818321 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 17 00:00:04.822438 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 17 00:00:04.825800 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 17 00:00:04.846288 systemd[1]: Starting ensure-sysext.service... 
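The (sd-merge) lines above show four sysext images (containerd-flatcar, docker-flatcar, kubernetes, oem-ami) being overlaid onto /usr, followed by the systemd reload the merge triggers. Whether an image is merged at all is decided by the extension-release file it carries, matched against the host's os-release. The sketch below is a simplified version of that compatibility check; the real logic in systemd-sysext also considers ARCHITECTURE and image policy, so treat it as illustrative only:

```python
#!/usr/bin/env python3
"""Sketch of the compatibility check systemd-sysext performs before merging
an extension into /usr. Simplified and illustrative only."""
from pathlib import Path

def parse_release(path: Path) -> dict:
    """Parse KEY=value lines from an os-release-style file."""
    fields = {}
    for raw in path.read_text().splitlines():
        line = raw.strip()
        if line and not line.startswith("#") and "=" in line:
            key, _, value = line.partition("=")
            fields[key] = value.strip('"')
    return fields

host = parse_release(Path("/etc/os-release"))
ext_dir = Path("/usr/lib/extension-release.d")  # release files of merged images

if ext_dir.is_dir():
    for release_file in sorted(ext_dir.glob("extension-release.*")):
        ext = parse_release(release_file)
        if ext.get("ID") == "_any":
            ok = True  # extension declares itself distro-independent
        else:
            ok = (ext.get("ID") == host.get("ID")
                  and (ext.get("SYSEXT_LEVEL") == host.get("SYSEXT_LEVEL")
                       or ext.get("VERSION_ID") == host.get("VERSION_ID")))
        print(release_file.name, "->", "compatible" if ok else "rejected")
```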
Jan 17 00:00:04.851300 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 17 00:00:04.883167 systemd[1]: Reloading requested from client PID 1928 ('systemctl') (unit ensure-sysext.service)... Jan 17 00:00:04.883211 systemd[1]: Reloading... Jan 17 00:00:04.911551 systemd-tmpfiles[1929]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 17 00:00:04.912251 systemd-tmpfiles[1929]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 17 00:00:04.918274 systemd-tmpfiles[1929]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 17 00:00:04.918896 systemd-tmpfiles[1929]: ACLs are not supported, ignoring. Jan 17 00:00:04.919090 systemd-tmpfiles[1929]: ACLs are not supported, ignoring. Jan 17 00:00:04.926488 systemd-tmpfiles[1929]: Detected autofs mount point /boot during canonicalization of boot. Jan 17 00:00:04.926515 systemd-tmpfiles[1929]: Skipping /boot Jan 17 00:00:04.950104 systemd-tmpfiles[1929]: Detected autofs mount point /boot during canonicalization of boot. Jan 17 00:00:04.950123 systemd-tmpfiles[1929]: Skipping /boot Jan 17 00:00:05.056996 zram_generator::config[1958]: No configuration found. Jan 17 00:00:05.292858 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 17 00:00:05.455574 systemd[1]: Reloading finished in 571 ms. Jan 17 00:00:05.491234 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 17 00:00:05.513362 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 17 00:00:05.523339 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 17 00:00:05.542277 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 17 00:00:05.559298 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 17 00:00:05.568853 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 17 00:00:05.608131 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 17 00:00:05.618810 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 17 00:00:05.639215 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 17 00:00:05.660014 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 17 00:00:05.663301 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 17 00:00:05.688901 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 17 00:00:05.689537 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 17 00:00:05.709478 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 17 00:00:05.722904 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 17 00:00:05.732237 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. 
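The systemd-tmpfiles warnings above ("Duplicate line for path ..., ignoring") come from its first-entry-wins handling of tmpfiles.d fragments: once one line has claimed a path, later lines for the same path are dropped with a warning. A rough reimplementation of that scan, simplified in that the real tool also lets a fragment in /etc override one of the same name in /usr/lib:

```python
#!/usr/bin/env python3
"""Sketch of the duplicate-path detection behind systemd-tmpfiles'
"Duplicate line for path ..., ignoring" warnings. Simplified."""
from pathlib import Path

seen = {}  # path -> fragment that first claimed it
for directory in ("/etc/tmpfiles.d", "/run/tmpfiles.d", "/usr/lib/tmpfiles.d"):
    base = Path(directory)
    if not base.is_dir():
        continue
    for frag in sorted(base.glob("*.conf")):
        for raw in frag.read_text().splitlines():
            line = raw.strip()
            if not line or line.startswith("#"):
                continue
            parts = line.split()
            if len(parts) < 2:
                continue
            path = parts[1]  # column 2 of a tmpfiles.d line is the path
            if path in seen:
                print(f"{frag.name}: duplicate line for path {path!r}, "
                      f"first claimed by {seen[path]}, ignoring")
            else:
                seen[path] = frag.name
```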
Jan 17 00:00:05.738019 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 17 00:00:05.744573 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 17 00:00:05.748008 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 17 00:00:05.767873 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 17 00:00:05.772378 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 17 00:00:05.789404 augenrules[2045]: No rules Jan 17 00:00:05.802625 systemd[1]: Finished ensure-sysext.service. Jan 17 00:00:05.807278 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 17 00:00:05.820854 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 17 00:00:05.827462 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 17 00:00:05.830310 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 17 00:00:05.830388 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 17 00:00:05.830500 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 17 00:00:05.830561 systemd[1]: Reached target time-set.target - System Time Set. Jan 17 00:00:05.844266 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 17 00:00:05.849204 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 17 00:00:05.853837 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 17 00:00:05.867949 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 17 00:00:05.870407 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 17 00:00:05.892684 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 17 00:00:05.893334 systemd-resolved[2022]: Positive Trust Anchors: Jan 17 00:00:05.893375 systemd-resolved[2022]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 17 00:00:05.893440 systemd-resolved[2022]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 17 00:00:05.910122 systemd-resolved[2022]: Defaulting to hostname 'linux'. Jan 17 00:00:05.913709 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 17 00:00:05.916599 systemd[1]: Reached target network.target - Network. Jan 17 00:00:05.918851 systemd[1]: Reached target network-online.target - Network is Online. Jan 17 00:00:05.921536 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 17 00:00:05.924392 systemd[1]: Reached target sysinit.target - System Initialization. 
Jan 17 00:00:05.927122 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 17 00:00:05.930254 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 17 00:00:05.934109 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 17 00:00:05.936778 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 17 00:00:05.939716 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 17 00:00:05.942920 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 17 00:00:05.943001 systemd[1]: Reached target paths.target - Path Units. Jan 17 00:00:05.945229 systemd[1]: Reached target timers.target - Timer Units. Jan 17 00:00:05.948694 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 17 00:00:05.954475 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 17 00:00:05.959483 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 17 00:00:05.968064 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 17 00:00:05.970926 systemd[1]: Reached target sockets.target - Socket Units. Jan 17 00:00:05.973629 systemd[1]: Reached target basic.target - Basic System. Jan 17 00:00:05.978604 systemd[1]: System is tainted: cgroupsv1 Jan 17 00:00:05.978714 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 17 00:00:05.978770 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 17 00:00:05.990286 systemd[1]: Starting containerd.service - containerd container runtime... Jan 17 00:00:06.006167 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 17 00:00:06.013437 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 17 00:00:06.032088 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 17 00:00:06.040199 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 17 00:00:06.044149 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 17 00:00:06.061741 jq[2076]: false Jan 17 00:00:06.070928 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 00:00:06.091340 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 17 00:00:06.099951 systemd[1]: Started ntpd.service - Network Time Service. Jan 17 00:00:06.122293 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 17 00:00:06.131214 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 17 00:00:06.148003 systemd[1]: Starting setup-oem.service - Setup OEM... Jan 17 00:00:06.171440 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 17 00:00:06.183534 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 17 00:00:06.193278 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 17 00:00:06.203673 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). 
Jan 17 00:00:06.208504 extend-filesystems[2077]: Found loop4 Jan 17 00:00:06.208504 extend-filesystems[2077]: Found loop5 Jan 17 00:00:06.208504 extend-filesystems[2077]: Found loop6 Jan 17 00:00:06.208504 extend-filesystems[2077]: Found loop7 Jan 17 00:00:06.208504 extend-filesystems[2077]: Found nvme0n1 Jan 17 00:00:06.208504 extend-filesystems[2077]: Found nvme0n1p1 Jan 17 00:00:06.208504 extend-filesystems[2077]: Found nvme0n1p2 Jan 17 00:00:06.208504 extend-filesystems[2077]: Found nvme0n1p3 Jan 17 00:00:06.208504 extend-filesystems[2077]: Found usr Jan 17 00:00:06.208504 extend-filesystems[2077]: Found nvme0n1p4 Jan 17 00:00:06.208504 extend-filesystems[2077]: Found nvme0n1p6 Jan 17 00:00:06.208504 extend-filesystems[2077]: Found nvme0n1p7 Jan 17 00:00:06.208504 extend-filesystems[2077]: Found nvme0n1p9 Jan 17 00:00:06.208504 extend-filesystems[2077]: Checking size of /dev/nvme0n1p9 Jan 17 00:00:06.213178 systemd[1]: Starting update-engine.service - Update Engine... Jan 17 00:00:06.209807 dbus-daemon[2075]: [system] SELinux support is enabled Jan 17 00:00:06.243546 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 17 00:00:06.240431 dbus-daemon[2075]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1690 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Jan 17 00:00:06.263357 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 17 00:00:06.313730 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 17 00:00:06.316351 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 17 00:00:06.378541 jq[2096]: true Jan 17 00:00:06.476072 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 3587067 blocks Jan 17 00:00:06.348942 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 17 00:00:06.476310 update_engine[2095]: I20260117 00:00:06.446753 2095 main.cc:92] Flatcar Update Engine starting Jan 17 00:00:06.476310 update_engine[2095]: I20260117 00:00:06.471354 2095 update_check_scheduler.cc:74] Next update check in 2m40s
Jan 17 00:00:06.500860 extend-filesystems[2077]: Resized partition /dev/nvme0n1p9 Jan 17 00:00:06.410917 ntpd[2083]: ntpd 4.2.8p17@1.4004-o Fri Jan 16 21:54:20 UTC 2026 (1): Starting Jan 17 00:00:06.476303 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 17 00:00:06.508182 extend-filesystems[2120]: resize2fs 1.47.1 (20-May-2024) Jan 17 00:00:06.410986 ntpd[2083]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jan 17 00:00:06.480595 systemd[1]: motdgen.service: Deactivated successfully. Jan 17 00:00:06.411009 ntpd[2083]: ---------------------------------------------------- Jan 17 00:00:06.483421 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 17 00:00:06.411028 ntpd[2083]: ntp-4 is maintained by Network Time Foundation, Jan 17 00:00:06.411048 ntpd[2083]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jan 17 00:00:06.411066 ntpd[2083]: corporation.
Support and training for ntp-4 are Jan 17 00:00:06.411086 ntpd[2083]: available at https://www.nwtime.org/support Jan 17 00:00:06.411104 ntpd[2083]: ---------------------------------------------------- Jan 17 00:00:06.416647 ntpd[2083]: proto: precision = 0.096 usec (-23) Jan 17 00:00:06.417626 ntpd[2083]: basedate set to 2026-01-04 Jan 17 00:00:06.417662 ntpd[2083]: gps base set to 2026-01-04 (week 2400) Jan 17 00:00:06.424278 ntpd[2083]: Listen and drop on 0 v6wildcard [::]:123 Jan 17 00:00:06.424363 ntpd[2083]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jan 17 00:00:06.424642 ntpd[2083]: Listen normally on 2 lo 127.0.0.1:123 Jan 17 00:00:06.424714 ntpd[2083]: Listen normally on 3 eth0 172.31.23.5:123 Jan 17 00:00:06.424790 ntpd[2083]: Listen normally on 4 lo [::1]:123 Jan 17 00:00:06.424868 ntpd[2083]: Listen normally on 5 eth0 [fe80::47d:dfff:fed6:e80d%2]:123 Jan 17 00:00:06.424928 ntpd[2083]: Listening on routing socket on fd #22 for interface updates Jan 17 00:00:06.438738 ntpd[2083]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 17 00:00:06.438800 ntpd[2083]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 17 00:00:06.574185 coreos-metadata[2073]: Jan 17 00:00:06.574 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Jan 17 00:00:06.580235 dbus-daemon[2075]: [system] Successfully activated service 'org.freedesktop.systemd1' Jan 17 00:00:06.586712 (ntainerd)[2133]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 17 00:00:06.602058 coreos-metadata[2073]: Jan 17 00:00:06.589 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Jan 17 00:00:06.602058 coreos-metadata[2073]: Jan 17 00:00:06.589 INFO Fetch successful Jan 17 00:00:06.602058 coreos-metadata[2073]: Jan 17 00:00:06.589 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Jan 17 00:00:06.602058 coreos-metadata[2073]: Jan 17 00:00:06.601 INFO Fetch successful Jan 17 00:00:06.602058 coreos-metadata[2073]: Jan 17 00:00:06.601 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Jan 17 00:00:06.587762 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 17 00:00:06.602461 tar[2113]: linux-arm64/LICENSE Jan 17 00:00:06.602461 tar[2113]: linux-arm64/helm Jan 17 00:00:06.594046 systemd[1]: Started update-engine.service - Update Engine. Jan 17 00:00:06.625569 coreos-metadata[2073]: Jan 17 00:00:06.620 INFO Fetch successful Jan 17 00:00:06.625569 coreos-metadata[2073]: Jan 17 00:00:06.620 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Jan 17 00:00:06.625722 jq[2125]: true Jan 17 00:00:06.608248 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 17 00:00:06.614637 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 17 00:00:06.614690 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 17 00:00:06.629298 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Jan 17 00:00:06.631721 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). 
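The coreos-metadata fetch sequence above is plain IMDSv2: one PUT to mint a session token, then GETs with the token attached as a header. A stdlib-only sketch of the same flow; the endpoint and header names are the documented EC2 ones, and the 2021-01-03 paths are the ones the log shows (note that unauthenticated IMDSv1-style GETs may be disabled on a given instance):

```python
#!/usr/bin/env python3
"""Sketch of the IMDSv2 session flow coreos-metadata is using above:
PUT for a token, then GET metadata paths with the token header."""
import urllib.request

IMDS = "http://169.254.169.254"

def imds_token(ttl_seconds: int = 300) -> str:
    req = urllib.request.Request(
        f"{IMDS}/latest/api/token",
        method="PUT",
        headers={"X-aws-ec2-metadata-token-ttl-seconds": str(ttl_seconds)},
    )
    with urllib.request.urlopen(req, timeout=2) as resp:
        return resp.read().decode()

def imds_get(path: str, token: str) -> str:
    req = urllib.request.Request(
        f"{IMDS}/{path}",
        headers={"X-aws-ec2-metadata-token": token},
    )
    with urllib.request.urlopen(req, timeout=2) as resp:
        return resp.read().decode()

token = imds_token()
# the same paths the log shows coreos-metadata fetching
for path in ("2021-01-03/meta-data/instance-id",
             "2021-01-03/meta-data/local-ipv4",
             "2021-01-03/meta-data/placement/availability-zone"):
    print(path, "->", imds_get(path, token))
```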
Jan 17 00:00:06.631758 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 17 00:00:06.638465 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 17 00:00:06.643627 coreos-metadata[2073]: Jan 17 00:00:06.642 INFO Fetch successful Jan 17 00:00:06.643627 coreos-metadata[2073]: Jan 17 00:00:06.642 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Jan 17 00:00:06.661945 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 17 00:00:06.674846 coreos-metadata[2073]: Jan 17 00:00:06.674 INFO Fetch failed with 404: resource not found Jan 17 00:00:06.674846 coreos-metadata[2073]: Jan 17 00:00:06.674 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Jan 17 00:00:06.676490 coreos-metadata[2073]: Jan 17 00:00:06.676 INFO Fetch successful Jan 17 00:00:06.676490 coreos-metadata[2073]: Jan 17 00:00:06.676 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Jan 17 00:00:06.678697 coreos-metadata[2073]: Jan 17 00:00:06.678 INFO Fetch successful Jan 17 00:00:06.678697 coreos-metadata[2073]: Jan 17 00:00:06.678 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Jan 17 00:00:06.685881 coreos-metadata[2073]: Jan 17 00:00:06.685 INFO Fetch successful Jan 17 00:00:06.685881 coreos-metadata[2073]: Jan 17 00:00:06.685 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Jan 17 00:00:06.688301 systemd[1]: Finished setup-oem.service - Setup OEM. Jan 17 00:00:06.690610 coreos-metadata[2073]: Jan 17 00:00:06.690 INFO Fetch successful Jan 17 00:00:06.705847 coreos-metadata[2073]: Jan 17 00:00:06.704 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Jan 17 00:00:06.714162 coreos-metadata[2073]: Jan 17 00:00:06.714 INFO Fetch successful Jan 17 00:00:06.765349 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. Jan 17 00:00:06.820777 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 3587067 Jan 17 00:00:06.854147 extend-filesystems[2120]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Jan 17 00:00:06.854147 extend-filesystems[2120]: old_desc_blocks = 1, new_desc_blocks = 2 Jan 17 00:00:06.854147 extend-filesystems[2120]: The filesystem on /dev/nvme0n1p9 is now 3587067 (4k) blocks long. Jan 17 00:00:06.865285 extend-filesystems[2077]: Resized filesystem in /dev/nvme0n1p9 Jan 17 00:00:06.871279 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 17 00:00:06.871835 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 17 00:00:06.955091 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 37 scanned by (udev-worker) (2166) Jan 17 00:00:06.984127 systemd-logind[2092]: Watching system buttons on /dev/input/event0 (Power Button) Jan 17 00:00:06.984176 systemd-logind[2092]: Watching system buttons on /dev/input/event1 (Sleep Button) Jan 17 00:00:06.984628 systemd-logind[2092]: New seat seat0. Jan 17 00:00:06.987513 systemd[1]: Started systemd-logind.service - User Login Management. Jan 17 00:00:06.995516 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 17 00:00:07.004057 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. 
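The EXT4 messages above record the root filesystem on /dev/nvme0n1p9 being grown online, from 553472 to 3587067 blocks of 4 KiB, which is roughly 2.1 GiB expanding to roughly 13.7 GiB. A quick check of that arithmetic:

```python
# Quick check of the resize figures logged above (4 KiB ext4 blocks).
BLOCK = 4096
before, after = 553_472, 3_587_067
print(f"before: {before * BLOCK / 2**30:.2f} GiB")  # ~2.11 GiB
print(f"after:  {after * BLOCK / 2**30:.2f} GiB")   # ~13.68 GiB
```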
Jan 17 00:00:07.021830 bash[2190]: Updated "/home/core/.ssh/authorized_keys" Jan 17 00:00:07.028918 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 17 00:00:07.037112 systemd[1]: Starting sshkeys.service... Jan 17 00:00:07.168687 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jan 17 00:00:07.177761 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jan 17 00:00:07.241722 amazon-ssm-agent[2162]: Initializing new seelog logger Jan 17 00:00:07.247218 amazon-ssm-agent[2162]: New Seelog Logger Creation Complete Jan 17 00:00:07.247218 amazon-ssm-agent[2162]: 2026/01/17 00:00:07 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 17 00:00:07.247218 amazon-ssm-agent[2162]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 17 00:00:07.251848 amazon-ssm-agent[2162]: 2026/01/17 00:00:07 processing appconfig overrides Jan 17 00:00:07.263985 amazon-ssm-agent[2162]: 2026/01/17 00:00:07 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 17 00:00:07.263985 amazon-ssm-agent[2162]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 17 00:00:07.263985 amazon-ssm-agent[2162]: 2026/01/17 00:00:07 processing appconfig overrides Jan 17 00:00:07.263985 amazon-ssm-agent[2162]: 2026/01/17 00:00:07 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 17 00:00:07.263985 amazon-ssm-agent[2162]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 17 00:00:07.263985 amazon-ssm-agent[2162]: 2026/01/17 00:00:07 processing appconfig overrides Jan 17 00:00:07.271426 amazon-ssm-agent[2162]: 2026-01-17 00:00:07 INFO Proxy environment variables: Jan 17 00:00:07.289559 amazon-ssm-agent[2162]: 2026/01/17 00:00:07 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 17 00:00:07.289559 amazon-ssm-agent[2162]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. 
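sshkeys.service and the coreos-metadata-sshkeys@core unit started above exist to pull the instance's public key out of the metadata service and rewrite /home/core/.ssh/authorized_keys (the 'Updated "/home/core/.ssh/authorized_keys"' line). A sketch of just the file-update step, assuming the key text has already been fetched; add_authorized_key is a hypothetical helper, and the 0700/0600 modes follow OpenSSH's expectations:

```python
# Sketch of the authorized_keys update logged above: append a key only if
# it is not already present, keeping the permissions sshd insists on.
import os
from pathlib import Path

def add_authorized_key(key: str, home: str = "/home/core") -> None:
    path = Path(home) / ".ssh" / "authorized_keys"
    path.parent.mkdir(mode=0o700, exist_ok=True)
    lines = path.read_text().splitlines() if path.exists() else []
    if key.strip() not in lines:
        with path.open("a") as f:
            f.write(key.strip() + "\n")
    os.chmod(path, 0o600)
```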
Jan 17 00:00:07.289712 amazon-ssm-agent[2162]: 2026/01/17 00:00:07 processing appconfig overrides Jan 17 00:00:07.382594 amazon-ssm-agent[2162]: 2026-01-17 00:00:07 INFO no_proxy: Jan 17 00:00:07.494082 amazon-ssm-agent[2162]: 2026-01-17 00:00:07 INFO https_proxy: Jan 17 00:00:07.494215 sshd_keygen[2112]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 17 00:00:07.593003 amazon-ssm-agent[2162]: 2026-01-17 00:00:07 INFO http_proxy: Jan 17 00:00:07.670191 containerd[2133]: time="2026-01-17T00:00:07.666202873Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jan 17 00:00:07.693836 amazon-ssm-agent[2162]: 2026-01-17 00:00:07 INFO Checking if agent identity type OnPrem can be assumed Jan 17 00:00:07.710637 coreos-metadata[2227]: Jan 17 00:00:07.710 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Jan 17 00:00:07.717101 coreos-metadata[2227]: Jan 17 00:00:07.715 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Jan 17 00:00:07.721450 coreos-metadata[2227]: Jan 17 00:00:07.717 INFO Fetch successful Jan 17 00:00:07.721450 coreos-metadata[2227]: Jan 17 00:00:07.717 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Jan 17 00:00:07.723802 coreos-metadata[2227]: Jan 17 00:00:07.723 INFO Fetch successful Jan 17 00:00:07.732082 unknown[2227]: wrote ssh authorized keys file for user: core Jan 17 00:00:07.795392 amazon-ssm-agent[2162]: 2026-01-17 00:00:07 INFO Checking if agent identity type EC2 can be assumed Jan 17 00:00:07.814980 update-ssh-keys[2315]: Updated "/home/core/.ssh/authorized_keys" Jan 17 00:00:07.823051 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jan 17 00:00:07.839542 systemd[1]: Finished sshkeys.service. Jan 17 00:00:07.848590 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 17 00:00:07.866778 locksmithd[2150]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 17 00:00:07.869484 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 17 00:00:07.883793 systemd[1]: Started sshd@0-172.31.23.5:22-68.220.241.50:42230.service - OpenSSH per-connection server daemon (68.220.241.50:42230). Jan 17 00:00:07.892649 dbus-daemon[2075]: [system] Successfully activated service 'org.freedesktop.hostname1' Jan 17 00:00:07.898363 amazon-ssm-agent[2162]: 2026-01-17 00:00:07 INFO Agent will take identity from EC2 Jan 17 00:00:07.901706 dbus-daemon[2075]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.7' (uid=0 pid=2149 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Jan 17 00:00:07.907204 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Jan 17 00:00:07.925190 systemd[1]: Starting polkit.service - Authorization Manager... Jan 17 00:00:07.961026 systemd[1]: issuegen.service: Deactivated successfully. Jan 17 00:00:07.961645 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 17 00:00:07.979428 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 17 00:00:07.991556 polkitd[2332]: Started polkitd version 121 Jan 17 00:00:07.998307 amazon-ssm-agent[2162]: 2026-01-17 00:00:07 INFO [amazon-ssm-agent] using named pipe channel for IPC Jan 17 00:00:07.999449 containerd[2133]: time="2026-01-17T00:00:07.999343370Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
type=io.containerd.snapshotter.v1 Jan 17 00:00:08.006936 containerd[2133]: time="2026-01-17T00:00:08.006859006Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.119-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 17 00:00:08.006936 containerd[2133]: time="2026-01-17T00:00:08.006925378Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 17 00:00:08.007117 containerd[2133]: time="2026-01-17T00:00:08.006981166Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 17 00:00:08.007367 containerd[2133]: time="2026-01-17T00:00:08.007322170Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 17 00:00:08.007430 containerd[2133]: time="2026-01-17T00:00:08.007370830Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 17 00:00:08.010995 containerd[2133]: time="2026-01-17T00:00:08.007501534Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 17 00:00:08.010995 containerd[2133]: time="2026-01-17T00:00:08.007542166Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 17 00:00:08.010995 containerd[2133]: time="2026-01-17T00:00:08.007909570Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 17 00:00:08.012734 containerd[2133]: time="2026-01-17T00:00:08.007941886Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 17 00:00:08.012875 containerd[2133]: time="2026-01-17T00:00:08.012750394Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 17 00:00:08.012875 containerd[2133]: time="2026-01-17T00:00:08.012788902Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 17 00:00:08.014005 containerd[2133]: time="2026-01-17T00:00:08.013040626Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 17 00:00:08.014005 containerd[2133]: time="2026-01-17T00:00:08.013461382Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 17 00:00:08.014005 containerd[2133]: time="2026-01-17T00:00:08.013746094Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 17 00:00:08.014005 containerd[2133]: time="2026-01-17T00:00:08.013778122Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." 
type=io.containerd.content.v1 Jan 17 00:00:08.014005 containerd[2133]: time="2026-01-17T00:00:08.013949806Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 17 00:00:08.014296 containerd[2133]: time="2026-01-17T00:00:08.014084098Z" level=info msg="metadata content store policy set" policy=shared Jan 17 00:00:08.017523 polkitd[2332]: Loading rules from directory /etc/polkit-1/rules.d Jan 17 00:00:08.017654 polkitd[2332]: Loading rules from directory /usr/share/polkit-1/rules.d Jan 17 00:00:08.024388 polkitd[2332]: Finished loading, compiling and executing 2 rules Jan 17 00:00:08.027321 dbus-daemon[2075]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Jan 17 00:00:08.027628 systemd[1]: Started polkit.service - Authorization Manager. Jan 17 00:00:08.033089 containerd[2133]: time="2026-01-17T00:00:08.030084155Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 17 00:00:08.033089 containerd[2133]: time="2026-01-17T00:00:08.030190415Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 17 00:00:08.033089 containerd[2133]: time="2026-01-17T00:00:08.030229439Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 17 00:00:08.033089 containerd[2133]: time="2026-01-17T00:00:08.030264287Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 17 00:00:08.033089 containerd[2133]: time="2026-01-17T00:00:08.030306119Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 17 00:00:08.033089 containerd[2133]: time="2026-01-17T00:00:08.030577139Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 17 00:00:08.033089 containerd[2133]: time="2026-01-17T00:00:08.031256423Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 17 00:00:08.033089 containerd[2133]: time="2026-01-17T00:00:08.031491083Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 17 00:00:08.033089 containerd[2133]: time="2026-01-17T00:00:08.031525943Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 17 00:00:08.033089 containerd[2133]: time="2026-01-17T00:00:08.031559327Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 17 00:00:08.033089 containerd[2133]: time="2026-01-17T00:00:08.031591067Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 17 00:00:08.033089 containerd[2133]: time="2026-01-17T00:00:08.031622927Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 17 00:00:08.033089 containerd[2133]: time="2026-01-17T00:00:08.031652879Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 17 00:00:08.033089 containerd[2133]: time="2026-01-17T00:00:08.031684799Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." 
type=io.containerd.service.v1 Jan 17 00:00:08.042117 containerd[2133]: time="2026-01-17T00:00:08.031716947Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 17 00:00:08.042117 containerd[2133]: time="2026-01-17T00:00:08.031746311Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 17 00:00:08.042117 containerd[2133]: time="2026-01-17T00:00:08.031778483Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 17 00:00:08.042117 containerd[2133]: time="2026-01-17T00:00:08.031808507Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 17 00:00:08.042117 containerd[2133]: time="2026-01-17T00:00:08.031850639Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 17 00:00:08.042117 containerd[2133]: time="2026-01-17T00:00:08.031883591Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 17 00:00:08.042117 containerd[2133]: time="2026-01-17T00:00:08.031919903Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 17 00:00:08.039532 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 17 00:00:08.038385 polkitd[2332]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Jan 17 00:00:08.051209 containerd[2133]: time="2026-01-17T00:00:08.046369331Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 17 00:00:08.051209 containerd[2133]: time="2026-01-17T00:00:08.046457303Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 17 00:00:08.051209 containerd[2133]: time="2026-01-17T00:00:08.046494827Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 17 00:00:08.051209 containerd[2133]: time="2026-01-17T00:00:08.046526003Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 17 00:00:08.051209 containerd[2133]: time="2026-01-17T00:00:08.046557263Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 17 00:00:08.051209 containerd[2133]: time="2026-01-17T00:00:08.046590647Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 17 00:00:08.051209 containerd[2133]: time="2026-01-17T00:00:08.046627643Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 17 00:00:08.051209 containerd[2133]: time="2026-01-17T00:00:08.046660919Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 17 00:00:08.051209 containerd[2133]: time="2026-01-17T00:00:08.046691315Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 17 00:00:08.051209 containerd[2133]: time="2026-01-17T00:00:08.046723115Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 17 00:00:08.051209 containerd[2133]: time="2026-01-17T00:00:08.046760879Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." 
type=io.containerd.transfer.v1 Jan 17 00:00:08.051209 containerd[2133]: time="2026-01-17T00:00:08.046807763Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 17 00:00:08.051209 containerd[2133]: time="2026-01-17T00:00:08.046844915Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 17 00:00:08.051209 containerd[2133]: time="2026-01-17T00:00:08.046878047Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 17 00:00:08.051862 containerd[2133]: time="2026-01-17T00:00:08.047219303Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 17 00:00:08.051862 containerd[2133]: time="2026-01-17T00:00:08.047260091Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 17 00:00:08.051862 containerd[2133]: time="2026-01-17T00:00:08.047289011Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 17 00:00:08.051862 containerd[2133]: time="2026-01-17T00:00:08.047318711Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 17 00:00:08.051862 containerd[2133]: time="2026-01-17T00:00:08.047344427Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 17 00:00:08.051862 containerd[2133]: time="2026-01-17T00:00:08.047372255Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 17 00:00:08.051862 containerd[2133]: time="2026-01-17T00:00:08.047395691Z" level=info msg="NRI interface is disabled by configuration." Jan 17 00:00:08.051862 containerd[2133]: time="2026-01-17T00:00:08.047421203Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jan 17 00:00:08.053028 containerd[2133]: time="2026-01-17T00:00:08.050180687Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 17 00:00:08.053028 containerd[2133]: time="2026-01-17T00:00:08.050325671Z" level=info msg="Connect containerd service" Jan 17 00:00:08.053028 containerd[2133]: time="2026-01-17T00:00:08.050386139Z" level=info msg="using legacy CRI server" Jan 17 00:00:08.053028 containerd[2133]: time="2026-01-17T00:00:08.050404247Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 17 00:00:08.053028 containerd[2133]: time="2026-01-17T00:00:08.050549207Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 17 00:00:08.055605 systemd[1]: Started getty@tty1.service - Getty on tty1. 
Jan 17 00:00:08.067423 containerd[2133]: time="2026-01-17T00:00:08.067351643Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 17 00:00:08.068583 containerd[2133]: time="2026-01-17T00:00:08.067684655Z" level=info msg="Start subscribing containerd event" Jan 17 00:00:08.068583 containerd[2133]: time="2026-01-17T00:00:08.067778099Z" level=info msg="Start recovering state" Jan 17 00:00:08.068583 containerd[2133]: time="2026-01-17T00:00:08.067899755Z" level=info msg="Start event monitor" Jan 17 00:00:08.068583 containerd[2133]: time="2026-01-17T00:00:08.067924475Z" level=info msg="Start snapshots syncer" Jan 17 00:00:08.068583 containerd[2133]: time="2026-01-17T00:00:08.067944551Z" level=info msg="Start cni network conf syncer for default" Jan 17 00:00:08.068583 containerd[2133]: time="2026-01-17T00:00:08.067993751Z" level=info msg="Start streaming server" Jan 17 00:00:08.069801 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 17 00:00:08.077154 containerd[2133]: time="2026-01-17T00:00:08.071106623Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 17 00:00:08.077154 containerd[2133]: time="2026-01-17T00:00:08.071266511Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 17 00:00:08.075456 systemd[1]: Reached target getty.target - Login Prompts. Jan 17 00:00:08.089003 containerd[2133]: time="2026-01-17T00:00:08.085424243Z" level=info msg="containerd successfully booted in 0.427279s" Jan 17 00:00:08.086186 systemd[1]: Started containerd.service - containerd container runtime. Jan 17 00:00:08.096932 amazon-ssm-agent[2162]: 2026-01-17 00:00:07 INFO [amazon-ssm-agent] using named pipe channel for IPC Jan 17 00:00:08.127650 systemd-hostnamed[2149]: Hostname set to (transient) Jan 17 00:00:08.128700 systemd-resolved[2022]: System hostname changed to 'ip-172-31-23-5'. Jan 17 00:00:08.196458 amazon-ssm-agent[2162]: 2026-01-17 00:00:07 INFO [amazon-ssm-agent] using named pipe channel for IPC Jan 17 00:00:08.295593 amazon-ssm-agent[2162]: 2026-01-17 00:00:07 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0 Jan 17 00:00:08.397104 amazon-ssm-agent[2162]: 2026-01-17 00:00:07 INFO [amazon-ssm-agent] OS: linux, Arch: arm64 Jan 17 00:00:08.460983 sshd[2331]: Accepted publickey for core from 68.220.241.50 port 42230 ssh2: RSA SHA256:sDAGzlGx6Edt6Gaawxj4VUM2Wx61RszNQ552ADQfH20 Jan 17 00:00:08.466773 sshd[2331]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:00:08.494601 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 17 00:00:08.500078 amazon-ssm-agent[2162]: 2026-01-17 00:00:07 INFO [amazon-ssm-agent] Starting Core Agent Jan 17 00:00:08.507196 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 17 00:00:08.527050 systemd-logind[2092]: New session 1 of user core. Jan 17 00:00:08.554293 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 17 00:00:08.570660 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 17 00:00:08.596562 (systemd)[2359]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 17 00:00:08.603995 amazon-ssm-agent[2162]: 2026-01-17 00:00:07 INFO [amazon-ssm-agent] registrar detected. 
Attempting registration Jan 17 00:00:08.704170 amazon-ssm-agent[2162]: 2026-01-17 00:00:07 INFO [Registrar] Starting registrar module Jan 17 00:00:08.805351 amazon-ssm-agent[2162]: 2026-01-17 00:00:07 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration Jan 17 00:00:08.895293 systemd[2359]: Queued start job for default target default.target. Jan 17 00:00:08.896478 systemd[2359]: Created slice app.slice - User Application Slice. Jan 17 00:00:08.896523 systemd[2359]: Reached target paths.target - Paths. Jan 17 00:00:08.896554 systemd[2359]: Reached target timers.target - Timers. Jan 17 00:00:08.905127 systemd[2359]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 17 00:00:08.948168 systemd[2359]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 17 00:00:08.948281 systemd[2359]: Reached target sockets.target - Sockets. Jan 17 00:00:08.948313 systemd[2359]: Reached target basic.target - Basic System. Jan 17 00:00:08.948413 systemd[2359]: Reached target default.target - Main User Target. Jan 17 00:00:08.948475 systemd[2359]: Startup finished in 337ms. Jan 17 00:00:08.948571 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 17 00:00:08.961662 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 17 00:00:09.307851 tar[2113]: linux-arm64/README.md Jan 17 00:00:09.362317 systemd[1]: Started sshd@1-172.31.23.5:22-68.220.241.50:42240.service - OpenSSH per-connection server daemon (68.220.241.50:42240). Jan 17 00:00:09.366481 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 17 00:00:09.551311 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 00:00:09.554920 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 17 00:00:09.563348 systemd[1]: Startup finished in 10.108s (kernel) + 9.868s (userspace) = 19.977s. Jan 17 00:00:09.575118 (kubelet)[2386]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 17 00:00:09.751589 amazon-ssm-agent[2162]: 2026-01-17 00:00:09 INFO [EC2Identity] EC2 registration was successful. Jan 17 00:00:09.790589 amazon-ssm-agent[2162]: 2026-01-17 00:00:09 INFO [CredentialRefresher] credentialRefresher has started Jan 17 00:00:09.790589 amazon-ssm-agent[2162]: 2026-01-17 00:00:09 INFO [CredentialRefresher] Starting credentials refresher loop Jan 17 00:00:09.790589 amazon-ssm-agent[2162]: 2026-01-17 00:00:09 INFO EC2RoleProvider Successfully connected with instance profile role credentials Jan 17 00:00:09.852034 amazon-ssm-agent[2162]: 2026-01-17 00:00:09 INFO [CredentialRefresher] Next credential rotation will be in 31.54165763296667 minutes Jan 17 00:00:09.957019 sshd[2375]: Accepted publickey for core from 68.220.241.50 port 42240 ssh2: RSA SHA256:sDAGzlGx6Edt6Gaawxj4VUM2Wx61RszNQ552ADQfH20 Jan 17 00:00:09.959256 sshd[2375]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:00:09.972688 systemd-logind[2092]: New session 2 of user core. Jan 17 00:00:09.979518 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 17 00:00:10.342915 sshd[2375]: pam_unix(sshd:session): session closed for user core Jan 17 00:00:10.350409 systemd[1]: sshd@1-172.31.23.5:22-68.220.241.50:42240.service: Deactivated successfully. Jan 17 00:00:10.362422 systemd[1]: session-2.scope: Deactivated successfully. Jan 17 00:00:10.367049 systemd-logind[2092]: Session 2 logged out. Waiting for processes to exit. 
Jan 17 00:00:10.371034 systemd-logind[2092]: Removed session 2. Jan 17 00:00:10.428274 systemd[1]: Started sshd@2-172.31.23.5:22-68.220.241.50:42248.service - OpenSSH per-connection server daemon (68.220.241.50:42248). Jan 17 00:00:10.454067 kubelet[2386]: E0117 00:00:10.453948 2386 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 17 00:00:10.457569 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 17 00:00:10.457944 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 17 00:00:10.819344 amazon-ssm-agent[2162]: 2026-01-17 00:00:10 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Jan 17 00:00:10.920158 amazon-ssm-agent[2162]: 2026-01-17 00:00:10 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2407) started Jan 17 00:00:10.942010 sshd[2402]: Accepted publickey for core from 68.220.241.50 port 42248 ssh2: RSA SHA256:sDAGzlGx6Edt6Gaawxj4VUM2Wx61RszNQ552ADQfH20 Jan 17 00:00:10.944788 sshd[2402]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:00:10.957071 systemd-logind[2092]: New session 3 of user core. Jan 17 00:00:10.966825 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 17 00:00:11.020510 amazon-ssm-agent[2162]: 2026-01-17 00:00:10 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Jan 17 00:00:11.294601 sshd[2402]: pam_unix(sshd:session): session closed for user core Jan 17 00:00:11.299849 systemd-logind[2092]: Session 3 logged out. Waiting for processes to exit. Jan 17 00:00:11.304200 systemd[1]: sshd@2-172.31.23.5:22-68.220.241.50:42248.service: Deactivated successfully. Jan 17 00:00:11.309716 systemd[1]: session-3.scope: Deactivated successfully. Jan 17 00:00:11.311634 systemd-logind[2092]: Removed session 3. Jan 17 00:00:11.380458 systemd[1]: Started sshd@3-172.31.23.5:22-68.220.241.50:42258.service - OpenSSH per-connection server daemon (68.220.241.50:42258). Jan 17 00:00:11.879166 sshd[2422]: Accepted publickey for core from 68.220.241.50 port 42258 ssh2: RSA SHA256:sDAGzlGx6Edt6Gaawxj4VUM2Wx61RszNQ552ADQfH20 Jan 17 00:00:11.887934 sshd[2422]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:00:11.897680 systemd-logind[2092]: New session 4 of user core. Jan 17 00:00:11.905617 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 17 00:00:12.234304 sshd[2422]: pam_unix(sshd:session): session closed for user core Jan 17 00:00:12.241095 systemd-logind[2092]: Session 4 logged out. Waiting for processes to exit. Jan 17 00:00:12.242601 systemd[1]: sshd@3-172.31.23.5:22-68.220.241.50:42258.service: Deactivated successfully. Jan 17 00:00:12.247862 systemd[1]: session-4.scope: Deactivated successfully. Jan 17 00:00:12.250453 systemd-logind[2092]: Removed session 4. Jan 17 00:00:12.333452 systemd[1]: Started sshd@4-172.31.23.5:22-68.220.241.50:42262.service - OpenSSH per-connection server daemon (68.220.241.50:42262). 
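The kubelet exit above is the expected state of a freshly provisioned node rather than a fault in the unit: kubelet.service is installed and keeps being restarted, but the process exits immediately because /var/lib/kubelet/config.yaml does not exist yet. That file is normally written by `kubeadm init` or `kubeadm join`, so the failure repeats (as it does twice more below) until the node is actually joined to a cluster. A trivial pre-flight check for the same condition:

```python
# Pre-flight check matching the kubelet error logged above: the restart
# loop clears only once kubeadm init/join has written this file.
from pathlib import Path

KUBELET_CONFIG = Path("/var/lib/kubelet/config.yaml")

if KUBELET_CONFIG.exists():
    print("kubelet config present; kubelet should stay up")
else:
    print(f"{KUBELET_CONFIG} missing: run `kubeadm init` or `kubeadm join` first")
```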
Jan 17 00:00:12.868618 sshd[2430]: Accepted publickey for core from 68.220.241.50 port 42262 ssh2: RSA SHA256:sDAGzlGx6Edt6Gaawxj4VUM2Wx61RszNQ552ADQfH20 Jan 17 00:00:12.871247 sshd[2430]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:00:12.878713 systemd-logind[2092]: New session 5 of user core. Jan 17 00:00:12.890517 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 17 00:00:13.185123 sudo[2434]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 17 00:00:13.185782 sudo[2434]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 17 00:00:13.204661 sudo[2434]: pam_unix(sudo:session): session closed for user root Jan 17 00:00:13.289732 sshd[2430]: pam_unix(sshd:session): session closed for user core Jan 17 00:00:13.295466 systemd[1]: sshd@4-172.31.23.5:22-68.220.241.50:42262.service: Deactivated successfully. Jan 17 00:00:13.300635 systemd-logind[2092]: Session 5 logged out. Waiting for processes to exit. Jan 17 00:00:13.305156 systemd[1]: session-5.scope: Deactivated successfully. Jan 17 00:00:13.306451 systemd-logind[2092]: Removed session 5. Jan 17 00:00:13.368457 systemd[1]: Started sshd@5-172.31.23.5:22-68.220.241.50:52512.service - OpenSSH per-connection server daemon (68.220.241.50:52512). Jan 17 00:00:13.125293 systemd-resolved[2022]: Clock change detected. Flushing caches. Jan 17 00:00:13.133614 systemd-journald[1604]: Time jumped backwards, rotating. Jan 17 00:00:13.582086 sshd[2439]: Accepted publickey for core from 68.220.241.50 port 52512 ssh2: RSA SHA256:sDAGzlGx6Edt6Gaawxj4VUM2Wx61RszNQ552ADQfH20 Jan 17 00:00:13.584675 sshd[2439]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:00:13.593871 systemd-logind[2092]: New session 6 of user core. Jan 17 00:00:13.600570 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 17 00:00:13.860406 sudo[2445]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 17 00:00:13.861081 sudo[2445]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 17 00:00:13.867586 sudo[2445]: pam_unix(sudo:session): session closed for user root Jan 17 00:00:13.877784 sudo[2444]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jan 17 00:00:13.878467 sudo[2444]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 17 00:00:13.898497 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jan 17 00:00:13.915144 auditctl[2448]: No rules Jan 17 00:00:13.915931 systemd[1]: audit-rules.service: Deactivated successfully. Jan 17 00:00:13.916494 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jan 17 00:00:13.927726 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 17 00:00:13.981653 augenrules[2467]: No rules Jan 17 00:00:13.984480 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 17 00:00:13.989404 sudo[2444]: pam_unix(sudo:session): session closed for user root Jan 17 00:00:14.066698 sshd[2439]: pam_unix(sshd:session): session closed for user core Jan 17 00:00:14.072681 systemd[1]: sshd@5-172.31.23.5:22-68.220.241.50:52512.service: Deactivated successfully. Jan 17 00:00:14.073100 systemd-logind[2092]: Session 6 logged out. Waiting for processes to exit. Jan 17 00:00:14.080375 systemd[1]: session-6.scope: Deactivated successfully. 
Jan 17 00:00:14.082326 systemd-logind[2092]: Removed session 6. Jan 17 00:00:14.153501 systemd[1]: Started sshd@6-172.31.23.5:22-68.220.241.50:52526.service - OpenSSH per-connection server daemon (68.220.241.50:52526). Jan 17 00:00:14.644067 sshd[2476]: Accepted publickey for core from 68.220.241.50 port 52526 ssh2: RSA SHA256:sDAGzlGx6Edt6Gaawxj4VUM2Wx61RszNQ552ADQfH20 Jan 17 00:00:14.646532 sshd[2476]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:00:14.655027 systemd-logind[2092]: New session 7 of user core. Jan 17 00:00:14.664520 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 17 00:00:14.921770 sudo[2480]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 17 00:00:14.922427 sudo[2480]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 17 00:00:15.419490 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 17 00:00:15.427689 (dockerd)[2496]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 17 00:00:15.844067 dockerd[2496]: time="2026-01-17T00:00:15.843817561Z" level=info msg="Starting up" Jan 17 00:00:16.203683 dockerd[2496]: time="2026-01-17T00:00:16.203527187Z" level=info msg="Loading containers: start." Jan 17 00:00:16.377040 kernel: Initializing XFRM netlink socket Jan 17 00:00:16.409691 (udev-worker)[2518]: Network interface NamePolicy= disabled on kernel command line. Jan 17 00:00:16.497737 systemd-networkd[1690]: docker0: Link UP Jan 17 00:00:16.532501 dockerd[2496]: time="2026-01-17T00:00:16.532429080Z" level=info msg="Loading containers: done." Jan 17 00:00:16.579441 dockerd[2496]: time="2026-01-17T00:00:16.578690221Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 17 00:00:16.579441 dockerd[2496]: time="2026-01-17T00:00:16.578840965Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Jan 17 00:00:16.579441 dockerd[2496]: time="2026-01-17T00:00:16.579053521Z" level=info msg="Daemon has completed initialization" Jan 17 00:00:16.645976 dockerd[2496]: time="2026-01-17T00:00:16.645685129Z" level=info msg="API listen on /run/docker.sock" Jan 17 00:00:16.646185 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 17 00:00:16.967781 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck4158819888-merged.mount: Deactivated successfully. Jan 17 00:00:17.757360 containerd[2133]: time="2026-01-17T00:00:17.756848223Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.11\"" Jan 17 00:00:18.398212 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2070176394.mount: Deactivated successfully. 
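"API listen on /run/docker.sock" above means the Docker Engine API is now served on a Unix socket rather than TCP. A stdlib-only probe of its /version endpoint (a stable part of the Engine API); speaking HTTP/1.0 lets the sketch read to EOF instead of parsing chunked responses, and it needs read access to the socket:

```python
# Probe the Engine API endpoint dockerd announced above.
import json
import socket

def docker_version(sock_path: str = "/run/docker.sock") -> dict:
    s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    s.connect(sock_path)
    s.sendall(b"GET /version HTTP/1.0\r\nHost: docker\r\n\r\n")
    buf = b""
    while chunk := s.recv(4096):  # HTTP/1.0: server closes when done
        buf += chunk
    s.close()
    _headers, _, body = buf.partition(b"\r\n\r\n")
    return json.loads(body)

print(docker_version()["Version"])  # e.g. "26.1.0", per the daemon log line
```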
Jan 17 00:00:19.887437 containerd[2133]: time="2026-01-17T00:00:19.887376809Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:00:19.890346 containerd[2133]: time="2026-01-17T00:00:19.890271065Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.11: active requests=0, bytes read=26441982" Jan 17 00:00:19.892562 containerd[2133]: time="2026-01-17T00:00:19.892487297Z" level=info msg="ImageCreate event name:\"sha256:58951ea1a0b5de44646ea292c94b9350f33f22d147fccfd84bdc405eaabc442c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:00:19.899409 containerd[2133]: time="2026-01-17T00:00:19.898479065Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:41eaecaed9af0ca8ab36d7794819c7df199e68c6c6ee0649114d713c495f8bd5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:00:19.901984 containerd[2133]: time="2026-01-17T00:00:19.901911305Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.11\" with image id \"sha256:58951ea1a0b5de44646ea292c94b9350f33f22d147fccfd84bdc405eaabc442c\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:41eaecaed9af0ca8ab36d7794819c7df199e68c6c6ee0649114d713c495f8bd5\", size \"26438581\" in 2.14496593s" Jan 17 00:00:19.902245 containerd[2133]: time="2026-01-17T00:00:19.902209121Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.11\" returns image reference \"sha256:58951ea1a0b5de44646ea292c94b9350f33f22d147fccfd84bdc405eaabc442c\"" Jan 17 00:00:19.904371 containerd[2133]: time="2026-01-17T00:00:19.904328441Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.11\"" Jan 17 00:00:20.326252 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 17 00:00:20.333337 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 00:00:20.690409 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 00:00:20.707591 (kubelet)[2707]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 17 00:00:20.797093 kubelet[2707]: E0117 00:00:20.796965 2707 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 17 00:00:20.804869 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 17 00:00:20.806534 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
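The kube-apiserver pull recorded above (completed at 00:00:19.901) carries enough to estimate effective registry throughput: 26438581 bytes arrived in 2.14496593s. A quick worked check; the later pulls in this log come out to roughly 8-16 MiB/s by the same arithmetic:

```python
# Effective throughput of the kube-apiserver pull, from the log's numbers.
size_bytes = 26_438_581    # size "26438581"
duration_s = 2.14496593    # "in 2.14496593s"
print(f"{size_bytes / duration_s / 2**20:.1f} MiB/s")  # -> 11.8 MiB/s
```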
Jan 17 00:00:21.540551 containerd[2133]: time="2026-01-17T00:00:21.540454277Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:00:21.543578 containerd[2133]: time="2026-01-17T00:00:21.543514121Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.11: active requests=0, bytes read=22622086" Jan 17 00:00:21.546587 containerd[2133]: time="2026-01-17T00:00:21.546521633Z" level=info msg="ImageCreate event name:\"sha256:82766e5f2d560b930b7069c03ec1366dc8fdb4a490c3005266d2fdc4ca21c2fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:00:21.551974 containerd[2133]: time="2026-01-17T00:00:21.551852261Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:ce7b2ead5eef1a1554ef28b2b79596c6a8c6d506a87a7ab1381e77fe3d72f55f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:00:21.554472 containerd[2133]: time="2026-01-17T00:00:21.554413421Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.11\" with image id \"sha256:82766e5f2d560b930b7069c03ec1366dc8fdb4a490c3005266d2fdc4ca21c2fc\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.11\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:ce7b2ead5eef1a1554ef28b2b79596c6a8c6d506a87a7ab1381e77fe3d72f55f\", size \"24206567\" in 1.6498556s" Jan 17 00:00:21.554820 containerd[2133]: time="2026-01-17T00:00:21.554635733Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.11\" returns image reference \"sha256:82766e5f2d560b930b7069c03ec1366dc8fdb4a490c3005266d2fdc4ca21c2fc\"" Jan 17 00:00:21.555722 containerd[2133]: time="2026-01-17T00:00:21.555602573Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.11\"" Jan 17 00:00:23.009517 containerd[2133]: time="2026-01-17T00:00:23.009459305Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:00:23.012294 containerd[2133]: time="2026-01-17T00:00:23.012221081Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.11: active requests=0, bytes read=17616747" Jan 17 00:00:23.014228 containerd[2133]: time="2026-01-17T00:00:23.014173013Z" level=info msg="ImageCreate event name:\"sha256:cfa17ff3d66343f03eadbc235264b0615de49cc1f43da12cddba27d80c61f2c6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:00:23.021048 containerd[2133]: time="2026-01-17T00:00:23.020670425Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:b3039587bbe70e61a6aeaff56c21fdeeef104524a31f835bcc80887d40b8e6b2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:00:23.023201 containerd[2133]: time="2026-01-17T00:00:23.022959857Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.11\" with image id \"sha256:cfa17ff3d66343f03eadbc235264b0615de49cc1f43da12cddba27d80c61f2c6\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:b3039587bbe70e61a6aeaff56c21fdeeef104524a31f835bcc80887d40b8e6b2\", size \"19201246\" in 1.467276296s" Jan 17 00:00:23.023201 containerd[2133]: time="2026-01-17T00:00:23.023047589Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.11\" returns image reference \"sha256:cfa17ff3d66343f03eadbc235264b0615de49cc1f43da12cddba27d80c61f2c6\"" Jan 17 00:00:23.025058 
containerd[2133]: time="2026-01-17T00:00:23.024342461Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.11\"" Jan 17 00:00:24.294579 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2809398301.mount: Deactivated successfully. Jan 17 00:00:24.856814 containerd[2133]: time="2026-01-17T00:00:24.856757794Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:00:24.860316 containerd[2133]: time="2026-01-17T00:00:24.860266342Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.11: active requests=0, bytes read=27558724" Jan 17 00:00:24.863285 containerd[2133]: time="2026-01-17T00:00:24.863232574Z" level=info msg="ImageCreate event name:\"sha256:dcdb790dc2bfe6e0b86f702c7f336a38eaef34f6370eb6ff68f4e5b03ed4d425\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:00:24.874033 containerd[2133]: time="2026-01-17T00:00:24.873094810Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:4204f9136c23a867929d32046032fe069b49ad94cf168042405e7d0ec88bdba9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:00:24.876800 containerd[2133]: time="2026-01-17T00:00:24.876745006Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.11\" with image id \"sha256:dcdb790dc2bfe6e0b86f702c7f336a38eaef34f6370eb6ff68f4e5b03ed4d425\", repo tag \"registry.k8s.io/kube-proxy:v1.32.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:4204f9136c23a867929d32046032fe069b49ad94cf168042405e7d0ec88bdba9\", size \"27557743\" in 1.852306461s" Jan 17 00:00:24.876955 containerd[2133]: time="2026-01-17T00:00:24.876925642Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.11\" returns image reference \"sha256:dcdb790dc2bfe6e0b86f702c7f336a38eaef34f6370eb6ff68f4e5b03ed4d425\"" Jan 17 00:00:24.877856 containerd[2133]: time="2026-01-17T00:00:24.877818538Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jan 17 00:00:25.475605 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4030941673.mount: Deactivated successfully. 
Jan 17 00:00:26.810577 containerd[2133]: time="2026-01-17T00:00:26.810514571Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:00:26.813105 containerd[2133]: time="2026-01-17T00:00:26.813041255Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951622" Jan 17 00:00:26.815169 containerd[2133]: time="2026-01-17T00:00:26.815123003Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:00:26.822535 containerd[2133]: time="2026-01-17T00:00:26.822485652Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:00:26.824938 containerd[2133]: time="2026-01-17T00:00:26.824888580Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.94682343s" Jan 17 00:00:26.825131 containerd[2133]: time="2026-01-17T00:00:26.825100368Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" Jan 17 00:00:26.825800 containerd[2133]: time="2026-01-17T00:00:26.825763092Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jan 17 00:00:27.373694 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3677407721.mount: Deactivated successfully. 
Jan 17 00:00:27.385406 containerd[2133]: time="2026-01-17T00:00:27.385345258Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:00:27.387263 containerd[2133]: time="2026-01-17T00:00:27.387222082Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268703" Jan 17 00:00:27.391033 containerd[2133]: time="2026-01-17T00:00:27.389520238Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:00:27.396176 containerd[2133]: time="2026-01-17T00:00:27.396101866Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:00:27.397796 containerd[2133]: time="2026-01-17T00:00:27.397737718Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 571.812566ms" Jan 17 00:00:27.397897 containerd[2133]: time="2026-01-17T00:00:27.397794034Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Jan 17 00:00:27.398471 containerd[2133]: time="2026-01-17T00:00:27.398432542Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Jan 17 00:00:27.975322 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3658574588.mount: Deactivated successfully. Jan 17 00:00:30.825960 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 17 00:00:30.836662 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 00:00:31.230402 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 00:00:31.240775 (kubelet)[2853]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 17 00:00:31.340498 kubelet[2853]: E0117 00:00:31.340317 2853 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 17 00:00:31.346137 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 17 00:00:31.346620 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Jan 17 00:00:31.559403 containerd[2133]: time="2026-01-17T00:00:31.559263147Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:00:31.562427 containerd[2133]: time="2026-01-17T00:00:31.562356051Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=67943165" Jan 17 00:00:31.564025 containerd[2133]: time="2026-01-17T00:00:31.563910135Z" level=info msg="ImageCreate event name:\"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:00:31.570674 containerd[2133]: time="2026-01-17T00:00:31.570582951Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:00:31.573403 containerd[2133]: time="2026-01-17T00:00:31.573350799Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"67941650\" in 4.174270473s" Jan 17 00:00:31.573720 containerd[2133]: time="2026-01-17T00:00:31.573558171Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\"" Jan 17 00:00:37.877363 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Jan 17 00:00:39.877522 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 00:00:39.885506 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 00:00:39.945843 systemd[1]: Reloading requested from client PID 2895 ('systemctl') (unit session-7.scope)... Jan 17 00:00:39.946090 systemd[1]: Reloading... Jan 17 00:00:40.175043 zram_generator::config[2938]: No configuration found. Jan 17 00:00:40.436699 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 17 00:00:40.605838 systemd[1]: Reloading finished in 658 ms. Jan 17 00:00:40.708244 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 00:00:40.717433 systemd[1]: kubelet.service: Deactivated successfully. Jan 17 00:00:40.717966 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 00:00:40.725524 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 00:00:41.052439 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 00:00:41.070712 (kubelet)[3013]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 17 00:00:41.146592 kubelet[3013]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 17 00:00:41.148086 kubelet[3013]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. 
Jan 17 00:00:41.148086 kubelet[3013]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 17 00:00:41.148086 kubelet[3013]: I0117 00:00:41.147293 3013 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jan 17 00:00:43.505343 kubelet[3013]: I0117 00:00:43.505283 3013 server.go:520] "Kubelet version" kubeletVersion="v1.32.4"
Jan 17 00:00:43.505343 kubelet[3013]: I0117 00:00:43.505336 3013 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jan 17 00:00:43.506102 kubelet[3013]: I0117 00:00:43.505802 3013 server.go:954] "Client rotation is on, will bootstrap in background"
Jan 17 00:00:43.555271 kubelet[3013]: E0117 00:00:43.555210 3013 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.31.23.5:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.23.5:6443: connect: connection refused" logger="UnhandledError"
Jan 17 00:00:43.560060 kubelet[3013]: I0117 00:00:43.558682 3013 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jan 17 00:00:43.572097 kubelet[3013]: E0117 00:00:43.572037 3013 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Jan 17 00:00:43.572284 kubelet[3013]: I0117 00:00:43.572261 3013 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Jan 17 00:00:43.577802 kubelet[3013]: I0117 00:00:43.577765 3013 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jan 17 00:00:43.582156 kubelet[3013]: I0117 00:00:43.582090 3013 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jan 17 00:00:43.582706 kubelet[3013]: I0117 00:00:43.582399 3013 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-23-5","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1}
Jan 17 00:00:43.583134 kubelet[3013]: I0117 00:00:43.583112 3013 topology_manager.go:138] "Creating topology manager with none policy"
Jan 17 00:00:43.583226 kubelet[3013]: I0117 00:00:43.583210 3013 container_manager_linux.go:304] "Creating device plugin manager"
Jan 17 00:00:43.583664 kubelet[3013]: I0117 00:00:43.583643 3013 state_mem.go:36] "Initialized new in-memory state store"
Jan 17 00:00:43.590980 kubelet[3013]: I0117 00:00:43.590943 3013 kubelet.go:446] "Attempting to sync node with API server"
Jan 17 00:00:43.591150 kubelet[3013]: I0117 00:00:43.591130 3013 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
Jan 17 00:00:43.591280 kubelet[3013]: I0117 00:00:43.591263 3013 kubelet.go:352] "Adding apiserver pod source"
Jan 17 00:00:43.591374 kubelet[3013]: I0117 00:00:43.591356 3013 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jan 17 00:00:43.595112 kubelet[3013]: W0117 00:00:43.594986 3013 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.23.5:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-23-5&limit=500&resourceVersion=0": dial tcp 172.31.23.5:6443: connect: connection refused
Jan 17 00:00:43.595253 kubelet[3013]: E0117 00:00:43.595126 3013 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.23.5:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-23-5&limit=500&resourceVersion=0\": dial tcp 172.31.23.5:6443: connect: connection refused" logger="UnhandledError"
Jan 17 00:00:43.596307 kubelet[3013]: W0117 00:00:43.596243 3013 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.23.5:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.23.5:6443: connect: connection refused
Jan 17 00:00:43.596496 kubelet[3013]: E0117 00:00:43.596465 3013 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.23.5:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.23.5:6443: connect: connection refused" logger="UnhandledError"
Jan 17 00:00:43.596716 kubelet[3013]: I0117 00:00:43.596689 3013 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Jan 17 00:00:43.598672 kubelet[3013]: I0117 00:00:43.598633 3013 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jan 17 00:00:43.599069 kubelet[3013]: W0117 00:00:43.599040 3013 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Jan 17 00:00:43.601499 kubelet[3013]: I0117 00:00:43.601465 3013 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Jan 17 00:00:43.601708 kubelet[3013]: I0117 00:00:43.601688 3013 server.go:1287] "Started kubelet"
Jan 17 00:00:43.611034 kubelet[3013]: E0117 00:00:43.610523 3013 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.23.5:6443/api/v1/namespaces/default/events\": dial tcp 172.31.23.5:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-23-5.188b5ba529c67d4f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-23-5,UID:ip-172-31-23-5,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-23-5,},FirstTimestamp:2026-01-17 00:00:43.601657167 +0000 UTC m=+2.524989050,LastTimestamp:2026-01-17 00:00:43.601657167 +0000 UTC m=+2.524989050,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-23-5,}"
Jan 17 00:00:43.614309 kubelet[3013]: I0117 00:00:43.612512 3013 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jan 17 00:00:43.614309 kubelet[3013]: I0117 00:00:43.612972 3013 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jan 17 00:00:43.614532 kubelet[3013]: I0117 00:00:43.612997 3013 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jan 17 00:00:43.624720 kubelet[3013]: I0117 00:00:43.613116 3013 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
Jan 17 00:00:43.626412 kubelet[3013]: I0117 00:00:43.626353 3013 server.go:479] "Adding debug handlers to kubelet server"
Jan 17 00:00:43.628303 kubelet[3013]: I0117 00:00:43.628073 3013 volume_manager.go:297] "Starting Kubelet Volume Manager"
Jan 17 00:00:43.628678 kubelet[3013]: E0117 00:00:43.628633 3013 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-23-5\" not found"
Jan 17 00:00:43.629111 kubelet[3013]: I0117 00:00:43.613857 3013 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Jan 17 00:00:43.629292 kubelet[3013]: I0117 00:00:43.629256 3013 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Jan 17 00:00:43.629374 kubelet[3013]: I0117 00:00:43.629363 3013 reconciler.go:26] "Reconciler: start to sync state"
Jan 17 00:00:43.630405 kubelet[3013]: W0117 00:00:43.630317 3013 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.23.5:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.23.5:6443: connect: connection refused
Jan 17 00:00:43.630533 kubelet[3013]: E0117 00:00:43.630413 3013 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.23.5:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.23.5:6443: connect: connection refused" logger="UnhandledError"
Jan 17 00:00:43.630593 kubelet[3013]: E0117 00:00:43.630536 3013 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.23.5:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-23-5?timeout=10s\": dial tcp 172.31.23.5:6443: connect: connection refused" interval="200ms"
Jan 17 00:00:43.632383 kubelet[3013]: I0117 00:00:43.632327 3013 factory.go:221] Registration of the systemd container factory successfully
Jan 17 00:00:43.632536 kubelet[3013]: I0117 00:00:43.632509 3013 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jan 17 00:00:43.635492 kubelet[3013]: I0117 00:00:43.635392 3013 factory.go:221] Registration of the containerd container factory successfully
Jan 17 00:00:43.645382 kubelet[3013]: E0117 00:00:43.645203 3013 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jan 17 00:00:43.663576 kubelet[3013]: I0117 00:00:43.663272 3013 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jan 17 00:00:43.666083 kubelet[3013]: I0117 00:00:43.665557 3013 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jan 17 00:00:43.666083 kubelet[3013]: I0117 00:00:43.665603 3013 status_manager.go:227] "Starting to sync pod status with apiserver"
Jan 17 00:00:43.666083 kubelet[3013]: I0117 00:00:43.665640 3013 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Jan 17 00:00:43.666083 kubelet[3013]: I0117 00:00:43.665656 3013 kubelet.go:2382] "Starting kubelet main sync loop"
Jan 17 00:00:43.666083 kubelet[3013]: E0117 00:00:43.665754 3013 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jan 17 00:00:43.701258 kubelet[3013]: W0117 00:00:43.700400 3013 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.23.5:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.23.5:6443: connect: connection refused
Jan 17 00:00:43.701258 kubelet[3013]: E0117 00:00:43.701198 3013 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.23.5:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.23.5:6443: connect: connection refused" logger="UnhandledError"
Jan 17 00:00:43.704116 kubelet[3013]: I0117 00:00:43.704075 3013 cpu_manager.go:221] "Starting CPU manager" policy="none"
Jan 17 00:00:43.704116 kubelet[3013]: I0117 00:00:43.704109 3013 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Jan 17 00:00:43.704318 kubelet[3013]: I0117 00:00:43.704143 3013 state_mem.go:36] "Initialized new in-memory state store"
Jan 17 00:00:43.708900 kubelet[3013]: I0117 00:00:43.708857 3013 policy_none.go:49] "None policy: Start"
Jan 17 00:00:43.708900 kubelet[3013]: I0117 00:00:43.708902 3013 memory_manager.go:186] "Starting memorymanager" policy="None"
Jan 17 00:00:43.709077 kubelet[3013]: I0117 00:00:43.708927 3013 state_mem.go:35] "Initializing new in-memory state store"
Jan 17 00:00:43.719766 kubelet[3013]: I0117 00:00:43.719618 3013 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jan 17 00:00:43.720113 kubelet[3013]: I0117 00:00:43.719939 3013 eviction_manager.go:189] "Eviction manager: starting control loop"
Jan 17 00:00:43.720113 kubelet[3013]: I0117 00:00:43.719974 3013 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jan 17 00:00:43.723398 kubelet[3013]: I0117 00:00:43.723362 3013 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jan 17 00:00:43.726508 kubelet[3013]: E0117 00:00:43.726405 3013 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Jan 17 00:00:43.726508 kubelet[3013]: E0117 00:00:43.726470 3013 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-23-5\" not found"
Jan 17 00:00:43.778272 kubelet[3013]: E0117 00:00:43.778122 3013 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-23-5\" not found" node="ip-172-31-23-5"
Jan 17 00:00:43.784890 kubelet[3013]: E0117 00:00:43.784120 3013 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-23-5\" not found" node="ip-172-31-23-5"
Jan 17 00:00:43.785791 kubelet[3013]: E0117 00:00:43.785743 3013 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-23-5\" not found" node="ip-172-31-23-5"
Jan 17 00:00:43.827094 kubelet[3013]: I0117 00:00:43.827055 3013 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-23-5"
Jan 17 00:00:43.827963 kubelet[3013]: E0117 00:00:43.827923 3013 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.23.5:6443/api/v1/nodes\": dial tcp 172.31.23.5:6443: connect: connection refused" node="ip-172-31-23-5"
Jan 17 00:00:43.831630 kubelet[3013]: E0117 00:00:43.831586 3013 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.23.5:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-23-5?timeout=10s\": dial tcp 172.31.23.5:6443: connect: connection refused" interval="400ms"
Jan 17 00:00:43.931180 kubelet[3013]: I0117 00:00:43.931114 3013 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2a9167220dd3ffa31310f2f52de88523-k8s-certs\") pod \"kube-controller-manager-ip-172-31-23-5\" (UID: \"2a9167220dd3ffa31310f2f52de88523\") " pod="kube-system/kube-controller-manager-ip-172-31-23-5"
Jan 17 00:00:43.931312 kubelet[3013]: I0117 00:00:43.931183 3013 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2a9167220dd3ffa31310f2f52de88523-kubeconfig\") pod \"kube-controller-manager-ip-172-31-23-5\" (UID: \"2a9167220dd3ffa31310f2f52de88523\") " pod="kube-system/kube-controller-manager-ip-172-31-23-5"
Jan 17 00:00:43.931312 kubelet[3013]: I0117 00:00:43.931244 3013 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e4c63bdc501063239dd843702ee94ebc-ca-certs\") pod \"kube-apiserver-ip-172-31-23-5\" (UID: \"e4c63bdc501063239dd843702ee94ebc\") " pod="kube-system/kube-apiserver-ip-172-31-23-5"
Jan 17 00:00:43.931312 kubelet[3013]: I0117 00:00:43.931280 3013 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e4c63bdc501063239dd843702ee94ebc-k8s-certs\") pod \"kube-apiserver-ip-172-31-23-5\" (UID: \"e4c63bdc501063239dd843702ee94ebc\") " pod="kube-system/kube-apiserver-ip-172-31-23-5"
Jan 17 00:00:43.931471 kubelet[3013]: I0117 00:00:43.931315 3013 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2a9167220dd3ffa31310f2f52de88523-ca-certs\") pod \"kube-controller-manager-ip-172-31-23-5\" (UID: \"2a9167220dd3ffa31310f2f52de88523\") " pod="kube-system/kube-controller-manager-ip-172-31-23-5"
Jan 17 00:00:43.931471 kubelet[3013]: I0117 00:00:43.931354 3013 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/2a9167220dd3ffa31310f2f52de88523-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-23-5\" (UID: \"2a9167220dd3ffa31310f2f52de88523\") " pod="kube-system/kube-controller-manager-ip-172-31-23-5"
Jan 17 00:00:43.931471 kubelet[3013]: I0117 00:00:43.931397 3013 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2a9167220dd3ffa31310f2f52de88523-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-23-5\" (UID: \"2a9167220dd3ffa31310f2f52de88523\") " pod="kube-system/kube-controller-manager-ip-172-31-23-5"
Jan 17 00:00:43.931471 kubelet[3013]: I0117 00:00:43.931436 3013 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f3e4d5ab6f4a2cf637b6cad828c20a8c-kubeconfig\") pod \"kube-scheduler-ip-172-31-23-5\" (UID: \"f3e4d5ab6f4a2cf637b6cad828c20a8c\") " pod="kube-system/kube-scheduler-ip-172-31-23-5"
Jan 17 00:00:43.931666 kubelet[3013]: I0117 00:00:43.931476 3013 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e4c63bdc501063239dd843702ee94ebc-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-23-5\" (UID: \"e4c63bdc501063239dd843702ee94ebc\") " pod="kube-system/kube-apiserver-ip-172-31-23-5"
Jan 17 00:00:44.031140 kubelet[3013]: I0117 00:00:44.030872 3013 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-23-5"
Jan 17 00:00:44.033054 kubelet[3013]: E0117 00:00:44.032431 3013 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.23.5:6443/api/v1/nodes\": dial tcp 172.31.23.5:6443: connect: connection refused" node="ip-172-31-23-5"
Jan 17 00:00:44.085129 containerd[2133]: time="2026-01-17T00:00:44.084623221Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-23-5,Uid:e4c63bdc501063239dd843702ee94ebc,Namespace:kube-system,Attempt:0,}"
Jan 17 00:00:44.086610 containerd[2133]: time="2026-01-17T00:00:44.086358061Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-23-5,Uid:2a9167220dd3ffa31310f2f52de88523,Namespace:kube-system,Attempt:0,}"
Jan 17 00:00:44.087181 containerd[2133]: time="2026-01-17T00:00:44.086906161Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-23-5,Uid:f3e4d5ab6f4a2cf637b6cad828c20a8c,Namespace:kube-system,Attempt:0,}"
Jan 17 00:00:44.233160 kubelet[3013]: E0117 00:00:44.233094 3013 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.23.5:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-23-5?timeout=10s\": dial tcp 172.31.23.5:6443: connect: connection refused" interval="800ms"
Jan 17 00:00:44.414440 kubelet[3013]: W0117 00:00:44.414240 3013 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.23.5:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.23.5:6443: connect: connection refused
Jan 17 00:00:44.414440 kubelet[3013]: E0117 00:00:44.414349 3013 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.23.5:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.23.5:6443: connect: connection refused" logger="UnhandledError"
Jan 17 00:00:44.434906 kubelet[3013]: I0117 00:00:44.434820 3013 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-23-5"
Jan 17 00:00:44.435689 kubelet[3013]: E0117 00:00:44.435365 3013 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.23.5:6443/api/v1/nodes\": dial tcp 172.31.23.5:6443: connect: connection refused" node="ip-172-31-23-5"
Jan 17 00:00:44.613499 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1713512754.mount: Deactivated successfully.
Jan 17 00:00:44.625971 containerd[2133]: time="2026-01-17T00:00:44.625886308Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 17 00:00:44.628124 containerd[2133]: time="2026-01-17T00:00:44.628053220Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 17 00:00:44.630162 containerd[2133]: time="2026-01-17T00:00:44.629946508Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173"
Jan 17 00:00:44.632182 containerd[2133]: time="2026-01-17T00:00:44.632127964Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Jan 17 00:00:44.634281 containerd[2133]: time="2026-01-17T00:00:44.634226452Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 17 00:00:44.637290 containerd[2133]: time="2026-01-17T00:00:44.637120192Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 17 00:00:44.638694 containerd[2133]: time="2026-01-17T00:00:44.638593348Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Jan 17 00:00:44.643188 containerd[2133]: time="2026-01-17T00:00:44.643130008Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 17 00:00:44.647616 containerd[2133]: time="2026-01-17T00:00:44.647237620Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 560.245359ms"
Jan 17 00:00:44.651529 containerd[2133]: time="2026-01-17T00:00:44.651450220Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 564.979863ms"
Jan 17 00:00:44.655759 containerd[2133]: time="2026-01-17T00:00:44.655668460Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 570.931131ms"
Jan 17 00:00:44.730496 kubelet[3013]: W0117 00:00:44.729367 3013 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.23.5:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.23.5:6443: connect: connection refused
Jan 17 00:00:44.730496 kubelet[3013]: E0117 00:00:44.730039 3013 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.23.5:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.23.5:6443: connect: connection refused" logger="UnhandledError"
Jan 17 00:00:44.873788 kubelet[3013]: W0117 00:00:44.873630 3013 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.23.5:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-23-5&limit=500&resourceVersion=0": dial tcp 172.31.23.5:6443: connect: connection refused
Jan 17 00:00:44.873788 kubelet[3013]: E0117 00:00:44.873726 3013 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.23.5:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-23-5&limit=500&resourceVersion=0\": dial tcp 172.31.23.5:6443: connect: connection refused" logger="UnhandledError"
Jan 17 00:00:44.890036 containerd[2133]: time="2026-01-17T00:00:44.889665641Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 17 00:00:44.890036 containerd[2133]: time="2026-01-17T00:00:44.889774673Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 17 00:00:44.890036 containerd[2133]: time="2026-01-17T00:00:44.889830485Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 17 00:00:44.890867 containerd[2133]: time="2026-01-17T00:00:44.890396153Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 17 00:00:44.897088 containerd[2133]: time="2026-01-17T00:00:44.896761577Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 17 00:00:44.897088 containerd[2133]: time="2026-01-17T00:00:44.896823605Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 17 00:00:44.897088 containerd[2133]: time="2026-01-17T00:00:44.896908517Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 17 00:00:44.897088 containerd[2133]: time="2026-01-17T00:00:44.896630273Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 17 00:00:44.897088 containerd[2133]: time="2026-01-17T00:00:44.896729285Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 17 00:00:44.897088 containerd[2133]: time="2026-01-17T00:00:44.896758661Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 17 00:00:44.897088 containerd[2133]: time="2026-01-17T00:00:44.896929109Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 17 00:00:44.897947 containerd[2133]: time="2026-01-17T00:00:44.897757625Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 17 00:00:45.034686 kubelet[3013]: E0117 00:00:45.034486 3013 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.23.5:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-23-5?timeout=10s\": dial tcp 172.31.23.5:6443: connect: connection refused" interval="1.6s"
Jan 17 00:00:45.045663 containerd[2133]: time="2026-01-17T00:00:45.045609566Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-23-5,Uid:e4c63bdc501063239dd843702ee94ebc,Namespace:kube-system,Attempt:0,} returns sandbox id \"1a517658ea30741076678ea442cd6262b52eacad5b8290b92a53ab577e36d6ac\""
Jan 17 00:00:45.052023 containerd[2133]: time="2026-01-17T00:00:45.051648602Z" level=info msg="CreateContainer within sandbox \"1a517658ea30741076678ea442cd6262b52eacad5b8290b92a53ab577e36d6ac\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Jan 17 00:00:45.072778 containerd[2133]: time="2026-01-17T00:00:45.072703466Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-23-5,Uid:2a9167220dd3ffa31310f2f52de88523,Namespace:kube-system,Attempt:0,} returns sandbox id \"a10b84aba8c8edf4da029f7aa39f195f44813f21a34a6def94d63fb60ea31175\""
Jan 17 00:00:45.076777 containerd[2133]: time="2026-01-17T00:00:45.075931430Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-23-5,Uid:f3e4d5ab6f4a2cf637b6cad828c20a8c,Namespace:kube-system,Attempt:0,} returns sandbox id \"89ab74e0c78a69a9e92d4719512294fb5b58fe0c2f78ec6e9f159549e7c8f9f9\""
Jan 17 00:00:45.079551 containerd[2133]: time="2026-01-17T00:00:45.079480742Z" level=info msg="CreateContainer within sandbox \"a10b84aba8c8edf4da029f7aa39f195f44813f21a34a6def94d63fb60ea31175\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Jan 17 00:00:45.085765 containerd[2133]: time="2026-01-17T00:00:45.085711838Z" level=info msg="CreateContainer within sandbox \"89ab74e0c78a69a9e92d4719512294fb5b58fe0c2f78ec6e9f159549e7c8f9f9\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Jan 17 00:00:45.094981 containerd[2133]: time="2026-01-17T00:00:45.094902398Z" level=info msg="CreateContainer within sandbox \"1a517658ea30741076678ea442cd6262b52eacad5b8290b92a53ab577e36d6ac\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"04b26166b305141181815827f66250e8e3cba276f9e69e3a018561decb3caecd\""
Jan 17 00:00:45.096732 containerd[2133]: time="2026-01-17T00:00:45.096339374Z" level=info msg="StartContainer for \"04b26166b305141181815827f66250e8e3cba276f9e69e3a018561decb3caecd\""
Jan 17 00:00:45.130234 containerd[2133]: time="2026-01-17T00:00:45.130116146Z" level=info msg="CreateContainer within sandbox \"89ab74e0c78a69a9e92d4719512294fb5b58fe0c2f78ec6e9f159549e7c8f9f9\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"e8c6a9c22daaf709556fb0d38e2f3bda48734851ecc67b0511dbc92999cabb6c\""
Jan 17 00:00:45.131422 containerd[2133]: time="2026-01-17T00:00:45.131369906Z" level=info msg="StartContainer for \"e8c6a9c22daaf709556fb0d38e2f3bda48734851ecc67b0511dbc92999cabb6c\""
Jan 17 00:00:45.138511 containerd[2133]: time="2026-01-17T00:00:45.138306375Z" level=info msg="CreateContainer within sandbox \"a10b84aba8c8edf4da029f7aa39f195f44813f21a34a6def94d63fb60ea31175\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"49cda2464739e4f3d75959b2e690be2872b9b42d3ea533c66438e872be6baa2b\""
Jan 17 00:00:45.140979 containerd[2133]: time="2026-01-17T00:00:45.140839635Z" level=info msg="StartContainer for \"49cda2464739e4f3d75959b2e690be2872b9b42d3ea533c66438e872be6baa2b\""
Jan 17 00:00:45.190748 kubelet[3013]: W0117 00:00:45.190546 3013 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.23.5:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.23.5:6443: connect: connection refused
Jan 17 00:00:45.190748 kubelet[3013]: E0117 00:00:45.190671 3013 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.23.5:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.23.5:6443: connect: connection refused" logger="UnhandledError"
Jan 17 00:00:45.248153 kubelet[3013]: I0117 00:00:45.247443 3013 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-23-5"
Jan 17 00:00:45.248153 kubelet[3013]: E0117 00:00:45.247927 3013 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.23.5:6443/api/v1/nodes\": dial tcp 172.31.23.5:6443: connect: connection refused" node="ip-172-31-23-5"
Jan 17 00:00:45.300134 containerd[2133]: time="2026-01-17T00:00:45.299614311Z" level=info msg="StartContainer for \"04b26166b305141181815827f66250e8e3cba276f9e69e3a018561decb3caecd\" returns successfully"
Jan 17 00:00:45.382252 containerd[2133]: time="2026-01-17T00:00:45.381610672Z" level=info msg="StartContainer for \"e8c6a9c22daaf709556fb0d38e2f3bda48734851ecc67b0511dbc92999cabb6c\" returns successfully"
Jan 17 00:00:45.413102 containerd[2133]: time="2026-01-17T00:00:45.412978168Z" level=info msg="StartContainer for \"49cda2464739e4f3d75959b2e690be2872b9b42d3ea533c66438e872be6baa2b\" returns successfully"
Jan 17 00:00:45.726028 kubelet[3013]: E0117 00:00:45.725214 3013 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-23-5\" not found" node="ip-172-31-23-5"
Jan 17 00:00:45.734039 kubelet[3013]: E0117 00:00:45.732623 3013 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-23-5\" not found" node="ip-172-31-23-5"
Jan 17 00:00:45.739108 kubelet[3013]: E0117 00:00:45.737435 3013 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-23-5\" not found" node="ip-172-31-23-5"
Jan 17 00:00:46.742591 kubelet[3013]: E0117 00:00:46.741420 3013 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-23-5\" not found" node="ip-172-31-23-5"
Jan 17 00:00:46.744663 kubelet[3013]: E0117 00:00:46.744615 3013 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-23-5\" not found" node="ip-172-31-23-5"
Jan 17 00:00:46.853162 kubelet[3013]: I0117 00:00:46.852504 3013 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-23-5"
Jan 17 00:00:47.407049 kubelet[3013]: E0117 00:00:47.405682 3013 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-23-5\" not found" node="ip-172-31-23-5"
Jan 17 00:00:47.746166 kubelet[3013]: E0117 00:00:47.744568 3013 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-23-5\" not found" node="ip-172-31-23-5"
Jan 17 00:00:49.018046 kubelet[3013]: E0117 00:00:49.017968 3013 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-23-5\" not found" node="ip-172-31-23-5"
Jan 17 00:00:49.129194 kubelet[3013]: I0117 00:00:49.127472 3013 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-23-5"
Jan 17 00:00:49.129194 kubelet[3013]: E0117 00:00:49.127530 3013 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ip-172-31-23-5\": node \"ip-172-31-23-5\" not found"
Jan 17 00:00:49.129547 kubelet[3013]: I0117 00:00:49.129514 3013 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-23-5"
Jan 17 00:00:49.234551 kubelet[3013]: E0117 00:00:49.234411 3013 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ip-172-31-23-5.188b5ba529c67d4f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-23-5,UID:ip-172-31-23-5,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-23-5,},FirstTimestamp:2026-01-17 00:00:43.601657167 +0000 UTC m=+2.524989050,LastTimestamp:2026-01-17 00:00:43.601657167 +0000 UTC m=+2.524989050,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-23-5,}"
Jan 17 00:00:49.312466 kubelet[3013]: E0117 00:00:49.312329 3013 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ip-172-31-23-5\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ip-172-31-23-5"
Jan 17 00:00:49.315030 kubelet[3013]: I0117 00:00:49.313087 3013 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-23-5"
Jan 17 00:00:49.321139 kubelet[3013]: E0117 00:00:49.320712 3013 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-23-5\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ip-172-31-23-5"
Jan 17 00:00:49.321139 kubelet[3013]: I0117 00:00:49.320764 3013 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-23-5"
Jan 17 00:00:49.342028 kubelet[3013]: E0117 00:00:49.341952 3013 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-23-5\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ip-172-31-23-5"
Jan 17 00:00:49.616516 kubelet[3013]: I0117 00:00:49.616202 3013 apiserver.go:52] "Watching apiserver"
Jan 17 00:00:49.630085 kubelet[3013]: I0117 00:00:49.629991 3013 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Jan 17 00:00:50.818310 kubelet[3013]: I0117 00:00:50.818249 3013 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-23-5"
Jan 17 00:00:51.056518 systemd[1]: Reloading requested from client PID 3285 ('systemctl') (unit session-7.scope)...
Jan 17 00:00:51.056552 systemd[1]: Reloading...
Jan 17 00:00:51.316321 update_engine[2095]: I20260117 00:00:51.314043 2095 update_attempter.cc:509] Updating boot flags...
Jan 17 00:00:51.346033 zram_generator::config[3325]: No configuration found.
Jan 17 00:00:51.393757 kubelet[3013]: I0117 00:00:51.392840 3013 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-23-5"
Jan 17 00:00:51.559145 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 37 scanned by (udev-worker) (3367)
Jan 17 00:00:51.915125 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 37 scanned by (udev-worker) (3367)
Jan 17 00:00:51.930242 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 17 00:00:52.169411 systemd[1]: Reloading finished in 1112 ms.
Jan 17 00:00:52.406343 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 17 00:00:52.441219 systemd[1]: kubelet.service: Deactivated successfully.
Jan 17 00:00:52.441856 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 17 00:00:52.455657 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 17 00:00:52.815815 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 17 00:00:52.831843 (kubelet)[3579]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jan 17 00:00:52.948849 kubelet[3579]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 17 00:00:52.948849 kubelet[3579]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Jan 17 00:00:52.951056 kubelet[3579]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 17 00:00:52.951056 kubelet[3579]: I0117 00:00:52.949692 3579 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jan 17 00:00:52.957849 sudo[3590]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin
Jan 17 00:00:52.958696 sudo[3590]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0)
Jan 17 00:00:52.965333 kubelet[3579]: I0117 00:00:52.965287 3579 server.go:520] "Kubelet version" kubeletVersion="v1.32.4"
Jan 17 00:00:52.965490 kubelet[3579]: I0117 00:00:52.965470 3579 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jan 17 00:00:52.966834 kubelet[3579]: I0117 00:00:52.966794 3579 server.go:954] "Client rotation is on, will bootstrap in background"
Jan 17 00:00:52.969816 kubelet[3579]: I0117 00:00:52.969776 3579 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Jan 17 00:00:52.985039 kubelet[3579]: I0117 00:00:52.983921 3579 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jan 17 00:00:52.992468 kubelet[3579]: E0117 00:00:52.992406 3579 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Jan 17 00:00:52.992730 kubelet[3579]: I0117 00:00:52.992707 3579 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Jan 17 00:00:52.998496 kubelet[3579]: I0117 00:00:52.998450 3579 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jan 17 00:00:52.999684 kubelet[3579]: I0117 00:00:52.999637 3579 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jan 17 00:00:53.000146 kubelet[3579]: I0117 00:00:52.999791 3579 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-23-5","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1}
Jan 17 00:00:53.000405 kubelet[3579]: I0117 00:00:53.000382 3579 topology_manager.go:138] "Creating topology manager with none policy"
Jan 17 00:00:53.000640 kubelet[3579]: I0117 00:00:53.000484 3579 container_manager_linux.go:304] "Creating device plugin manager"
Jan 17 00:00:53.000640 kubelet[3579]: I0117 00:00:53.000575 3579 state_mem.go:36] "Initialized new in-memory state store"
Jan 17 00:00:53.000996 kubelet[3579]: I0117 00:00:53.000976 3579 kubelet.go:446] "Attempting to sync node with API server"
Jan 17 00:00:53.001963 kubelet[3579]: I0117 00:00:53.001937 3579 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
Jan 17 00:00:53.002111 kubelet[3579]: I0117 00:00:53.002093 3579 kubelet.go:352] "Adding apiserver pod source"
Jan 17 00:00:53.002240 kubelet[3579]: I0117 00:00:53.002220 3579 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jan 17 00:00:53.004649 kubelet[3579]: I0117 00:00:53.004614 3579 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Jan 17 00:00:53.006045 kubelet[3579]: I0117 00:00:53.005527 3579 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jan 17 00:00:53.008425 kubelet[3579]: I0117 00:00:53.008393 3579 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Jan 17 00:00:53.008598 kubelet[3579]: I0117 00:00:53.008580 3579 server.go:1287] "Started kubelet"
Jan 17 00:00:53.024304 kubelet[3579]: I0117 00:00:53.024266 3579 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jan 17 00:00:53.045849 kubelet[3579]: I0117 00:00:53.026371 3579 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
Jan 17 00:00:53.059772 kubelet[3579]: I0117 00:00:53.059735 3579 server.go:479] "Adding debug handlers to kubelet server"
Jan 17 00:00:53.103354 kubelet[3579]: I0117 00:00:53.026473 3579 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jan 17 00:00:53.106028 kubelet[3579]: I0117 00:00:53.103852 3579 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jan 17 00:00:53.107147 kubelet[3579]: I0117 00:00:53.054190 3579 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Jan 17 00:00:53.109108 kubelet[3579]: E0117 00:00:53.054600 3579 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-23-5\" not found"
Jan 17 00:00:53.109268 kubelet[3579]: I0117 00:00:53.027045 3579 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Jan 17 00:00:53.141145 kubelet[3579]: I0117 00:00:53.054160 3579 volume_manager.go:297] "Starting Kubelet Volume Manager"
Jan 17 00:00:53.153108 kubelet[3579]: I0117 00:00:53.152779 3579 reconciler.go:26] "Reconciler: start to sync state"
Jan 17 00:00:53.166239 kubelet[3579]: I0117 00:00:53.166202 3579 factory.go:221] Registration of the containerd container factory successfully
Jan 17 00:00:53.166397 kubelet[3579]: I0117 00:00:53.166378 3579 factory.go:221] Registration of the systemd container factory successfully
Jan 17 00:00:53.166636 kubelet[3579]: I0117 00:00:53.166600 3579 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jan 17 00:00:53.169056 kubelet[3579]: I0117 00:00:53.168763 3579 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jan 17 00:00:53.173816 kubelet[3579]: I0117 00:00:53.173773 3579 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jan 17 00:00:53.174533 kubelet[3579]: I0117 00:00:53.173953 3579 status_manager.go:227] "Starting to sync pod status with apiserver"
Jan 17 00:00:53.174533 kubelet[3579]: I0117 00:00:53.173991 3579 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Jan 17 00:00:53.174533 kubelet[3579]: I0117 00:00:53.174041 3579 kubelet.go:2382] "Starting kubelet main sync loop"
Jan 17 00:00:53.174533 kubelet[3579]: E0117 00:00:53.174125 3579 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jan 17 00:00:53.274664 kubelet[3579]: E0117 00:00:53.274613 3579 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Jan 17 00:00:53.336988 kubelet[3579]: I0117 00:00:53.336954 3579 cpu_manager.go:221] "Starting CPU manager" policy="none"
Jan 17 00:00:53.337661 kubelet[3579]: I0117 00:00:53.337361 3579 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Jan 17 00:00:53.337661 kubelet[3579]: I0117 00:00:53.337434 3579 state_mem.go:36] "Initialized new in-memory state store"
Jan 17 00:00:53.338166 kubelet[3579]: I0117 00:00:53.337971 3579 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Jan 17 00:00:53.338166 kubelet[3579]: I0117 00:00:53.338083 3579 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Jan 17 00:00:53.338166 kubelet[3579]: I0117 00:00:53.338124 3579 policy_none.go:49] "None policy: Start"
Jan 17 00:00:53.338519 kubelet[3579]: I0117 00:00:53.338144 3579 memory_manager.go:186] "Starting memorymanager" policy="None"
Jan 17 00:00:53.338519 kubelet[3579]: I0117 00:00:53.338378 3579 state_mem.go:35] "Initializing new in-memory state store"
Jan 17 00:00:53.338915 kubelet[3579]: I0117 00:00:53.338770 3579 state_mem.go:75] "Updated machine memory state"
Jan 17 00:00:53.346276 kubelet[3579]: I0117 00:00:53.344574 3579 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jan 17 00:00:53.347179 kubelet[3579]: I0117 00:00:53.346973 3579 eviction_manager.go:189] "Eviction manager: starting control loop"
Jan 17 00:00:53.349117 kubelet[3579]: I0117 00:00:53.348150 3579 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jan 17 00:00:53.349117 kubelet[3579]: I0117 00:00:53.348659 3579 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jan 17 00:00:53.357566 kubelet[3579]: E0117 00:00:53.356921 3579 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Jan 17 00:00:53.470578 kubelet[3579]: I0117 00:00:53.469972 3579 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-23-5"
Jan 17 00:00:53.477847 kubelet[3579]: I0117 00:00:53.476742 3579 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-23-5"
Jan 17 00:00:53.477847 kubelet[3579]: I0117 00:00:53.477454 3579 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-23-5"
Jan 17 00:00:53.478084 kubelet[3579]: I0117 00:00:53.477891 3579 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-23-5"
Jan 17 00:00:53.489609 kubelet[3579]: I0117 00:00:53.489428 3579 kubelet_node_status.go:124] "Node was previously registered" node="ip-172-31-23-5"
Jan 17 00:00:53.492042 kubelet[3579]: I0117 00:00:53.490880 3579 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-23-5"
Jan 17 00:00:53.494951 kubelet[3579]: E0117 00:00:53.494086 3579 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-23-5\" already exists" pod="kube-system/kube-scheduler-ip-172-31-23-5"
Jan 17 00:00:53.496815 kubelet[3579]: E0117 00:00:53.495935 3579 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-23-5\" already exists" pod="kube-system/kube-apiserver-ip-172-31-23-5"
Jan 17 00:00:53.556310 kubelet[3579]: I0117 00:00:53.556202 3579 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e4c63bdc501063239dd843702ee94ebc-k8s-certs\") pod \"kube-apiserver-ip-172-31-23-5\" (UID: \"e4c63bdc501063239dd843702ee94ebc\") " pod="kube-system/kube-apiserver-ip-172-31-23-5"
Jan 17 00:00:53.557284 kubelet[3579]: I0117 00:00:53.556602 3579 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2a9167220dd3ffa31310f2f52de88523-ca-certs\") pod \"kube-controller-manager-ip-172-31-23-5\" (UID: \"2a9167220dd3ffa31310f2f52de88523\") " pod="kube-system/kube-controller-manager-ip-172-31-23-5"
Jan 17 00:00:53.557284 kubelet[3579]: I0117 00:00:53.557230 3579 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/2a9167220dd3ffa31310f2f52de88523-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-23-5\" (UID: \"2a9167220dd3ffa31310f2f52de88523\") " pod="kube-system/kube-controller-manager-ip-172-31-23-5"
Jan 17 00:00:53.557708 kubelet[3579]: I0117 00:00:53.557500 3579 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2a9167220dd3ffa31310f2f52de88523-k8s-certs\") pod \"kube-controller-manager-ip-172-31-23-5\" (UID: \"2a9167220dd3ffa31310f2f52de88523\") " pod="kube-system/kube-controller-manager-ip-172-31-23-5"
Jan 17 00:00:53.557708 kubelet[3579]: I0117 00:00:53.557581 3579 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2a9167220dd3ffa31310f2f52de88523-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-23-5\" (UID: \"2a9167220dd3ffa31310f2f52de88523\") " pod="kube-system/kube-controller-manager-ip-172-31-23-5"
Jan 17 00:00:53.558535 kubelet[3579]: I0117 00:00:53.557669 3579 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2a9167220dd3ffa31310f2f52de88523-kubeconfig\") pod \"kube-controller-manager-ip-172-31-23-5\" (UID: \"2a9167220dd3ffa31310f2f52de88523\") " pod="kube-system/kube-controller-manager-ip-172-31-23-5"
Jan 17 00:00:53.558535 kubelet[3579]: I0117 00:00:53.557899 3579 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f3e4d5ab6f4a2cf637b6cad828c20a8c-kubeconfig\") pod \"kube-scheduler-ip-172-31-23-5\" (UID: \"f3e4d5ab6f4a2cf637b6cad828c20a8c\") " pod="kube-system/kube-scheduler-ip-172-31-23-5"
Jan 17 00:00:53.558535 kubelet[3579]: I0117 00:00:53.558410 3579 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e4c63bdc501063239dd843702ee94ebc-ca-certs\") pod \"kube-apiserver-ip-172-31-23-5\" (UID: \"e4c63bdc501063239dd843702ee94ebc\") " pod="kube-system/kube-apiserver-ip-172-31-23-5"
Jan 17 00:00:53.558535 kubelet[3579]: I0117 00:00:53.558469 3579 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e4c63bdc501063239dd843702ee94ebc-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-23-5\" (UID: \"e4c63bdc501063239dd843702ee94ebc\") " pod="kube-system/kube-apiserver-ip-172-31-23-5"
Jan 17 00:00:53.945925 sudo[3590]: pam_unix(sudo:session): session closed for user root
Jan 17 00:00:54.017048 kubelet[3579]: I0117 00:00:54.016915 3579 apiserver.go:52] "Watching apiserver"
Jan 17 00:00:54.109710 kubelet[3579]: I0117 00:00:54.109602 3579 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Jan 17 00:00:54.276951 kubelet[3579]: I0117 00:00:54.276511 3579 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-23-5" podStartSLOduration=1.276490104 podStartE2EDuration="1.276490104s" podCreationTimestamp="2026-01-17 00:00:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-17 00:00:54.274104684 +0000 UTC m=+1.429879328" watchObservedRunningTime="2026-01-17 00:00:54.276490104 +0000 UTC m=+1.432264736"
Jan 17 00:00:54.325044 kubelet[3579]: I0117 00:00:54.324904 3579 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-23-5" podStartSLOduration=3.324882972 podStartE2EDuration="3.324882972s" podCreationTimestamp="2026-01-17 00:00:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-17 00:00:54.302731704 +0000 UTC m=+1.458506360" watchObservedRunningTime="2026-01-17 00:00:54.324882972 +0000 UTC m=+1.480657604"
Jan 17 00:00:56.186517 kubelet[3579]: I0117 00:00:56.186398 3579 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-23-5" podStartSLOduration=6.186265585 podStartE2EDuration="6.186265585s" podCreationTimestamp="2026-01-17 00:00:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-17 00:00:54.324825948 +0000 UTC m=+1.480600604" watchObservedRunningTime="2026-01-17 00:00:56.186265585 +0000 UTC m=+3.342040229"
Jan 17 00:00:56.901771 kubelet[3579]: I0117 00:00:56.901731 3579 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Jan 17 00:00:56.903281 containerd[2133]: time="2026-01-17T00:00:56.903036029Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Jan 17 00:00:56.904452 kubelet[3579]: I0117 00:00:56.903941 3579 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Jan 17 00:00:57.074136 sudo[2480]: pam_unix(sudo:session): session closed for user root
Jan 17 00:00:57.151636 sshd[2476]: pam_unix(sshd:session): session closed for user core
Jan 17 00:00:57.157745 systemd[1]: sshd@6-172.31.23.5:22-68.220.241.50:52526.service: Deactivated successfully.
Jan 17 00:00:57.166574 systemd-logind[2092]: Session 7 logged out. Waiting for processes to exit.
Jan 17 00:00:57.166881 systemd[1]: session-7.scope: Deactivated successfully.
Jan 17 00:00:57.173581 systemd-logind[2092]: Removed session 7.
Jan 17 00:00:57.802808 kubelet[3579]: I0117 00:00:57.801488 3579 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b307e59c-a025-4721-9aa6-e880c508ca8b-cni-path\") pod \"cilium-7hkpt\" (UID: \"b307e59c-a025-4721-9aa6-e880c508ca8b\") " pod="kube-system/cilium-7hkpt"
Jan 17 00:00:57.802808 kubelet[3579]: I0117 00:00:57.801588 3579 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b307e59c-a025-4721-9aa6-e880c508ca8b-cilium-config-path\") pod \"cilium-7hkpt\" (UID: \"b307e59c-a025-4721-9aa6-e880c508ca8b\") " pod="kube-system/cilium-7hkpt"
Jan 17 00:00:57.802808 kubelet[3579]: I0117 00:00:57.801638 3579 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b307e59c-a025-4721-9aa6-e880c508ca8b-xtables-lock\") pod \"cilium-7hkpt\" (UID: \"b307e59c-a025-4721-9aa6-e880c508ca8b\") " pod="kube-system/cilium-7hkpt"
Jan 17 00:00:57.802808 kubelet[3579]: I0117 00:00:57.801680 3579 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b307e59c-a025-4721-9aa6-e880c508ca8b-host-proc-sys-kernel\") pod \"cilium-7hkpt\" (UID: \"b307e59c-a025-4721-9aa6-e880c508ca8b\") " pod="kube-system/cilium-7hkpt"
Jan 17 00:00:57.802808 kubelet[3579]: I0117 00:00:57.801733 3579 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wdcm9\" (UniqueName: \"kubernetes.io/projected/e1b4f199-f2e1-4aae-b7e7-6e4ec8d9a0de-kube-api-access-wdcm9\") pod \"kube-proxy-bvxtt\" (UID: \"e1b4f199-f2e1-4aae-b7e7-6e4ec8d9a0de\") " pod="kube-system/kube-proxy-bvxtt"
Jan 17 00:00:57.805386 kubelet[3579]: I0117 00:00:57.801788 3579 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b307e59c-a025-4721-9aa6-e880c508ca8b-hubble-tls\") pod \"cilium-7hkpt\" (UID: \"b307e59c-a025-4721-9aa6-e880c508ca8b\") " pod="kube-system/cilium-7hkpt"
Jan 17 00:00:57.805386 kubelet[3579]: I0117 00:00:57.801840 3579 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r78sx\" (UniqueName: \"kubernetes.io/projected/b307e59c-a025-4721-9aa6-e880c508ca8b-kube-api-access-r78sx\") pod \"cilium-7hkpt\" (UID: \"b307e59c-a025-4721-9aa6-e880c508ca8b\") " pod="kube-system/cilium-7hkpt"
Jan 17 00:00:57.805386 kubelet[3579]: I0117 00:00:57.801882 3579 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e1b4f199-f2e1-4aae-b7e7-6e4ec8d9a0de-lib-modules\") pod \"kube-proxy-bvxtt\" (UID: \"e1b4f199-f2e1-4aae-b7e7-6e4ec8d9a0de\") " pod="kube-system/kube-proxy-bvxtt"
Jan 17 00:00:57.805386 kubelet[3579]: I0117 00:00:57.801933 3579 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b307e59c-a025-4721-9aa6-e880c508ca8b-etc-cni-netd\") pod \"cilium-7hkpt\" (UID: \"b307e59c-a025-4721-9aa6-e880c508ca8b\") " pod="kube-system/cilium-7hkpt"
Jan 17 00:00:57.805386 kubelet[3579]: I0117 00:00:57.801983 3579 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b307e59c-a025-4721-9aa6-e880c508ca8b-host-proc-sys-net\") pod \"cilium-7hkpt\" (UID: \"b307e59c-a025-4721-9aa6-e880c508ca8b\") " pod="kube-system/cilium-7hkpt"
Jan 17 00:00:57.807064 kubelet[3579]: I0117 00:00:57.805910 3579 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b307e59c-a025-4721-9aa6-e880c508ca8b-hostproc\") pod \"cilium-7hkpt\" (UID: \"b307e59c-a025-4721-9aa6-e880c508ca8b\") " pod="kube-system/cilium-7hkpt"
Jan 17 00:00:57.807064 kubelet[3579]: I0117 00:00:57.806163 3579 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b307e59c-a025-4721-9aa6-e880c508ca8b-lib-modules\") pod \"cilium-7hkpt\" (UID: \"b307e59c-a025-4721-9aa6-e880c508ca8b\") " pod="kube-system/cilium-7hkpt"
Jan 17 00:00:57.807064 kubelet[3579]: I0117 00:00:57.806216 3579 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/e1b4f199-f2e1-4aae-b7e7-6e4ec8d9a0de-kube-proxy\") pod \"kube-proxy-bvxtt\" (UID: \"e1b4f199-f2e1-4aae-b7e7-6e4ec8d9a0de\") " pod="kube-system/kube-proxy-bvxtt"
Jan 17 00:00:57.807064 kubelet[3579]: I0117 00:00:57.806267 3579 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e1b4f199-f2e1-4aae-b7e7-6e4ec8d9a0de-xtables-lock\") pod \"kube-proxy-bvxtt\" (UID: \"e1b4f199-f2e1-4aae-b7e7-6e4ec8d9a0de\") " pod="kube-system/kube-proxy-bvxtt"
Jan 17 00:00:57.807064 kubelet[3579]: I0117 00:00:57.806323 3579 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b307e59c-a025-4721-9aa6-e880c508ca8b-bpf-maps\") pod \"cilium-7hkpt\" (UID: \"b307e59c-a025-4721-9aa6-e880c508ca8b\") " pod="kube-system/cilium-7hkpt"
Jan 17 00:00:57.807064 kubelet[3579]: I0117 00:00:57.806370 3579 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b307e59c-a025-4721-9aa6-e880c508ca8b-cilium-cgroup\") pod \"cilium-7hkpt\" (UID: \"b307e59c-a025-4721-9aa6-e880c508ca8b\") "
pod="kube-system/cilium-7hkpt" Jan 17 00:00:57.807453 kubelet[3579]: I0117 00:00:57.806409 3579 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b307e59c-a025-4721-9aa6-e880c508ca8b-cilium-run\") pod \"cilium-7hkpt\" (UID: \"b307e59c-a025-4721-9aa6-e880c508ca8b\") " pod="kube-system/cilium-7hkpt" Jan 17 00:00:57.807453 kubelet[3579]: I0117 00:00:57.806458 3579 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b307e59c-a025-4721-9aa6-e880c508ca8b-clustermesh-secrets\") pod \"cilium-7hkpt\" (UID: \"b307e59c-a025-4721-9aa6-e880c508ca8b\") " pod="kube-system/cilium-7hkpt" Jan 17 00:00:58.022031 containerd[2133]: time="2026-01-17T00:00:58.021926847Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-7hkpt,Uid:b307e59c-a025-4721-9aa6-e880c508ca8b,Namespace:kube-system,Attempt:0,}" Jan 17 00:00:58.067988 kubelet[3579]: I0117 00:00:58.067794 3579 status_manager.go:890] "Failed to get status for pod" podUID="6de5e380-98a4-42c5-8b59-6b5d3a3c5436" pod="kube-system/cilium-operator-6c4d7847fc-xx59c" err="pods \"cilium-operator-6c4d7847fc-xx59c\" is forbidden: User \"system:node:ip-172-31-23-5\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ip-172-31-23-5' and this object" Jan 17 00:00:58.148055 containerd[2133]: time="2026-01-17T00:00:58.147535959Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:00:58.148055 containerd[2133]: time="2026-01-17T00:00:58.147655779Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:00:58.148055 containerd[2133]: time="2026-01-17T00:00:58.147695031Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:00:58.149154 containerd[2133]: time="2026-01-17T00:00:58.147998127Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:00:58.208959 kubelet[3579]: I0117 00:00:58.208785 3579 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-65z2l\" (UniqueName: \"kubernetes.io/projected/6de5e380-98a4-42c5-8b59-6b5d3a3c5436-kube-api-access-65z2l\") pod \"cilium-operator-6c4d7847fc-xx59c\" (UID: \"6de5e380-98a4-42c5-8b59-6b5d3a3c5436\") " pod="kube-system/cilium-operator-6c4d7847fc-xx59c" Jan 17 00:00:58.208959 kubelet[3579]: I0117 00:00:58.208866 3579 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6de5e380-98a4-42c5-8b59-6b5d3a3c5436-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-xx59c\" (UID: \"6de5e380-98a4-42c5-8b59-6b5d3a3c5436\") " pod="kube-system/cilium-operator-6c4d7847fc-xx59c" Jan 17 00:00:58.233127 containerd[2133]: time="2026-01-17T00:00:58.232484860Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-7hkpt,Uid:b307e59c-a025-4721-9aa6-e880c508ca8b,Namespace:kube-system,Attempt:0,} returns sandbox id \"9636fadbffa11e7e644aa84ee69d7438b3be34776b13ec6c4d59f515fe3e1b28\"" Jan 17 00:00:58.236574 containerd[2133]: time="2026-01-17T00:00:58.236503132Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jan 17 00:00:58.296423 containerd[2133]: time="2026-01-17T00:00:58.296365948Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-bvxtt,Uid:e1b4f199-f2e1-4aae-b7e7-6e4ec8d9a0de,Namespace:kube-system,Attempt:0,}" Jan 17 00:00:58.351729 containerd[2133]: time="2026-01-17T00:00:58.351538240Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:00:58.351729 containerd[2133]: time="2026-01-17T00:00:58.351638272Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:00:58.353282 containerd[2133]: time="2026-01-17T00:00:58.352906684Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:00:58.353282 containerd[2133]: time="2026-01-17T00:00:58.353220532Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:00:58.382168 containerd[2133]: time="2026-01-17T00:00:58.382103740Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-xx59c,Uid:6de5e380-98a4-42c5-8b59-6b5d3a3c5436,Namespace:kube-system,Attempt:0,}" Jan 17 00:00:58.432424 containerd[2133]: time="2026-01-17T00:00:58.432367157Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-bvxtt,Uid:e1b4f199-f2e1-4aae-b7e7-6e4ec8d9a0de,Namespace:kube-system,Attempt:0,} returns sandbox id \"89a8542f3888e0510d09e05aadd071060767b228de2cd747b5dc02e440b21d02\"" Jan 17 00:00:58.440676 containerd[2133]: time="2026-01-17T00:00:58.440618897Z" level=info msg="CreateContainer within sandbox \"89a8542f3888e0510d09e05aadd071060767b228de2cd747b5dc02e440b21d02\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 17 00:00:58.457157 containerd[2133]: time="2026-01-17T00:00:58.452440709Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:00:58.457157 containerd[2133]: time="2026-01-17T00:00:58.452566961Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:00:58.457157 containerd[2133]: time="2026-01-17T00:00:58.452606741Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:00:58.457157 containerd[2133]: time="2026-01-17T00:00:58.453705293Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:00:58.478709 containerd[2133]: time="2026-01-17T00:00:58.478061021Z" level=info msg="CreateContainer within sandbox \"89a8542f3888e0510d09e05aadd071060767b228de2cd747b5dc02e440b21d02\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"3961ff4eb60c678ae3ebc22b5ed85904a2b45527a188c5845c2407e809c5e05a\"" Jan 17 00:00:58.481972 containerd[2133]: time="2026-01-17T00:00:58.481802561Z" level=info msg="StartContainer for \"3961ff4eb60c678ae3ebc22b5ed85904a2b45527a188c5845c2407e809c5e05a\"" Jan 17 00:00:58.580224 containerd[2133]: time="2026-01-17T00:00:58.579368837Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-xx59c,Uid:6de5e380-98a4-42c5-8b59-6b5d3a3c5436,Namespace:kube-system,Attempt:0,} returns sandbox id \"7cc4e5079a1b71918bb37b4700b1b1af161b105103e80bbf0f47a38c688ca640\"" Jan 17 00:00:58.625456 containerd[2133]: time="2026-01-17T00:00:58.624322145Z" level=info msg="StartContainer for \"3961ff4eb60c678ae3ebc22b5ed85904a2b45527a188c5845c2407e809c5e05a\" returns successfully" Jan 17 00:00:59.303225 kubelet[3579]: I0117 00:00:59.303079 3579 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-bvxtt" podStartSLOduration=2.303053525 podStartE2EDuration="2.303053525s" podCreationTimestamp="2026-01-17 00:00:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-17 00:00:59.280877573 +0000 UTC m=+6.436652289" watchObservedRunningTime="2026-01-17 00:00:59.303053525 +0000 UTC m=+6.458828181" Jan 17 00:01:03.575121 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2330903944.mount: Deactivated successfully. 
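[Editor's note] The entries above trace the CRI flow for kube-proxy: RunPodSandbox returns a sandbox id, CreateContainer places a container inside it, and StartContainer spawns the runc shim task. A minimal sketch of the same create/start sequence against containerd's Go client follows; the socket path, the "k8s.io" namespace, the image ref, and the container/snapshot IDs are illustrative assumptions (the real kubelet drives this over the CRI gRPC API, not this client library):

// Sketch: create and start a container with containerd's Go client,
// mirroring the CreateContainer/StartContainer entries above.
package main

import (
	"context"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/cio"
	"github.com/containerd/containerd/namespaces"
	"github.com/containerd/containerd/oci"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// Kubernetes-managed containers live in the "k8s.io" namespace.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	// Assume the image is already present (see the PullImage entries).
	image, err := client.GetImage(ctx, "registry.k8s.io/kube-proxy:v1.32.0") // hypothetical ref
	if err != nil {
		log.Fatal(err)
	}

	container, err := client.NewContainer(ctx, "kube-proxy-demo",
		containerd.WithNewSnapshot("kube-proxy-demo-snap", image),
		containerd.WithNewSpec(oci.WithImageConfig(image)))
	if err != nil {
		log.Fatal(err)
	}
	defer container.Delete(ctx, containerd.WithSnapshotCleanup)

	// NewTask corresponds to the runc shim being spawned; Start corresponds
	// to the "StartContainer ... returns successfully" entries.
	task, err := container.NewTask(ctx, cio.NewCreator(cio.WithStdio))
	if err != nil {
		log.Fatal(err)
	}
	defer task.Delete(ctx)

	if err := task.Start(ctx); err != nil {
		log.Fatal(err)
	}
}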
Jan 17 00:01:06.281678 containerd[2133]: time="2026-01-17T00:01:06.281595888Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:01:06.286212 containerd[2133]: time="2026-01-17T00:01:06.286096068Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710" Jan 17 00:01:06.288824 containerd[2133]: time="2026-01-17T00:01:06.288592344Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:01:06.294312 containerd[2133]: time="2026-01-17T00:01:06.292937628Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 8.056357432s" Jan 17 00:01:06.294312 containerd[2133]: time="2026-01-17T00:01:06.293042244Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Jan 17 00:01:06.296774 containerd[2133]: time="2026-01-17T00:01:06.295963908Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jan 17 00:01:06.303220 containerd[2133]: time="2026-01-17T00:01:06.301643448Z" level=info msg="CreateContainer within sandbox \"9636fadbffa11e7e644aa84ee69d7438b3be34776b13ec6c4d59f515fe3e1b28\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 17 00:01:06.329219 containerd[2133]: time="2026-01-17T00:01:06.329153436Z" level=info msg="CreateContainer within sandbox \"9636fadbffa11e7e644aa84ee69d7438b3be34776b13ec6c4d59f515fe3e1b28\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"f07ca377f2ef76fcd3b3e1b34add9774bec9666157a56feca52a327d2d4e6ebe\"" Jan 17 00:01:06.330654 containerd[2133]: time="2026-01-17T00:01:06.330599748Z" level=info msg="StartContainer for \"f07ca377f2ef76fcd3b3e1b34add9774bec9666157a56feca52a327d2d4e6ebe\"" Jan 17 00:01:06.439095 containerd[2133]: time="2026-01-17T00:01:06.438079464Z" level=info msg="StartContainer for \"f07ca377f2ef76fcd3b3e1b34add9774bec9666157a56feca52a327d2d4e6ebe\" returns successfully" Jan 17 00:01:07.324723 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f07ca377f2ef76fcd3b3e1b34add9774bec9666157a56feca52a327d2d4e6ebe-rootfs.mount: Deactivated successfully. 
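[Editor's note] The pull above resolves a digest-pinned ref, emits ImageCreate events for the manifest digest and the config blob, and reports bytes read and wall-clock duration ("in 8.056357432s"). A sketch of the equivalent digest-pinned pull with containerd's Go client, using the ref taken verbatim from the log (socket path and namespace are assumptions):

// Sketch: pull the digest-pinned Cilium image as the log shows,
// unpacking into the default snapshotter.
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")
	ref := "quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5"

	start := time.Now()
	image, err := client.Pull(ctx, ref, containerd.WithPullUnpack)
	if err != nil {
		log.Fatal(err)
	}
	// Comparable to the logged "Pulled image ... in 8.056357432s" line.
	fmt.Printf("pulled %s in %s\n", image.Name(), time.Since(start))
}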
Jan 17 00:01:07.651177 containerd[2133]: time="2026-01-17T00:01:07.650516090Z" level=info msg="shim disconnected" id=f07ca377f2ef76fcd3b3e1b34add9774bec9666157a56feca52a327d2d4e6ebe namespace=k8s.io Jan 17 00:01:07.651177 containerd[2133]: time="2026-01-17T00:01:07.650588294Z" level=warning msg="cleaning up after shim disconnected" id=f07ca377f2ef76fcd3b3e1b34add9774bec9666157a56feca52a327d2d4e6ebe namespace=k8s.io Jan 17 00:01:07.651177 containerd[2133]: time="2026-01-17T00:01:07.650609390Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 00:01:08.165098 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3761602111.mount: Deactivated successfully. Jan 17 00:01:08.310639 containerd[2133]: time="2026-01-17T00:01:08.309947030Z" level=info msg="CreateContainer within sandbox \"9636fadbffa11e7e644aa84ee69d7438b3be34776b13ec6c4d59f515fe3e1b28\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 17 00:01:08.360422 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount868654434.mount: Deactivated successfully. Jan 17 00:01:08.377425 containerd[2133]: time="2026-01-17T00:01:08.377361914Z" level=info msg="CreateContainer within sandbox \"9636fadbffa11e7e644aa84ee69d7438b3be34776b13ec6c4d59f515fe3e1b28\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"806ec97a6df03b151c2af6a151d7b943ab8c13f8539531a9d4101c2a91eeb7c0\"" Jan 17 00:01:08.379388 containerd[2133]: time="2026-01-17T00:01:08.379191686Z" level=info msg="StartContainer for \"806ec97a6df03b151c2af6a151d7b943ab8c13f8539531a9d4101c2a91eeb7c0\"" Jan 17 00:01:08.522693 containerd[2133]: time="2026-01-17T00:01:08.520697067Z" level=info msg="StartContainer for \"806ec97a6df03b151c2af6a151d7b943ab8c13f8539531a9d4101c2a91eeb7c0\" returns successfully" Jan 17 00:01:08.546339 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 17 00:01:08.546949 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 17 00:01:08.547157 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jan 17 00:01:08.564297 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 17 00:01:08.619867 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
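[Editor's note] The apply-sysctl-overwrites init container above is what trips systemd-sysctl into stopping and re-running ("Stopped ... Starting ... Finished systemd-sysctl.service"). Mechanically, a sysctl overwrite is just a file write under /proc/sys. A hedged sketch follows; the log does not record which keys the container set, so the rp_filter key below is purely an illustrative assumption:

// Illustrative sketch of a sysctl-overwrite step: write values under
// /proc/sys. The key below is an assumption, not taken from the log.
package main

import (
	"log"
	"os"
	"path/filepath"
	"strings"
)

func setSysctl(key, value string) error {
	// net.ipv4.conf.all.rp_filter -> /proc/sys/net/ipv4/conf/all/rp_filter
	path := filepath.Join("/proc/sys", strings.ReplaceAll(key, ".", "/"))
	return os.WriteFile(path, []byte(value), 0644)
}

func main() {
	for key, value := range map[string]string{
		"net.ipv4.conf.all.rp_filter": "0", // assumed example key
	} {
		if err := setSysctl(key, value); err != nil {
			log.Fatalf("sysctl %s: %v", key, err)
		}
	}
}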
Jan 17 00:01:08.669795 containerd[2133]: time="2026-01-17T00:01:08.669716919Z" level=info msg="shim disconnected" id=806ec97a6df03b151c2af6a151d7b943ab8c13f8539531a9d4101c2a91eeb7c0 namespace=k8s.io Jan 17 00:01:08.670922 containerd[2133]: time="2026-01-17T00:01:08.670521579Z" level=warning msg="cleaning up after shim disconnected" id=806ec97a6df03b151c2af6a151d7b943ab8c13f8539531a9d4101c2a91eeb7c0 namespace=k8s.io Jan 17 00:01:08.670922 containerd[2133]: time="2026-01-17T00:01:08.670561575Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 00:01:09.103759 containerd[2133]: time="2026-01-17T00:01:09.103679186Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:01:09.105651 containerd[2133]: time="2026-01-17T00:01:09.105580370Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306" Jan 17 00:01:09.109688 containerd[2133]: time="2026-01-17T00:01:09.109222850Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:01:09.112435 containerd[2133]: time="2026-01-17T00:01:09.112365158Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 2.816303174s" Jan 17 00:01:09.112582 containerd[2133]: time="2026-01-17T00:01:09.112433150Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Jan 17 00:01:09.117933 containerd[2133]: time="2026-01-17T00:01:09.117049178Z" level=info msg="CreateContainer within sandbox \"7cc4e5079a1b71918bb37b4700b1b1af161b105103e80bbf0f47a38c688ca640\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jan 17 00:01:09.141620 containerd[2133]: time="2026-01-17T00:01:09.141539210Z" level=info msg="CreateContainer within sandbox \"7cc4e5079a1b71918bb37b4700b1b1af161b105103e80bbf0f47a38c688ca640\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"085ee28c5bad6d0432d4b04b5d10d18a1203af39a9aa85da9b46ed9649439e13\"" Jan 17 00:01:09.143343 containerd[2133]: time="2026-01-17T00:01:09.142584254Z" level=info msg="StartContainer for \"085ee28c5bad6d0432d4b04b5d10d18a1203af39a9aa85da9b46ed9649439e13\"" Jan 17 00:01:09.241468 containerd[2133]: time="2026-01-17T00:01:09.241405106Z" level=info msg="StartContainer for \"085ee28c5bad6d0432d4b04b5d10d18a1203af39a9aa85da9b46ed9649439e13\" returns successfully" Jan 17 00:01:09.319624 containerd[2133]: time="2026-01-17T00:01:09.318976311Z" level=info msg="CreateContainer within sandbox \"9636fadbffa11e7e644aa84ee69d7438b3be34776b13ec6c4d59f515fe3e1b28\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jan 17 00:01:09.368590 systemd[1]: 
run-containerd-io.containerd.runtime.v2.task-k8s.io-806ec97a6df03b151c2af6a151d7b943ab8c13f8539531a9d4101c2a91eeb7c0-rootfs.mount: Deactivated successfully. Jan 17 00:01:09.414814 kubelet[3579]: I0117 00:01:09.414722 3579 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-xx59c" podStartSLOduration=1.883629738 podStartE2EDuration="12.414698079s" podCreationTimestamp="2026-01-17 00:00:57 +0000 UTC" firstStartedPulling="2026-01-17 00:00:58.583375985 +0000 UTC m=+5.739150617" lastFinishedPulling="2026-01-17 00:01:09.114444326 +0000 UTC m=+16.270218958" observedRunningTime="2026-01-17 00:01:09.414584103 +0000 UTC m=+16.570358759" watchObservedRunningTime="2026-01-17 00:01:09.414698079 +0000 UTC m=+16.570472699" Jan 17 00:01:09.422738 containerd[2133]: time="2026-01-17T00:01:09.422666595Z" level=info msg="CreateContainer within sandbox \"9636fadbffa11e7e644aa84ee69d7438b3be34776b13ec6c4d59f515fe3e1b28\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"0a1bd3f3e36d19d7465977ec07694f62a554f03cb4f9bc94b33b79a197e22a33\"" Jan 17 00:01:09.427938 containerd[2133]: time="2026-01-17T00:01:09.424110723Z" level=info msg="StartContainer for \"0a1bd3f3e36d19d7465977ec07694f62a554f03cb4f9bc94b33b79a197e22a33\"" Jan 17 00:01:09.634122 containerd[2133]: time="2026-01-17T00:01:09.633758416Z" level=info msg="StartContainer for \"0a1bd3f3e36d19d7465977ec07694f62a554f03cb4f9bc94b33b79a197e22a33\" returns successfully" Jan 17 00:01:09.810398 containerd[2133]: time="2026-01-17T00:01:09.810317837Z" level=info msg="shim disconnected" id=0a1bd3f3e36d19d7465977ec07694f62a554f03cb4f9bc94b33b79a197e22a33 namespace=k8s.io Jan 17 00:01:09.812305 containerd[2133]: time="2026-01-17T00:01:09.812248997Z" level=warning msg="cleaning up after shim disconnected" id=0a1bd3f3e36d19d7465977ec07694f62a554f03cb4f9bc94b33b79a197e22a33 namespace=k8s.io Jan 17 00:01:09.812470 containerd[2133]: time="2026-01-17T00:01:09.812441861Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 00:01:10.360797 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0a1bd3f3e36d19d7465977ec07694f62a554f03cb4f9bc94b33b79a197e22a33-rootfs.mount: Deactivated successfully. 
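[Editor's note] The mount-bpf-fs init container started above is Cilium's equivalent of `mount -t bpf bpffs /sys/fs/bpf`. A minimal sketch of that step in Go, including the idempotency check that lets the container exit cleanly on restarts (paths are the standard ones, the implementation details are assumed):

// Sketch of the mount-bpf-fs step: mount the BPF filesystem at
// /sys/fs/bpf if it is not already mounted.
package main

import (
	"log"
	"os"
	"strings"

	"golang.org/x/sys/unix"
)

func bpffsMounted() (bool, error) {
	data, err := os.ReadFile("/proc/mounts")
	if err != nil {
		return false, err
	}
	for _, line := range strings.Split(string(data), "\n") {
		fields := strings.Fields(line)
		if len(fields) >= 3 && fields[1] == "/sys/fs/bpf" && fields[2] == "bpf" {
			return true, nil
		}
	}
	return false, nil
}

func main() {
	mounted, err := bpffsMounted()
	if err != nil {
		log.Fatal(err)
	}
	if mounted {
		return // already done; exit 0 like a re-run init container
	}
	if err := unix.Mount("bpffs", "/sys/fs/bpf", "bpf", 0, ""); err != nil {
		log.Fatal(err)
	}
}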
Jan 17 00:01:10.381704 containerd[2133]: time="2026-01-17T00:01:10.381618052Z" level=info msg="CreateContainer within sandbox \"9636fadbffa11e7e644aa84ee69d7438b3be34776b13ec6c4d59f515fe3e1b28\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jan 17 00:01:10.420967 containerd[2133]: time="2026-01-17T00:01:10.418523752Z" level=info msg="CreateContainer within sandbox \"9636fadbffa11e7e644aa84ee69d7438b3be34776b13ec6c4d59f515fe3e1b28\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"94a41c2fdb42a3c5b4a7588dc1d420b05c26b04bd4462a39c2da08cfc70c9fda\"" Jan 17 00:01:10.428050 containerd[2133]: time="2026-01-17T00:01:10.424285864Z" level=info msg="StartContainer for \"94a41c2fdb42a3c5b4a7588dc1d420b05c26b04bd4462a39c2da08cfc70c9fda\"" Jan 17 00:01:10.599049 containerd[2133]: time="2026-01-17T00:01:10.597654845Z" level=info msg="StartContainer for \"94a41c2fdb42a3c5b4a7588dc1d420b05c26b04bd4462a39c2da08cfc70c9fda\" returns successfully" Jan 17 00:01:10.654393 containerd[2133]: time="2026-01-17T00:01:10.654177785Z" level=info msg="shim disconnected" id=94a41c2fdb42a3c5b4a7588dc1d420b05c26b04bd4462a39c2da08cfc70c9fda namespace=k8s.io Jan 17 00:01:10.654393 containerd[2133]: time="2026-01-17T00:01:10.654295913Z" level=warning msg="cleaning up after shim disconnected" id=94a41c2fdb42a3c5b4a7588dc1d420b05c26b04bd4462a39c2da08cfc70c9fda namespace=k8s.io Jan 17 00:01:10.654393 containerd[2133]: time="2026-01-17T00:01:10.654341489Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 00:01:10.690043 containerd[2133]: time="2026-01-17T00:01:10.684598781Z" level=warning msg="cleanup warnings time=\"2026-01-17T00:01:10Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jan 17 00:01:11.353886 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-94a41c2fdb42a3c5b4a7588dc1d420b05c26b04bd4462a39c2da08cfc70c9fda-rootfs.mount: Deactivated successfully. 
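[Editor's note] Each init container above follows the same lifecycle: start, exit, "shim disconnected", "cleaning up dead shim", and here a benign "failed to remove runc container" warning during cleanup. A helper-style sketch of how a containerd client observes that exit and triggers the cleanup; it is meant to compile alongside the earlier container sketch (the task is assumed to come from there), not to run standalone:

// Sketch: observe an init container's exit and delete the task,
// mirroring the shim-teardown entries above.
package sketch

import (
	"context"
	"fmt"
	"log"

	"github.com/containerd/containerd"
)

func waitAndCleanup(ctx context.Context, task containerd.Task) {
	// In real code, call Wait before Start to avoid racing the exit event.
	statusC, err := task.Wait(ctx)
	if err != nil {
		log.Fatal(err)
	}
	status := <-statusC // blocks until the container exits
	code, exitedAt, err := status.Result()
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("exited with code %d at %s\n", code, exitedAt)

	// Deleting the task removes the runc state; the logged "failed to
	// remove runc container" warning is this step failing non-fatally.
	if _, err := task.Delete(ctx); err != nil {
		log.Print(err)
	}
}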
Jan 17 00:01:11.384440 containerd[2133]: time="2026-01-17T00:01:11.379372025Z" level=info msg="CreateContainer within sandbox \"9636fadbffa11e7e644aa84ee69d7438b3be34776b13ec6c4d59f515fe3e1b28\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jan 17 00:01:11.422117 containerd[2133]: time="2026-01-17T00:01:11.421987157Z" level=info msg="CreateContainer within sandbox \"9636fadbffa11e7e644aa84ee69d7438b3be34776b13ec6c4d59f515fe3e1b28\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"958f784f4731568578f97d7a29b6c2ef90e0665a53786fbe26a6c95c5ca51a08\"" Jan 17 00:01:11.423706 containerd[2133]: time="2026-01-17T00:01:11.422762021Z" level=info msg="StartContainer for \"958f784f4731568578f97d7a29b6c2ef90e0665a53786fbe26a6c95c5ca51a08\"" Jan 17 00:01:11.561456 containerd[2133]: time="2026-01-17T00:01:11.560620650Z" level=info msg="StartContainer for \"958f784f4731568578f97d7a29b6c2ef90e0665a53786fbe26a6c95c5ca51a08\" returns successfully" Jan 17 00:01:11.949061 kubelet[3579]: I0117 00:01:11.948869 3579 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jan 17 00:01:12.113474 kubelet[3579]: I0117 00:01:12.113159 3579 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/687373b4-5c38-49e9-8b9e-023c35971cba-config-volume\") pod \"coredns-668d6bf9bc-wq9g7\" (UID: \"687373b4-5c38-49e9-8b9e-023c35971cba\") " pod="kube-system/coredns-668d6bf9bc-wq9g7" Jan 17 00:01:12.113474 kubelet[3579]: I0117 00:01:12.113234 3579 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lxk2d\" (UniqueName: \"kubernetes.io/projected/687373b4-5c38-49e9-8b9e-023c35971cba-kube-api-access-lxk2d\") pod \"coredns-668d6bf9bc-wq9g7\" (UID: \"687373b4-5c38-49e9-8b9e-023c35971cba\") " pod="kube-system/coredns-668d6bf9bc-wq9g7" Jan 17 00:01:12.113474 kubelet[3579]: I0117 00:01:12.113305 3579 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lxq5x\" (UniqueName: \"kubernetes.io/projected/63b58f23-320b-4339-86fe-434bcea07ab1-kube-api-access-lxq5x\") pod \"coredns-668d6bf9bc-pv7jb\" (UID: \"63b58f23-320b-4339-86fe-434bcea07ab1\") " pod="kube-system/coredns-668d6bf9bc-pv7jb" Jan 17 00:01:12.113474 kubelet[3579]: I0117 00:01:12.113388 3579 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/63b58f23-320b-4339-86fe-434bcea07ab1-config-volume\") pod \"coredns-668d6bf9bc-pv7jb\" (UID: \"63b58f23-320b-4339-86fe-434bcea07ab1\") " pod="kube-system/coredns-668d6bf9bc-pv7jb" Jan 17 00:01:12.326060 containerd[2133]: time="2026-01-17T00:01:12.325397430Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-wq9g7,Uid:687373b4-5c38-49e9-8b9e-023c35971cba,Namespace:kube-system,Attempt:0,}" Jan 17 00:01:12.350479 containerd[2133]: time="2026-01-17T00:01:12.349065678Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-pv7jb,Uid:63b58f23-320b-4339-86fe-434bcea07ab1,Namespace:kube-system,Attempt:0,}" Jan 17 00:01:12.380562 systemd[1]: run-containerd-runc-k8s.io-958f784f4731568578f97d7a29b6c2ef90e0665a53786fbe26a6c95c5ca51a08-runc.xmh3gv.mount: Deactivated successfully. 
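[Editor's note] Once cilium-agent is running, the kubelet flips node readiness ("Fast updating node status as it just became ready") and the pending coredns pods get sandboxes. A hedged sketch of checking that NodeReady condition with client-go; the kubeconfig path is an assumption, while the node name is taken from this host's log:

// Sketch: read the NodeReady condition that flips when the kubelet
// logs "Fast updating node status as it just became ready".
package main

import (
	"context"
	"fmt"
	"log"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/admin.conf") // assumed path
	if err != nil {
		log.Fatal(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		log.Fatal(err)
	}

	node, err := clientset.CoreV1().Nodes().Get(context.Background(), "ip-172-31-23-5", metav1.GetOptions{})
	if err != nil {
		log.Fatal(err)
	}
	for _, cond := range node.Status.Conditions {
		if cond.Type == corev1.NodeReady {
			fmt.Printf("NodeReady=%s (%s)\n", cond.Status, cond.Reason)
		}
	}
}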
Jan 17 00:01:14.352957 systemd-networkd[1690]: cilium_host: Link UP Jan 17 00:01:14.356576 (udev-worker)[4381]: Network interface NamePolicy= disabled on kernel command line. Jan 17 00:01:14.356945 systemd-networkd[1690]: cilium_net: Link UP Jan 17 00:01:14.357617 systemd-networkd[1690]: cilium_net: Gained carrier Jan 17 00:01:14.357954 systemd-networkd[1690]: cilium_host: Gained carrier Jan 17 00:01:14.360316 (udev-worker)[4382]: Network interface NamePolicy= disabled on kernel command line. Jan 17 00:01:14.539260 systemd-networkd[1690]: cilium_vxlan: Link UP Jan 17 00:01:14.539281 systemd-networkd[1690]: cilium_vxlan: Gained carrier Jan 17 00:01:14.842483 systemd-networkd[1690]: cilium_host: Gained IPv6LL Jan 17 00:01:15.090314 systemd-networkd[1690]: cilium_net: Gained IPv6LL Jan 17 00:01:15.115053 kernel: NET: Registered PF_ALG protocol family Jan 17 00:01:16.179229 systemd-networkd[1690]: cilium_vxlan: Gained IPv6LL Jan 17 00:01:16.497869 systemd-networkd[1690]: lxc_health: Link UP Jan 17 00:01:16.509841 (udev-worker)[4433]: Network interface NamePolicy= disabled on kernel command line. Jan 17 00:01:16.512592 systemd-networkd[1690]: lxc_health: Gained carrier Jan 17 00:01:16.963879 systemd-networkd[1690]: lxceeb9489c84a3: Link UP Jan 17 00:01:16.980374 kernel: eth0: renamed from tmpa8ecd Jan 17 00:01:16.981993 systemd-networkd[1690]: lxceeb9489c84a3: Gained carrier Jan 17 00:01:17.064640 systemd-networkd[1690]: lxc33dcd4cef335: Link UP Jan 17 00:01:17.077252 kernel: eth0: renamed from tmpb2a79 Jan 17 00:01:17.081738 (udev-worker)[4431]: Network interface NamePolicy= disabled on kernel command line. Jan 17 00:01:17.084248 systemd-networkd[1690]: lxc33dcd4cef335: Gained carrier Jan 17 00:01:18.058407 kubelet[3579]: I0117 00:01:18.058292 3579 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-7hkpt" podStartSLOduration=12.998552142 podStartE2EDuration="21.058269826s" podCreationTimestamp="2026-01-17 00:00:57 +0000 UTC" firstStartedPulling="2026-01-17 00:00:58.23565442 +0000 UTC m=+5.391429040" lastFinishedPulling="2026-01-17 00:01:06.295372092 +0000 UTC m=+13.451146724" observedRunningTime="2026-01-17 00:01:12.500599674 +0000 UTC m=+19.656374306" watchObservedRunningTime="2026-01-17 00:01:18.058269826 +0000 UTC m=+25.214044458" Jan 17 00:01:18.228147 systemd-networkd[1690]: lxc_health: Gained IPv6LL Jan 17 00:01:18.418719 systemd-networkd[1690]: lxceeb9489c84a3: Gained IPv6LL Jan 17 00:01:19.058425 systemd-networkd[1690]: lxc33dcd4cef335: Gained IPv6LL Jan 17 00:01:21.127219 ntpd[2083]: Listen normally on 6 cilium_host 192.168.0.89:123 Jan 17 00:01:21.127350
ntpd[2083]: Listen normally on 7 cilium_net [fe80::c088:49ff:fe74:bc17%4]:123 Jan 17 00:01:21.127444 ntpd[2083]: Listen normally on 8 cilium_host [fe80::dca0:96ff:fea2:5aa9%5]:123 Jan 17 00:01:21.127516 ntpd[2083]: Listen normally on 9 cilium_vxlan [fe80::b8c4:6ff:fe81:f04b%6]:123 Jan 17 00:01:21.127584 ntpd[2083]: Listen normally on 10 lxc_health [fe80::78d9:5cff:fe15:b366%8]:123 Jan 17 00:01:21.127651 ntpd[2083]: Listen normally on 11 lxceeb9489c84a3 [fe80::9c0c:37ff:fe68:26c%10]:123 Jan 17 00:01:21.127718 ntpd[2083]: Listen normally on 12 lxc33dcd4cef335 [fe80::7c6b:5bff:fe9c:58c1%12]:123 Jan 17 00:01:25.487905 containerd[2133]: time="2026-01-17T00:01:25.487313527Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:01:25.487905 containerd[2133]: time="2026-01-17T00:01:25.487420411Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:01:25.487905 containerd[2133]: time="2026-01-17T00:01:25.487457407Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:01:25.487905 containerd[2133]: time="2026-01-17T00:01:25.487628587Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:01:25.525507 containerd[2133]: time="2026-01-17T00:01:25.524796175Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:01:25.526031 containerd[2133]: time="2026-01-17T00:01:25.525443023Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:01:25.526566 containerd[2133]: time="2026-01-17T00:01:25.526180711Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:01:25.526566 containerd[2133]: time="2026-01-17T00:01:25.526399363Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:01:25.715829 containerd[2133]: time="2026-01-17T00:01:25.715622636Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-wq9g7,Uid:687373b4-5c38-49e9-8b9e-023c35971cba,Namespace:kube-system,Attempt:0,} returns sandbox id \"a8ecd11574c90a868af74341130fea3b604b3cfd93ab35a38b86d85b1c1619ac\"" Jan 17 00:01:25.732117 containerd[2133]: time="2026-01-17T00:01:25.731491412Z" level=info msg="CreateContainer within sandbox \"a8ecd11574c90a868af74341130fea3b604b3cfd93ab35a38b86d85b1c1619ac\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 17 00:01:25.760556 containerd[2133]: time="2026-01-17T00:01:25.760419140Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-pv7jb,Uid:63b58f23-320b-4339-86fe-434bcea07ab1,Namespace:kube-system,Attempt:0,} returns sandbox id \"b2a79f24aad1704e78a965d5fac654a203a837c120f8bd2eb66b6f1ed4de1568\"" Jan 17 00:01:25.784401 containerd[2133]: time="2026-01-17T00:01:25.783836684Z" level=info msg="CreateContainer within sandbox \"b2a79f24aad1704e78a965d5fac654a203a837c120f8bd2eb66b6f1ed4de1568\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 17 00:01:25.801160 containerd[2133]: time="2026-01-17T00:01:25.801081992Z" level=info msg="CreateContainer within sandbox \"a8ecd11574c90a868af74341130fea3b604b3cfd93ab35a38b86d85b1c1619ac\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"3491438ca690dc4354c32543e4b2a1428e4bf8f8b09cef5ef3f4e9831b13db3a\"" Jan 17 00:01:25.804052 containerd[2133]: time="2026-01-17T00:01:25.803303312Z" level=info msg="StartContainer for \"3491438ca690dc4354c32543e4b2a1428e4bf8f8b09cef5ef3f4e9831b13db3a\"" Jan 17 00:01:25.840041 containerd[2133]: time="2026-01-17T00:01:25.838436589Z" level=info msg="CreateContainer within sandbox \"b2a79f24aad1704e78a965d5fac654a203a837c120f8bd2eb66b6f1ed4de1568\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"efeff5fdf951973261ca3fc43ce0b27721f1b2e5923ba1efe91f37e37f9fd928\"" Jan 17 00:01:25.846952 containerd[2133]: time="2026-01-17T00:01:25.846774189Z" level=info msg="StartContainer for \"efeff5fdf951973261ca3fc43ce0b27721f1b2e5923ba1efe91f37e37f9fd928\"" Jan 17 00:01:25.969412 containerd[2133]: time="2026-01-17T00:01:25.969260445Z" level=info msg="StartContainer for \"3491438ca690dc4354c32543e4b2a1428e4bf8f8b09cef5ef3f4e9831b13db3a\" returns successfully" Jan 17 00:01:25.997311 containerd[2133]: time="2026-01-17T00:01:25.997039425Z" level=info msg="StartContainer for \"efeff5fdf951973261ca3fc43ce0b27721f1b2e5923ba1efe91f37e37f9fd928\" returns successfully" Jan 17 00:01:26.538870 kubelet[3579]: I0117 00:01:26.538777 3579 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-wq9g7" podStartSLOduration=29.538751996 podStartE2EDuration="29.538751996s" podCreationTimestamp="2026-01-17 00:00:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-17 00:01:26.536382752 +0000 UTC m=+33.692157408" watchObservedRunningTime="2026-01-17 00:01:26.538751996 +0000 UTC m=+33.694526628" Jan 17 00:01:26.540625 kubelet[3579]: I0117 00:01:26.538975 3579 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-pv7jb" podStartSLOduration=29.538964504 podStartE2EDuration="29.538964504s" podCreationTimestamp="2026-01-17 00:00:57 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-17 00:01:26.506059268 +0000 UTC m=+33.661833936" watchObservedRunningTime="2026-01-17 00:01:26.538964504 +0000 UTC m=+33.694739160" Jan 17 00:01:36.751459 systemd[1]: Started sshd@7-172.31.23.5:22-68.220.241.50:60498.service - OpenSSH per-connection server daemon (68.220.241.50:60498). Jan 17 00:01:37.304429 sshd[4953]: Accepted publickey for core from 68.220.241.50 port 60498 ssh2: RSA SHA256:sDAGzlGx6Edt6Gaawxj4VUM2Wx61RszNQ552ADQfH20 Jan 17 00:01:37.307160 sshd[4953]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:01:37.315186 systemd-logind[2092]: New session 8 of user core. Jan 17 00:01:37.320866 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 17 00:01:37.827190 sshd[4953]: pam_unix(sshd:session): session closed for user core Jan 17 00:01:37.835718 systemd[1]: sshd@7-172.31.23.5:22-68.220.241.50:60498.service: Deactivated successfully. Jan 17 00:01:37.841923 systemd[1]: session-8.scope: Deactivated successfully. Jan 17 00:01:37.843706 systemd-logind[2092]: Session 8 logged out. Waiting for processes to exit. Jan 17 00:01:37.845863 systemd-logind[2092]: Removed session 8. Jan 17 00:01:42.916486 systemd[1]: Started sshd@8-172.31.23.5:22-68.220.241.50:59096.service - OpenSSH per-connection server daemon (68.220.241.50:59096). Jan 17 00:01:43.460509 sshd[4967]: Accepted publickey for core from 68.220.241.50 port 59096 ssh2: RSA SHA256:sDAGzlGx6Edt6Gaawxj4VUM2Wx61RszNQ552ADQfH20 Jan 17 00:01:43.463243 sshd[4967]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:01:43.472078 systemd-logind[2092]: New session 9 of user core. Jan 17 00:01:43.475610 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 17 00:01:43.956295 sshd[4967]: pam_unix(sshd:session): session closed for user core Jan 17 00:01:43.964118 systemd[1]: sshd@8-172.31.23.5:22-68.220.241.50:59096.service: Deactivated successfully. Jan 17 00:01:43.967134 systemd-logind[2092]: Session 9 logged out. Waiting for processes to exit. Jan 17 00:01:43.975603 systemd[1]: session-9.scope: Deactivated successfully. Jan 17 00:01:43.977638 systemd-logind[2092]: Removed session 9. Jan 17 00:01:49.048613 systemd[1]: Started sshd@9-172.31.23.5:22-68.220.241.50:59106.service - OpenSSH per-connection server daemon (68.220.241.50:59106). Jan 17 00:01:49.590266 sshd[4981]: Accepted publickey for core from 68.220.241.50 port 59106 ssh2: RSA SHA256:sDAGzlGx6Edt6Gaawxj4VUM2Wx61RszNQ552ADQfH20 Jan 17 00:01:49.592896 sshd[4981]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:01:49.601602 systemd-logind[2092]: New session 10 of user core. Jan 17 00:01:49.608803 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 17 00:01:50.086758 sshd[4981]: pam_unix(sshd:session): session closed for user core Jan 17 00:01:50.092974 systemd[1]: sshd@9-172.31.23.5:22-68.220.241.50:59106.service: Deactivated successfully. Jan 17 00:01:50.100184 systemd-logind[2092]: Session 10 logged out. Waiting for processes to exit. Jan 17 00:01:50.101237 systemd[1]: session-10.scope: Deactivated successfully. Jan 17 00:01:50.103880 systemd-logind[2092]: Removed session 10. Jan 17 00:01:55.169174 systemd[1]: Started sshd@10-172.31.23.5:22-68.220.241.50:47134.service - OpenSSH per-connection server daemon (68.220.241.50:47134). 
Jan 17 00:01:55.680193 sshd[4999]: Accepted publickey for core from 68.220.241.50 port 47134 ssh2: RSA SHA256:sDAGzlGx6Edt6Gaawxj4VUM2Wx61RszNQ552ADQfH20 Jan 17 00:01:55.682924 sshd[4999]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:01:55.690909 systemd-logind[2092]: New session 11 of user core. Jan 17 00:01:55.700468 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 17 00:01:56.159310 sshd[4999]: pam_unix(sshd:session): session closed for user core Jan 17 00:01:56.168468 systemd[1]: sshd@10-172.31.23.5:22-68.220.241.50:47134.service: Deactivated successfully. Jan 17 00:01:56.175734 systemd[1]: session-11.scope: Deactivated successfully. Jan 17 00:01:56.177910 systemd-logind[2092]: Session 11 logged out. Waiting for processes to exit. Jan 17 00:01:56.179653 systemd-logind[2092]: Removed session 11. Jan 17 00:01:56.257491 systemd[1]: Started sshd@11-172.31.23.5:22-68.220.241.50:47140.service - OpenSSH per-connection server daemon (68.220.241.50:47140). Jan 17 00:01:56.795516 sshd[5014]: Accepted publickey for core from 68.220.241.50 port 47140 ssh2: RSA SHA256:sDAGzlGx6Edt6Gaawxj4VUM2Wx61RszNQ552ADQfH20 Jan 17 00:01:56.798393 sshd[5014]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:01:56.808540 systemd-logind[2092]: New session 12 of user core. Jan 17 00:01:56.812540 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 17 00:01:57.386901 sshd[5014]: pam_unix(sshd:session): session closed for user core Jan 17 00:01:57.392178 systemd-logind[2092]: Session 12 logged out. Waiting for processes to exit. Jan 17 00:01:57.396565 systemd[1]: sshd@11-172.31.23.5:22-68.220.241.50:47140.service: Deactivated successfully. Jan 17 00:01:57.403799 systemd[1]: session-12.scope: Deactivated successfully. Jan 17 00:01:57.406730 systemd-logind[2092]: Removed session 12. Jan 17 00:01:57.477685 systemd[1]: Started sshd@12-172.31.23.5:22-68.220.241.50:47150.service - OpenSSH per-connection server daemon (68.220.241.50:47150). Jan 17 00:01:58.013916 sshd[5026]: Accepted publickey for core from 68.220.241.50 port 47150 ssh2: RSA SHA256:sDAGzlGx6Edt6Gaawxj4VUM2Wx61RszNQ552ADQfH20 Jan 17 00:01:58.017583 sshd[5026]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:01:58.029314 systemd-logind[2092]: New session 13 of user core. Jan 17 00:01:58.037671 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 17 00:01:58.504350 sshd[5026]: pam_unix(sshd:session): session closed for user core Jan 17 00:01:58.511721 systemd[1]: sshd@12-172.31.23.5:22-68.220.241.50:47150.service: Deactivated successfully. Jan 17 00:01:58.520087 systemd[1]: session-13.scope: Deactivated successfully. Jan 17 00:01:58.523325 systemd-logind[2092]: Session 13 logged out. Waiting for processes to exit. Jan 17 00:01:58.525747 systemd-logind[2092]: Removed session 13. Jan 17 00:02:03.603537 systemd[1]: Started sshd@13-172.31.23.5:22-68.220.241.50:44898.service - OpenSSH per-connection server daemon (68.220.241.50:44898). Jan 17 00:02:04.133719 sshd[5042]: Accepted publickey for core from 68.220.241.50 port 44898 ssh2: RSA SHA256:sDAGzlGx6Edt6Gaawxj4VUM2Wx61RszNQ552ADQfH20 Jan 17 00:02:04.136419 sshd[5042]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:02:04.144754 systemd-logind[2092]: New session 14 of user core. Jan 17 00:02:04.152721 systemd[1]: Started session-14.scope - Session 14 of User core. 
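[Editor's note] From here on the journal settles into a steady rhythm of sshd/logind session churn: "Accepted publickey ... port N", a pam_unix session open, then the matching close and scope deactivation. Since both the accept and the close carry the same sshd pid, the pairs can be recovered mechanically. A small illustrative parser, assuming journal lines on stdin:

// Sketch: pair sshd "Accepted publickey" entries with the matching
// "session closed" entries by sshd pid. Purely illustrative.
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

var (
	opened = regexp.MustCompile(`sshd\[(\d+)\]: Accepted publickey for (\S+) from (\S+) port (\d+)`)
	closed = regexp.MustCompile(`sshd\[(\d+)\]: pam_unix\(sshd:session\): session closed for user (\S+)`)
)

func main() {
	open := map[string]string{} // sshd pid -> "user from addr port p"
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 1024*1024), 1024*1024) // journal lines can be long
	for sc.Scan() {
		line := sc.Text()
		if m := opened.FindStringSubmatch(line); m != nil {
			open[m[1]] = fmt.Sprintf("%s from %s port %s", m[2], m[3], m[4])
		} else if m := closed.FindStringSubmatch(line); m != nil {
			if desc, ok := open[m[1]]; ok {
				fmt.Printf("session ended: %s\n", desc)
				delete(open, m[1])
			}
		}
	}
}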
Jan 17 00:02:04.631375 sshd[5042]: pam_unix(sshd:session): session closed for user core Jan 17 00:02:04.638919 systemd[1]: sshd@13-172.31.23.5:22-68.220.241.50:44898.service: Deactivated successfully. Jan 17 00:02:04.639756 systemd-logind[2092]: Session 14 logged out. Waiting for processes to exit. Jan 17 00:02:04.646274 systemd[1]: session-14.scope: Deactivated successfully. Jan 17 00:02:04.649392 systemd-logind[2092]: Removed session 14. Jan 17 00:02:09.713578 systemd[1]: Started sshd@14-172.31.23.5:22-68.220.241.50:44914.service - OpenSSH per-connection server daemon (68.220.241.50:44914). Jan 17 00:02:10.206062 sshd[5056]: Accepted publickey for core from 68.220.241.50 port 44914 ssh2: RSA SHA256:sDAGzlGx6Edt6Gaawxj4VUM2Wx61RszNQ552ADQfH20 Jan 17 00:02:10.209520 sshd[5056]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:02:10.218405 systemd-logind[2092]: New session 15 of user core. Jan 17 00:02:10.226632 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 17 00:02:10.676370 sshd[5056]: pam_unix(sshd:session): session closed for user core Jan 17 00:02:10.681582 systemd[1]: sshd@14-172.31.23.5:22-68.220.241.50:44914.service: Deactivated successfully. Jan 17 00:02:10.690465 systemd-logind[2092]: Session 15 logged out. Waiting for processes to exit. Jan 17 00:02:10.691982 systemd[1]: session-15.scope: Deactivated successfully. Jan 17 00:02:10.694508 systemd-logind[2092]: Removed session 15. Jan 17 00:02:15.764483 systemd[1]: Started sshd@15-172.31.23.5:22-68.220.241.50:59182.service - OpenSSH per-connection server daemon (68.220.241.50:59182). Jan 17 00:02:16.270339 sshd[5072]: Accepted publickey for core from 68.220.241.50 port 59182 ssh2: RSA SHA256:sDAGzlGx6Edt6Gaawxj4VUM2Wx61RszNQ552ADQfH20 Jan 17 00:02:16.274315 sshd[5072]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:02:16.290411 systemd-logind[2092]: New session 16 of user core. Jan 17 00:02:16.298529 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 17 00:02:16.754243 sshd[5072]: pam_unix(sshd:session): session closed for user core Jan 17 00:02:16.761680 systemd[1]: sshd@15-172.31.23.5:22-68.220.241.50:59182.service: Deactivated successfully. Jan 17 00:02:16.762076 systemd-logind[2092]: Session 16 logged out. Waiting for processes to exit. Jan 17 00:02:16.770095 systemd[1]: session-16.scope: Deactivated successfully. Jan 17 00:02:16.772426 systemd-logind[2092]: Removed session 16. Jan 17 00:02:16.852733 systemd[1]: Started sshd@16-172.31.23.5:22-68.220.241.50:59188.service - OpenSSH per-connection server daemon (68.220.241.50:59188). Jan 17 00:02:17.383575 sshd[5086]: Accepted publickey for core from 68.220.241.50 port 59188 ssh2: RSA SHA256:sDAGzlGx6Edt6Gaawxj4VUM2Wx61RszNQ552ADQfH20 Jan 17 00:02:17.386265 sshd[5086]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:02:17.394581 systemd-logind[2092]: New session 17 of user core. Jan 17 00:02:17.399273 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 17 00:02:17.957252 sshd[5086]: pam_unix(sshd:session): session closed for user core Jan 17 00:02:17.964823 systemd[1]: sshd@16-172.31.23.5:22-68.220.241.50:59188.service: Deactivated successfully. Jan 17 00:02:17.971976 systemd[1]: session-17.scope: Deactivated successfully. Jan 17 00:02:17.974094 systemd-logind[2092]: Session 17 logged out. Waiting for processes to exit. Jan 17 00:02:17.976719 systemd-logind[2092]: Removed session 17. 
Jan 17 00:02:18.037485 systemd[1]: Started sshd@17-172.31.23.5:22-68.220.241.50:59196.service - OpenSSH per-connection server daemon (68.220.241.50:59196). Jan 17 00:02:18.550550 sshd[5097]: Accepted publickey for core from 68.220.241.50 port 59196 ssh2: RSA SHA256:sDAGzlGx6Edt6Gaawxj4VUM2Wx61RszNQ552ADQfH20 Jan 17 00:02:18.553457 sshd[5097]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:02:18.562253 systemd-logind[2092]: New session 18 of user core. Jan 17 00:02:18.573553 systemd[1]: Started session-18.scope - Session 18 of User core. Jan 17 00:02:19.698080 sshd[5097]: pam_unix(sshd:session): session closed for user core Jan 17 00:02:19.706204 systemd[1]: sshd@17-172.31.23.5:22-68.220.241.50:59196.service: Deactivated successfully. Jan 17 00:02:19.713558 systemd[1]: session-18.scope: Deactivated successfully. Jan 17 00:02:19.715186 systemd-logind[2092]: Session 18 logged out. Waiting for processes to exit. Jan 17 00:02:19.718796 systemd-logind[2092]: Removed session 18. Jan 17 00:02:19.795508 systemd[1]: Started sshd@18-172.31.23.5:22-68.220.241.50:59200.service - OpenSSH per-connection server daemon (68.220.241.50:59200). Jan 17 00:02:20.333631 sshd[5116]: Accepted publickey for core from 68.220.241.50 port 59200 ssh2: RSA SHA256:sDAGzlGx6Edt6Gaawxj4VUM2Wx61RszNQ552ADQfH20 Jan 17 00:02:20.336306 sshd[5116]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:02:20.344084 systemd-logind[2092]: New session 19 of user core. Jan 17 00:02:20.352492 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 17 00:02:21.066412 sshd[5116]: pam_unix(sshd:session): session closed for user core Jan 17 00:02:21.073679 systemd-logind[2092]: Session 19 logged out. Waiting for processes to exit. Jan 17 00:02:21.074712 systemd[1]: sshd@18-172.31.23.5:22-68.220.241.50:59200.service: Deactivated successfully. Jan 17 00:02:21.082221 systemd[1]: session-19.scope: Deactivated successfully. Jan 17 00:02:21.084636 systemd-logind[2092]: Removed session 19. Jan 17 00:02:21.156485 systemd[1]: Started sshd@19-172.31.23.5:22-68.220.241.50:59216.service - OpenSSH per-connection server daemon (68.220.241.50:59216). Jan 17 00:02:21.698511 sshd[5128]: Accepted publickey for core from 68.220.241.50 port 59216 ssh2: RSA SHA256:sDAGzlGx6Edt6Gaawxj4VUM2Wx61RszNQ552ADQfH20 Jan 17 00:02:21.701209 sshd[5128]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:02:21.710106 systemd-logind[2092]: New session 20 of user core. Jan 17 00:02:21.715802 systemd[1]: Started session-20.scope - Session 20 of User core. Jan 17 00:02:22.185362 sshd[5128]: pam_unix(sshd:session): session closed for user core Jan 17 00:02:22.192801 systemd[1]: sshd@19-172.31.23.5:22-68.220.241.50:59216.service: Deactivated successfully. Jan 17 00:02:22.201711 systemd[1]: session-20.scope: Deactivated successfully. Jan 17 00:02:22.203368 systemd-logind[2092]: Session 20 logged out. Waiting for processes to exit. Jan 17 00:02:22.205666 systemd-logind[2092]: Removed session 20. Jan 17 00:02:27.265437 systemd[1]: Started sshd@20-172.31.23.5:22-68.220.241.50:42190.service - OpenSSH per-connection server daemon (68.220.241.50:42190). 
Jan 17 00:02:27.777614 sshd[5144]: Accepted publickey for core from 68.220.241.50 port 42190 ssh2: RSA SHA256:sDAGzlGx6Edt6Gaawxj4VUM2Wx61RszNQ552ADQfH20 Jan 17 00:02:27.781300 sshd[5144]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:02:27.793294 systemd-logind[2092]: New session 21 of user core. Jan 17 00:02:27.798564 systemd[1]: Started session-21.scope - Session 21 of User core. Jan 17 00:02:28.252347 sshd[5144]: pam_unix(sshd:session): session closed for user core Jan 17 00:02:28.260178 systemd-logind[2092]: Session 21 logged out. Waiting for processes to exit. Jan 17 00:02:28.261335 systemd[1]: sshd@20-172.31.23.5:22-68.220.241.50:42190.service: Deactivated successfully. Jan 17 00:02:28.267997 systemd[1]: session-21.scope: Deactivated successfully. Jan 17 00:02:28.273769 systemd-logind[2092]: Removed session 21. Jan 17 00:02:33.349464 systemd[1]: Started sshd@21-172.31.23.5:22-68.220.241.50:41116.service - OpenSSH per-connection server daemon (68.220.241.50:41116). Jan 17 00:02:33.894905 sshd[5160]: Accepted publickey for core from 68.220.241.50 port 41116 ssh2: RSA SHA256:sDAGzlGx6Edt6Gaawxj4VUM2Wx61RszNQ552ADQfH20 Jan 17 00:02:33.897520 sshd[5160]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:02:33.905328 systemd-logind[2092]: New session 22 of user core. Jan 17 00:02:33.909815 systemd[1]: Started session-22.scope - Session 22 of User core. Jan 17 00:02:34.395371 sshd[5160]: pam_unix(sshd:session): session closed for user core Jan 17 00:02:34.403427 systemd-logind[2092]: Session 22 logged out. Waiting for processes to exit. Jan 17 00:02:34.404395 systemd[1]: sshd@21-172.31.23.5:22-68.220.241.50:41116.service: Deactivated successfully. Jan 17 00:02:34.411311 systemd[1]: session-22.scope: Deactivated successfully. Jan 17 00:02:34.413492 systemd-logind[2092]: Removed session 22. Jan 17 00:02:39.488501 systemd[1]: Started sshd@22-172.31.23.5:22-68.220.241.50:41124.service - OpenSSH per-connection server daemon (68.220.241.50:41124). Jan 17 00:02:40.038846 sshd[5174]: Accepted publickey for core from 68.220.241.50 port 41124 ssh2: RSA SHA256:sDAGzlGx6Edt6Gaawxj4VUM2Wx61RszNQ552ADQfH20 Jan 17 00:02:40.042316 sshd[5174]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:02:40.049531 systemd-logind[2092]: New session 23 of user core. Jan 17 00:02:40.057613 systemd[1]: Started session-23.scope - Session 23 of User core. Jan 17 00:02:40.535354 sshd[5174]: pam_unix(sshd:session): session closed for user core Jan 17 00:02:40.543161 systemd[1]: sshd@22-172.31.23.5:22-68.220.241.50:41124.service: Deactivated successfully. Jan 17 00:02:40.548471 systemd-logind[2092]: Session 23 logged out. Waiting for processes to exit. Jan 17 00:02:40.549488 systemd[1]: session-23.scope: Deactivated successfully. Jan 17 00:02:40.556194 systemd-logind[2092]: Removed session 23. Jan 17 00:02:40.630484 systemd[1]: Started sshd@23-172.31.23.5:22-68.220.241.50:41136.service - OpenSSH per-connection server daemon (68.220.241.50:41136). Jan 17 00:02:41.173506 sshd[5188]: Accepted publickey for core from 68.220.241.50 port 41136 ssh2: RSA SHA256:sDAGzlGx6Edt6Gaawxj4VUM2Wx61RszNQ552ADQfH20 Jan 17 00:02:41.178789 sshd[5188]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:02:41.188469 systemd-logind[2092]: New session 24 of user core. Jan 17 00:02:41.196633 systemd[1]: Started session-24.scope - Session 24 of User core. 
Jan 17 00:02:44.522909 containerd[2133]: time="2026-01-17T00:02:44.522851868Z" level=info msg="StopContainer for \"085ee28c5bad6d0432d4b04b5d10d18a1203af39a9aa85da9b46ed9649439e13\" with timeout 30 (s)" Jan 17 00:02:44.526428 containerd[2133]: time="2026-01-17T00:02:44.525752544Z" level=info msg="Stop container \"085ee28c5bad6d0432d4b04b5d10d18a1203af39a9aa85da9b46ed9649439e13\" with signal terminated" Jan 17 00:02:44.591683 containerd[2133]: time="2026-01-17T00:02:44.591385464Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 17 00:02:44.608534 containerd[2133]: time="2026-01-17T00:02:44.608463756Z" level=info msg="StopContainer for \"958f784f4731568578f97d7a29b6c2ef90e0665a53786fbe26a6c95c5ca51a08\" with timeout 2 (s)" Jan 17 00:02:44.610576 containerd[2133]: time="2026-01-17T00:02:44.610314876Z" level=info msg="Stop container \"958f784f4731568578f97d7a29b6c2ef90e0665a53786fbe26a6c95c5ca51a08\" with signal terminated" Jan 17 00:02:44.638692 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-085ee28c5bad6d0432d4b04b5d10d18a1203af39a9aa85da9b46ed9649439e13-rootfs.mount: Deactivated successfully. Jan 17 00:02:44.641812 systemd-networkd[1690]: lxc_health: Link DOWN Jan 17 00:02:44.641838 systemd-networkd[1690]: lxc_health: Lost carrier Jan 17 00:02:44.670658 containerd[2133]: time="2026-01-17T00:02:44.670557108Z" level=info msg="shim disconnected" id=085ee28c5bad6d0432d4b04b5d10d18a1203af39a9aa85da9b46ed9649439e13 namespace=k8s.io Jan 17 00:02:44.670887 containerd[2133]: time="2026-01-17T00:02:44.670654752Z" level=warning msg="cleaning up after shim disconnected" id=085ee28c5bad6d0432d4b04b5d10d18a1203af39a9aa85da9b46ed9649439e13 namespace=k8s.io Jan 17 00:02:44.670887 containerd[2133]: time="2026-01-17T00:02:44.670681788Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 00:02:44.710475 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-958f784f4731568578f97d7a29b6c2ef90e0665a53786fbe26a6c95c5ca51a08-rootfs.mount: Deactivated successfully. 
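Both StopContainer entries above carry a timeout (30 s for one container, 2 s for the other) followed by "with signal terminated". That is the usual CRI escalation: deliver SIGTERM, wait up to the timeout for the process to exit, then fall back to SIGKILL. A rough stand-in for that pattern on plain PIDs, not containerd's actual implementation:

import os, signal, time

def stop_with_timeout(pid, timeout_s):
    # Not containerd's code -- just the SIGTERM-then-SIGKILL escalation
    # the "with timeout N (s)" / "signal terminated" pair above implies.
    os.kill(pid, signal.SIGTERM)          # polite request first
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        try:
            os.kill(pid, 0)               # signal 0 = existence probe
        except ProcessLookupError:
            return "terminated"           # exited within the timeout
        time.sleep(0.1)
    os.kill(pid, signal.SIGKILL)          # deadline passed: force it
    return "killed"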
Jan 17 00:02:44.719149 containerd[2133]: time="2026-01-17T00:02:44.718854864Z" level=info msg="shim disconnected" id=958f784f4731568578f97d7a29b6c2ef90e0665a53786fbe26a6c95c5ca51a08 namespace=k8s.io Jan 17 00:02:44.719386 containerd[2133]: time="2026-01-17T00:02:44.719257572Z" level=warning msg="cleaning up after shim disconnected" id=958f784f4731568578f97d7a29b6c2ef90e0665a53786fbe26a6c95c5ca51a08 namespace=k8s.io Jan 17 00:02:44.719386 containerd[2133]: time="2026-01-17T00:02:44.719289108Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 00:02:44.727043 containerd[2133]: time="2026-01-17T00:02:44.726245893Z" level=info msg="StopContainer for \"085ee28c5bad6d0432d4b04b5d10d18a1203af39a9aa85da9b46ed9649439e13\" returns successfully" Jan 17 00:02:44.727651 containerd[2133]: time="2026-01-17T00:02:44.727592005Z" level=info msg="StopPodSandbox for \"7cc4e5079a1b71918bb37b4700b1b1af161b105103e80bbf0f47a38c688ca640\"" Jan 17 00:02:44.727758 containerd[2133]: time="2026-01-17T00:02:44.727666285Z" level=info msg="Container to stop \"085ee28c5bad6d0432d4b04b5d10d18a1203af39a9aa85da9b46ed9649439e13\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 17 00:02:44.732654 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-7cc4e5079a1b71918bb37b4700b1b1af161b105103e80bbf0f47a38c688ca640-shm.mount: Deactivated successfully. Jan 17 00:02:44.764226 containerd[2133]: time="2026-01-17T00:02:44.763973017Z" level=info msg="StopContainer for \"958f784f4731568578f97d7a29b6c2ef90e0665a53786fbe26a6c95c5ca51a08\" returns successfully" Jan 17 00:02:44.765047 containerd[2133]: time="2026-01-17T00:02:44.764842777Z" level=info msg="StopPodSandbox for \"9636fadbffa11e7e644aa84ee69d7438b3be34776b13ec6c4d59f515fe3e1b28\"" Jan 17 00:02:44.765047 containerd[2133]: time="2026-01-17T00:02:44.764911861Z" level=info msg="Container to stop \"806ec97a6df03b151c2af6a151d7b943ab8c13f8539531a9d4101c2a91eeb7c0\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 17 00:02:44.765047 containerd[2133]: time="2026-01-17T00:02:44.764939065Z" level=info msg="Container to stop \"0a1bd3f3e36d19d7465977ec07694f62a554f03cb4f9bc94b33b79a197e22a33\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 17 00:02:44.765047 containerd[2133]: time="2026-01-17T00:02:44.764964181Z" level=info msg="Container to stop \"958f784f4731568578f97d7a29b6c2ef90e0665a53786fbe26a6c95c5ca51a08\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 17 00:02:44.765047 containerd[2133]: time="2026-01-17T00:02:44.764989885Z" level=info msg="Container to stop \"f07ca377f2ef76fcd3b3e1b34add9774bec9666157a56feca52a327d2d4e6ebe\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 17 00:02:44.765832 containerd[2133]: time="2026-01-17T00:02:44.765051265Z" level=info msg="Container to stop \"94a41c2fdb42a3c5b4a7588dc1d420b05c26b04bd4462a39c2da08cfc70c9fda\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 17 00:02:44.822748 containerd[2133]: time="2026-01-17T00:02:44.822353569Z" level=info msg="shim disconnected" id=7cc4e5079a1b71918bb37b4700b1b1af161b105103e80bbf0f47a38c688ca640 namespace=k8s.io Jan 17 00:02:44.822748 containerd[2133]: time="2026-01-17T00:02:44.822425833Z" level=warning msg="cleaning up after shim disconnected" id=7cc4e5079a1b71918bb37b4700b1b1af161b105103e80bbf0f47a38c688ca640 namespace=k8s.io Jan 17 00:02:44.822748 containerd[2133]: time="2026-01-17T00:02:44.822446041Z" 
level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 00:02:44.848534 containerd[2133]: time="2026-01-17T00:02:44.848351929Z" level=info msg="shim disconnected" id=9636fadbffa11e7e644aa84ee69d7438b3be34776b13ec6c4d59f515fe3e1b28 namespace=k8s.io Jan 17 00:02:44.848534 containerd[2133]: time="2026-01-17T00:02:44.848430217Z" level=warning msg="cleaning up after shim disconnected" id=9636fadbffa11e7e644aa84ee69d7438b3be34776b13ec6c4d59f515fe3e1b28 namespace=k8s.io Jan 17 00:02:44.848534 containerd[2133]: time="2026-01-17T00:02:44.848453593Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 00:02:44.867706 containerd[2133]: time="2026-01-17T00:02:44.867330445Z" level=info msg="TearDown network for sandbox \"7cc4e5079a1b71918bb37b4700b1b1af161b105103e80bbf0f47a38c688ca640\" successfully" Jan 17 00:02:44.867706 containerd[2133]: time="2026-01-17T00:02:44.867406993Z" level=info msg="StopPodSandbox for \"7cc4e5079a1b71918bb37b4700b1b1af161b105103e80bbf0f47a38c688ca640\" returns successfully" Jan 17 00:02:44.886054 containerd[2133]: time="2026-01-17T00:02:44.885719881Z" level=warning msg="cleanup warnings time=\"2026-01-17T00:02:44Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jan 17 00:02:44.890794 containerd[2133]: time="2026-01-17T00:02:44.890714977Z" level=info msg="TearDown network for sandbox \"9636fadbffa11e7e644aa84ee69d7438b3be34776b13ec6c4d59f515fe3e1b28\" successfully" Jan 17 00:02:44.890794 containerd[2133]: time="2026-01-17T00:02:44.890770705Z" level=info msg="StopPodSandbox for \"9636fadbffa11e7e644aa84ee69d7438b3be34776b13ec6c4d59f515fe3e1b28\" returns successfully" Jan 17 00:02:45.066541 kubelet[3579]: I0117 00:02:45.065449 3579 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b307e59c-a025-4721-9aa6-e880c508ca8b-xtables-lock\") pod \"b307e59c-a025-4721-9aa6-e880c508ca8b\" (UID: \"b307e59c-a025-4721-9aa6-e880c508ca8b\") " Jan 17 00:02:45.066541 kubelet[3579]: I0117 00:02:45.065531 3579 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b307e59c-a025-4721-9aa6-e880c508ca8b-etc-cni-netd\") pod \"b307e59c-a025-4721-9aa6-e880c508ca8b\" (UID: \"b307e59c-a025-4721-9aa6-e880c508ca8b\") " Jan 17 00:02:45.066541 kubelet[3579]: I0117 00:02:45.065585 3579 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b307e59c-a025-4721-9aa6-e880c508ca8b-cilium-run\") pod \"b307e59c-a025-4721-9aa6-e880c508ca8b\" (UID: \"b307e59c-a025-4721-9aa6-e880c508ca8b\") " Jan 17 00:02:45.066541 kubelet[3579]: I0117 00:02:45.065628 3579 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b307e59c-a025-4721-9aa6-e880c508ca8b-clustermesh-secrets\") pod \"b307e59c-a025-4721-9aa6-e880c508ca8b\" (UID: \"b307e59c-a025-4721-9aa6-e880c508ca8b\") " Jan 17 00:02:45.066541 kubelet[3579]: I0117 00:02:45.065580 3579 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b307e59c-a025-4721-9aa6-e880c508ca8b-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "b307e59c-a025-4721-9aa6-e880c508ca8b" (UID: "b307e59c-a025-4721-9aa6-e880c508ca8b"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 17 00:02:45.066541 kubelet[3579]: I0117 00:02:45.065676 3579 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b307e59c-a025-4721-9aa6-e880c508ca8b-hubble-tls\") pod \"b307e59c-a025-4721-9aa6-e880c508ca8b\" (UID: \"b307e59c-a025-4721-9aa6-e880c508ca8b\") " Jan 17 00:02:45.067478 kubelet[3579]: I0117 00:02:45.065617 3579 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b307e59c-a025-4721-9aa6-e880c508ca8b-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "b307e59c-a025-4721-9aa6-e880c508ca8b" (UID: "b307e59c-a025-4721-9aa6-e880c508ca8b"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 17 00:02:45.067478 kubelet[3579]: I0117 00:02:45.065709 3579 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b307e59c-a025-4721-9aa6-e880c508ca8b-bpf-maps\") pod \"b307e59c-a025-4721-9aa6-e880c508ca8b\" (UID: \"b307e59c-a025-4721-9aa6-e880c508ca8b\") " Jan 17 00:02:45.067478 kubelet[3579]: I0117 00:02:45.065744 3579 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b307e59c-a025-4721-9aa6-e880c508ca8b-hostproc\") pod \"b307e59c-a025-4721-9aa6-e880c508ca8b\" (UID: \"b307e59c-a025-4721-9aa6-e880c508ca8b\") " Jan 17 00:02:45.067478 kubelet[3579]: I0117 00:02:45.065782 3579 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-65z2l\" (UniqueName: \"kubernetes.io/projected/6de5e380-98a4-42c5-8b59-6b5d3a3c5436-kube-api-access-65z2l\") pod \"6de5e380-98a4-42c5-8b59-6b5d3a3c5436\" (UID: \"6de5e380-98a4-42c5-8b59-6b5d3a3c5436\") " Jan 17 00:02:45.067478 kubelet[3579]: I0117 00:02:45.065821 3579 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b307e59c-a025-4721-9aa6-e880c508ca8b-host-proc-sys-kernel\") pod \"b307e59c-a025-4721-9aa6-e880c508ca8b\" (UID: \"b307e59c-a025-4721-9aa6-e880c508ca8b\") " Jan 17 00:02:45.067478 kubelet[3579]: I0117 00:02:45.065854 3579 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b307e59c-a025-4721-9aa6-e880c508ca8b-lib-modules\") pod \"b307e59c-a025-4721-9aa6-e880c508ca8b\" (UID: \"b307e59c-a025-4721-9aa6-e880c508ca8b\") " Jan 17 00:02:45.068126 kubelet[3579]: I0117 00:02:45.065886 3579 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b307e59c-a025-4721-9aa6-e880c508ca8b-cilium-cgroup\") pod \"b307e59c-a025-4721-9aa6-e880c508ca8b\" (UID: \"b307e59c-a025-4721-9aa6-e880c508ca8b\") " Jan 17 00:02:45.068126 kubelet[3579]: I0117 00:02:45.065922 3579 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b307e59c-a025-4721-9aa6-e880c508ca8b-cni-path\") pod \"b307e59c-a025-4721-9aa6-e880c508ca8b\" (UID: \"b307e59c-a025-4721-9aa6-e880c508ca8b\") " Jan 17 00:02:45.068126 kubelet[3579]: I0117 00:02:45.065956 3579 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b307e59c-a025-4721-9aa6-e880c508ca8b-host-proc-sys-net\") pod 
\"b307e59c-a025-4721-9aa6-e880c508ca8b\" (UID: \"b307e59c-a025-4721-9aa6-e880c508ca8b\") " Jan 17 00:02:45.068126 kubelet[3579]: I0117 00:02:45.065996 3579 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b307e59c-a025-4721-9aa6-e880c508ca8b-cilium-config-path\") pod \"b307e59c-a025-4721-9aa6-e880c508ca8b\" (UID: \"b307e59c-a025-4721-9aa6-e880c508ca8b\") " Jan 17 00:02:45.068126 kubelet[3579]: I0117 00:02:45.066069 3579 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r78sx\" (UniqueName: \"kubernetes.io/projected/b307e59c-a025-4721-9aa6-e880c508ca8b-kube-api-access-r78sx\") pod \"b307e59c-a025-4721-9aa6-e880c508ca8b\" (UID: \"b307e59c-a025-4721-9aa6-e880c508ca8b\") " Jan 17 00:02:45.068126 kubelet[3579]: I0117 00:02:45.066108 3579 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6de5e380-98a4-42c5-8b59-6b5d3a3c5436-cilium-config-path\") pod \"6de5e380-98a4-42c5-8b59-6b5d3a3c5436\" (UID: \"6de5e380-98a4-42c5-8b59-6b5d3a3c5436\") " Jan 17 00:02:45.068458 kubelet[3579]: I0117 00:02:45.066226 3579 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b307e59c-a025-4721-9aa6-e880c508ca8b-xtables-lock\") on node \"ip-172-31-23-5\" DevicePath \"\"" Jan 17 00:02:45.068458 kubelet[3579]: I0117 00:02:45.066252 3579 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b307e59c-a025-4721-9aa6-e880c508ca8b-etc-cni-netd\") on node \"ip-172-31-23-5\" DevicePath \"\"" Jan 17 00:02:45.071345 kubelet[3579]: I0117 00:02:45.065642 3579 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b307e59c-a025-4721-9aa6-e880c508ca8b-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "b307e59c-a025-4721-9aa6-e880c508ca8b" (UID: "b307e59c-a025-4721-9aa6-e880c508ca8b"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 17 00:02:45.071345 kubelet[3579]: I0117 00:02:45.068653 3579 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b307e59c-a025-4721-9aa6-e880c508ca8b-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "b307e59c-a025-4721-9aa6-e880c508ca8b" (UID: "b307e59c-a025-4721-9aa6-e880c508ca8b"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 17 00:02:45.071345 kubelet[3579]: I0117 00:02:45.068727 3579 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b307e59c-a025-4721-9aa6-e880c508ca8b-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "b307e59c-a025-4721-9aa6-e880c508ca8b" (UID: "b307e59c-a025-4721-9aa6-e880c508ca8b"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 17 00:02:45.071345 kubelet[3579]: I0117 00:02:45.068755 3579 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b307e59c-a025-4721-9aa6-e880c508ca8b-cni-path" (OuterVolumeSpecName: "cni-path") pod "b307e59c-a025-4721-9aa6-e880c508ca8b" (UID: "b307e59c-a025-4721-9aa6-e880c508ca8b"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 17 00:02:45.071345 kubelet[3579]: I0117 00:02:45.068812 3579 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b307e59c-a025-4721-9aa6-e880c508ca8b-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "b307e59c-a025-4721-9aa6-e880c508ca8b" (UID: "b307e59c-a025-4721-9aa6-e880c508ca8b"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 17 00:02:45.071713 kubelet[3579]: I0117 00:02:45.067778 3579 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b307e59c-a025-4721-9aa6-e880c508ca8b-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "b307e59c-a025-4721-9aa6-e880c508ca8b" (UID: "b307e59c-a025-4721-9aa6-e880c508ca8b"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 17 00:02:45.071713 kubelet[3579]: I0117 00:02:45.070694 3579 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b307e59c-a025-4721-9aa6-e880c508ca8b-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "b307e59c-a025-4721-9aa6-e880c508ca8b" (UID: "b307e59c-a025-4721-9aa6-e880c508ca8b"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 17 00:02:45.071713 kubelet[3579]: I0117 00:02:45.070848 3579 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b307e59c-a025-4721-9aa6-e880c508ca8b-hostproc" (OuterVolumeSpecName: "hostproc") pod "b307e59c-a025-4721-9aa6-e880c508ca8b" (UID: "b307e59c-a025-4721-9aa6-e880c508ca8b"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 17 00:02:45.078576 kubelet[3579]: I0117 00:02:45.078488 3579 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b307e59c-a025-4721-9aa6-e880c508ca8b-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "b307e59c-a025-4721-9aa6-e880c508ca8b" (UID: "b307e59c-a025-4721-9aa6-e880c508ca8b"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 17 00:02:45.079084 kubelet[3579]: I0117 00:02:45.079048 3579 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6de5e380-98a4-42c5-8b59-6b5d3a3c5436-kube-api-access-65z2l" (OuterVolumeSpecName: "kube-api-access-65z2l") pod "6de5e380-98a4-42c5-8b59-6b5d3a3c5436" (UID: "6de5e380-98a4-42c5-8b59-6b5d3a3c5436"). InnerVolumeSpecName "kube-api-access-65z2l". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 17 00:02:45.082571 kubelet[3579]: I0117 00:02:45.082487 3579 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b307e59c-a025-4721-9aa6-e880c508ca8b-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "b307e59c-a025-4721-9aa6-e880c508ca8b" (UID: "b307e59c-a025-4721-9aa6-e880c508ca8b"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 17 00:02:45.087640 kubelet[3579]: I0117 00:02:45.087144 3579 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b307e59c-a025-4721-9aa6-e880c508ca8b-kube-api-access-r78sx" (OuterVolumeSpecName: "kube-api-access-r78sx") pod "b307e59c-a025-4721-9aa6-e880c508ca8b" (UID: "b307e59c-a025-4721-9aa6-e880c508ca8b"). 
InnerVolumeSpecName "kube-api-access-r78sx". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 17 00:02:45.088593 kubelet[3579]: I0117 00:02:45.088445 3579 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6de5e380-98a4-42c5-8b59-6b5d3a3c5436-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "6de5e380-98a4-42c5-8b59-6b5d3a3c5436" (UID: "6de5e380-98a4-42c5-8b59-6b5d3a3c5436"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 17 00:02:45.090193 kubelet[3579]: I0117 00:02:45.090062 3579 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b307e59c-a025-4721-9aa6-e880c508ca8b-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "b307e59c-a025-4721-9aa6-e880c508ca8b" (UID: "b307e59c-a025-4721-9aa6-e880c508ca8b"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 17 00:02:45.167175 kubelet[3579]: I0117 00:02:45.167115 3579 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b307e59c-a025-4721-9aa6-e880c508ca8b-cni-path\") on node \"ip-172-31-23-5\" DevicePath \"\"" Jan 17 00:02:45.167175 kubelet[3579]: I0117 00:02:45.167168 3579 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b307e59c-a025-4721-9aa6-e880c508ca8b-host-proc-sys-net\") on node \"ip-172-31-23-5\" DevicePath \"\"" Jan 17 00:02:45.167385 kubelet[3579]: I0117 00:02:45.167196 3579 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-r78sx\" (UniqueName: \"kubernetes.io/projected/b307e59c-a025-4721-9aa6-e880c508ca8b-kube-api-access-r78sx\") on node \"ip-172-31-23-5\" DevicePath \"\"" Jan 17 00:02:45.167385 kubelet[3579]: I0117 00:02:45.167226 3579 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6de5e380-98a4-42c5-8b59-6b5d3a3c5436-cilium-config-path\") on node \"ip-172-31-23-5\" DevicePath \"\"" Jan 17 00:02:45.167385 kubelet[3579]: I0117 00:02:45.167248 3579 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b307e59c-a025-4721-9aa6-e880c508ca8b-cilium-config-path\") on node \"ip-172-31-23-5\" DevicePath \"\"" Jan 17 00:02:45.167385 kubelet[3579]: I0117 00:02:45.167268 3579 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b307e59c-a025-4721-9aa6-e880c508ca8b-clustermesh-secrets\") on node \"ip-172-31-23-5\" DevicePath \"\"" Jan 17 00:02:45.167385 kubelet[3579]: I0117 00:02:45.167292 3579 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b307e59c-a025-4721-9aa6-e880c508ca8b-cilium-run\") on node \"ip-172-31-23-5\" DevicePath \"\"" Jan 17 00:02:45.167385 kubelet[3579]: I0117 00:02:45.167316 3579 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b307e59c-a025-4721-9aa6-e880c508ca8b-hubble-tls\") on node \"ip-172-31-23-5\" DevicePath \"\"" Jan 17 00:02:45.167385 kubelet[3579]: I0117 00:02:45.167336 3579 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b307e59c-a025-4721-9aa6-e880c508ca8b-bpf-maps\") on node \"ip-172-31-23-5\" DevicePath \"\"" Jan 17 00:02:45.167385 kubelet[3579]: I0117 00:02:45.167356 3579 
reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b307e59c-a025-4721-9aa6-e880c508ca8b-hostproc\") on node \"ip-172-31-23-5\" DevicePath \"\"" Jan 17 00:02:45.167795 kubelet[3579]: I0117 00:02:45.167377 3579 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-65z2l\" (UniqueName: \"kubernetes.io/projected/6de5e380-98a4-42c5-8b59-6b5d3a3c5436-kube-api-access-65z2l\") on node \"ip-172-31-23-5\" DevicePath \"\"" Jan 17 00:02:45.167795 kubelet[3579]: I0117 00:02:45.167397 3579 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b307e59c-a025-4721-9aa6-e880c508ca8b-host-proc-sys-kernel\") on node \"ip-172-31-23-5\" DevicePath \"\"" Jan 17 00:02:45.167795 kubelet[3579]: I0117 00:02:45.167417 3579 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b307e59c-a025-4721-9aa6-e880c508ca8b-lib-modules\") on node \"ip-172-31-23-5\" DevicePath \"\"" Jan 17 00:02:45.167795 kubelet[3579]: I0117 00:02:45.167455 3579 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b307e59c-a025-4721-9aa6-e880c508ca8b-cilium-cgroup\") on node \"ip-172-31-23-5\" DevicePath \"\"" Jan 17 00:02:45.542714 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7cc4e5079a1b71918bb37b4700b1b1af161b105103e80bbf0f47a38c688ca640-rootfs.mount: Deactivated successfully. Jan 17 00:02:45.543055 systemd[1]: var-lib-kubelet-pods-6de5e380\x2d98a4\x2d42c5\x2d8b59\x2d6b5d3a3c5436-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d65z2l.mount: Deactivated successfully. Jan 17 00:02:45.543288 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9636fadbffa11e7e644aa84ee69d7438b3be34776b13ec6c4d59f515fe3e1b28-rootfs.mount: Deactivated successfully. Jan 17 00:02:45.543548 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-9636fadbffa11e7e644aa84ee69d7438b3be34776b13ec6c4d59f515fe3e1b28-shm.mount: Deactivated successfully. Jan 17 00:02:45.543787 systemd[1]: var-lib-kubelet-pods-b307e59c\x2da025\x2d4721\x2d9aa6\x2de880c508ca8b-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dr78sx.mount: Deactivated successfully. Jan 17 00:02:45.544919 systemd[1]: var-lib-kubelet-pods-b307e59c\x2da025\x2d4721\x2d9aa6\x2de880c508ca8b-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jan 17 00:02:45.545626 systemd[1]: var-lib-kubelet-pods-b307e59c\x2da025\x2d4721\x2d9aa6\x2de880c508ca8b-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. 
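The mount units being torn down above show systemd's unit-name escaping: '/' becomes '-', and characters outside [A-Za-z0-9:_.] are hex-escaped, which is why the dashes inside pod UIDs surface as \x2d and '~' as \x7e. A simplified sketch of that encoding (the real rules also special-case a leading dot):

import string

ALLOWED = set(string.ascii_letters + string.digits + ":_.")

def systemd_escape_path(path):
    # Simplified version of systemd's path escaping as seen in the
    # mount-unit names above: '/' -> '-', everything outside
    # [A-Za-z0-9:_.] -> \xNN byte escapes.
    out = []
    for ch in path.strip("/"):
        if ch == "/":
            out.append("-")
        elif ch in ALLOWED:
            out.append(ch)
        else:
            out.extend(f"\\x{b:02x}" for b in ch.encode())
    return "".join(out)

print(systemd_escape_path(
    "/var/lib/kubelet/pods/b307e59c-a025-4721-9aa6-e880c508ca8b"
    "/volumes/kubernetes.io~projected/hubble-tls") + ".mount")
# -> var-lib-kubelet-pods-b307e59c\x2da025...-hubble\x2dtls.mount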
Jan 17 00:02:45.698559 kubelet[3579]: I0117 00:02:45.698513 3579 scope.go:117] "RemoveContainer" containerID="958f784f4731568578f97d7a29b6c2ef90e0665a53786fbe26a6c95c5ca51a08" Jan 17 00:02:45.705185 containerd[2133]: time="2026-01-17T00:02:45.705063397Z" level=info msg="RemoveContainer for \"958f784f4731568578f97d7a29b6c2ef90e0665a53786fbe26a6c95c5ca51a08\"" Jan 17 00:02:45.723279 containerd[2133]: time="2026-01-17T00:02:45.721068937Z" level=info msg="RemoveContainer for \"958f784f4731568578f97d7a29b6c2ef90e0665a53786fbe26a6c95c5ca51a08\" returns successfully" Jan 17 00:02:45.723866 kubelet[3579]: I0117 00:02:45.723629 3579 scope.go:117] "RemoveContainer" containerID="94a41c2fdb42a3c5b4a7588dc1d420b05c26b04bd4462a39c2da08cfc70c9fda" Jan 17 00:02:45.732334 containerd[2133]: time="2026-01-17T00:02:45.730519381Z" level=info msg="RemoveContainer for \"94a41c2fdb42a3c5b4a7588dc1d420b05c26b04bd4462a39c2da08cfc70c9fda\"" Jan 17 00:02:45.744333 containerd[2133]: time="2026-01-17T00:02:45.744274610Z" level=info msg="RemoveContainer for \"94a41c2fdb42a3c5b4a7588dc1d420b05c26b04bd4462a39c2da08cfc70c9fda\" returns successfully" Jan 17 00:02:45.746685 kubelet[3579]: I0117 00:02:45.745876 3579 scope.go:117] "RemoveContainer" containerID="0a1bd3f3e36d19d7465977ec07694f62a554f03cb4f9bc94b33b79a197e22a33" Jan 17 00:02:45.752322 containerd[2133]: time="2026-01-17T00:02:45.752254442Z" level=info msg="RemoveContainer for \"0a1bd3f3e36d19d7465977ec07694f62a554f03cb4f9bc94b33b79a197e22a33\"" Jan 17 00:02:45.761194 containerd[2133]: time="2026-01-17T00:02:45.759934658Z" level=info msg="RemoveContainer for \"0a1bd3f3e36d19d7465977ec07694f62a554f03cb4f9bc94b33b79a197e22a33\" returns successfully" Jan 17 00:02:45.761821 kubelet[3579]: I0117 00:02:45.761770 3579 scope.go:117] "RemoveContainer" containerID="806ec97a6df03b151c2af6a151d7b943ab8c13f8539531a9d4101c2a91eeb7c0" Jan 17 00:02:45.764313 containerd[2133]: time="2026-01-17T00:02:45.764254994Z" level=info msg="RemoveContainer for \"806ec97a6df03b151c2af6a151d7b943ab8c13f8539531a9d4101c2a91eeb7c0\"" Jan 17 00:02:45.770445 containerd[2133]: time="2026-01-17T00:02:45.770284238Z" level=info msg="RemoveContainer for \"806ec97a6df03b151c2af6a151d7b943ab8c13f8539531a9d4101c2a91eeb7c0\" returns successfully" Jan 17 00:02:45.770695 kubelet[3579]: I0117 00:02:45.770656 3579 scope.go:117] "RemoveContainer" containerID="f07ca377f2ef76fcd3b3e1b34add9774bec9666157a56feca52a327d2d4e6ebe" Jan 17 00:02:45.773580 containerd[2133]: time="2026-01-17T00:02:45.773528294Z" level=info msg="RemoveContainer for \"f07ca377f2ef76fcd3b3e1b34add9774bec9666157a56feca52a327d2d4e6ebe\"" Jan 17 00:02:45.779717 containerd[2133]: time="2026-01-17T00:02:45.779616746Z" level=info msg="RemoveContainer for \"f07ca377f2ef76fcd3b3e1b34add9774bec9666157a56feca52a327d2d4e6ebe\" returns successfully" Jan 17 00:02:45.779956 kubelet[3579]: I0117 00:02:45.779902 3579 scope.go:117] "RemoveContainer" containerID="958f784f4731568578f97d7a29b6c2ef90e0665a53786fbe26a6c95c5ca51a08" Jan 17 00:02:45.780394 containerd[2133]: time="2026-01-17T00:02:45.780323198Z" level=error msg="ContainerStatus for \"958f784f4731568578f97d7a29b6c2ef90e0665a53786fbe26a6c95c5ca51a08\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"958f784f4731568578f97d7a29b6c2ef90e0665a53786fbe26a6c95c5ca51a08\": not found" Jan 17 00:02:45.780752 kubelet[3579]: E0117 00:02:45.780716 3579 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when 
try to find container \"958f784f4731568578f97d7a29b6c2ef90e0665a53786fbe26a6c95c5ca51a08\": not found" containerID="958f784f4731568578f97d7a29b6c2ef90e0665a53786fbe26a6c95c5ca51a08" Jan 17 00:02:45.780993 kubelet[3579]: I0117 00:02:45.780845 3579 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"958f784f4731568578f97d7a29b6c2ef90e0665a53786fbe26a6c95c5ca51a08"} err="failed to get container status \"958f784f4731568578f97d7a29b6c2ef90e0665a53786fbe26a6c95c5ca51a08\": rpc error: code = NotFound desc = an error occurred when try to find container \"958f784f4731568578f97d7a29b6c2ef90e0665a53786fbe26a6c95c5ca51a08\": not found" Jan 17 00:02:45.781112 kubelet[3579]: I0117 00:02:45.781052 3579 scope.go:117] "RemoveContainer" containerID="94a41c2fdb42a3c5b4a7588dc1d420b05c26b04bd4462a39c2da08cfc70c9fda" Jan 17 00:02:45.781708 containerd[2133]: time="2026-01-17T00:02:45.781582658Z" level=error msg="ContainerStatus for \"94a41c2fdb42a3c5b4a7588dc1d420b05c26b04bd4462a39c2da08cfc70c9fda\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"94a41c2fdb42a3c5b4a7588dc1d420b05c26b04bd4462a39c2da08cfc70c9fda\": not found" Jan 17 00:02:45.781905 kubelet[3579]: E0117 00:02:45.781862 3579 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"94a41c2fdb42a3c5b4a7588dc1d420b05c26b04bd4462a39c2da08cfc70c9fda\": not found" containerID="94a41c2fdb42a3c5b4a7588dc1d420b05c26b04bd4462a39c2da08cfc70c9fda" Jan 17 00:02:45.781983 kubelet[3579]: I0117 00:02:45.781915 3579 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"94a41c2fdb42a3c5b4a7588dc1d420b05c26b04bd4462a39c2da08cfc70c9fda"} err="failed to get container status \"94a41c2fdb42a3c5b4a7588dc1d420b05c26b04bd4462a39c2da08cfc70c9fda\": rpc error: code = NotFound desc = an error occurred when try to find container \"94a41c2fdb42a3c5b4a7588dc1d420b05c26b04bd4462a39c2da08cfc70c9fda\": not found" Jan 17 00:02:45.781983 kubelet[3579]: I0117 00:02:45.781971 3579 scope.go:117] "RemoveContainer" containerID="0a1bd3f3e36d19d7465977ec07694f62a554f03cb4f9bc94b33b79a197e22a33" Jan 17 00:02:45.782475 containerd[2133]: time="2026-01-17T00:02:45.782420654Z" level=error msg="ContainerStatus for \"0a1bd3f3e36d19d7465977ec07694f62a554f03cb4f9bc94b33b79a197e22a33\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"0a1bd3f3e36d19d7465977ec07694f62a554f03cb4f9bc94b33b79a197e22a33\": not found" Jan 17 00:02:45.782668 kubelet[3579]: E0117 00:02:45.782630 3579 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"0a1bd3f3e36d19d7465977ec07694f62a554f03cb4f9bc94b33b79a197e22a33\": not found" containerID="0a1bd3f3e36d19d7465977ec07694f62a554f03cb4f9bc94b33b79a197e22a33" Jan 17 00:02:45.782732 kubelet[3579]: I0117 00:02:45.782677 3579 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"0a1bd3f3e36d19d7465977ec07694f62a554f03cb4f9bc94b33b79a197e22a33"} err="failed to get container status \"0a1bd3f3e36d19d7465977ec07694f62a554f03cb4f9bc94b33b79a197e22a33\": rpc error: code = NotFound desc = an error occurred when try to find container \"0a1bd3f3e36d19d7465977ec07694f62a554f03cb4f9bc94b33b79a197e22a33\": not found" Jan 17 00:02:45.782732 kubelet[3579]: I0117 00:02:45.782708 3579 
scope.go:117] "RemoveContainer" containerID="806ec97a6df03b151c2af6a151d7b943ab8c13f8539531a9d4101c2a91eeb7c0" Jan 17 00:02:45.783019 containerd[2133]: time="2026-01-17T00:02:45.782960042Z" level=error msg="ContainerStatus for \"806ec97a6df03b151c2af6a151d7b943ab8c13f8539531a9d4101c2a91eeb7c0\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"806ec97a6df03b151c2af6a151d7b943ab8c13f8539531a9d4101c2a91eeb7c0\": not found" Jan 17 00:02:45.783314 kubelet[3579]: E0117 00:02:45.783276 3579 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"806ec97a6df03b151c2af6a151d7b943ab8c13f8539531a9d4101c2a91eeb7c0\": not found" containerID="806ec97a6df03b151c2af6a151d7b943ab8c13f8539531a9d4101c2a91eeb7c0" Jan 17 00:02:45.783386 kubelet[3579]: I0117 00:02:45.783339 3579 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"806ec97a6df03b151c2af6a151d7b943ab8c13f8539531a9d4101c2a91eeb7c0"} err="failed to get container status \"806ec97a6df03b151c2af6a151d7b943ab8c13f8539531a9d4101c2a91eeb7c0\": rpc error: code = NotFound desc = an error occurred when try to find container \"806ec97a6df03b151c2af6a151d7b943ab8c13f8539531a9d4101c2a91eeb7c0\": not found" Jan 17 00:02:45.783386 kubelet[3579]: I0117 00:02:45.783372 3579 scope.go:117] "RemoveContainer" containerID="f07ca377f2ef76fcd3b3e1b34add9774bec9666157a56feca52a327d2d4e6ebe" Jan 17 00:02:45.783974 containerd[2133]: time="2026-01-17T00:02:45.783919982Z" level=error msg="ContainerStatus for \"f07ca377f2ef76fcd3b3e1b34add9774bec9666157a56feca52a327d2d4e6ebe\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f07ca377f2ef76fcd3b3e1b34add9774bec9666157a56feca52a327d2d4e6ebe\": not found" Jan 17 00:02:45.784354 kubelet[3579]: E0117 00:02:45.784243 3579 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f07ca377f2ef76fcd3b3e1b34add9774bec9666157a56feca52a327d2d4e6ebe\": not found" containerID="f07ca377f2ef76fcd3b3e1b34add9774bec9666157a56feca52a327d2d4e6ebe" Jan 17 00:02:45.784447 kubelet[3579]: I0117 00:02:45.784362 3579 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f07ca377f2ef76fcd3b3e1b34add9774bec9666157a56feca52a327d2d4e6ebe"} err="failed to get container status \"f07ca377f2ef76fcd3b3e1b34add9774bec9666157a56feca52a327d2d4e6ebe\": rpc error: code = NotFound desc = an error occurred when try to find container \"f07ca377f2ef76fcd3b3e1b34add9774bec9666157a56feca52a327d2d4e6ebe\": not found" Jan 17 00:02:45.784447 kubelet[3579]: I0117 00:02:45.784418 3579 scope.go:117] "RemoveContainer" containerID="085ee28c5bad6d0432d4b04b5d10d18a1203af39a9aa85da9b46ed9649439e13" Jan 17 00:02:45.786588 containerd[2133]: time="2026-01-17T00:02:45.786543782Z" level=info msg="RemoveContainer for \"085ee28c5bad6d0432d4b04b5d10d18a1203af39a9aa85da9b46ed9649439e13\"" Jan 17 00:02:45.792770 containerd[2133]: time="2026-01-17T00:02:45.792612410Z" level=info msg="RemoveContainer for \"085ee28c5bad6d0432d4b04b5d10d18a1203af39a9aa85da9b46ed9649439e13\" returns successfully" Jan 17 00:02:45.793127 kubelet[3579]: I0117 00:02:45.792989 3579 scope.go:117] "RemoveContainer" containerID="085ee28c5bad6d0432d4b04b5d10d18a1203af39a9aa85da9b46ed9649439e13" Jan 17 00:02:45.795051 containerd[2133]: 
time="2026-01-17T00:02:45.794954486Z" level=error msg="ContainerStatus for \"085ee28c5bad6d0432d4b04b5d10d18a1203af39a9aa85da9b46ed9649439e13\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"085ee28c5bad6d0432d4b04b5d10d18a1203af39a9aa85da9b46ed9649439e13\": not found" Jan 17 00:02:45.795648 kubelet[3579]: E0117 00:02:45.795486 3579 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"085ee28c5bad6d0432d4b04b5d10d18a1203af39a9aa85da9b46ed9649439e13\": not found" containerID="085ee28c5bad6d0432d4b04b5d10d18a1203af39a9aa85da9b46ed9649439e13" Jan 17 00:02:45.795740 kubelet[3579]: I0117 00:02:45.795667 3579 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"085ee28c5bad6d0432d4b04b5d10d18a1203af39a9aa85da9b46ed9649439e13"} err="failed to get container status \"085ee28c5bad6d0432d4b04b5d10d18a1203af39a9aa85da9b46ed9649439e13\": rpc error: code = NotFound desc = an error occurred when try to find container \"085ee28c5bad6d0432d4b04b5d10d18a1203af39a9aa85da9b46ed9649439e13\": not found" Jan 17 00:02:46.306081 update_engine[2095]: I20260117 00:02:46.305079 2095 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Jan 17 00:02:46.306081 update_engine[2095]: I20260117 00:02:46.305146 2095 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Jan 17 00:02:46.306081 update_engine[2095]: I20260117 00:02:46.305588 2095 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Jan 17 00:02:46.306766 update_engine[2095]: I20260117 00:02:46.306488 2095 omaha_request_params.cc:62] Current group set to lts Jan 17 00:02:46.306766 update_engine[2095]: I20260117 00:02:46.306643 2095 update_attempter.cc:499] Already updated boot flags. Skipping. Jan 17 00:02:46.306766 update_engine[2095]: I20260117 00:02:46.306666 2095 update_attempter.cc:643] Scheduling an action processor start. Jan 17 00:02:46.306766 update_engine[2095]: I20260117 00:02:46.306700 2095 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jan 17 00:02:46.306766 update_engine[2095]: I20260117 00:02:46.306756 2095 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Jan 17 00:02:46.306987 update_engine[2095]: I20260117 00:02:46.306859 2095 omaha_request_action.cc:271] Posting an Omaha request to disabled Jan 17 00:02:46.306987 update_engine[2095]: I20260117 00:02:46.306877 2095 omaha_request_action.cc:272] Request: Jan 17 00:02:46.306987 update_engine[2095]: Jan 17 00:02:46.306987 update_engine[2095]: Jan 17 00:02:46.306987 update_engine[2095]: Jan 17 00:02:46.306987 update_engine[2095]: Jan 17 00:02:46.306987 update_engine[2095]: Jan 17 00:02:46.306987 update_engine[2095]: Jan 17 00:02:46.306987 update_engine[2095]: Jan 17 00:02:46.306987 update_engine[2095]: Jan 17 00:02:46.306987 update_engine[2095]: I20260117 00:02:46.306895 2095 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 17 00:02:46.307913 locksmithd[2150]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Jan 17 00:02:46.310395 update_engine[2095]: I20260117 00:02:46.310322 2095 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 17 00:02:46.311046 update_engine[2095]: I20260117 00:02:46.310949 2095 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Jan 17 00:02:46.340929 update_engine[2095]: E20260117 00:02:46.340853 2095 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 17 00:02:46.341085 update_engine[2095]: I20260117 00:02:46.340975 2095 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Jan 17 00:02:46.508888 sshd[5188]: pam_unix(sshd:session): session closed for user core Jan 17 00:02:46.514843 systemd-logind[2092]: Session 24 logged out. Waiting for processes to exit. Jan 17 00:02:46.516667 systemd[1]: sshd@23-172.31.23.5:22-68.220.241.50:41136.service: Deactivated successfully. Jan 17 00:02:46.523682 systemd[1]: session-24.scope: Deactivated successfully. Jan 17 00:02:46.526242 systemd-logind[2092]: Removed session 24. Jan 17 00:02:46.601608 systemd[1]: Started sshd@24-172.31.23.5:22-68.220.241.50:58406.service - OpenSSH per-connection server daemon (68.220.241.50:58406). Jan 17 00:02:47.125235 ntpd[2083]: Deleting interface #10 lxc_health, fe80::78d9:5cff:fe15:b366%8#123, interface stats: received=0, sent=0, dropped=0, active_time=86 secs Jan 17 00:02:47.125897 ntpd[2083]: 17 Jan 00:02:47 ntpd[2083]: Deleting interface #10 lxc_health, fe80::78d9:5cff:fe15:b366%8#123, interface stats: received=0, sent=0, dropped=0, active_time=86 secs Jan 17 00:02:47.133830 sshd[5354]: Accepted publickey for core from 68.220.241.50 port 58406 ssh2: RSA SHA256:sDAGzlGx6Edt6Gaawxj4VUM2Wx61RszNQ552ADQfH20 Jan 17 00:02:47.136467 sshd[5354]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:02:47.145295 systemd-logind[2092]: New session 25 of user core. Jan 17 00:02:47.151781 systemd[1]: Started session-25.scope - Session 25 of User core. Jan 17 00:02:47.178503 kubelet[3579]: I0117 00:02:47.178453 3579 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6de5e380-98a4-42c5-8b59-6b5d3a3c5436" path="/var/lib/kubelet/pods/6de5e380-98a4-42c5-8b59-6b5d3a3c5436/volumes" Jan 17 00:02:47.179527 kubelet[3579]: I0117 00:02:47.179474 3579 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b307e59c-a025-4721-9aa6-e880c508ca8b" path="/var/lib/kubelet/pods/b307e59c-a025-4721-9aa6-e880c508ca8b/volumes" Jan 17 00:02:48.384817 kubelet[3579]: E0117 00:02:48.384750 3579 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 17 00:02:48.746694 kubelet[3579]: I0117 00:02:48.746491 3579 memory_manager.go:355] "RemoveStaleState removing state" podUID="6de5e380-98a4-42c5-8b59-6b5d3a3c5436" containerName="cilium-operator" Jan 17 00:02:48.746694 kubelet[3579]: I0117 00:02:48.746545 3579 memory_manager.go:355] "RemoveStaleState removing state" podUID="b307e59c-a025-4721-9aa6-e880c508ca8b" containerName="cilium-agent" Jan 17 00:02:48.771352 sshd[5354]: pam_unix(sshd:session): session closed for user core Jan 17 00:02:48.785644 systemd[1]: sshd@24-172.31.23.5:22-68.220.241.50:58406.service: Deactivated successfully. Jan 17 00:02:48.801176 systemd[1]: session-25.scope: Deactivated successfully. Jan 17 00:02:48.811130 systemd-logind[2092]: Session 25 logged out. Waiting for processes to exit. Jan 17 00:02:48.816572 systemd-logind[2092]: Removed session 25. Jan 17 00:02:48.851763 systemd[1]: Started sshd@25-172.31.23.5:22-68.220.241.50:58418.service - OpenSSH per-connection server daemon (68.220.241.50:58418). 
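The "Cleaned up orphaned pod volumes dir" entries above land only after every volume for the two pods was unmounted and detached. A sketch of that orphan check, where active_uids stands in for the kubelet's own list of running pods:

from pathlib import Path

def orphaned_volume_dirs(root="/var/lib/kubelet/pods", active_uids=frozenset()):
    # A pod dir is an orphan candidate when its UID no longer matches a
    # running pod; it is only safe to remove once nothing remains under
    # volumes/, i.e. every mount was torn down, as in the log above.
    for pod_dir in Path(root).iterdir():
        if pod_dir.name in active_uids:
            continue
        volumes = pod_dir / "volumes"
        if not volumes.is_dir() or not any(volumes.rglob("*")):
            yield pod_dir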
Jan 17 00:02:48.893632 kubelet[3579]: I0117 00:02:48.893567 3579 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/0447aa43-5304-4921-abad-0c1faa50b8ac-bpf-maps\") pod \"cilium-n85zl\" (UID: \"0447aa43-5304-4921-abad-0c1faa50b8ac\") " pod="kube-system/cilium-n85zl" Jan 17 00:02:48.894073 kubelet[3579]: I0117 00:02:48.893646 3579 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/0447aa43-5304-4921-abad-0c1faa50b8ac-cni-path\") pod \"cilium-n85zl\" (UID: \"0447aa43-5304-4921-abad-0c1faa50b8ac\") " pod="kube-system/cilium-n85zl" Jan 17 00:02:48.894073 kubelet[3579]: I0117 00:02:48.893686 3579 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/0447aa43-5304-4921-abad-0c1faa50b8ac-cilium-ipsec-secrets\") pod \"cilium-n85zl\" (UID: \"0447aa43-5304-4921-abad-0c1faa50b8ac\") " pod="kube-system/cilium-n85zl" Jan 17 00:02:48.894073 kubelet[3579]: I0117 00:02:48.893728 3579 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/0447aa43-5304-4921-abad-0c1faa50b8ac-cilium-run\") pod \"cilium-n85zl\" (UID: \"0447aa43-5304-4921-abad-0c1faa50b8ac\") " pod="kube-system/cilium-n85zl" Jan 17 00:02:48.894073 kubelet[3579]: I0117 00:02:48.893773 3579 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0447aa43-5304-4921-abad-0c1faa50b8ac-xtables-lock\") pod \"cilium-n85zl\" (UID: \"0447aa43-5304-4921-abad-0c1faa50b8ac\") " pod="kube-system/cilium-n85zl" Jan 17 00:02:48.894073 kubelet[3579]: I0117 00:02:48.893811 3579 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/0447aa43-5304-4921-abad-0c1faa50b8ac-cilium-cgroup\") pod \"cilium-n85zl\" (UID: \"0447aa43-5304-4921-abad-0c1faa50b8ac\") " pod="kube-system/cilium-n85zl" Jan 17 00:02:48.894073 kubelet[3579]: I0117 00:02:48.893908 3579 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0447aa43-5304-4921-abad-0c1faa50b8ac-lib-modules\") pod \"cilium-n85zl\" (UID: \"0447aa43-5304-4921-abad-0c1faa50b8ac\") " pod="kube-system/cilium-n85zl" Jan 17 00:02:48.894729 kubelet[3579]: I0117 00:02:48.894433 3579 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/0447aa43-5304-4921-abad-0c1faa50b8ac-host-proc-sys-kernel\") pod \"cilium-n85zl\" (UID: \"0447aa43-5304-4921-abad-0c1faa50b8ac\") " pod="kube-system/cilium-n85zl" Jan 17 00:02:48.894729 kubelet[3579]: I0117 00:02:48.894531 3579 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0447aa43-5304-4921-abad-0c1faa50b8ac-etc-cni-netd\") pod \"cilium-n85zl\" (UID: \"0447aa43-5304-4921-abad-0c1faa50b8ac\") " pod="kube-system/cilium-n85zl" Jan 17 00:02:48.894729 kubelet[3579]: I0117 00:02:48.894599 3579 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: 
\"kubernetes.io/host-path/0447aa43-5304-4921-abad-0c1faa50b8ac-host-proc-sys-net\") pod \"cilium-n85zl\" (UID: \"0447aa43-5304-4921-abad-0c1faa50b8ac\") " pod="kube-system/cilium-n85zl" Jan 17 00:02:48.894729 kubelet[3579]: I0117 00:02:48.894642 3579 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hrz2x\" (UniqueName: \"kubernetes.io/projected/0447aa43-5304-4921-abad-0c1faa50b8ac-kube-api-access-hrz2x\") pod \"cilium-n85zl\" (UID: \"0447aa43-5304-4921-abad-0c1faa50b8ac\") " pod="kube-system/cilium-n85zl" Jan 17 00:02:48.894729 kubelet[3579]: I0117 00:02:48.894698 3579 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/0447aa43-5304-4921-abad-0c1faa50b8ac-clustermesh-secrets\") pod \"cilium-n85zl\" (UID: \"0447aa43-5304-4921-abad-0c1faa50b8ac\") " pod="kube-system/cilium-n85zl" Jan 17 00:02:48.895211 kubelet[3579]: I0117 00:02:48.894749 3579 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0447aa43-5304-4921-abad-0c1faa50b8ac-cilium-config-path\") pod \"cilium-n85zl\" (UID: \"0447aa43-5304-4921-abad-0c1faa50b8ac\") " pod="kube-system/cilium-n85zl" Jan 17 00:02:48.895211 kubelet[3579]: I0117 00:02:48.894786 3579 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/0447aa43-5304-4921-abad-0c1faa50b8ac-hubble-tls\") pod \"cilium-n85zl\" (UID: \"0447aa43-5304-4921-abad-0c1faa50b8ac\") " pod="kube-system/cilium-n85zl" Jan 17 00:02:48.895211 kubelet[3579]: I0117 00:02:48.894833 3579 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/0447aa43-5304-4921-abad-0c1faa50b8ac-hostproc\") pod \"cilium-n85zl\" (UID: \"0447aa43-5304-4921-abad-0c1faa50b8ac\") " pod="kube-system/cilium-n85zl" Jan 17 00:02:49.071164 containerd[2133]: time="2026-01-17T00:02:49.070977614Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-n85zl,Uid:0447aa43-5304-4921-abad-0c1faa50b8ac,Namespace:kube-system,Attempt:0,}" Jan 17 00:02:49.118519 containerd[2133]: time="2026-01-17T00:02:49.118069166Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:02:49.118519 containerd[2133]: time="2026-01-17T00:02:49.118160186Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:02:49.118519 containerd[2133]: time="2026-01-17T00:02:49.118186754Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:02:49.118519 containerd[2133]: time="2026-01-17T00:02:49.118359074Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:02:49.186931 containerd[2133]: time="2026-01-17T00:02:49.186857523Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-n85zl,Uid:0447aa43-5304-4921-abad-0c1faa50b8ac,Namespace:kube-system,Attempt:0,} returns sandbox id \"29e93a5816be667938bc536f241ca35c7f26eb2bdf6e2085b75039b6a1386a9c\"" Jan 17 00:02:49.193447 containerd[2133]: time="2026-01-17T00:02:49.193380615Z" level=info msg="CreateContainer within sandbox \"29e93a5816be667938bc536f241ca35c7f26eb2bdf6e2085b75039b6a1386a9c\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 17 00:02:49.216084 containerd[2133]: time="2026-01-17T00:02:49.215989683Z" level=info msg="CreateContainer within sandbox \"29e93a5816be667938bc536f241ca35c7f26eb2bdf6e2085b75039b6a1386a9c\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"d6f9be7d92f3810a632262538345b657d0667019414a6963d8c67e23f80938e8\"" Jan 17 00:02:49.217956 containerd[2133]: time="2026-01-17T00:02:49.217234251Z" level=info msg="StartContainer for \"d6f9be7d92f3810a632262538345b657d0667019414a6963d8c67e23f80938e8\"" Jan 17 00:02:49.327759 containerd[2133]: time="2026-01-17T00:02:49.327680271Z" level=info msg="StartContainer for \"d6f9be7d92f3810a632262538345b657d0667019414a6963d8c67e23f80938e8\" returns successfully" Jan 17 00:02:49.368835 sshd[5367]: Accepted publickey for core from 68.220.241.50 port 58418 ssh2: RSA SHA256:sDAGzlGx6Edt6Gaawxj4VUM2Wx61RszNQ552ADQfH20 Jan 17 00:02:49.372439 sshd[5367]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:02:49.383108 systemd-logind[2092]: New session 26 of user core. Jan 17 00:02:49.394564 systemd[1]: Started session-26.scope - Session 26 of User core. Jan 17 00:02:49.419614 containerd[2133]: time="2026-01-17T00:02:49.419527312Z" level=info msg="shim disconnected" id=d6f9be7d92f3810a632262538345b657d0667019414a6963d8c67e23f80938e8 namespace=k8s.io Jan 17 00:02:49.419614 containerd[2133]: time="2026-01-17T00:02:49.419605444Z" level=warning msg="cleaning up after shim disconnected" id=d6f9be7d92f3810a632262538345b657d0667019414a6963d8c67e23f80938e8 namespace=k8s.io Jan 17 00:02:49.420285 containerd[2133]: time="2026-01-17T00:02:49.419628556Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 00:02:49.724494 sshd[5367]: pam_unix(sshd:session): session closed for user core Jan 17 00:02:49.735094 systemd[1]: sshd@25-172.31.23.5:22-68.220.241.50:58418.service: Deactivated successfully. Jan 17 00:02:49.742746 containerd[2133]: time="2026-01-17T00:02:49.741244973Z" level=info msg="CreateContainer within sandbox \"29e93a5816be667938bc536f241ca35c7f26eb2bdf6e2085b75039b6a1386a9c\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 17 00:02:49.747902 systemd[1]: session-26.scope: Deactivated successfully. Jan 17 00:02:49.753898 systemd-logind[2092]: Session 26 logged out. Waiting for processes to exit. Jan 17 00:02:49.759097 systemd-logind[2092]: Removed session 26. 
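The mount-cgroup step above runs to completion almost immediately: StartContainer returns at 00:02:49.327 and the shim disconnect (the container exiting) lands at 00:02:49.419, under 100 ms later. Containerd's timestamps are RFC 3339 with nanosecond precision, which Python's %f cannot parse directly; a small helper that trims them to microseconds:

from datetime import datetime

def elapsed_s(start, end):
    # Trim the 9-digit fractional seconds to the 6 digits %f accepts.
    parse = lambda t: datetime.strptime(t.rstrip("Z")[:26],
                                        "%Y-%m-%dT%H:%M:%S.%f")
    return (parse(end) - parse(start)).total_seconds()

print(elapsed_s("2026-01-17T00:02:49.327680271Z",
                "2026-01-17T00:02:49.419527312Z"))  # ~0.092 s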
Jan 17 00:02:49.778782 containerd[2133]: time="2026-01-17T00:02:49.778031106Z" level=info msg="CreateContainer within sandbox \"29e93a5816be667938bc536f241ca35c7f26eb2bdf6e2085b75039b6a1386a9c\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"a170f31e0b660732c331a992fe351a3abb2c3bd7f95322c1cba8149922c3c043\"" Jan 17 00:02:49.779576 containerd[2133]: time="2026-01-17T00:02:49.779081106Z" level=info msg="StartContainer for \"a170f31e0b660732c331a992fe351a3abb2c3bd7f95322c1cba8149922c3c043\"" Jan 17 00:02:49.812096 systemd[1]: Started sshd@26-172.31.23.5:22-68.220.241.50:58432.service - OpenSSH per-connection server daemon (68.220.241.50:58432). Jan 17 00:02:49.883112 containerd[2133]: time="2026-01-17T00:02:49.883042842Z" level=info msg="StartContainer for \"a170f31e0b660732c331a992fe351a3abb2c3bd7f95322c1cba8149922c3c043\" returns successfully" Jan 17 00:02:49.939403 containerd[2133]: time="2026-01-17T00:02:49.939324354Z" level=info msg="shim disconnected" id=a170f31e0b660732c331a992fe351a3abb2c3bd7f95322c1cba8149922c3c043 namespace=k8s.io Jan 17 00:02:49.939403 containerd[2133]: time="2026-01-17T00:02:49.939399378Z" level=warning msg="cleaning up after shim disconnected" id=a170f31e0b660732c331a992fe351a3abb2c3bd7f95322c1cba8149922c3c043 namespace=k8s.io Jan 17 00:02:49.939713 containerd[2133]: time="2026-01-17T00:02:49.939440538Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 00:02:50.335185 sshd[5499]: Accepted publickey for core from 68.220.241.50 port 58432 ssh2: RSA SHA256:sDAGzlGx6Edt6Gaawxj4VUM2Wx61RszNQ552ADQfH20 Jan 17 00:02:50.337822 sshd[5499]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:02:50.346600 systemd-logind[2092]: New session 27 of user core. Jan 17 00:02:50.351518 systemd[1]: Started session-27.scope - Session 27 of User core. 
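The apply-sysctl-overwrites step above is another run-to-completion init container. The log does not show which keys it writes, so the sketch below is purely illustrative of what a sysctl-overwrite step amounts to: writing values under /proc/sys (root required; the key and value are placeholders):

from pathlib import Path

def apply_sysctls(overrides):
    # Each dotted key maps onto a path under /proc/sys, e.g.
    # net.ipv4.ip_forward -> /proc/sys/net/ipv4/ip_forward.
    for key, value in overrides.items():
        Path("/proc/sys", *key.split(".")).write_text(f"{value}\n")

apply_sysctls({"net.ipv4.conf.all.rp_filter": 0})  # placeholder key/value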
Jan 17 00:02:50.745180 containerd[2133]: time="2026-01-17T00:02:50.744516978Z" level=info msg="CreateContainer within sandbox \"29e93a5816be667938bc536f241ca35c7f26eb2bdf6e2085b75039b6a1386a9c\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jan 17 00:02:50.783888 containerd[2133]: time="2026-01-17T00:02:50.779129875Z" level=info msg="CreateContainer within sandbox \"29e93a5816be667938bc536f241ca35c7f26eb2bdf6e2085b75039b6a1386a9c\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"0c2042c645fc1faef3f3ba96ff3f4162db73ac8c44740aac469e45627e3548c4\"" Jan 17 00:02:50.783888 containerd[2133]: time="2026-01-17T00:02:50.780631039Z" level=info msg="StartContainer for \"0c2042c645fc1faef3f3ba96ff3f4162db73ac8c44740aac469e45627e3548c4\"" Jan 17 00:02:50.904103 containerd[2133]: time="2026-01-17T00:02:50.903752767Z" level=info msg="StartContainer for \"0c2042c645fc1faef3f3ba96ff3f4162db73ac8c44740aac469e45627e3548c4\" returns successfully" Jan 17 00:02:50.957492 containerd[2133]: time="2026-01-17T00:02:50.957413455Z" level=info msg="shim disconnected" id=0c2042c645fc1faef3f3ba96ff3f4162db73ac8c44740aac469e45627e3548c4 namespace=k8s.io Jan 17 00:02:50.957492 containerd[2133]: time="2026-01-17T00:02:50.957489967Z" level=warning msg="cleaning up after shim disconnected" id=0c2042c645fc1faef3f3ba96ff3f4162db73ac8c44740aac469e45627e3548c4 namespace=k8s.io Jan 17 00:02:50.957869 containerd[2133]: time="2026-01-17T00:02:50.957512467Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 00:02:51.009533 systemd[1]: run-containerd-runc-k8s.io-0c2042c645fc1faef3f3ba96ff3f4162db73ac8c44740aac469e45627e3548c4-runc.0eE0Yx.mount: Deactivated successfully. Jan 17 00:02:51.010383 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0c2042c645fc1faef3f3ba96ff3f4162db73ac8c44740aac469e45627e3548c4-rootfs.mount: Deactivated successfully. 
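mount-bpf-fs, the next step above, conventionally ensures a BPF filesystem is mounted (typically at /sys/fs/bpf) so pinned maps survive agent restarts. A check for that, reading /proc/mounts; the mountpoint is the conventional default, not taken from this log:

def bpffs_mounted(path="/sys/fs/bpf"):
    # /proc/mounts fields: device, mountpoint, fstype, options, dump, pass.
    with open("/proc/mounts") as mounts:
        return any(f[1] == path and f[2] == "bpf"
                   for f in (line.split() for line in mounts))

print(bpffs_mounted())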
Jan 17 00:02:51.762991 containerd[2133]: time="2026-01-17T00:02:51.762746155Z" level=info msg="CreateContainer within sandbox \"29e93a5816be667938bc536f241ca35c7f26eb2bdf6e2085b75039b6a1386a9c\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jan 17 00:02:51.799453 containerd[2133]: time="2026-01-17T00:02:51.797462552Z" level=info msg="CreateContainer within sandbox \"29e93a5816be667938bc536f241ca35c7f26eb2bdf6e2085b75039b6a1386a9c\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"2b8f5e167419578b1ba01a88c401553602b1f3834138d85e374e70104cab4d65\""
Jan 17 00:02:51.801387 containerd[2133]: time="2026-01-17T00:02:51.801214520Z" level=info msg="StartContainer for \"2b8f5e167419578b1ba01a88c401553602b1f3834138d85e374e70104cab4d65\""
Jan 17 00:02:51.929114 containerd[2133]: time="2026-01-17T00:02:51.928700924Z" level=info msg="StartContainer for \"2b8f5e167419578b1ba01a88c401553602b1f3834138d85e374e70104cab4d65\" returns successfully"
Jan 17 00:02:51.992796 containerd[2133]: time="2026-01-17T00:02:51.992709285Z" level=info msg="shim disconnected" id=2b8f5e167419578b1ba01a88c401553602b1f3834138d85e374e70104cab4d65 namespace=k8s.io
Jan 17 00:02:51.992796 containerd[2133]: time="2026-01-17T00:02:51.992793645Z" level=warning msg="cleaning up after shim disconnected" id=2b8f5e167419578b1ba01a88c401553602b1f3834138d85e374e70104cab4d65 namespace=k8s.io
Jan 17 00:02:51.992796 containerd[2133]: time="2026-01-17T00:02:51.992817045Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 17 00:02:52.012888 systemd[1]: run-containerd-runc-k8s.io-2b8f5e167419578b1ba01a88c401553602b1f3834138d85e374e70104cab4d65-runc.md6S1C.mount: Deactivated successfully.
Jan 17 00:02:52.013376 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2b8f5e167419578b1ba01a88c401553602b1f3834138d85e374e70104cab4d65-rootfs.mount: Deactivated successfully.
Jan 17 00:02:52.771964 containerd[2133]: time="2026-01-17T00:02:52.771831668Z" level=info msg="CreateContainer within sandbox \"29e93a5816be667938bc536f241ca35c7f26eb2bdf6e2085b75039b6a1386a9c\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jan 17 00:02:52.818202 containerd[2133]: time="2026-01-17T00:02:52.817968381Z" level=info msg="CreateContainer within sandbox \"29e93a5816be667938bc536f241ca35c7f26eb2bdf6e2085b75039b6a1386a9c\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"af642ec9cb6c9f9809202f8fe1851421409d9ba2c176512ccd357751bf185a62\""
Jan 17 00:02:52.823798 containerd[2133]: time="2026-01-17T00:02:52.823625181Z" level=info msg="StartContainer for \"af642ec9cb6c9f9809202f8fe1851421409d9ba2c176512ccd357751bf185a62\""
Jan 17 00:02:52.824479 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1477738697.mount: Deactivated successfully.
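The four containers created in sandbox 29e93a58… run in exactly the order the log shows: apply-sysctl-overwrites, mount-bpf-fs, and clean-cilium-state as init containers, then cilium-agent as the long-running container. Assuming this sandbox belongs to the cilium-n85zl pod named later in the kubelet entries, the declared ordering can be read straight from the pod spec with standard kubectl (the kube-system namespace is taken from that same entry):

```sh
kubectl -n kube-system get pod cilium-n85zl \
  -o jsonpath='{range .spec.initContainers[*]}{.name}{"\n"}{end}'
```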
Jan 17 00:02:52.948394 containerd[2133]: time="2026-01-17T00:02:52.948043125Z" level=info msg="StartContainer for \"af642ec9cb6c9f9809202f8fe1851421409d9ba2c176512ccd357751bf185a62\" returns successfully"
Jan 17 00:02:53.200743 containerd[2133]: time="2026-01-17T00:02:53.200673895Z" level=info msg="StopPodSandbox for \"9636fadbffa11e7e644aa84ee69d7438b3be34776b13ec6c4d59f515fe3e1b28\""
Jan 17 00:02:53.201167 containerd[2133]: time="2026-01-17T00:02:53.200895355Z" level=info msg="TearDown network for sandbox \"9636fadbffa11e7e644aa84ee69d7438b3be34776b13ec6c4d59f515fe3e1b28\" successfully"
Jan 17 00:02:53.201167 containerd[2133]: time="2026-01-17T00:02:53.200949835Z" level=info msg="StopPodSandbox for \"9636fadbffa11e7e644aa84ee69d7438b3be34776b13ec6c4d59f515fe3e1b28\" returns successfully"
Jan 17 00:02:53.204212 containerd[2133]: time="2026-01-17T00:02:53.202385227Z" level=info msg="RemovePodSandbox for \"9636fadbffa11e7e644aa84ee69d7438b3be34776b13ec6c4d59f515fe3e1b28\""
Jan 17 00:02:53.204212 containerd[2133]: time="2026-01-17T00:02:53.202470163Z" level=info msg="Forcibly stopping sandbox \"9636fadbffa11e7e644aa84ee69d7438b3be34776b13ec6c4d59f515fe3e1b28\""
Jan 17 00:02:53.204212 containerd[2133]: time="2026-01-17T00:02:53.202633555Z" level=info msg="TearDown network for sandbox \"9636fadbffa11e7e644aa84ee69d7438b3be34776b13ec6c4d59f515fe3e1b28\" successfully"
Jan 17 00:02:53.211689 containerd[2133]: time="2026-01-17T00:02:53.211373599Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9636fadbffa11e7e644aa84ee69d7438b3be34776b13ec6c4d59f515fe3e1b28\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 17 00:02:53.211689 containerd[2133]: time="2026-01-17T00:02:53.211555183Z" level=info msg="RemovePodSandbox \"9636fadbffa11e7e644aa84ee69d7438b3be34776b13ec6c4d59f515fe3e1b28\" returns successfully"
Jan 17 00:02:53.212660 containerd[2133]: time="2026-01-17T00:02:53.212523799Z" level=info msg="StopPodSandbox for \"7cc4e5079a1b71918bb37b4700b1b1af161b105103e80bbf0f47a38c688ca640\""
Jan 17 00:02:53.212780 containerd[2133]: time="2026-01-17T00:02:53.212667487Z" level=info msg="TearDown network for sandbox \"7cc4e5079a1b71918bb37b4700b1b1af161b105103e80bbf0f47a38c688ca640\" successfully"
Jan 17 00:02:53.212780 containerd[2133]: time="2026-01-17T00:02:53.212692555Z" level=info msg="StopPodSandbox for \"7cc4e5079a1b71918bb37b4700b1b1af161b105103e80bbf0f47a38c688ca640\" returns successfully"
Jan 17 00:02:53.216116 containerd[2133]: time="2026-01-17T00:02:53.215622151Z" level=info msg="RemovePodSandbox for \"7cc4e5079a1b71918bb37b4700b1b1af161b105103e80bbf0f47a38c688ca640\""
Jan 17 00:02:53.216116 containerd[2133]: time="2026-01-17T00:02:53.215684263Z" level=info msg="Forcibly stopping sandbox \"7cc4e5079a1b71918bb37b4700b1b1af161b105103e80bbf0f47a38c688ca640\""
Jan 17 00:02:53.216116 containerd[2133]: time="2026-01-17T00:02:53.215788435Z" level=info msg="TearDown network for sandbox \"7cc4e5079a1b71918bb37b4700b1b1af161b105103e80bbf0f47a38c688ca640\" successfully"
Jan 17 00:02:53.224260 containerd[2133]: time="2026-01-17T00:02:53.223385359Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7cc4e5079a1b71918bb37b4700b1b1af161b105103e80bbf0f47a38c688ca640\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
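The StopPodSandbox/RemovePodSandbox pairs here are the kubelet garbage-collecting sandboxes of pods that no longer exist; the "not found" warning appears to be benign, since the sandbox metadata is already gone by the time the removal event is published and the operation still returns successfully. The same two CRI RPCs can be issued by hand with crictl (sandbox ID taken from the log):

```sh
crictl stopp 9636fadbffa11e7e644aa84ee69d7438b3be34776b13ec6c4d59f515fe3e1b28   # StopPodSandbox
crictl rmp   9636fadbffa11e7e644aa84ee69d7438b3be34776b13ec6c4d59f515fe3e1b28   # RemovePodSandbox
```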
Jan 17 00:02:53.224260 containerd[2133]: time="2026-01-17T00:02:53.223546855Z" level=info msg="RemovePodSandbox \"7cc4e5079a1b71918bb37b4700b1b1af161b105103e80bbf0f47a38c688ca640\" returns successfully"
Jan 17 00:02:53.855063 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce))
Jan 17 00:02:53.928980 kubelet[3579]: I0117 00:02:53.926852 3579 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-n85zl" podStartSLOduration=5.926828818 podStartE2EDuration="5.926828818s" podCreationTimestamp="2026-01-17 00:02:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-17 00:02:53.923127418 +0000 UTC m=+121.078902074" watchObservedRunningTime="2026-01-17 00:02:53.926828818 +0000 UTC m=+121.082603450"
Jan 17 00:02:56.305166 update_engine[2095]: I20260117 00:02:56.305065 2095 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Jan 17 00:02:56.305759 update_engine[2095]: I20260117 00:02:56.305418 2095 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Jan 17 00:02:56.305759 update_engine[2095]: I20260117 00:02:56.305727 2095 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Jan 17 00:02:56.307652 update_engine[2095]: E20260117 00:02:56.307499 2095 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Jan 17 00:02:56.307652 update_engine[2095]: I20260117 00:02:56.307605 2095 libcurl_http_fetcher.cc:283] No HTTP response, retry 2
Jan 17 00:02:58.241968 (udev-worker)[6228]: Network interface NamePolicy= disabled on kernel command line.
Jan 17 00:02:58.247735 (udev-worker)[6229]: Network interface NamePolicy= disabled on kernel command line.
Jan 17 00:02:58.254159 systemd-networkd[1690]: lxc_health: Link UP
Jan 17 00:02:58.278671 systemd-networkd[1690]: lxc_health: Gained carrier
Jan 17 00:02:59.986784 systemd-networkd[1690]: lxc_health: Gained IPv6LL
Jan 17 00:03:01.886550 systemd[1]: run-containerd-runc-k8s.io-af642ec9cb6c9f9809202f8fe1851421409d9ba2c176512ccd357751bf185a62-runc.Ib4eyP.mount: Deactivated successfully.
Jan 17 00:03:02.125296 ntpd[2083]: Listen normally on 13 lxc_health [fe80::3c29:ccff:fee8:96a6%14]:123
Jan 17 00:03:02.125858 ntpd[2083]: 17 Jan 00:03:02 ntpd[2083]: Listen normally on 13 lxc_health [fe80::3c29:ccff:fee8:96a6%14]:123
Jan 17 00:03:06.303402 update_engine[2095]: I20260117 00:03:06.302593 2095 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Jan 17 00:03:06.303402 update_engine[2095]: I20260117 00:03:06.302938 2095 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Jan 17 00:03:06.303402 update_engine[2095]: I20260117 00:03:06.303265 2095 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Jan 17 00:03:06.304810 update_engine[2095]: E20260117 00:03:06.304652 2095 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Jan 17 00:03:06.304810 update_engine[2095]: I20260117 00:03:06.304760 2095 libcurl_http_fetcher.cc:283] No HTTP response, retry 3
Jan 17 00:03:06.617519 kubelet[3579]: E0117 00:03:06.617070 3579 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:59824->127.0.0.1:45461: write tcp 127.0.0.1:59824->127.0.0.1:45461: write: broken pipe
Jan 17 00:03:06.700598 sshd[5499]: pam_unix(sshd:session): session closed for user core
Jan 17 00:03:06.712367 systemd[1]: sshd@26-172.31.23.5:22-68.220.241.50:58432.service: Deactivated successfully.
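In the pod_startup_latency_tracker entry above, podStartSLOduration is simply watchObservedRunningTime minus podCreationTimestamp, and the zero-valued firstStartedPulling/lastFinishedPulling timestamps suggest no image pull contributed to the latency. A quick check of the arithmetic:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Timestamps taken from the kubelet log entry above.
	created, _ := time.Parse(time.RFC3339Nano, "2026-01-17T00:02:48Z")
	running, _ := time.Parse(time.RFC3339Nano, "2026-01-17T00:02:53.926828818Z")
	fmt.Println(running.Sub(created)) // 5.926828818s == podStartSLOduration
}
```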
Jan 17 00:03:06.728164 systemd-logind[2092]: Session 27 logged out. Waiting for processes to exit.
Jan 17 00:03:06.729479 systemd[1]: session-27.scope: Deactivated successfully.
Jan 17 00:03:06.735030 systemd-logind[2092]: Removed session 27.
Jan 17 00:03:16.308244 update_engine[2095]: I20260117 00:03:16.308048 2095 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Jan 17 00:03:16.309430 update_engine[2095]: I20260117 00:03:16.309092 2095 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Jan 17 00:03:16.309430 update_engine[2095]: I20260117 00:03:16.309367 2095 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Jan 17 00:03:16.311238 update_engine[2095]: E20260117 00:03:16.310066 2095 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Jan 17 00:03:16.311238 update_engine[2095]: I20260117 00:03:16.310142 2095 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Jan 17 00:03:16.311238 update_engine[2095]: I20260117 00:03:16.310161 2095 omaha_request_action.cc:617] Omaha request response:
Jan 17 00:03:16.311238 update_engine[2095]: E20260117 00:03:16.310271 2095 omaha_request_action.cc:636] Omaha request network transfer failed.
Jan 17 00:03:16.311238 update_engine[2095]: I20260117 00:03:16.310320 2095 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing.
Jan 17 00:03:16.311238 update_engine[2095]: I20260117 00:03:16.310341 2095 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Jan 17 00:03:16.311238 update_engine[2095]: I20260117 00:03:16.310357 2095 update_attempter.cc:306] Processing Done.
Jan 17 00:03:16.311238 update_engine[2095]: E20260117 00:03:16.310384 2095 update_attempter.cc:619] Update failed.
Jan 17 00:03:16.311238 update_engine[2095]: I20260117 00:03:16.310398 2095 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse
Jan 17 00:03:16.311238 update_engine[2095]: I20260117 00:03:16.310416 2095 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse)
Jan 17 00:03:16.311238 update_engine[2095]: I20260117 00:03:16.310432 2095 payload_state.cc:103] Ignoring failures until we get a valid Omaha response.
Jan 17 00:03:16.311238 update_engine[2095]: I20260117 00:03:16.310536 2095 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Jan 17 00:03:16.311238 update_engine[2095]: I20260117 00:03:16.310575 2095 omaha_request_action.cc:271] Posting an Omaha request to disabled
Jan 17 00:03:16.311238 update_engine[2095]: I20260117 00:03:16.310611 2095 omaha_request_action.cc:272] Request:
Jan 17 00:03:16.311238 update_engine[2095]:
Jan 17 00:03:16.311238 update_engine[2095]:
Jan 17 00:03:16.312346 update_engine[2095]:
Jan 17 00:03:16.312346 update_engine[2095]:
Jan 17 00:03:16.312346 update_engine[2095]:
Jan 17 00:03:16.312346 update_engine[2095]:
Jan 17 00:03:16.312346 update_engine[2095]: I20260117 00:03:16.310629 2095 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Jan 17 00:03:16.312346 update_engine[2095]: I20260117 00:03:16.310873 2095 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Jan 17 00:03:16.312346 update_engine[2095]: I20260117 00:03:16.311143 2095 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Jan 17 00:03:16.312346 update_engine[2095]: E20260117 00:03:16.312160 2095 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Jan 17 00:03:16.312346 update_engine[2095]: I20260117 00:03:16.312247 2095 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Jan 17 00:03:16.312346 update_engine[2095]: I20260117 00:03:16.312265 2095 omaha_request_action.cc:617] Omaha request response:
Jan 17 00:03:16.312346 update_engine[2095]: I20260117 00:03:16.312284 2095 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Jan 17 00:03:16.312346 update_engine[2095]: I20260117 00:03:16.312299 2095 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Jan 17 00:03:16.312346 update_engine[2095]: I20260117 00:03:16.312314 2095 update_attempter.cc:306] Processing Done.
Jan 17 00:03:16.312346 update_engine[2095]: I20260117 00:03:16.312332 2095 update_attempter.cc:310] Error event sent.
Jan 17 00:03:16.312984 locksmithd[2150]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0
Jan 17 00:03:16.312984 locksmithd[2150]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0
Jan 17 00:03:16.313563 update_engine[2095]: I20260117 00:03:16.312353 2095 update_check_scheduler.cc:74] Next update check in 49m21s
Jan 17 00:03:20.722944 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-49cda2464739e4f3d75959b2e690be2872b9b42d3ea533c66438e872be6baa2b-rootfs.mount: Deactivated successfully.
Jan 17 00:03:20.733487 containerd[2133]: time="2026-01-17T00:03:20.733326695Z" level=info msg="shim disconnected" id=49cda2464739e4f3d75959b2e690be2872b9b42d3ea533c66438e872be6baa2b namespace=k8s.io
Jan 17 00:03:20.733487 containerd[2133]: time="2026-01-17T00:03:20.733435871Z" level=warning msg="cleaning up after shim disconnected" id=49cda2464739e4f3d75959b2e690be2872b9b42d3ea533c66438e872be6baa2b namespace=k8s.io
Jan 17 00:03:20.734292 containerd[2133]: time="2026-01-17T00:03:20.733459151Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 17 00:03:20.870127 kubelet[3579]: I0117 00:03:20.868726 3579 scope.go:117] "RemoveContainer" containerID="49cda2464739e4f3d75959b2e690be2872b9b42d3ea533c66438e872be6baa2b"
Jan 17 00:03:20.874227 containerd[2133]: time="2026-01-17T00:03:20.874164336Z" level=info msg="CreateContainer within sandbox \"a10b84aba8c8edf4da029f7aa39f195f44813f21a34a6def94d63fb60ea31175\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Jan 17 00:03:20.898438 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3355448404.mount: Deactivated successfully.
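The recurring "Could not resolve host: disabled" is update_engine treating the configured update server URL as a literal hostname. On Flatcar, pointing the server at the string disabled is the documented way to switch off automatic updates, so every Omaha check fails fast and is simply rescheduled (here for 49m21s later). A sketch of the relevant setting, assuming the stock /etc/flatcar/update.conf mechanism rather than this machine's actual file:

```ini
# /etc/flatcar/update.conf  (assumed; not captured in this log)
SERVER=disabled
```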
Jan 17 00:03:20.900183 containerd[2133]: time="2026-01-17T00:03:20.900110388Z" level=info msg="CreateContainer within sandbox \"a10b84aba8c8edf4da029f7aa39f195f44813f21a34a6def94d63fb60ea31175\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"cb9e322a25e59b94b294d8b02a8ef8045b19b630ce0a66586f9a1749baf2e08f\""
Jan 17 00:03:20.900854 containerd[2133]: time="2026-01-17T00:03:20.900805104Z" level=info msg="StartContainer for \"cb9e322a25e59b94b294d8b02a8ef8045b19b630ce0a66586f9a1749baf2e08f\""
Jan 17 00:03:21.027336 containerd[2133]: time="2026-01-17T00:03:21.027185037Z" level=info msg="StartContainer for \"cb9e322a25e59b94b294d8b02a8ef8045b19b630ce0a66586f9a1749baf2e08f\" returns successfully"
Jan 17 00:03:25.298919 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e8c6a9c22daaf709556fb0d38e2f3bda48734851ecc67b0511dbc92999cabb6c-rootfs.mount: Deactivated successfully.
Jan 17 00:03:25.308470 containerd[2133]: time="2026-01-17T00:03:25.308350742Z" level=info msg="shim disconnected" id=e8c6a9c22daaf709556fb0d38e2f3bda48734851ecc67b0511dbc92999cabb6c namespace=k8s.io
Jan 17 00:03:25.308470 containerd[2133]: time="2026-01-17T00:03:25.308492798Z" level=warning msg="cleaning up after shim disconnected" id=e8c6a9c22daaf709556fb0d38e2f3bda48734851ecc67b0511dbc92999cabb6c namespace=k8s.io
Jan 17 00:03:25.309615 containerd[2133]: time="2026-01-17T00:03:25.308516510Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 17 00:03:25.890888 kubelet[3579]: I0117 00:03:25.890520 3579 scope.go:117] "RemoveContainer" containerID="e8c6a9c22daaf709556fb0d38e2f3bda48734851ecc67b0511dbc92999cabb6c"
Jan 17 00:03:25.893567 containerd[2133]: time="2026-01-17T00:03:25.893503397Z" level=info msg="CreateContainer within sandbox \"89ab74e0c78a69a9e92d4719512294fb5b58fe0c2f78ec6e9f159549e7c8f9f9\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Jan 17 00:03:25.920225 containerd[2133]: time="2026-01-17T00:03:25.920144513Z" level=info msg="CreateContainer within sandbox \"89ab74e0c78a69a9e92d4719512294fb5b58fe0c2f78ec6e9f159549e7c8f9f9\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"8c99c6f3d6eef95a1198389ba498a3f7f3a2b8a06756fb1f977b5fbac9c6a642\""
Jan 17 00:03:25.920883 containerd[2133]: time="2026-01-17T00:03:25.920820953Z" level=info msg="StartContainer for \"8c99c6f3d6eef95a1198389ba498a3f7f3a2b8a06756fb1f977b5fbac9c6a642\""
Jan 17 00:03:26.042384 containerd[2133]: time="2026-01-17T00:03:26.042267866Z" level=info msg="StartContainer for \"8c99c6f3d6eef95a1198389ba498a3f7f3a2b8a06756fb1f977b5fbac9c6a642\" returns successfully"
Jan 17 00:03:26.933562 kubelet[3579]: E0117 00:03:26.932841 3579 controller.go:195] "Failed to update lease" err="Put \"https://172.31.23.5:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-23-5?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
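The Attempt:1 containers here are the kubelet recreating the kube-controller-manager and kube-scheduler after their first instances exited (the preceding RemoveContainer entries), and the closing kubelet error is a node Lease renewal timing out against the API server at 172.31.23.5:6443. The kubelet retries these renewals; only a sustained failure would eventually mark the node NotReady. The lease it is trying to update can be inspected directly with standard kubectl (node leases are named after the node, in the kube-node-lease namespace, both taken from the log):

```sh
kubectl -n kube-node-lease get lease ip-172-31-23-5 -o yaml
# spec.renewTime shows the last successful renewal
```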