Jan 23 17:57:02.137094 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083]
Jan 23 17:57:02.137136 kernel: Linux version 6.12.66-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT Fri Jan 23 16:10:02 -00 2026
Jan 23 17:57:02.137160 kernel: KASLR disabled due to lack of seed
Jan 23 17:57:02.137177 kernel: efi: EFI v2.7 by EDK II
Jan 23 17:57:02.137193 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7a734a98 MEMRESERVE=0x78551598
Jan 23 17:57:02.137210 kernel: secureboot: Secure boot disabled
Jan 23 17:57:02.137227 kernel: ACPI: Early table checksum verification disabled
Jan 23 17:57:02.137242 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON)
Jan 23 17:57:02.137258 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001 01000013)
Jan 23 17:57:02.137274 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001)
Jan 23 17:57:02.137290 kernel: ACPI: DSDT 0x0000000078640000 0013D2 (v02 AMAZON AMZNDSDT 00000001 AMZN 00000001)
Jan 23 17:57:02.137310 kernel: ACPI: FACS 0x0000000078630000 000040
Jan 23 17:57:02.137326 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Jan 23 17:57:02.137342 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001)
Jan 23 17:57:02.137361 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001)
Jan 23 17:57:02.137377 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001)
Jan 23 17:57:02.137398 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Jan 23 17:57:02.137416 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001)
Jan 23 17:57:02.137432 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001)
Jan 23 17:57:02.137448 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200
Jan 23 17:57:02.137465 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200')
Jan 23 17:57:02.137481 kernel: printk: legacy bootconsole [uart0] enabled
Jan 23 17:57:02.137497 kernel: ACPI: Use ACPI SPCR as default console: Yes
Jan 23 17:57:02.137513 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff]
Jan 23 17:57:02.137530 kernel: NODE_DATA(0) allocated [mem 0x4b584da00-0x4b5854fff]
Jan 23 17:57:02.137546 kernel: Zone ranges:
Jan 23 17:57:02.137562 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff]
Jan 23 17:57:02.137582 kernel: DMA32 empty
Jan 23 17:57:02.137598 kernel: Normal [mem 0x0000000100000000-0x00000004b5ffffff]
Jan 23 17:57:02.137614 kernel: Device empty
Jan 23 17:57:02.137630 kernel: Movable zone start for each node
Jan 23 17:57:02.137646 kernel: Early memory node ranges
Jan 23 17:57:02.137662 kernel: node 0: [mem 0x0000000040000000-0x000000007862ffff]
Jan 23 17:57:02.137678 kernel: node 0: [mem 0x0000000078630000-0x000000007863ffff]
Jan 23 17:57:02.137694 kernel: node 0: [mem 0x0000000078640000-0x00000000786effff]
Jan 23 17:57:02.137710 kernel: node 0: [mem 0x00000000786f0000-0x000000007872ffff]
Jan 23 17:57:02.137726 kernel: node 0: [mem 0x0000000078730000-0x000000007bbfffff]
Jan 23 17:57:02.137742 kernel: node 0: [mem 0x000000007bc00000-0x000000007bfdffff]
Jan 23 17:57:02.137758 kernel: node 0: [mem 0x000000007bfe0000-0x000000007fffffff]
Jan 23 17:57:02.137780 kernel: node 0: [mem 0x0000000400000000-0x00000004b5ffffff]
Jan 23 17:57:02.137803 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000004b5ffffff]
Jan 23 17:57:02.137820 kernel: On node 0, zone Normal: 8192 pages in unavailable ranges
Jan 23 17:57:02.137837 kernel: cma: Reserved 16 MiB at 0x000000007f000000 on node -1
Jan 23 17:57:02.139927 kernel: psci: probing for conduit method from ACPI.
Jan 23 17:57:02.139969 kernel: psci: PSCIv1.0 detected in firmware.
Jan 23 17:57:02.139987 kernel: psci: Using standard PSCI v0.2 function IDs
Jan 23 17:57:02.140009 kernel: psci: Trusted OS migration not required
Jan 23 17:57:02.140027 kernel: psci: SMC Calling Convention v1.1
Jan 23 17:57:02.140045 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000001)
Jan 23 17:57:02.140063 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168
Jan 23 17:57:02.140080 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096
Jan 23 17:57:02.140099 kernel: pcpu-alloc: [0] 0 [0] 1
Jan 23 17:57:02.140116 kernel: Detected PIPT I-cache on CPU0
Jan 23 17:57:02.140134 kernel: CPU features: detected: GIC system register CPU interface
Jan 23 17:57:02.140151 kernel: CPU features: detected: Spectre-v2
Jan 23 17:57:02.140172 kernel: CPU features: detected: Spectre-v3a
Jan 23 17:57:02.140190 kernel: CPU features: detected: Spectre-BHB
Jan 23 17:57:02.140207 kernel: CPU features: detected: ARM erratum 1742098
Jan 23 17:57:02.140224 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923
Jan 23 17:57:02.140242 kernel: alternatives: applying boot alternatives
Jan 23 17:57:02.140262 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=5fc6d8e43735a6d26d13c2f5b234025ac82c601a45144671feeb457ddade8f9d
Jan 23 17:57:02.140281 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 23 17:57:02.140299 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 23 17:57:02.140317 kernel: Fallback order for Node 0: 0
Jan 23 17:57:02.140335 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1007616
Jan 23 17:57:02.140353 kernel: Policy zone: Normal
Jan 23 17:57:02.140374 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 23 17:57:02.140392 kernel: software IO TLB: area num 2.
Jan 23 17:57:02.140409 kernel: software IO TLB: mapped [mem 0x0000000074551000-0x0000000078551000] (64MB)
Jan 23 17:57:02.140426 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jan 23 17:57:02.140443 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 23 17:57:02.140462 kernel: rcu: RCU event tracing is enabled.
Jan 23 17:57:02.140479 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jan 23 17:57:02.140498 kernel: Trampoline variant of Tasks RCU enabled.
Jan 23 17:57:02.140517 kernel: Tracing variant of Tasks RCU enabled.
Jan 23 17:57:02.140534 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 23 17:57:02.140552 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jan 23 17:57:02.140574 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 23 17:57:02.140592 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 23 17:57:02.140608 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Jan 23 17:57:02.140625 kernel: GICv3: 96 SPIs implemented
Jan 23 17:57:02.140642 kernel: GICv3: 0 Extended SPIs implemented
Jan 23 17:57:02.140659 kernel: Root IRQ handler: gic_handle_irq
Jan 23 17:57:02.140676 kernel: GICv3: GICv3 features: 16 PPIs
Jan 23 17:57:02.140694 kernel: GICv3: GICD_CTRL.DS=1, SCR_EL3.FIQ=0
Jan 23 17:57:02.140711 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000
Jan 23 17:57:02.140727 kernel: ITS [mem 0x10080000-0x1009ffff]
Jan 23 17:57:02.140745 kernel: ITS@0x0000000010080000: allocated 8192 Devices @4000f0000 (indirect, esz 8, psz 64K, shr 1)
Jan 23 17:57:02.140763 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @400100000 (flat, esz 8, psz 64K, shr 1)
Jan 23 17:57:02.140785 kernel: GICv3: using LPI property table @0x0000000400110000
Jan 23 17:57:02.140802 kernel: ITS: Using hypervisor restricted LPI range [128]
Jan 23 17:57:02.140818 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000400120000
Jan 23 17:57:02.140835 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 23 17:57:02.140883 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt).
Jan 23 17:57:02.140905 kernel: clocksource: arch_sys_counter: mask: 0x1ffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns
Jan 23 17:57:02.140923 kernel: sched_clock: 57 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns
Jan 23 17:57:02.140940 kernel: Console: colour dummy device 80x25
Jan 23 17:57:02.140958 kernel: printk: legacy console [tty1] enabled
Jan 23 17:57:02.140976 kernel: ACPI: Core revision 20240827
Jan 23 17:57:02.140994 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 166.66 BogoMIPS (lpj=83333)
Jan 23 17:57:02.141019 kernel: pid_max: default: 32768 minimum: 301
Jan 23 17:57:02.141037 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Jan 23 17:57:02.141055 kernel: landlock: Up and running.
Jan 23 17:57:02.141072 kernel: SELinux: Initializing.
Jan 23 17:57:02.141089 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 23 17:57:02.141107 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 23 17:57:02.141124 kernel: rcu: Hierarchical SRCU implementation.
Jan 23 17:57:02.141142 kernel: rcu: Max phase no-delay instances is 400.
Jan 23 17:57:02.141164 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Jan 23 17:57:02.141182 kernel: Remapping and enabling EFI services.
Jan 23 17:57:02.141200 kernel: smp: Bringing up secondary CPUs ...
Jan 23 17:57:02.141217 kernel: Detected PIPT I-cache on CPU1
Jan 23 17:57:02.141234 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000
Jan 23 17:57:02.141252 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000400130000
Jan 23 17:57:02.141269 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083]
Jan 23 17:57:02.141301 kernel: smp: Brought up 1 node, 2 CPUs
Jan 23 17:57:02.141322 kernel: SMP: Total of 2 processors activated.
Jan 23 17:57:02.141345 kernel: CPU: All CPU(s) started at EL1
Jan 23 17:57:02.141374 kernel: CPU features: detected: 32-bit EL0 Support
Jan 23 17:57:02.141393 kernel: CPU features: detected: 32-bit EL1 Support
Jan 23 17:57:02.141415 kernel: CPU features: detected: CRC32 instructions
Jan 23 17:57:02.141433 kernel: alternatives: applying system-wide alternatives
Jan 23 17:57:02.141452 kernel: Memory: 3796332K/4030464K available (11200K kernel code, 2458K rwdata, 9088K rodata, 39552K init, 1038K bss, 212788K reserved, 16384K cma-reserved)
Jan 23 17:57:02.141471 kernel: devtmpfs: initialized
Jan 23 17:57:02.141489 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 23 17:57:02.141512 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jan 23 17:57:02.141530 kernel: 16880 pages in range for non-PLT usage
Jan 23 17:57:02.141548 kernel: 508400 pages in range for PLT usage
Jan 23 17:57:02.141566 kernel: pinctrl core: initialized pinctrl subsystem
Jan 23 17:57:02.141584 kernel: SMBIOS 3.0.0 present.
Jan 23 17:57:02.141602 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018
Jan 23 17:57:02.141620 kernel: DMI: Memory slots populated: 0/0
Jan 23 17:57:02.141638 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 23 17:57:02.141656 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Jan 23 17:57:02.141679 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jan 23 17:57:02.141697 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jan 23 17:57:02.141715 kernel: audit: initializing netlink subsys (disabled)
Jan 23 17:57:02.141733 kernel: audit: type=2000 audit(0.226:1): state=initialized audit_enabled=0 res=1
Jan 23 17:57:02.141751 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 23 17:57:02.141769 kernel: cpuidle: using governor menu
Jan 23 17:57:02.141787 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Jan 23 17:57:02.141805 kernel: ASID allocator initialised with 65536 entries
Jan 23 17:57:02.141823 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 23 17:57:02.141845 kernel: Serial: AMBA PL011 UART driver
Jan 23 17:57:02.142943 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 23 17:57:02.142964 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Jan 23 17:57:02.142983 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Jan 23 17:57:02.143001 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Jan 23 17:57:02.143019 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 23 17:57:02.143037 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Jan 23 17:57:02.143056 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Jan 23 17:57:02.143074 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Jan 23 17:57:02.143102 kernel: ACPI: Added _OSI(Module Device)
Jan 23 17:57:02.143121 kernel: ACPI: Added _OSI(Processor Device)
Jan 23 17:57:02.143139 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 23 17:57:02.143157 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 23 17:57:02.143175 kernel: ACPI: Interpreter enabled
Jan 23 17:57:02.143193 kernel: ACPI: Using GIC for interrupt routing
Jan 23 17:57:02.143233 kernel: ACPI: MCFG table detected, 1 entries
Jan 23 17:57:02.143252 kernel: ACPI: CPU0 has been hot-added
Jan 23 17:57:02.143270 kernel: ACPI: CPU1 has been hot-added
Jan 23 17:57:02.143293 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00])
Jan 23 17:57:02.143587 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jan 23 17:57:02.143777 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Jan 23 17:57:02.145080 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Jan 23 17:57:02.145285 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x200fffff] reserved by PNP0C02:00
Jan 23 17:57:02.145472 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x200fffff] for [bus 00]
Jan 23 17:57:02.145497 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io 0x0000-0xffff window]
Jan 23 17:57:02.145525 kernel: acpiphp: Slot [1] registered
Jan 23 17:57:02.145544 kernel: acpiphp: Slot [2] registered
Jan 23 17:57:02.145562 kernel: acpiphp: Slot [3] registered
Jan 23 17:57:02.145580 kernel: acpiphp: Slot [4] registered
Jan 23 17:57:02.145598 kernel: acpiphp: Slot [5] registered
Jan 23 17:57:02.145616 kernel: acpiphp: Slot [6] registered
Jan 23 17:57:02.145634 kernel: acpiphp: Slot [7] registered
Jan 23 17:57:02.145651 kernel: acpiphp: Slot [8] registered
Jan 23 17:57:02.145669 kernel: acpiphp: Slot [9] registered
Jan 23 17:57:02.145687 kernel: acpiphp: Slot [10] registered
Jan 23 17:57:02.145709 kernel: acpiphp: Slot [11] registered
Jan 23 17:57:02.145727 kernel: acpiphp: Slot [12] registered
Jan 23 17:57:02.145746 kernel: acpiphp: Slot [13] registered
Jan 23 17:57:02.145764 kernel: acpiphp: Slot [14] registered
Jan 23 17:57:02.145782 kernel: acpiphp: Slot [15] registered
Jan 23 17:57:02.145800 kernel: acpiphp: Slot [16] registered
Jan 23 17:57:02.145818 kernel: acpiphp: Slot [17] registered
Jan 23 17:57:02.145874 kernel: acpiphp: Slot [18] registered
Jan 23 17:57:02.145908 kernel: acpiphp: Slot [19] registered
Jan 23 17:57:02.145936 kernel: acpiphp: Slot [20] registered
Jan 23 17:57:02.145955 kernel: acpiphp: Slot [21] registered
Jan 23 17:57:02.145973 kernel: acpiphp: Slot [22] registered
Jan 23 17:57:02.145991 kernel: acpiphp: Slot [23] registered
Jan 23 17:57:02.146010 kernel: acpiphp: Slot [24] registered
Jan 23 17:57:02.146028 kernel: acpiphp: Slot [25] registered
Jan 23 17:57:02.146046 kernel: acpiphp: Slot [26] registered
Jan 23 17:57:02.146064 kernel: acpiphp: Slot [27] registered
Jan 23 17:57:02.146082 kernel: acpiphp: Slot [28] registered
Jan 23 17:57:02.146100 kernel: acpiphp: Slot [29] registered
Jan 23 17:57:02.146122 kernel: acpiphp: Slot [30] registered
Jan 23 17:57:02.146140 kernel: acpiphp: Slot [31] registered
Jan 23 17:57:02.146158 kernel: PCI host bridge to bus 0000:00
Jan 23 17:57:02.146364 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window]
Jan 23 17:57:02.146536 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Jan 23 17:57:02.146703 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window]
Jan 23 17:57:02.147545 kernel: pci_bus 0000:00: root bus resource [bus 00]
Jan 23 17:57:02.147801 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000 conventional PCI endpoint
Jan 23 17:57:02.150016 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003 conventional PCI endpoint
Jan 23 17:57:02.150247 kernel: pci 0000:00:01.0: BAR 0 [mem 0x80118000-0x80118fff]
Jan 23 17:57:02.150453 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802 PCIe Root Complex Integrated Endpoint
Jan 23 17:57:02.150648 kernel: pci 0000:00:04.0: BAR 0 [mem 0x80114000-0x80117fff]
Jan 23 17:57:02.150837 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold
Jan 23 17:57:02.151117 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000 PCIe Root Complex Integrated Endpoint
Jan 23 17:57:02.151365 kernel: pci 0000:00:05.0: BAR 0 [mem 0x80110000-0x80113fff]
Jan 23 17:57:02.151572 kernel: pci 0000:00:05.0: BAR 2 [mem 0x80000000-0x800fffff pref]
Jan 23 17:57:02.151788 kernel: pci 0000:00:05.0: BAR 4 [mem 0x80100000-0x8010ffff]
Jan 23 17:57:02.152037 kernel: pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold
Jan 23 17:57:02.152222 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window]
Jan 23 17:57:02.152434 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Jan 23 17:57:02.152614 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window]
Jan 23 17:57:02.152639 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Jan 23 17:57:02.152658 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Jan 23 17:57:02.152677 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Jan 23 17:57:02.152695 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Jan 23 17:57:02.152713 kernel: iommu: Default domain type: Translated
Jan 23 17:57:02.152731 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Jan 23 17:57:02.152749 kernel: efivars: Registered efivars operations
Jan 23 17:57:02.152767 kernel: vgaarb: loaded
Jan 23 17:57:02.152790 kernel: clocksource: Switched to clocksource arch_sys_counter
Jan 23 17:57:02.152808 kernel: VFS: Disk quotas dquot_6.6.0
Jan 23 17:57:02.152826 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 23 17:57:02.152844 kernel: pnp: PnP ACPI init
Jan 23 17:57:02.153126 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved
Jan 23 17:57:02.153154 kernel: pnp: PnP ACPI: found 1 devices
Jan 23 17:57:02.153173 kernel: NET: Registered PF_INET protocol family
Jan 23 17:57:02.153191 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 23 17:57:02.153215 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jan 23 17:57:02.153234 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 23 17:57:02.153252 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 23 17:57:02.153270 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jan 23 17:57:02.153288 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jan 23 17:57:02.153306 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 23 17:57:02.153325 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 23 17:57:02.153343 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 23 17:57:02.153361 kernel: PCI: CLS 0 bytes, default 64
Jan 23 17:57:02.153383 kernel: kvm [1]: HYP mode not available
Jan 23 17:57:02.153401 kernel: Initialise system trusted keyrings
Jan 23 17:57:02.153419 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jan 23 17:57:02.153437 kernel: Key type asymmetric registered
Jan 23 17:57:02.153455 kernel: Asymmetric key parser 'x509' registered
Jan 23 17:57:02.153473 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Jan 23 17:57:02.153491 kernel: io scheduler mq-deadline registered
Jan 23 17:57:02.153509 kernel: io scheduler kyber registered
Jan 23 17:57:02.153527 kernel: io scheduler bfq registered
Jan 23 17:57:02.153721 kernel: pl061_gpio ARMH0061:00: PL061 GPIO chip registered
Jan 23 17:57:02.153747 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Jan 23 17:57:02.153766 kernel: ACPI: button: Power Button [PWRB]
Jan 23 17:57:02.153784 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input1
Jan 23 17:57:02.153802 kernel: ACPI: button: Sleep Button [SLPB]
Jan 23 17:57:02.153820 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 23 17:57:02.153839 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37
Jan 23 17:57:02.154051 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012)
Jan 23 17:57:02.154081 kernel: printk: legacy console [ttyS0] disabled
Jan 23 17:57:02.154100 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A
Jan 23 17:57:02.154119 kernel: printk: legacy console [ttyS0] enabled
Jan 23 17:57:02.154136 kernel: printk: legacy bootconsole [uart0] disabled
Jan 23 17:57:02.154155 kernel: thunder_xcv, ver 1.0
Jan 23 17:57:02.154172 kernel: thunder_bgx, ver 1.0
Jan 23 17:57:02.154190 kernel: nicpf, ver 1.0
Jan 23 17:57:02.154208 kernel: nicvf, ver 1.0
Jan 23 17:57:02.154406 kernel: rtc-efi rtc-efi.0: registered as rtc0
Jan 23 17:57:02.154586 kernel: rtc-efi rtc-efi.0: setting system clock to 2026-01-23T17:57:01 UTC (1769191021)
Jan 23 17:57:02.154611 kernel: hid: raw HID events driver (C) Jiri Kosina
Jan 23 17:57:02.154630 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 (0,80000003) counters available
Jan 23 17:57:02.154649 kernel: NET: Registered PF_INET6 protocol family
Jan 23 17:57:02.154667 kernel: watchdog: NMI not fully supported
Jan 23 17:57:02.154685 kernel: watchdog: Hard watchdog permanently disabled
Jan 23 17:57:02.154738 kernel: Segment Routing with IPv6
Jan 23 17:57:02.154758 kernel: In-situ OAM (IOAM) with IPv6
Jan 23 17:57:02.154777 kernel: NET: Registered PF_PACKET protocol family
Jan 23 17:57:02.154801 kernel: Key type dns_resolver registered
Jan 23 17:57:02.154819 kernel: registered taskstats version 1
Jan 23 17:57:02.154837 kernel: Loading compiled-in X.509 certificates
Jan 23 17:57:02.154877 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.66-flatcar: 3b281aa2bfe49764dd224485ec54e6070c82b8fb'
Jan 23 17:57:02.154897 kernel: Demotion targets for Node 0: null
Jan 23 17:57:02.154916 kernel: Key type .fscrypt registered
Jan 23 17:57:02.154933 kernel: Key type fscrypt-provisioning registered
Jan 23 17:57:02.154952 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 23 17:57:02.154970 kernel: ima: Allocated hash algorithm: sha1
Jan 23 17:57:02.154994 kernel: ima: No architecture policies found
Jan 23 17:57:02.155013 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Jan 23 17:57:02.155030 kernel: clk: Disabling unused clocks
Jan 23 17:57:02.155048 kernel: PM: genpd: Disabling unused power domains
Jan 23 17:57:02.155067 kernel: Warning: unable to open an initial console.
Jan 23 17:57:02.155085 kernel: Freeing unused kernel memory: 39552K
Jan 23 17:57:02.155103 kernel: Run /init as init process
Jan 23 17:57:02.155121 kernel: with arguments:
Jan 23 17:57:02.155139 kernel: /init
Jan 23 17:57:02.155160 kernel: with environment:
Jan 23 17:57:02.155178 kernel: HOME=/
Jan 23 17:57:02.155196 kernel: TERM=linux
Jan 23 17:57:02.155238 systemd[1]: Successfully made /usr/ read-only.
Jan 23 17:57:02.155263 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jan 23 17:57:02.155284 systemd[1]: Detected virtualization amazon.
Jan 23 17:57:02.155303 systemd[1]: Detected architecture arm64.
Jan 23 17:57:02.155327 systemd[1]: Running in initrd.
Jan 23 17:57:02.155346 systemd[1]: No hostname configured, using default hostname.
Jan 23 17:57:02.155367 systemd[1]: Hostname set to .
Jan 23 17:57:02.155386 systemd[1]: Initializing machine ID from VM UUID.
Jan 23 17:57:02.155404 systemd[1]: Queued start job for default target initrd.target.
Jan 23 17:57:02.155424 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 23 17:57:02.155443 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 23 17:57:02.155464 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 23 17:57:02.155487 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 23 17:57:02.155508 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 23 17:57:02.155529 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 23 17:57:02.155550 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 23 17:57:02.155570 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 23 17:57:02.155591 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 23 17:57:02.155610 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 23 17:57:02.155633 systemd[1]: Reached target paths.target - Path Units.
Jan 23 17:57:02.155652 systemd[1]: Reached target slices.target - Slice Units.
Jan 23 17:57:02.155672 systemd[1]: Reached target swap.target - Swaps.
Jan 23 17:57:02.155691 systemd[1]: Reached target timers.target - Timer Units.
Jan 23 17:57:02.155710 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 23 17:57:02.155729 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 23 17:57:02.155749 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 23 17:57:02.155769 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Jan 23 17:57:02.155788 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 23 17:57:02.155812 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 23 17:57:02.155845 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 23 17:57:02.155908 systemd[1]: Reached target sockets.target - Socket Units.
Jan 23 17:57:02.155929 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 23 17:57:02.155949 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 23 17:57:02.155969 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 23 17:57:02.155989 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Jan 23 17:57:02.156008 systemd[1]: Starting systemd-fsck-usr.service...
Jan 23 17:57:02.156034 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 23 17:57:02.156054 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 23 17:57:02.156073 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 23 17:57:02.156092 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 23 17:57:02.156113 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 23 17:57:02.156137 systemd[1]: Finished systemd-fsck-usr.service.
Jan 23 17:57:02.156157 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 23 17:57:02.156177 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 23 17:57:02.156196 kernel: Bridge firewalling registered
Jan 23 17:57:02.156215 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 23 17:57:02.156276 systemd-journald[258]: Collecting audit messages is disabled.
Jan 23 17:57:02.156324 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 23 17:57:02.156345 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 23 17:57:02.156365 systemd-journald[258]: Journal started
Jan 23 17:57:02.156401 systemd-journald[258]: Runtime Journal (/run/log/journal/ec2757b02ea224a61022460123daa111) is 8M, max 75.3M, 67.3M free.
Jan 23 17:57:02.080443 systemd-modules-load[259]: Inserted module 'overlay'
Jan 23 17:57:02.118109 systemd-modules-load[259]: Inserted module 'br_netfilter'
Jan 23 17:57:02.173235 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 23 17:57:02.170755 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 23 17:57:02.181082 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 23 17:57:02.190174 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 23 17:57:02.198926 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 23 17:57:02.214593 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 23 17:57:02.233585 systemd-tmpfiles[283]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Jan 23 17:57:02.240988 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 23 17:57:02.252833 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 23 17:57:02.271138 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 23 17:57:02.281167 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 23 17:57:02.296123 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 23 17:57:02.357206 dracut-cmdline[300]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=5fc6d8e43735a6d26d13c2f5b234025ac82c601a45144671feeb457ddade8f9d
Jan 23 17:57:02.380991 systemd-resolved[293]: Positive Trust Anchors:
Jan 23 17:57:02.381026 systemd-resolved[293]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 23 17:57:02.381087 systemd-resolved[293]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 23 17:57:02.506892 kernel: SCSI subsystem initialized
Jan 23 17:57:02.514891 kernel: Loading iSCSI transport class v2.0-870.
Jan 23 17:57:02.526888 kernel: iscsi: registered transport (tcp)
Jan 23 17:57:02.549022 kernel: iscsi: registered transport (qla4xxx)
Jan 23 17:57:02.549106 kernel: QLogic iSCSI HBA Driver
Jan 23 17:57:02.582043 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jan 23 17:57:02.623432 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jan 23 17:57:02.634330 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jan 23 17:57:02.664937 kernel: random: crng init done
Jan 23 17:57:02.665188 systemd-resolved[293]: Defaulting to hostname 'linux'.
Jan 23 17:57:02.669007 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 23 17:57:02.672245 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 23 17:57:02.723903 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 23 17:57:02.731497 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 23 17:57:02.830920 kernel: raid6: neonx8 gen() 6561 MB/s
Jan 23 17:57:02.847887 kernel: raid6: neonx4 gen() 6602 MB/s
Jan 23 17:57:02.864885 kernel: raid6: neonx2 gen() 5475 MB/s
Jan 23 17:57:02.881891 kernel: raid6: neonx1 gen() 3955 MB/s
Jan 23 17:57:02.898887 kernel: raid6: int64x8 gen() 3673 MB/s
Jan 23 17:57:02.915889 kernel: raid6: int64x4 gen() 3729 MB/s
Jan 23 17:57:02.932885 kernel: raid6: int64x2 gen() 3619 MB/s
Jan 23 17:57:02.950935 kernel: raid6: int64x1 gen() 2765 MB/s
Jan 23 17:57:02.950967 kernel: raid6: using algorithm neonx4 gen() 6602 MB/s
Jan 23 17:57:02.969908 kernel: raid6: .... xor() 4596 MB/s, rmw enabled
Jan 23 17:57:02.969944 kernel: raid6: using neon recovery algorithm
Jan 23 17:57:02.978614 kernel: xor: measuring software checksum speed
Jan 23 17:57:02.978667 kernel: 8regs : 12938 MB/sec
Jan 23 17:57:02.978884 kernel: 32regs : 12097 MB/sec
Jan 23 17:57:02.982214 kernel: arm64_neon : 8699 MB/sec
Jan 23 17:57:02.982246 kernel: xor: using function: 8regs (12938 MB/sec)
Jan 23 17:57:03.074234 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 23 17:57:03.085943 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 23 17:57:03.096332 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 23 17:57:03.150405 systemd-udevd[508]: Using default interface naming scheme 'v255'.
Jan 23 17:57:03.161076 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 23 17:57:03.165635 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 23 17:57:03.204485 dracut-pre-trigger[510]: rd.md=0: removing MD RAID activation
Jan 23 17:57:03.253514 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 23 17:57:03.261875 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 23 17:57:03.389597 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 23 17:57:03.405992 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 23 17:57:03.567400 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Jan 23 17:57:03.567465 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35
Jan 23 17:57:03.567491 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012)
Jan 23 17:57:03.567780 kernel: nvme nvme0: pci function 0000:00:04.0
Jan 23 17:57:03.578210 kernel: ena 0000:00:05.0: ENA device version: 0.10
Jan 23 17:57:03.580896 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
Jan 23 17:57:03.576959 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 23 17:57:03.577209 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 23 17:57:03.586636 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 23 17:57:03.593884 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80110000, mac addr 06:a5:28:20:44:97
Jan 23 17:57:03.596655 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 23 17:57:03.602782 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Jan 23 17:57:03.608553 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Jan 23 17:57:03.615898 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jan 23 17:57:03.615951 kernel: GPT:9289727 != 33554431
Jan 23 17:57:03.615975 kernel: GPT:Alternate GPT header not at the end of the disk.
Jan 23 17:57:03.615999 kernel: GPT:9289727 != 33554431
Jan 23 17:57:03.618866 kernel: GPT: Use GNU Parted to correct GPT errors.
Jan 23 17:57:03.618922 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jan 23 17:57:03.630130 (udev-worker)[556]: Network interface NamePolicy= disabled on kernel command line.
Jan 23 17:57:03.653263 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 23 17:57:03.669881 kernel: nvme nvme0: using unchecked data buffer
Jan 23 17:57:03.761414 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM.
Jan 23 17:57:03.868266 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Jan 23 17:57:03.874949 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 23 17:57:03.901646 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT.
Jan 23 17:57:03.941514 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A.
Jan 23 17:57:03.944424 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A.
Jan 23 17:57:03.954162 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 23 17:57:03.959771 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 23 17:57:03.962946 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 23 17:57:03.971529 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 23 17:57:03.983627 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 23 17:57:04.019591 disk-uuid[688]: Primary Header is updated.
Jan 23 17:57:04.019591 disk-uuid[688]: Secondary Entries is updated.
Jan 23 17:57:04.019591 disk-uuid[688]: Secondary Header is updated.
Jan 23 17:57:04.028143 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 23 17:57:04.039440 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jan 23 17:57:04.066899 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jan 23 17:57:05.076648 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jan 23 17:57:05.078391 disk-uuid[697]: The operation has completed successfully.
Jan 23 17:57:05.270365 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 23 17:57:05.270924 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 23 17:57:05.356506 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 23 17:57:05.397285 sh[955]: Success
Jan 23 17:57:05.426103 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 23 17:57:05.426177 kernel: device-mapper: uevent: version 1.0.3
Jan 23 17:57:05.428246 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Jan 23 17:57:05.441902 kernel: device-mapper: verity: sha256 using shash "sha256-ce"
Jan 23 17:57:05.535306 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 23 17:57:05.543589 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 23 17:57:05.564444 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 23 17:57:05.584903 kernel: BTRFS: device fsid 8784b097-3924-47e8-98b3-06e8cbe78a64 devid 1 transid 37 /dev/mapper/usr (254:0) scanned by mount (978)
Jan 23 17:57:05.584967 kernel: BTRFS info (device dm-0): first mount of filesystem 8784b097-3924-47e8-98b3-06e8cbe78a64
Jan 23 17:57:05.589033 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Jan 23 17:57:05.727410 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Jan 23 17:57:05.727483 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 23 17:57:05.727509 kernel: BTRFS info (device dm-0): enabling free space tree
Jan 23 17:57:05.755612 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 23 17:57:05.760007 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Jan 23 17:57:05.765064 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 23 17:57:05.770588 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 23 17:57:05.778778 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 23 17:57:05.825926 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (1011)
Jan 23 17:57:05.831590 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem fef013c8-c90f-4bd4-8573-9f69d2a021ca
Jan 23 17:57:05.831660 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Jan 23 17:57:05.850741 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Jan 23 17:57:05.850811 kernel: BTRFS info (device nvme0n1p6): enabling free space tree
Jan 23 17:57:05.860925 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem fef013c8-c90f-4bd4-8573-9f69d2a021ca
Jan 23 17:57:05.864973 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 23 17:57:05.871528 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 23 17:57:05.961018 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 23 17:57:05.971531 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 23 17:57:06.042523 systemd-networkd[1156]: lo: Link UP
Jan 23 17:57:06.042544 systemd-networkd[1156]: lo: Gained carrier
Jan 23 17:57:06.048004 systemd-networkd[1156]: Enumeration completed
Jan 23 17:57:06.048586 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 23 17:57:06.051581 systemd-networkd[1156]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 23 17:57:06.051588 systemd-networkd[1156]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 23 17:57:06.059077 systemd-networkd[1156]: eth0: Link UP
Jan 23 17:57:06.059084 systemd-networkd[1156]: eth0: Gained carrier
Jan 23 17:57:06.059156 systemd-networkd[1156]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 23 17:57:06.074748 systemd[1]: Reached target network.target - Network.
Jan 23 17:57:06.092936 systemd-networkd[1156]: eth0: DHCPv4 address 172.31.28.159/20, gateway 172.31.16.1 acquired from 172.31.16.1
Jan 23 17:57:06.430027 ignition[1077]: Ignition 2.22.0
Jan 23 17:57:06.431948 ignition[1077]: Stage: fetch-offline
Jan 23 17:57:06.434546 ignition[1077]: no configs at "/usr/lib/ignition/base.d"
Jan 23 17:57:06.434584 ignition[1077]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 23 17:57:06.435248 ignition[1077]: Ignition finished successfully
Jan 23 17:57:06.443869 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 23 17:57:06.448721 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jan 23 17:57:06.496351 ignition[1169]: Ignition 2.22.0
Jan 23 17:57:06.496381 ignition[1169]: Stage: fetch
Jan 23 17:57:06.497693 ignition[1169]: no configs at "/usr/lib/ignition/base.d"
Jan 23 17:57:06.498039 ignition[1169]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 23 17:57:06.498345 ignition[1169]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 23 17:57:06.513836 ignition[1169]: PUT result: OK
Jan 23 17:57:06.517048 ignition[1169]: parsed url from cmdline: ""
Jan 23 17:57:06.517072 ignition[1169]: no config URL provided
Jan 23 17:57:06.517088 ignition[1169]: reading system config file "/usr/lib/ignition/user.ign"
Jan 23 17:57:06.517114 ignition[1169]: no config at "/usr/lib/ignition/user.ign"
Jan 23 17:57:06.517147 ignition[1169]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 23 17:57:06.519151 ignition[1169]: PUT result: OK
Jan 23 17:57:06.519269 ignition[1169]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1
Jan 23 17:57:06.523957 ignition[1169]: GET result: OK
Jan 23 17:57:06.524127 ignition[1169]: parsing config with SHA512: 093301e44b808d5e259109a8e46c7fa1d5923c52a0a7c89633be26d59e5fa1219f13019f10632825b18278898726aacc49a6e59a6d4f491923cb7622e8001b1c
Jan 23 17:57:06.540805 unknown[1169]: fetched base config from "system"
Jan 23 17:57:06.540846 unknown[1169]: fetched base config from "system"
Jan 23 17:57:06.540894 unknown[1169]: fetched user config from "aws"
Jan 23 17:57:06.545090 ignition[1169]: fetch: fetch complete
Jan 23 17:57:06.545111 ignition[1169]: fetch: fetch passed
Jan 23 17:57:06.545200 ignition[1169]: Ignition finished successfully
Jan 23 17:57:06.555078 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jan 23 17:57:06.561254 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 23 17:57:06.620009 ignition[1175]: Ignition 2.22.0
Jan 23 17:57:06.620517 ignition[1175]: Stage: kargs
Jan 23 17:57:06.621081 ignition[1175]: no configs at "/usr/lib/ignition/base.d"
Jan 23 17:57:06.621104 ignition[1175]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 23 17:57:06.621234 ignition[1175]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 23 17:57:06.630754 ignition[1175]: PUT result: OK
Jan 23 17:57:06.635314 ignition[1175]: kargs: kargs passed
Jan 23 17:57:06.635627 ignition[1175]: Ignition finished successfully
Jan 23 17:57:06.643965 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 23 17:57:06.652476 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 23 17:57:06.704187 ignition[1181]: Ignition 2.22.0
Jan 23 17:57:06.704710 ignition[1181]: Stage: disks
Jan 23 17:57:06.705334 ignition[1181]: no configs at "/usr/lib/ignition/base.d"
Jan 23 17:57:06.705356 ignition[1181]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 23 17:57:06.705520 ignition[1181]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 23 17:57:06.719440 ignition[1181]: PUT result: OK
Jan 23 17:57:06.724348 ignition[1181]: disks: disks passed
Jan 23 17:57:06.725405 ignition[1181]: Ignition finished successfully
Jan 23 17:57:06.730473 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 23 17:57:06.735375 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 23 17:57:06.740786 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 23 17:57:06.741706 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 23 17:57:06.742430 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 23 17:57:06.742806 systemd[1]: Reached target basic.target - Basic System.
Jan 23 17:57:06.759314 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 23 17:57:06.817435 systemd-fsck[1189]: ROOT: clean, 15/553520 files, 52789/553472 blocks
Jan 23 17:57:06.823498 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 23 17:57:06.830737 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 23 17:57:06.968889 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 5f1f19a2-81b4-48e9-bfdb-d3843ff70e8e r/w with ordered data mode. Quota mode: none.
Jan 23 17:57:06.970546 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 23 17:57:06.976783 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 23 17:57:06.984996 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 23 17:57:06.992007 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 23 17:57:06.997205 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jan 23 17:57:06.997286 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 23 17:57:06.997333 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 23 17:57:07.021520 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 23 17:57:07.027149 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 23 17:57:07.046256 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (1208)
Jan 23 17:57:07.046320 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem fef013c8-c90f-4bd4-8573-9f69d2a021ca
Jan 23 17:57:07.048730 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Jan 23 17:57:07.056499 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Jan 23 17:57:07.056559 kernel: BTRFS info (device nvme0n1p6): enabling free space tree
Jan 23 17:57:07.059444 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 23 17:57:07.383655 initrd-setup-root[1232]: cut: /sysroot/etc/passwd: No such file or directory
Jan 23 17:57:07.403662 initrd-setup-root[1239]: cut: /sysroot/etc/group: No such file or directory
Jan 23 17:57:07.427582 initrd-setup-root[1246]: cut: /sysroot/etc/shadow: No such file or directory
Jan 23 17:57:07.448714 initrd-setup-root[1253]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 23 17:57:07.777638 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 23 17:57:07.786508 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 23 17:57:07.795642 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 23 17:57:07.823779 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 23 17:57:07.826882 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem fef013c8-c90f-4bd4-8573-9f69d2a021ca
Jan 23 17:57:07.855702 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 23 17:57:07.880877 ignition[1320]: INFO : Ignition 2.22.0
Jan 23 17:57:07.880877 ignition[1320]: INFO : Stage: mount
Jan 23 17:57:07.884702 ignition[1320]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 23 17:57:07.884702 ignition[1320]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 23 17:57:07.884702 ignition[1320]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 23 17:57:07.892892 ignition[1320]: INFO : PUT result: OK
Jan 23 17:57:07.897394 ignition[1320]: INFO : mount: mount passed
Jan 23 17:57:07.899996 ignition[1320]: INFO : Ignition finished successfully
Jan 23 17:57:07.902389 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 23 17:57:07.908975 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 23 17:57:07.945985 systemd-networkd[1156]: eth0: Gained IPv6LL
Jan 23 17:57:07.973871 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 23 17:57:08.009905 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (1332)
Jan 23 17:57:08.014727 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem fef013c8-c90f-4bd4-8573-9f69d2a021ca
Jan 23 17:57:08.014781 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Jan 23 17:57:08.022036 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Jan 23 17:57:08.022116 kernel: BTRFS info (device nvme0n1p6): enabling free space tree
Jan 23 17:57:08.025249 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 23 17:57:08.074419 ignition[1349]: INFO : Ignition 2.22.0
Jan 23 17:57:08.076526 ignition[1349]: INFO : Stage: files
Jan 23 17:57:08.076526 ignition[1349]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 23 17:57:08.076526 ignition[1349]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 23 17:57:08.076526 ignition[1349]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 23 17:57:08.086123 ignition[1349]: INFO : PUT result: OK
Jan 23 17:57:08.094312 ignition[1349]: DEBUG : files: compiled without relabeling support, skipping
Jan 23 17:57:08.097105 ignition[1349]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jan 23 17:57:08.097105 ignition[1349]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 23 17:57:08.117327 ignition[1349]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 23 17:57:08.123523 ignition[1349]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jan 23 17:57:08.127277 unknown[1349]: wrote ssh authorized keys file for user: core
Jan 23 17:57:08.129809 ignition[1349]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 23 17:57:08.134460 ignition[1349]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Jan 23 17:57:08.134460 ignition[1349]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1
Jan 23 17:57:08.759132 ignition[1349]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jan 23 17:57:09.625844 ignition[1349]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Jan 23 17:57:09.630912 ignition[1349]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jan 23 17:57:09.630912 ignition[1349]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1
Jan 23 17:57:09.846418 ignition[1349]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Jan 23 17:57:09.963288 ignition[1349]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jan 23 17:57:09.963288 ignition[1349]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Jan 23 17:57:09.974337 ignition[1349]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Jan 23 17:57:09.974337 ignition[1349]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Jan 23 17:57:09.974337 ignition[1349]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jan 23 17:57:09.974337 ignition[1349]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 23 17:57:09.974337 ignition[1349]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 23 17:57:09.974337 ignition[1349]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 23 17:57:09.974337 ignition[1349]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 23 17:57:09.974337 ignition[1349]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jan 23 17:57:09.974337 ignition[1349]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 23 17:57:09.974337 ignition[1349]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Jan 23 17:57:10.016956 ignition[1349]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Jan 23 17:57:10.016956 ignition[1349]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Jan 23 17:57:10.016956 ignition[1349]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-arm64.raw: attempt #1
Jan 23 17:57:10.538608 ignition[1349]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Jan 23 17:57:10.944570 ignition[1349]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Jan 23 17:57:10.944570 ignition[1349]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Jan 23 17:57:10.959405 ignition[1349]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 23 17:57:10.964478 ignition[1349]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 23 17:57:10.964478 ignition[1349]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Jan 23 17:57:10.964478 ignition[1349]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service"
Jan 23 17:57:10.964478 ignition[1349]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service"
Jan 23 17:57:10.980167 ignition[1349]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json"
Jan 23 17:57:10.980167 ignition[1349]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jan 23 17:57:10.980167 ignition[1349]: INFO : files: files passed
Jan 23 17:57:10.980167 ignition[1349]: INFO : Ignition finished successfully
Jan 23 17:57:10.992160 systemd[1]: Finished ignition-files.service - Ignition (files).
Jan 23 17:57:11.000567 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jan 23 17:57:11.010946 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jan 23 17:57:11.028178 systemd[1]: ignition-quench.service: Deactivated successfully.
Jan 23 17:57:11.031495 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jan 23 17:57:11.050686 initrd-setup-root-after-ignition[1379]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 23 17:57:11.050686 initrd-setup-root-after-ignition[1379]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 23 17:57:11.058898 initrd-setup-root-after-ignition[1383]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 23 17:57:11.065809 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 23 17:57:11.072222 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 23 17:57:11.079598 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 23 17:57:11.173544 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 23 17:57:11.173799 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 23 17:57:11.180226 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 23 17:57:11.182703 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 23 17:57:11.188599 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 23 17:57:11.189948 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 23 17:57:11.233285 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 23 17:57:11.240326 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 23 17:57:11.285470 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 23 17:57:11.285886 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 23 17:57:11.295338 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 23 17:57:11.300668 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 23 17:57:11.303884 systemd[1]: Stopped target timers.target - Timer Units. Jan 23 17:57:11.310371 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 23 17:57:11.310622 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 23 17:57:11.319053 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 23 17:57:11.323767 systemd[1]: Stopped target basic.target - Basic System. Jan 23 17:57:11.326863 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 23 17:57:11.330483 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 23 17:57:11.333553 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 23 17:57:11.338164 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Jan 23 17:57:11.341330 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 23 17:57:11.345971 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 23 17:57:11.349221 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 23 17:57:11.356217 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 23 17:57:11.359294 systemd[1]: Stopped target swap.target - Swaps. Jan 23 17:57:11.366253 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 23 17:57:11.366360 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 23 17:57:11.371612 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. 
Jan 23 17:57:11.377763 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 23 17:57:11.380782 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 23 17:57:11.385002 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 23 17:57:11.387910 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 23 17:57:11.388753 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 23 17:57:11.397283 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 23 17:57:11.397378 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 23 17:57:11.400866 systemd[1]: ignition-files.service: Deactivated successfully. Jan 23 17:57:11.400949 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 23 17:57:11.410665 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 23 17:57:11.427009 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 23 17:57:11.427120 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 23 17:57:11.447267 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 23 17:57:11.455437 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 23 17:57:11.459153 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 23 17:57:11.464843 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 23 17:57:11.465499 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 23 17:57:11.500257 ignition[1404]: INFO : Ignition 2.22.0 Jan 23 17:57:11.503669 ignition[1404]: INFO : Stage: umount Jan 23 17:57:11.503669 ignition[1404]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 23 17:57:11.503669 ignition[1404]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 23 17:57:11.503669 ignition[1404]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 23 17:57:11.517024 ignition[1404]: INFO : PUT result: OK Jan 23 17:57:11.520196 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 23 17:57:11.525568 ignition[1404]: INFO : umount: umount passed Jan 23 17:57:11.525568 ignition[1404]: INFO : Ignition finished successfully Jan 23 17:57:11.532638 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 23 17:57:11.533068 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 23 17:57:11.542406 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 23 17:57:11.542498 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 23 17:57:11.545315 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 23 17:57:11.545397 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 23 17:57:11.554294 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 23 17:57:11.554430 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 23 17:57:11.559191 systemd[1]: Stopped target network.target - Network. Jan 23 17:57:11.562789 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 23 17:57:11.562959 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 23 17:57:11.567627 systemd[1]: Stopped target paths.target - Path Units. Jan 23 17:57:11.569557 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. 
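
[Both Ignition stages authenticate to the EC2 instance metadata service the same way: the PUT to /latest/api/token obtains an IMDSv2 session token, which is then presented on every subsequent GET. The equivalent exchange by hand, using the documented IMDSv2 headers; the TTL value here is an arbitrary choice:]

    TOKEN=$(curl -s -X PUT "http://169.254.169.254/latest/api/token" \
      -H "X-aws-ec2-metadata-token-ttl-seconds: 21600")
    curl -s -H "X-aws-ec2-metadata-token: $TOKEN" \
      "http://169.254.169.254/2021-01-03/meta-data/instance-id"
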
Jan 23 17:57:11.573605 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 23 17:57:11.573731 systemd[1]: Stopped target slices.target - Slice Units. Jan 23 17:57:11.580750 systemd[1]: Stopped target sockets.target - Socket Units. Jan 23 17:57:11.582756 systemd[1]: iscsid.socket: Deactivated successfully. Jan 23 17:57:11.582829 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 23 17:57:11.587288 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 23 17:57:11.587356 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 23 17:57:11.592617 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 23 17:57:11.592710 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 23 17:57:11.599200 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 23 17:57:11.599297 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 23 17:57:11.602026 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 23 17:57:11.604488 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 23 17:57:11.643579 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 23 17:57:11.646703 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 23 17:57:11.654526 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Jan 23 17:57:11.657516 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 23 17:57:11.659370 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 23 17:57:11.668186 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Jan 23 17:57:11.669290 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 23 17:57:11.669490 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 23 17:57:11.677348 systemd[1]: Stopped target network-pre.target - Preparation for Network. Jan 23 17:57:11.682343 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 23 17:57:11.682449 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 23 17:57:11.683056 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 23 17:57:11.683154 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 23 17:57:11.686008 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 23 17:57:11.686469 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 23 17:57:11.686566 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 23 17:57:11.690736 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 23 17:57:11.693164 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 23 17:57:11.701524 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 23 17:57:11.701641 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 23 17:57:11.704366 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 23 17:57:11.706867 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 23 17:57:11.715305 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 23 17:57:11.739571 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. 
Jan 23 17:57:11.739711 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Jan 23 17:57:11.762380 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 23 17:57:11.771603 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 23 17:57:11.778223 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 23 17:57:11.778321 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 23 17:57:11.787497 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 23 17:57:11.787572 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 23 17:57:11.792953 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 23 17:57:11.793057 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 23 17:57:11.801334 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 23 17:57:11.801431 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 23 17:57:11.809104 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 23 17:57:11.809197 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 23 17:57:11.818695 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 23 17:57:11.826523 systemd[1]: systemd-network-generator.service: Deactivated successfully. Jan 23 17:57:11.826657 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Jan 23 17:57:11.835280 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 23 17:57:11.835377 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 23 17:57:11.844087 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 23 17:57:11.844175 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 23 17:57:11.854509 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully. Jan 23 17:57:11.854615 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Jan 23 17:57:11.854699 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Jan 23 17:57:11.860377 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 23 17:57:11.863936 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 23 17:57:11.887943 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 23 17:57:11.888192 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 23 17:57:11.892380 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 23 17:57:11.901402 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 23 17:57:11.935631 systemd[1]: Switching root. Jan 23 17:57:12.023318 systemd-journald[258]: Journal stopped Jan 23 17:57:14.847572 systemd-journald[258]: Received SIGTERM from PID 1 (systemd). 
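
[The "Switching root" / "Journal stopped" pair marks the handoff from the initramfs to the real root filesystem: journald receives SIGTERM from PID 1 and is restarted after the switch, which is why the journal resumes below with a new timestamp. Conceptually, initrd-switch-root.service performs the equivalent of the following (a sketch, not the literal unit contents):]

    # from inside the initrd, once /sysroot is fully assembled
    systemctl --no-block switch-root /sysroot
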
Jan 23 17:57:14.847705 kernel: SELinux: policy capability network_peer_controls=1 Jan 23 17:57:14.847747 kernel: SELinux: policy capability open_perms=1 Jan 23 17:57:14.847775 kernel: SELinux: policy capability extended_socket_class=1 Jan 23 17:57:14.847827 kernel: SELinux: policy capability always_check_network=0 Jan 23 17:57:14.847885 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 23 17:57:14.847921 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 23 17:57:14.847952 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 23 17:57:14.847980 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 23 17:57:14.848009 kernel: SELinux: policy capability userspace_initial_context=0 Jan 23 17:57:14.848038 kernel: audit: type=1403 audit(1769191032.629:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 23 17:57:14.848070 systemd[1]: Successfully loaded SELinux policy in 127.215ms. Jan 23 17:57:14.848121 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 15.003ms. Jan 23 17:57:14.848160 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jan 23 17:57:14.848191 systemd[1]: Detected virtualization amazon. Jan 23 17:57:14.848229 systemd[1]: Detected architecture arm64. Jan 23 17:57:14.848257 systemd[1]: Detected first boot. Jan 23 17:57:14.848295 systemd[1]: Initializing machine ID from VM UUID. Jan 23 17:57:14.848326 zram_generator::config[1446]: No configuration found. Jan 23 17:57:14.848358 kernel: NET: Registered PF_VSOCK protocol family Jan 23 17:57:14.848389 systemd[1]: Populated /etc with preset unit settings. Jan 23 17:57:14.848422 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Jan 23 17:57:14.848455 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 23 17:57:14.848482 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 23 17:57:14.848512 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 23 17:57:14.848542 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 23 17:57:14.848574 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 23 17:57:14.848605 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 23 17:57:14.848632 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 23 17:57:14.848664 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 23 17:57:14.848699 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 23 17:57:14.848731 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 23 17:57:14.848758 systemd[1]: Created slice user.slice - User and Session Slice. Jan 23 17:57:14.848795 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 23 17:57:14.848826 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 23 17:57:14.851037 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. 
Jan 23 17:57:14.851096 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 23 17:57:14.851132 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 23 17:57:14.851186 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 23 17:57:14.851227 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 23 17:57:14.851258 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 23 17:57:14.851286 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 23 17:57:14.851314 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 23 17:57:14.851346 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 23 17:57:14.851376 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 23 17:57:14.851406 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 23 17:57:14.851441 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 23 17:57:14.851471 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 23 17:57:14.851499 systemd[1]: Reached target slices.target - Slice Units. Jan 23 17:57:14.851529 systemd[1]: Reached target swap.target - Swaps. Jan 23 17:57:14.851558 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 23 17:57:14.851596 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 23 17:57:14.851624 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Jan 23 17:57:14.851653 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 23 17:57:14.851683 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 23 17:57:14.851715 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 23 17:57:14.851748 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 23 17:57:14.851776 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 23 17:57:14.851812 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 23 17:57:14.851841 systemd[1]: Mounting media.mount - External Media Directory... Jan 23 17:57:14.851917 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 23 17:57:14.853942 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 23 17:57:14.853987 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 23 17:57:14.854018 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 23 17:57:14.854083 systemd[1]: Reached target machines.target - Containers. Jan 23 17:57:14.854119 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 23 17:57:14.854148 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 23 17:57:14.854176 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 23 17:57:14.854204 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 23 17:57:14.854232 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... 
Jan 23 17:57:14.854260 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 23 17:57:14.854293 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 23 17:57:14.854325 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 23 17:57:14.854361 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 23 17:57:14.854392 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 23 17:57:14.854424 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 23 17:57:14.854453 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 23 17:57:14.854480 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 23 17:57:14.854508 systemd[1]: Stopped systemd-fsck-usr.service. Jan 23 17:57:14.854537 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 23 17:57:14.854565 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 23 17:57:14.854597 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 23 17:57:14.854625 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 23 17:57:14.854656 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 23 17:57:14.854686 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Jan 23 17:57:14.854714 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 23 17:57:14.854747 systemd[1]: verity-setup.service: Deactivated successfully. Jan 23 17:57:14.854776 systemd[1]: Stopped verity-setup.service. Jan 23 17:57:14.854803 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 23 17:57:14.854831 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 23 17:57:14.858478 systemd[1]: Mounted media.mount - External Media Directory. Jan 23 17:57:14.858523 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 23 17:57:14.858561 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 23 17:57:14.858593 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 23 17:57:14.858621 kernel: fuse: init (API version 7.41) Jan 23 17:57:14.858651 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 23 17:57:14.858679 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 23 17:57:14.858707 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 23 17:57:14.858735 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 23 17:57:14.858764 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 23 17:57:14.858792 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 23 17:57:14.858824 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 23 17:57:14.858888 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 23 17:57:14.858920 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 23 17:57:14.858950 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. 
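
[The modprobe@configfs/dm_mod/efi_pstore/fuse/loop jobs above and below are all instances of systemd's modprobe@.service template, which loads the kernel module named by the instance suffix. An abridged sketch of how such a template works; the stock unit carries a few more directives:]

    [Unit]
    Description=Load Kernel Module %i
    DefaultDependencies=no

    [Service]
    Type=oneshot
    RemainAfterExit=yes
    ExecStart=-/usr/sbin/modprobe -abq %i

[So starting modprobe@loop.service expands %i to "loop"; the leading "-" on ExecStart makes a failed modprobe non-fatal.]
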
Jan 23 17:57:14.858979 kernel: loop: module loaded Jan 23 17:57:14.859009 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 23 17:57:14.859037 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 23 17:57:14.859065 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 23 17:57:14.859092 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 23 17:57:14.859125 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 23 17:57:14.859174 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 23 17:57:14.859205 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 23 17:57:14.859237 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 23 17:57:14.859266 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 23 17:57:14.859295 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Jan 23 17:57:14.859325 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 23 17:57:14.859353 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 23 17:57:14.859440 systemd-journald[1529]: Collecting audit messages is disabled. Jan 23 17:57:14.859489 kernel: ACPI: bus type drm_connector registered Jan 23 17:57:14.859519 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 23 17:57:14.859549 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 23 17:57:14.859581 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 23 17:57:14.859612 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 23 17:57:14.859639 systemd-journald[1529]: Journal started Jan 23 17:57:14.859686 systemd-journald[1529]: Runtime Journal (/run/log/journal/ec2757b02ea224a61022460123daa111) is 8M, max 75.3M, 67.3M free. Jan 23 17:57:14.127375 systemd[1]: Queued start job for default target multi-user.target. Jan 23 17:57:14.139636 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. Jan 23 17:57:14.140487 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 23 17:57:14.870450 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 23 17:57:14.880896 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 23 17:57:14.887975 systemd[1]: Started systemd-journald.service - Journal Service. Jan 23 17:57:14.893370 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 23 17:57:14.897606 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 23 17:57:14.899087 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 23 17:57:14.902456 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Jan 23 17:57:14.906250 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 23 17:57:14.909387 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. 
Jan 23 17:57:14.965779 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 23 17:57:14.972826 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 23 17:57:14.976863 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 23 17:57:14.981989 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 23 17:57:14.988706 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Jan 23 17:57:15.008953 kernel: loop0: detected capacity change from 0 to 61264 Jan 23 17:57:15.050409 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 23 17:57:15.053521 systemd-journald[1529]: Time spent on flushing to /var/log/journal/ec2757b02ea224a61022460123daa111 is 37.353ms for 934 entries. Jan 23 17:57:15.053521 systemd-journald[1529]: System Journal (/var/log/journal/ec2757b02ea224a61022460123daa111) is 8M, max 195.6M, 187.6M free. Jan 23 17:57:15.107504 systemd-journald[1529]: Received client request to flush runtime journal. Jan 23 17:57:15.055296 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Jan 23 17:57:15.112437 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 23 17:57:15.139562 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 23 17:57:15.143942 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 23 17:57:15.147898 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 23 17:57:15.159559 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 23 17:57:15.169123 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 23 17:57:15.191984 kernel: loop1: detected capacity change from 0 to 119840 Jan 23 17:57:15.215822 systemd-tmpfiles[1599]: ACLs are not supported, ignoring. Jan 23 17:57:15.216418 systemd-tmpfiles[1599]: ACLs are not supported, ignoring. Jan 23 17:57:15.229071 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 23 17:57:15.316527 kernel: loop2: detected capacity change from 0 to 211168 Jan 23 17:57:15.358932 kernel: loop3: detected capacity change from 0 to 100632 Jan 23 17:57:15.469900 kernel: loop4: detected capacity change from 0 to 61264 Jan 23 17:57:15.489905 kernel: loop5: detected capacity change from 0 to 119840 Jan 23 17:57:15.515272 kernel: loop6: detected capacity change from 0 to 211168 Jan 23 17:57:15.557895 kernel: loop7: detected capacity change from 0 to 100632 Jan 23 17:57:15.570788 (sd-merge)[1607]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'. Jan 23 17:57:15.579916 (sd-merge)[1607]: Merged extensions into '/usr'. Jan 23 17:57:15.589312 systemd[1]: Reload requested from client PID 1561 ('systemd-sysext') (unit systemd-sysext.service)... Jan 23 17:57:15.589351 systemd[1]: Reloading... Jan 23 17:57:15.760881 zram_generator::config[1636]: No configuration found. Jan 23 17:57:16.248074 systemd[1]: Reloading finished in 657 ms. Jan 23 17:57:16.268672 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 23 17:57:16.272260 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 23 17:57:16.291943 systemd[1]: Starting ensure-sysext.service... 
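
[The (sd-merge) lines show systemd-sysext overlaying the extension images named in the log (containerd-flatcar, docker-flatcar, kubernetes, oem-ami) onto /usr, followed by the daemon reload. For an image such as kubernetes-v1.33.0-arm64.raw to be eligible for merging it must embed an extension-release file whose ID matches the host, roughly like (illustrative values, keyed to Flatcar):]

    # inside the image: usr/lib/extension-release.d/extension-release.kubernetes
    ID=flatcar
    SYSEXT_LEVEL=1.0

[At runtime, systemd-sysext status lists which images are currently merged into the /usr hierarchy.]
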
Jan 23 17:57:16.297170 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 23 17:57:16.306219 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 23 17:57:16.347055 systemd[1]: Reload requested from client PID 1685 ('systemctl') (unit ensure-sysext.service)... Jan 23 17:57:16.347086 systemd[1]: Reloading... Jan 23 17:57:16.410124 systemd-udevd[1687]: Using default interface naming scheme 'v255'. Jan 23 17:57:16.415723 systemd-tmpfiles[1686]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Jan 23 17:57:16.415795 systemd-tmpfiles[1686]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Jan 23 17:57:16.416425 systemd-tmpfiles[1686]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 23 17:57:16.417611 systemd-tmpfiles[1686]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 23 17:57:16.422566 systemd-tmpfiles[1686]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 23 17:57:16.423241 systemd-tmpfiles[1686]: ACLs are not supported, ignoring. Jan 23 17:57:16.423388 systemd-tmpfiles[1686]: ACLs are not supported, ignoring. Jan 23 17:57:16.440045 systemd-tmpfiles[1686]: Detected autofs mount point /boot during canonicalization of boot. Jan 23 17:57:16.440071 systemd-tmpfiles[1686]: Skipping /boot Jan 23 17:57:16.470241 systemd-tmpfiles[1686]: Detected autofs mount point /boot during canonicalization of boot. Jan 23 17:57:16.470271 systemd-tmpfiles[1686]: Skipping /boot Jan 23 17:57:16.573901 zram_generator::config[1715]: No configuration found. Jan 23 17:57:16.791878 ldconfig[1557]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 23 17:57:16.879053 (udev-worker)[1719]: Network interface NamePolicy= disabled on kernel command line. Jan 23 17:57:17.133520 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jan 23 17:57:17.134759 systemd[1]: Reloading finished in 787 ms. Jan 23 17:57:17.167110 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 23 17:57:17.170687 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 23 17:57:17.195014 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 23 17:57:17.247544 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 23 17:57:17.254406 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 23 17:57:17.261526 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 23 17:57:17.272709 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 23 17:57:17.319018 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 23 17:57:17.330430 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 23 17:57:17.340697 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 23 17:57:17.344406 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 23 17:57:17.354380 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... 
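
[The "Duplicate line for path ..., ignoring" warnings below mean two tmpfiles.d entries claim the same path; systemd-tmpfiles keeps the first match and logs the rest. A contrived fragment that reproduces the warning; only the path comes from the log, the mode and owners are assumptions:]

    # /etc/tmpfiles.d/example.conf -- illustrative only
    d /var/lib/nfs/sm 0700 root root -
    d /var/lib/nfs/sm 0700 root root -   # duplicate: ignored with a warning
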
Jan 23 17:57:17.360334 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 23 17:57:17.362896 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 23 17:57:17.363115 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 23 17:57:17.375029 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 23 17:57:17.384751 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 23 17:57:17.386901 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 23 17:57:17.387126 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 23 17:57:17.401804 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 23 17:57:17.406071 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 23 17:57:17.409588 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 23 17:57:17.409825 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 23 17:57:17.410182 systemd[1]: Reached target time-set.target - System Time Set. Jan 23 17:57:17.416025 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 23 17:57:17.431340 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 23 17:57:17.448202 systemd[1]: Finished ensure-sysext.service. Jan 23 17:57:17.468965 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 23 17:57:17.486993 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 23 17:57:17.516613 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 23 17:57:17.518998 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 23 17:57:17.587579 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 23 17:57:17.589947 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 23 17:57:17.593494 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 23 17:57:17.594133 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 23 17:57:17.598609 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 23 17:57:17.599225 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 23 17:57:17.603793 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 23 17:57:17.603947 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 23 17:57:17.612741 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. 
Jan 23 17:57:17.616179 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 23 17:57:17.646588 augenrules[1909]: No rules Jan 23 17:57:17.651609 systemd[1]: audit-rules.service: Deactivated successfully. Jan 23 17:57:17.652151 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 23 17:57:17.734959 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 23 17:57:17.936416 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Jan 23 17:57:17.937827 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 23 17:57:17.947714 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 23 17:57:18.008962 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 23 17:57:18.087403 systemd-networkd[1830]: lo: Link UP Jan 23 17:57:18.087990 systemd-networkd[1830]: lo: Gained carrier Jan 23 17:57:18.091277 systemd-networkd[1830]: Enumeration completed Jan 23 17:57:18.091684 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 23 17:57:18.092768 systemd-networkd[1830]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 23 17:57:18.092777 systemd-networkd[1830]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 23 17:57:18.097361 systemd-resolved[1831]: Positive Trust Anchors: Jan 23 17:57:18.097385 systemd-resolved[1831]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 23 17:57:18.097448 systemd-resolved[1831]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 23 17:57:18.099767 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Jan 23 17:57:18.105804 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 23 17:57:18.109721 systemd-networkd[1830]: eth0: Link UP Jan 23 17:57:18.110163 systemd-networkd[1830]: eth0: Gained carrier Jan 23 17:57:18.110202 systemd-networkd[1830]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 23 17:57:18.117804 systemd-resolved[1831]: Defaulting to hostname 'linux'. Jan 23 17:57:18.121499 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 23 17:57:18.122448 systemd[1]: Reached target network.target - Network. Jan 23 17:57:18.125514 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 23 17:57:18.130004 systemd-networkd[1830]: eth0: DHCPv4 address 172.31.28.159/20, gateway 172.31.16.1 acquired from 172.31.16.1 Jan 23 17:57:18.167969 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
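
[Below, eth0 is matched by /usr/lib/systemd/network/zz-default.network, hence the "potentially unpredictable interface name" note and the DHCPv4 lease for 172.31.28.159/20. A catch-all network file of that kind looks roughly like this sketch; the shipped file may carry more options:]

    [Match]
    Name=*

    [Network]
    DHCP=yes
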
Jan 23 17:57:18.171617 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Jan 23 17:57:18.176718 systemd[1]: Reached target sysinit.target - System Initialization. Jan 23 17:57:18.180018 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 23 17:57:18.183419 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 23 17:57:18.186789 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 23 17:57:18.189622 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 23 17:57:18.192670 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 23 17:57:18.195587 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 23 17:57:18.195650 systemd[1]: Reached target paths.target - Path Units. Jan 23 17:57:18.197838 systemd[1]: Reached target timers.target - Timer Units. Jan 23 17:57:18.201365 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 23 17:57:18.206201 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 23 17:57:18.213638 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Jan 23 17:57:18.216965 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Jan 23 17:57:18.220143 systemd[1]: Reached target ssh-access.target - SSH Access Available. Jan 23 17:57:18.226196 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 23 17:57:18.229567 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Jan 23 17:57:18.233820 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 23 17:57:18.237079 systemd[1]: Reached target sockets.target - Socket Units. Jan 23 17:57:18.239386 systemd[1]: Reached target basic.target - Basic System. Jan 23 17:57:18.241677 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 23 17:57:18.241733 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 23 17:57:18.244234 systemd[1]: Starting containerd.service - containerd container runtime... Jan 23 17:57:18.252161 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 23 17:57:18.259297 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 23 17:57:18.264358 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 23 17:57:18.272213 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 23 17:57:18.281344 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 23 17:57:18.283916 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 23 17:57:18.292778 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 23 17:57:18.307348 systemd[1]: Started ntpd.service - Network Time Service. Jan 23 17:57:18.319739 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 23 17:57:18.329247 systemd[1]: Starting setup-oem.service - Setup OEM... 
Jan 23 17:57:18.337359 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 23 17:57:18.347105 jq[1975]: false Jan 23 17:57:18.348399 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 23 17:57:18.364027 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 23 17:57:18.369220 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 23 17:57:18.371310 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 23 17:57:18.379324 systemd[1]: Starting update-engine.service - Update Engine... Jan 23 17:57:18.390235 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 23 17:57:18.402525 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 23 17:57:18.407542 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 23 17:57:18.408014 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 23 17:57:18.426888 extend-filesystems[1976]: Found /dev/nvme0n1p6 Jan 23 17:57:18.471023 extend-filesystems[1976]: Found /dev/nvme0n1p9 Jan 23 17:57:18.483576 extend-filesystems[1976]: Checking size of /dev/nvme0n1p9 Jan 23 17:57:18.493589 jq[1992]: true Jan 23 17:57:18.518958 tar[2003]: linux-arm64/LICENSE Jan 23 17:57:18.519449 tar[2003]: linux-arm64/helm Jan 23 17:57:18.533688 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 23 17:57:18.536055 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 23 17:57:18.557052 systemd[1]: motdgen.service: Deactivated successfully. Jan 23 17:57:18.561016 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 23 17:57:18.614071 extend-filesystems[1976]: Resized partition /dev/nvme0n1p9 Jan 23 17:57:18.622775 dbus-daemon[1973]: [system] SELinux support is enabled Jan 23 17:57:18.623094 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 23 17:57:18.638470 dbus-daemon[1973]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1830 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Jan 23 17:57:18.629442 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 23 17:57:18.629488 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 23 17:57:18.632614 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 23 17:57:18.632647 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 23 17:57:18.642559 dbus-daemon[1973]: [system] Successfully activated service 'org.freedesktop.systemd1' Jan 23 17:57:18.649232 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... 
Jan 23 17:57:18.660920 extend-filesystems[2027]: resize2fs 1.47.3 (8-Jul-2025) Jan 23 17:57:18.660778 (ntainerd)[2018]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 23 17:57:18.668513 ntpd[1978]: 23 Jan 17:57:18 ntpd[1978]: ntpd 4.2.8p18@1.4062-o Fri Jan 23 15:31:01 UTC 2026 (1): Starting Jan 23 17:57:18.668513 ntpd[1978]: 23 Jan 17:57:18 ntpd[1978]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jan 23 17:57:18.668513 ntpd[1978]: 23 Jan 17:57:18 ntpd[1978]: ---------------------------------------------------- Jan 23 17:57:18.668513 ntpd[1978]: 23 Jan 17:57:18 ntpd[1978]: ntp-4 is maintained by Network Time Foundation, Jan 23 17:57:18.668513 ntpd[1978]: 23 Jan 17:57:18 ntpd[1978]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jan 23 17:57:18.668513 ntpd[1978]: 23 Jan 17:57:18 ntpd[1978]: corporation. Support and training for ntp-4 are Jan 23 17:57:18.668513 ntpd[1978]: 23 Jan 17:57:18 ntpd[1978]: available at https://www.nwtime.org/support Jan 23 17:57:18.668513 ntpd[1978]: 23 Jan 17:57:18 ntpd[1978]: ---------------------------------------------------- Jan 23 17:57:18.664564 ntpd[1978]: ntpd 4.2.8p18@1.4062-o Fri Jan 23 15:31:01 UTC 2026 (1): Starting Jan 23 17:57:18.664672 ntpd[1978]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jan 23 17:57:18.664692 ntpd[1978]: ---------------------------------------------------- Jan 23 17:57:18.664709 ntpd[1978]: ntp-4 is maintained by Network Time Foundation, Jan 23 17:57:18.664725 ntpd[1978]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jan 23 17:57:18.664741 ntpd[1978]: corporation. Support and training for ntp-4 are Jan 23 17:57:18.664758 ntpd[1978]: available at https://www.nwtime.org/support Jan 23 17:57:18.664773 ntpd[1978]: ---------------------------------------------------- Jan 23 17:57:18.682942 jq[2011]: true Jan 23 17:57:18.687889 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 3587067 blocks Jan 23 17:57:18.687975 ntpd[1978]: 23 Jan 17:57:18 ntpd[1978]: proto: precision = 0.108 usec (-23) Jan 23 17:57:18.688120 coreos-metadata[1972]: Jan 23 17:57:18.687 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Jan 23 17:57:18.686685 ntpd[1978]: proto: precision = 0.108 usec (-23) Jan 23 17:57:18.695532 ntpd[1978]: basedate set to 2026-01-11 Jan 23 17:57:18.697456 coreos-metadata[1972]: Jan 23 17:57:18.697 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Jan 23 17:57:18.697558 ntpd[1978]: 23 Jan 17:57:18 ntpd[1978]: basedate set to 2026-01-11 Jan 23 17:57:18.697558 ntpd[1978]: 23 Jan 17:57:18 ntpd[1978]: gps base set to 2026-01-11 (week 2401) Jan 23 17:57:18.697558 ntpd[1978]: 23 Jan 17:57:18 ntpd[1978]: Listen and drop on 0 v6wildcard [::]:123 Jan 23 17:57:18.697558 ntpd[1978]: 23 Jan 17:57:18 ntpd[1978]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jan 23 17:57:18.697558 ntpd[1978]: 23 Jan 17:57:18 ntpd[1978]: Listen normally on 2 lo 127.0.0.1:123 Jan 23 17:57:18.697558 ntpd[1978]: 23 Jan 17:57:18 ntpd[1978]: Listen normally on 3 eth0 172.31.28.159:123 Jan 23 17:57:18.697558 ntpd[1978]: 23 Jan 17:57:18 ntpd[1978]: Listen normally on 4 lo [::1]:123 Jan 23 17:57:18.697558 ntpd[1978]: 23 Jan 17:57:18 ntpd[1978]: bind(21) AF_INET6 [fe80::4a5:28ff:fe20:4497%2]:123 flags 0x811 failed: Cannot assign requested address Jan 23 17:57:18.697558 ntpd[1978]: 23 Jan 17:57:18 ntpd[1978]: unable to create socket on eth0 (5) for [fe80::4a5:28ff:fe20:4497%2]:123
Jan 23 17:57:18.695574 ntpd[1978]: gps base set to 2026-01-11 (week 2401) Jan 23 17:57:18.695756 ntpd[1978]: Listen and drop on 0 v6wildcard [::]:123 Jan 23 17:57:18.695801 ntpd[1978]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jan 23 17:57:18.696129 ntpd[1978]: Listen normally on 2 lo 127.0.0.1:123 Jan 23 17:57:18.696174 ntpd[1978]: Listen normally on 3 eth0 172.31.28.159:123 Jan 23 17:57:18.696220 ntpd[1978]: Listen normally on 4 lo [::1]:123 Jan 23 17:57:18.696266 ntpd[1978]: bind(21) AF_INET6 [fe80::4a5:28ff:fe20:4497%2]:123 flags 0x811 failed: Cannot assign requested address Jan 23 17:57:18.696302 ntpd[1978]: unable to create socket on eth0 (5) for [fe80::4a5:28ff:fe20:4497%2]:123 Jan 23 17:57:18.709769 systemd-coredump[2030]: Process 1978 (ntpd) of user 0 terminated abnormally with signal 11/SEGV, processing... Jan 23 17:57:18.721525 coreos-metadata[1972]: Jan 23 17:57:18.711 INFO Fetch successful Jan 23 17:57:18.721525 coreos-metadata[1972]: Jan 23 17:57:18.711 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Jan 23 17:57:18.721525 coreos-metadata[1972]: Jan 23 17:57:18.715 INFO Fetch successful Jan 23 17:57:18.721525 coreos-metadata[1972]: Jan 23 17:57:18.715 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Jan 23 17:57:18.718089 systemd[1]: Created slice system-systemd\x2dcoredump.slice - Slice /system/systemd-coredump. Jan 23 17:57:18.726897 coreos-metadata[1972]: Jan 23 17:57:18.722 INFO Fetch successful Jan 23 17:57:18.726897 coreos-metadata[1972]: Jan 23 17:57:18.722 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Jan 23 17:57:18.727657 systemd[1]: Started systemd-coredump@0-2030-0.service - Process Core Dump (PID 2030/UID 0). Jan 23 17:57:18.735976 update_engine[1988]: I20260123 17:57:18.713774 1988 main.cc:92] Flatcar Update Engine starting Jan 23 17:57:18.736476 coreos-metadata[1972]: Jan 23 17:57:18.733 INFO Fetch successful Jan 23 17:57:18.736476 coreos-metadata[1972]: Jan 23 17:57:18.733 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Jan 23 17:57:18.739767 coreos-metadata[1972]: Jan 23 17:57:18.736 INFO Fetch failed with 404: resource not found Jan 23 17:57:18.739767 coreos-metadata[1972]: Jan 23 17:57:18.736 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Jan 23 17:57:18.746705 coreos-metadata[1972]: Jan 23 17:57:18.742 INFO Fetch successful Jan 23 17:57:18.746705 coreos-metadata[1972]: Jan 23 17:57:18.742 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Jan 23 17:57:18.746705 coreos-metadata[1972]: Jan 23 17:57:18.744 INFO Fetch successful Jan 23 17:57:18.746705 coreos-metadata[1972]: Jan 23 17:57:18.744 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Jan 23 17:57:18.754968 systemd[1]: Started update-engine.service - Update Engine. Jan 23 17:57:18.766678 coreos-metadata[1972]: Jan 23 17:57:18.757 INFO Fetch successful Jan 23 17:57:18.766678 coreos-metadata[1972]: Jan 23 17:57:18.757 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Jan 23 17:57:18.766678 coreos-metadata[1972]: Jan 23 17:57:18.757 INFO Fetch successful Jan 23 17:57:18.766678 coreos-metadata[1972]: Jan 23 17:57:18.757 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Jan 23 17:57:18.761104 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Jan 23 17:57:18.776215 coreos-metadata[1972]: Jan 23 17:57:18.769 INFO Fetch successful Jan 23 17:57:18.776334 update_engine[1988]: I20260123 17:57:18.770169 1988 update_check_scheduler.cc:74] Next update check in 2m28s Jan 23 17:57:18.792805 systemd[1]: Finished setup-oem.service - Setup OEM. Jan 23 17:57:18.888922 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 3587067 Jan 23 17:57:18.885875 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 23 17:57:18.888757 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 23 17:57:18.908948 extend-filesystems[2027]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Jan 23 17:57:18.908948 extend-filesystems[2027]: old_desc_blocks = 1, new_desc_blocks = 2 Jan 23 17:57:18.908948 extend-filesystems[2027]: The filesystem on /dev/nvme0n1p9 is now 3587067 (4k) blocks long. Jan 23 17:57:18.938080 extend-filesystems[1976]: Resized filesystem in /dev/nvme0n1p9 Jan 23 17:57:18.914345 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 23 17:57:18.914938 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 23 17:57:18.962327 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 23 17:57:18.982817 bash[2065]: Updated "/home/core/.ssh/authorized_keys" Jan 23 17:57:18.985560 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 23 17:57:18.994259 systemd[1]: Starting sshkeys.service... Jan 23 17:57:19.037798 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jan 23 17:57:19.044545 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jan 23 17:57:19.195736 systemd-logind[1983]: Watching system buttons on /dev/input/event0 (Power Button) Jan 23 17:57:19.195790 systemd-logind[1983]: Watching system buttons on /dev/input/event1 (Sleep Button) Jan 23 17:57:19.204204 systemd-logind[1983]: New seat seat0. Jan 23 17:57:19.207961 systemd[1]: Started systemd-logind.service - User Login Management. Jan 23 17:57:19.233492 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Jan 23 17:57:19.235800 dbus-daemon[1973]: [system] Successfully activated service 'org.freedesktop.hostname1' Jan 23 17:57:19.251298 dbus-daemon[1973]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.5' (uid=0 pid=2026 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Jan 23 17:57:19.263368 systemd[1]: Starting polkit.service - Authorization Manager... Jan 23 17:57:19.337981 systemd-networkd[1830]: eth0: Gained IPv6LL Jan 23 17:57:19.366759 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 23 17:57:19.371694 systemd[1]: Reached target network-online.target - Network is Online. Jan 23 17:57:19.377554 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. Jan 23 17:57:19.392400 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 17:57:19.398576 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... 
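
[The resize pair above grows the root filesystem online from 553472 to 3587067 4 KiB blocks, i.e. from roughly 2.1 GiB to roughly 13.7 GiB. Done by hand the sequence would be roughly the following; growpart from cloud-utils is an assumed stand-in for Flatcar's own partition-growing step, while resize2fs matches the log:]

    growpart /dev/nvme0n1 9    # extend partition 9 into the free space (assumption)
    resize2fs /dev/nvme0n1p9   # then grow the mounted ext4 filesystem online
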
Jan 23 17:57:19.483267 coreos-metadata[2078]: Jan 23 17:57:19.482 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Jan 23 17:57:19.484754 coreos-metadata[2078]: Jan 23 17:57:19.484 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Jan 23 17:57:19.491567 coreos-metadata[2078]: Jan 23 17:57:19.489 INFO Fetch successful Jan 23 17:57:19.491567 coreos-metadata[2078]: Jan 23 17:57:19.489 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Jan 23 17:57:19.494773 coreos-metadata[2078]: Jan 23 17:57:19.494 INFO Fetch successful Jan 23 17:57:19.502396 unknown[2078]: wrote ssh authorized keys file for user: core Jan 23 17:57:19.615884 update-ssh-keys[2153]: Updated "/home/core/.ssh/authorized_keys" Jan 23 17:57:19.618442 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jan 23 17:57:19.631328 systemd[1]: Finished sshkeys.service. Jan 23 17:57:19.638972 systemd-coredump[2031]: Process 1978 (ntpd) of user 0 dumped core. Module libnss_usrfiles.so.2 without build-id. Module libgcc_s.so.1 without build-id. Module libc.so.6 without build-id. Module libcrypto.so.3 without build-id. Module libm.so.6 without build-id. Module libcap.so.2 without build-id. Module ntpd without build-id. Stack trace of thread 1978: #0 0x0000aaaab0490b5c n/a (ntpd + 0x60b5c) #1 0x0000aaaab043fe60 n/a (ntpd + 0xfe60) #2 0x0000aaaab0440240 n/a (ntpd + 0x10240) #3 0x0000aaaab043be14 n/a (ntpd + 0xbe14) #4 0x0000aaaab043d3ec n/a (ntpd + 0xd3ec) #5 0x0000aaaab0445a38 n/a (ntpd + 0x15a38) #6 0x0000aaaab043738c n/a (ntpd + 0x738c) #7 0x0000ffff87452034 n/a (libc.so.6 + 0x22034) #8 0x0000ffff87452118 __libc_start_main (libc.so.6 + 0x22118) #9 0x0000aaaab04373f0 n/a (ntpd + 0x73f0) ELF object binary architecture: AARCH64 Jan 23 17:57:19.672817 systemd[1]: ntpd.service: Main process exited, code=dumped, status=11/SEGV Jan 23 17:57:19.673543 systemd[1]: ntpd.service: Failed with result 'core-dump'. Jan 23 17:57:19.680607 systemd[1]: systemd-coredump@0-2030-0.service: Deactivated successfully. Jan 23 17:57:19.731486 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 23 17:57:19.751720 amazon-ssm-agent[2137]: Initializing new seelog logger Jan 23 17:57:19.751720 amazon-ssm-agent[2137]: New Seelog Logger Creation Complete Jan 23 17:57:19.751720 amazon-ssm-agent[2137]: 2026/01/23 17:57:19 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 23 17:57:19.751720 amazon-ssm-agent[2137]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 23 17:57:19.752348 amazon-ssm-agent[2137]: 2026/01/23 17:57:19 processing appconfig overrides Jan 23 17:57:19.754731 amazon-ssm-agent[2137]: 2026/01/23 17:57:19 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 23 17:57:19.754731 amazon-ssm-agent[2137]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 23 17:57:19.754731 amazon-ssm-agent[2137]: 2026/01/23 17:57:19 processing appconfig overrides Jan 23 17:57:19.754731 amazon-ssm-agent[2137]: 2026-01-23 17:57:19.7524 INFO Proxy environment variables: Jan 23 17:57:19.754731 amazon-ssm-agent[2137]: 2026/01/23 17:57:19 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 23 17:57:19.754731 amazon-ssm-agent[2137]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. 
Jan 23 17:57:19.754731 amazon-ssm-agent[2137]: 2026/01/23 17:57:19 processing appconfig overrides
Jan 23 17:57:19.760875 amazon-ssm-agent[2137]: 2026/01/23 17:57:19 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Jan 23 17:57:19.760875 amazon-ssm-agent[2137]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Jan 23 17:57:19.760875 amazon-ssm-agent[2137]: 2026/01/23 17:57:19 processing appconfig overrides
Jan 23 17:57:19.784451 systemd[1]: ntpd.service: Scheduled restart job, restart counter is at 1.
Jan 23 17:57:19.787679 systemd[1]: Started ntpd.service - Network Time Service.
Jan 23 17:57:19.853497 amazon-ssm-agent[2137]: 2026-01-23 17:57:19.7524 INFO https_proxy:
Jan 23 17:57:19.953323 amazon-ssm-agent[2137]: 2026-01-23 17:57:19.7525 INFO http_proxy:
Jan 23 17:57:19.970286 ntpd[2186]: ntpd 4.2.8p18@1.4062-o Fri Jan 23 15:31:01 UTC 2026 (1): Starting
Jan 23 17:57:19.970399 ntpd[2186]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Jan 23 17:57:19.970945 ntpd[2186]: 23 Jan 17:57:19 ntpd[2186]: ntpd 4.2.8p18@1.4062-o Fri Jan 23 15:31:01 UTC 2026 (1): Starting
Jan 23 17:57:19.970945 ntpd[2186]: 23 Jan 17:57:19 ntpd[2186]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Jan 23 17:57:19.970945 ntpd[2186]: 23 Jan 17:57:19 ntpd[2186]: ----------------------------------------------------
Jan 23 17:57:19.970945 ntpd[2186]: 23 Jan 17:57:19 ntpd[2186]: ntp-4 is maintained by Network Time Foundation,
Jan 23 17:57:19.970945 ntpd[2186]: 23 Jan 17:57:19 ntpd[2186]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Jan 23 17:57:19.970945 ntpd[2186]: 23 Jan 17:57:19 ntpd[2186]: corporation. Support and training for ntp-4 are
Jan 23 17:57:19.970945 ntpd[2186]: 23 Jan 17:57:19 ntpd[2186]: available at https://www.nwtime.org/support
Jan 23 17:57:19.970945 ntpd[2186]: 23 Jan 17:57:19 ntpd[2186]: ----------------------------------------------------
Jan 23 17:57:19.970418 ntpd[2186]: ----------------------------------------------------
Jan 23 17:57:19.970434 ntpd[2186]: ntp-4 is maintained by Network Time Foundation,
Jan 23 17:57:19.970450 ntpd[2186]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Jan 23 17:57:19.970466 ntpd[2186]: corporation. Support and training for ntp-4 are
Jan 23 17:57:19.970481 ntpd[2186]: available at https://www.nwtime.org/support
Jan 23 17:57:19.970498 ntpd[2186]: ----------------------------------------------------
Jan 23 17:57:19.987941 ntpd[2186]: proto: precision = 0.096 usec (-23)
Jan 23 17:57:19.988148 ntpd[2186]: 23 Jan 17:57:19 ntpd[2186]: proto: precision = 0.096 usec (-23)
Jan 23 17:57:19.988295 ntpd[2186]: basedate set to 2026-01-11
Jan 23 17:57:19.988330 ntpd[2186]: gps base set to 2026-01-11 (week 2401)
Jan 23 17:57:19.988439 ntpd[2186]: 23 Jan 17:57:19 ntpd[2186]: basedate set to 2026-01-11
Jan 23 17:57:19.988439 ntpd[2186]: 23 Jan 17:57:19 ntpd[2186]: gps base set to 2026-01-11 (week 2401)
Jan 23 17:57:19.988527 ntpd[2186]: 23 Jan 17:57:19 ntpd[2186]: Listen and drop on 0 v6wildcard [::]:123
Jan 23 17:57:19.988527 ntpd[2186]: 23 Jan 17:57:19 ntpd[2186]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Jan 23 17:57:19.988460 ntpd[2186]: Listen and drop on 0 v6wildcard [::]:123
Jan 23 17:57:19.988502 ntpd[2186]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Jan 23 17:57:19.988758 ntpd[2186]: Listen normally on 2 lo 127.0.0.1:123
Jan 23 17:57:19.990707 ntpd[2186]: 23 Jan 17:57:19 ntpd[2186]: Listen normally on 2 lo 127.0.0.1:123
Jan 23 17:57:19.990707 ntpd[2186]: 23 Jan 17:57:19 ntpd[2186]: Listen normally on 3 eth0 172.31.28.159:123
Jan 23 17:57:19.988815 ntpd[2186]: Listen normally on 3 eth0 172.31.28.159:123
Jan 23 17:57:19.997116 ntpd[2186]: Listen normally on 4 lo [::1]:123
Jan 23 17:57:19.997210 ntpd[2186]: Listen normally on 5 eth0 [fe80::4a5:28ff:fe20:4497%2]:123
Jan 23 17:57:19.997350 ntpd[2186]: 23 Jan 17:57:19 ntpd[2186]: Listen normally on 4 lo [::1]:123
Jan 23 17:57:19.997350 ntpd[2186]: 23 Jan 17:57:19 ntpd[2186]: Listen normally on 5 eth0 [fe80::4a5:28ff:fe20:4497%2]:123
Jan 23 17:57:19.997350 ntpd[2186]: 23 Jan 17:57:19 ntpd[2186]: Listening on routing socket on fd #22 for interface updates
Jan 23 17:57:19.997256 ntpd[2186]: Listening on routing socket on fd #22 for interface updates
Jan 23 17:57:20.053761 amazon-ssm-agent[2137]: 2026-01-23 17:57:19.7525 INFO no_proxy:
Jan 23 17:57:20.055801 ntpd[2186]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Jan 23 17:57:20.058032 ntpd[2186]: 23 Jan 17:57:20 ntpd[2186]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Jan 23 17:57:20.058032 ntpd[2186]: 23 Jan 17:57:20 ntpd[2186]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Jan 23 17:57:20.055901 ntpd[2186]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Jan 23 17:57:20.087264 locksmithd[2035]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Jan 23 17:57:20.125922 containerd[2018]: time="2026-01-23T17:57:20Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Jan 23 17:57:20.130327 containerd[2018]: time="2026-01-23T17:57:20.130234980Z" level=info msg="starting containerd" revision=4ac6c20c7bbf8177f29e46bbdc658fec02ffb8ad version=v2.0.7
Jan 23 17:57:20.136306 polkitd[2119]: Started polkitd version 126
Jan 23 17:57:20.154449 amazon-ssm-agent[2137]: 2026-01-23 17:57:19.7526 INFO Checking if agent identity type OnPrem can be assumed
Jan 23 17:57:20.212879 containerd[2018]: time="2026-01-23T17:57:20.211832424Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="14.112µs"
Jan 23 17:57:20.212879 containerd[2018]: time="2026-01-23T17:57:20.211916388Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Jan 23 17:57:20.212879 containerd[2018]: time="2026-01-23T17:57:20.211957248Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Jan 23 17:57:20.212879 containerd[2018]: time="2026-01-23T17:57:20.212263884Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Jan 23 17:57:20.212879 containerd[2018]: time="2026-01-23T17:57:20.212305488Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Jan 23 17:57:20.212879 containerd[2018]: time="2026-01-23T17:57:20.212358804Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Jan 23 17:57:20.212879 containerd[2018]: time="2026-01-23T17:57:20.212465520Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Jan 23 17:57:20.212879 containerd[2018]: time="2026-01-23T17:57:20.212490444Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Jan 23 17:57:20.214105 containerd[2018]: time="2026-01-23T17:57:20.213962820Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Jan 23 17:57:20.214105 containerd[2018]: time="2026-01-23T17:57:20.214047384Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Jan 23 17:57:20.214105 containerd[2018]: time="2026-01-23T17:57:20.214079712Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Jan 23 17:57:20.214105 containerd[2018]: time="2026-01-23T17:57:20.214101780Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Jan 23 17:57:20.214355 containerd[2018]: time="2026-01-23T17:57:20.214294500Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Jan 23 17:57:20.216515 containerd[2018]: time="2026-01-23T17:57:20.214770684Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Jan 23 17:57:20.216515 containerd[2018]: time="2026-01-23T17:57:20.214875708Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Jan 23 17:57:20.216515 containerd[2018]: time="2026-01-23T17:57:20.214903680Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Jan 23 17:57:20.215684 polkitd[2119]: Loading rules from directory /etc/polkit-1/rules.d
Jan 23 17:57:20.220554 containerd[2018]: time="2026-01-23T17:57:20.217935732Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Jan 23 17:57:20.220554 containerd[2018]: time="2026-01-23T17:57:20.218677884Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Jan 23 17:57:20.220554 containerd[2018]: time="2026-01-23T17:57:20.218844660Z" level=info msg="metadata content store policy set" policy=shared
Jan 23 17:57:20.224870 containerd[2018]: time="2026-01-23T17:57:20.224779921Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Jan 23 17:57:20.224980 containerd[2018]: time="2026-01-23T17:57:20.224911105Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Jan 23 17:57:20.224980 containerd[2018]: time="2026-01-23T17:57:20.224948545Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Jan 23 17:57:20.225081 containerd[2018]: time="2026-01-23T17:57:20.224978137Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Jan 23 17:57:20.225081 containerd[2018]: time="2026-01-23T17:57:20.225007693Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Jan 23 17:57:20.225081 containerd[2018]: time="2026-01-23T17:57:20.225033781Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Jan 23 17:57:20.225081 containerd[2018]: time="2026-01-23T17:57:20.225060949Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Jan 23 17:57:20.225269 containerd[2018]: time="2026-01-23T17:57:20.225090553Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Jan 23 17:57:20.225269 containerd[2018]: time="2026-01-23T17:57:20.225119317Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Jan 23 17:57:20.225269 containerd[2018]: time="2026-01-23T17:57:20.225148273Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Jan 23 17:57:20.225269 containerd[2018]: time="2026-01-23T17:57:20.225172669Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Jan 23 17:57:20.225269 containerd[2018]: time="2026-01-23T17:57:20.225201901Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Jan 23 17:57:20.225483 containerd[2018]: time="2026-01-23T17:57:20.225412417Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Jan 23 17:57:20.225483 containerd[2018]: time="2026-01-23T17:57:20.225450373Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Jan 23 17:57:20.225572 containerd[2018]: time="2026-01-23T17:57:20.225483385Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Jan 23 17:57:20.225572 containerd[2018]: time="2026-01-23T17:57:20.225510937Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Jan 23 17:57:20.225572 containerd[2018]: time="2026-01-23T17:57:20.225538021Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Jan 23 17:57:20.225572 containerd[2018]: time="2026-01-23T17:57:20.225564133Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Jan 23 17:57:20.225738 containerd[2018]: time="2026-01-23T17:57:20.225604825Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Jan 23 17:57:20.225738 containerd[2018]: time="2026-01-23T17:57:20.225632353Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Jan 23 17:57:20.225738 containerd[2018]: time="2026-01-23T17:57:20.225660373Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
Jan 23 17:57:20.225738 containerd[2018]: time="2026-01-23T17:57:20.225686485Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
Jan 23 17:57:20.225738 containerd[2018]: time="2026-01-23T17:57:20.225714325Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
Jan 23 17:57:20.227166 polkitd[2119]: Loading rules from directory /run/polkit-1/rules.d
Jan 23 17:57:20.229315 containerd[2018]: time="2026-01-23T17:57:20.229103317Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
Jan 23 17:57:20.229315 containerd[2018]: time="2026-01-23T17:57:20.229154557Z" level=info msg="Start snapshots syncer"
Jan 23 17:57:20.227267 polkitd[2119]: Error opening rules directory: Error opening directory “/run/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4)
Jan 23 17:57:20.227904 polkitd[2119]: Loading rules from directory /usr/local/share/polkit-1/rules.d
Jan 23 17:57:20.227955 polkitd[2119]: Error opening rules directory: Error opening directory “/usr/local/share/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4)
Jan 23 17:57:20.228039 polkitd[2119]: Loading rules from directory /usr/share/polkit-1/rules.d
Jan 23 17:57:20.231882 containerd[2018]: time="2026-01-23T17:57:20.230920189Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
Jan 23 17:57:20.231882 containerd[2018]: time="2026-01-23T17:57:20.231598621Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
Jan 23 17:57:20.232174 containerd[2018]: time="2026-01-23T17:57:20.231694753Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
Jan 23 17:57:20.232174 containerd[2018]: time="2026-01-23T17:57:20.231799333Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
Jan 23 17:57:20.232531 containerd[2018]: time="2026-01-23T17:57:20.232490341Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
Jan 23 17:57:20.232669 containerd[2018]: time="2026-01-23T17:57:20.232642081Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
Jan 23 17:57:20.232775 containerd[2018]: time="2026-01-23T17:57:20.232745473Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
Jan 23 17:57:20.233971 containerd[2018]: time="2026-01-23T17:57:20.233931325Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
Jan 23 17:57:20.235269 containerd[2018]: time="2026-01-23T17:57:20.234942733Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
Jan 23 17:57:20.235269 containerd[2018]: time="2026-01-23T17:57:20.234991837Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
Jan 23 17:57:20.235269 containerd[2018]: time="2026-01-23T17:57:20.235054129Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
Jan 23 17:57:20.235269 containerd[2018]: time="2026-01-23T17:57:20.235109461Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
Jan 23 17:57:20.235269 containerd[2018]: time="2026-01-23T17:57:20.235155853Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
Jan 23 17:57:20.235269 containerd[2018]: time="2026-01-23T17:57:20.235186849Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
Jan 23 17:57:20.237914 containerd[2018]: time="2026-01-23T17:57:20.236515957Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Jan 23 17:57:20.237914 containerd[2018]: time="2026-01-23T17:57:20.236675449Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Jan 23 17:57:20.237914 containerd[2018]: time="2026-01-23T17:57:20.236701885Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Jan 23 17:57:20.237914 containerd[2018]: time="2026-01-23T17:57:20.236731909Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Jan 23 17:57:20.237914 containerd[2018]: time="2026-01-23T17:57:20.236754565Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
Jan 23 17:57:20.237914 containerd[2018]: time="2026-01-23T17:57:20.236779633Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
Jan 23 17:57:20.237914 containerd[2018]: time="2026-01-23T17:57:20.236806501Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
Jan 23 17:57:20.237914 containerd[2018]: time="2026-01-23T17:57:20.237016561Z" level=info msg="runtime interface created"
Jan 23 17:57:20.237914 containerd[2018]: time="2026-01-23T17:57:20.237037129Z" level=info msg="created NRI interface"
Jan 23 17:57:20.237914 containerd[2018]: time="2026-01-23T17:57:20.237059257Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
Jan 23 17:57:20.237914 containerd[2018]: time="2026-01-23T17:57:20.237095377Z" level=info msg="Connect containerd service"
Jan 23 17:57:20.237914 containerd[2018]: time="2026-01-23T17:57:20.237151429Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Jan 23 17:57:20.236798 polkitd[2119]: Finished loading, compiling and executing 2 rules
Jan 23 17:57:20.238459 systemd[1]: Started polkit.service - Authorization Manager.
Jan 23 17:57:20.244569 dbus-daemon[1973]: [system] Successfully activated service 'org.freedesktop.PolicyKit1'
Jan 23 17:57:20.247240 polkitd[2119]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Jan 23 17:57:20.253454 containerd[2018]: time="2026-01-23T17:57:20.252255853Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jan 23 17:57:20.255964 amazon-ssm-agent[2137]: 2026-01-23 17:57:19.7527 INFO Checking if agent identity type EC2 can be assumed
Jan 23 17:57:20.327320 systemd-hostnamed[2026]: Hostname set to (transient)
Jan 23 17:57:20.327485 systemd-resolved[1831]: System hostname changed to 'ip-172-31-28-159'.
Jan 23 17:57:20.354810 amazon-ssm-agent[2137]: 2026-01-23 17:57:20.1710 INFO Agent will take identity from EC2
Jan 23 17:57:20.459884 amazon-ssm-agent[2137]: 2026-01-23 17:57:20.1780 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.3.0.0
Jan 23 17:57:20.563576 amazon-ssm-agent[2137]: 2026-01-23 17:57:20.1780 INFO [amazon-ssm-agent] OS: linux, Arch: arm64
Jan 23 17:57:20.665874 amazon-ssm-agent[2137]: 2026-01-23 17:57:20.1780 INFO [amazon-ssm-agent] Starting Core Agent
Jan 23 17:57:20.722030 containerd[2018]: time="2026-01-23T17:57:20.721797063Z" level=info msg="Start subscribing containerd event"
Jan 23 17:57:20.722030 containerd[2018]: time="2026-01-23T17:57:20.721989927Z" level=info msg="Start recovering state"
Jan 23 17:57:20.722227 containerd[2018]: time="2026-01-23T17:57:20.722194923Z" level=info msg="Start event monitor"
Jan 23 17:57:20.722300 containerd[2018]: time="2026-01-23T17:57:20.722268423Z" level=info msg="Start cni network conf syncer for default"
Jan 23 17:57:20.722300 containerd[2018]: time="2026-01-23T17:57:20.722291751Z" level=info msg="Start streaming server"
Jan 23 17:57:20.722397 containerd[2018]: time="2026-01-23T17:57:20.722339535Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
Jan 23 17:57:20.722397 containerd[2018]: time="2026-01-23T17:57:20.722360511Z" level=info msg="runtime interface starting up..."
Jan 23 17:57:20.722397 containerd[2018]: time="2026-01-23T17:57:20.722376531Z" level=info msg="starting plugins..."
Jan 23 17:57:20.722517 containerd[2018]: time="2026-01-23T17:57:20.722440995Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
Jan 23 17:57:20.723092 containerd[2018]: time="2026-01-23T17:57:20.723050499Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Jan 23 17:57:20.723393 containerd[2018]: time="2026-01-23T17:57:20.723353667Z" level=info msg=serving... address=/run/containerd/containerd.sock
Jan 23 17:57:20.724119 systemd[1]: Started containerd.service - containerd container runtime.
Jan 23 17:57:20.728532 containerd[2018]: time="2026-01-23T17:57:20.727579791Z" level=info msg="containerd successfully booted in 0.607312s"
Jan 23 17:57:20.764880 amazon-ssm-agent[2137]: 2026-01-23 17:57:20.1780 INFO [amazon-ssm-agent] Registrar detected. Attempting registration
Jan 23 17:57:20.863468 amazon-ssm-agent[2137]: 2026-01-23 17:57:20.1834 INFO [Registrar] Starting registrar module
Jan 23 17:57:20.964622 amazon-ssm-agent[2137]: 2026-01-23 17:57:20.2185 INFO [EC2Identity] Checking disk for registration info
Jan 23 17:57:20.999494 tar[2003]: linux-arm64/README.md
Jan 23 17:57:21.030780 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Jan 23 17:57:21.064971 amazon-ssm-agent[2137]: 2026-01-23 17:57:20.2186 INFO [EC2Identity] No registration info found for ec2 instance, attempting registration
Jan 23 17:57:21.165291 amazon-ssm-agent[2137]: 2026-01-23 17:57:20.2186 INFO [EC2Identity] Generating registration keypair
Jan 23 17:57:21.199059 sshd_keygen[2015]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Jan 23 17:57:21.249489 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Jan 23 17:57:21.257331 systemd[1]: Starting issuegen.service - Generate /run/issue...
Jan 23 17:57:21.263052 systemd[1]: Started sshd@0-172.31.28.159:22-68.220.241.50:43854.service - OpenSSH per-connection server daemon (68.220.241.50:43854).
Jan 23 17:57:21.314823 systemd[1]: issuegen.service: Deactivated successfully.
Jan 23 17:57:21.317076 systemd[1]: Finished issuegen.service - Generate /run/issue.
Jan 23 17:57:21.330025 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Jan 23 17:57:21.378591 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Jan 23 17:57:21.389139 systemd[1]: Started getty@tty1.service - Getty on tty1.
Jan 23 17:57:21.397337 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Jan 23 17:57:21.400443 systemd[1]: Reached target getty.target - Login Prompts.
Jan 23 17:57:21.837150 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 23 17:57:21.841822 systemd[1]: Reached target multi-user.target - Multi-User System.
Jan 23 17:57:21.848021 systemd[1]: Startup finished in 3.821s (kernel) + 10.878s (initrd) + 9.346s (userspace) = 24.046s.
Jan 23 17:57:21.855569 (kubelet)[2256]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 23 17:57:21.953203 sshd[2241]: Accepted publickey for core from 68.220.241.50 port 43854 ssh2: RSA SHA256:bT2W1VfOscVmSCRasYr+KxB4wnT28qHFQXmybiJGx88
Jan 23 17:57:21.958259 sshd-session[2241]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 17:57:21.982977 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Jan 23 17:57:21.987447 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Jan 23 17:57:22.020007 systemd-logind[1983]: New session 1 of user core.
Jan 23 17:57:22.033919 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Jan 23 17:57:22.043881 systemd[1]: Starting user@500.service - User Manager for UID 500...
Jan 23 17:57:22.068830 (systemd)[2263]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Jan 23 17:57:22.081157 systemd-logind[1983]: New session c1 of user core.
Jan 23 17:57:22.427455 systemd[2263]: Queued start job for default target default.target.
Jan 23 17:57:22.433066 systemd[2263]: Created slice app.slice - User Application Slice.
Jan 23 17:57:22.433298 systemd[2263]: Reached target paths.target - Paths.
Jan 23 17:57:22.433395 systemd[2263]: Reached target timers.target - Timers.
Jan 23 17:57:22.438026 systemd[2263]: Starting dbus.socket - D-Bus User Message Bus Socket...
Jan 23 17:57:22.470340 systemd[2263]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Jan 23 17:57:22.470580 systemd[2263]: Reached target sockets.target - Sockets.
Jan 23 17:57:22.470680 systemd[2263]: Reached target basic.target - Basic System.
Jan 23 17:57:22.470781 systemd[2263]: Reached target default.target - Main User Target.
Jan 23 17:57:22.470847 systemd[1]: Started user@500.service - User Manager for UID 500.
Jan 23 17:57:22.470893 systemd[2263]: Startup finished in 367ms.
Jan 23 17:57:22.480114 systemd[1]: Started session-1.scope - Session 1 of User core.
Jan 23 17:57:22.719037 amazon-ssm-agent[2137]: 2026-01-23 17:57:22.7184 INFO [EC2Identity] Checking write access before registering
Jan 23 17:57:22.771909 amazon-ssm-agent[2137]: 2026/01/23 17:57:22 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Jan 23 17:57:22.771909 amazon-ssm-agent[2137]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Jan 23 17:57:22.771909 amazon-ssm-agent[2137]: 2026/01/23 17:57:22 processing appconfig overrides
Jan 23 17:57:22.802480 amazon-ssm-agent[2137]: 2026-01-23 17:57:22.7204 INFO [EC2Identity] Registering EC2 instance with Systems Manager
Jan 23 17:57:22.802480 amazon-ssm-agent[2137]: 2026-01-23 17:57:22.7708 INFO [EC2Identity] EC2 registration was successful.
Jan 23 17:57:22.802634 amazon-ssm-agent[2137]: 2026-01-23 17:57:22.7708 INFO [amazon-ssm-agent] Registration attempted. Resuming core agent startup.
Jan 23 17:57:22.802634 amazon-ssm-agent[2137]: 2026-01-23 17:57:22.7710 INFO [CredentialRefresher] credentialRefresher has started
Jan 23 17:57:22.802634 amazon-ssm-agent[2137]: 2026-01-23 17:57:22.7710 INFO [CredentialRefresher] Starting credentials refresher loop
Jan 23 17:57:22.802634 amazon-ssm-agent[2137]: 2026-01-23 17:57:22.8021 INFO EC2RoleProvider Successfully connected with instance profile role credentials
Jan 23 17:57:22.802634 amazon-ssm-agent[2137]: 2026-01-23 17:57:22.8024 INFO [CredentialRefresher] Credentials ready
Jan 23 17:57:22.820965 amazon-ssm-agent[2137]: 2026-01-23 17:57:22.8025 INFO [CredentialRefresher] Next credential rotation will be in 29.9999922476 minutes
Jan 23 17:57:22.932899 systemd[1]: Started sshd@1-172.31.28.159:22-68.220.241.50:49084.service - OpenSSH per-connection server daemon (68.220.241.50:49084).
Jan 23 17:57:23.061697 kubelet[2256]: E0123 17:57:23.061606 2256 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 23 17:57:23.066408 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 23 17:57:23.066724 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 23 17:57:23.067451 systemd[1]: kubelet.service: Consumed 1.493s CPU time, 259.4M memory peak.
Jan 23 17:57:23.455681 sshd[2279]: Accepted publickey for core from 68.220.241.50 port 49084 ssh2: RSA SHA256:bT2W1VfOscVmSCRasYr+KxB4wnT28qHFQXmybiJGx88
Jan 23 17:57:23.458173 sshd-session[2279]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 17:57:23.466299 systemd-logind[1983]: New session 2 of user core.
Jan 23 17:57:23.477118 systemd[1]: Started session-2.scope - Session 2 of User core.
Jan 23 17:57:23.810033 sshd[2283]: Connection closed by 68.220.241.50 port 49084
Jan 23 17:57:23.810807 sshd-session[2279]: pam_unix(sshd:session): session closed for user core
Jan 23 17:57:23.817789 systemd[1]: sshd@1-172.31.28.159:22-68.220.241.50:49084.service: Deactivated successfully.
Jan 23 17:57:23.824281 systemd[1]: session-2.scope: Deactivated successfully.
Jan 23 17:57:23.827288 systemd-logind[1983]: Session 2 logged out. Waiting for processes to exit.
Jan 23 17:57:23.830185 systemd-logind[1983]: Removed session 2.
Jan 23 17:57:23.838335 amazon-ssm-agent[2137]: 2026-01-23 17:57:23.8382 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process
Jan 23 17:57:23.919488 systemd[1]: Started sshd@2-172.31.28.159:22-68.220.241.50:49100.service - OpenSSH per-connection server daemon (68.220.241.50:49100).
Jan 23 17:57:23.939691 amazon-ssm-agent[2137]: 2026-01-23 17:57:23.8423 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2290) started
Jan 23 17:57:24.039800 amazon-ssm-agent[2137]: 2026-01-23 17:57:23.8423 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds
Jan 23 17:57:24.504673 sshd[2295]: Accepted publickey for core from 68.220.241.50 port 49100 ssh2: RSA SHA256:bT2W1VfOscVmSCRasYr+KxB4wnT28qHFQXmybiJGx88
Jan 23 17:57:24.506434 sshd-session[2295]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 17:57:24.514652 systemd-logind[1983]: New session 3 of user core.
Jan 23 17:57:24.525091 systemd[1]: Started session-3.scope - Session 3 of User core.
Jan 23 17:57:24.874017 sshd[2305]: Connection closed by 68.220.241.50 port 49100
Jan 23 17:57:24.875509 sshd-session[2295]: pam_unix(sshd:session): session closed for user core
Jan 23 17:57:24.881790 systemd[1]: sshd@2-172.31.28.159:22-68.220.241.50:49100.service: Deactivated successfully.
Jan 23 17:57:24.885838 systemd[1]: session-3.scope: Deactivated successfully.
Jan 23 17:57:24.889432 systemd-logind[1983]: Session 3 logged out. Waiting for processes to exit.
Jan 23 17:57:24.892157 systemd-logind[1983]: Removed session 3.
Jan 23 17:57:24.964296 systemd[1]: Started sshd@3-172.31.28.159:22-68.220.241.50:49106.service - OpenSSH per-connection server daemon (68.220.241.50:49106).
Jan 23 17:57:25.485295 sshd[2311]: Accepted publickey for core from 68.220.241.50 port 49106 ssh2: RSA SHA256:bT2W1VfOscVmSCRasYr+KxB4wnT28qHFQXmybiJGx88
Jan 23 17:57:25.487589 sshd-session[2311]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 17:57:25.494843 systemd-logind[1983]: New session 4 of user core.
Jan 23 17:57:25.506144 systemd[1]: Started session-4.scope - Session 4 of User core.
Jan 23 17:57:25.839606 sshd[2314]: Connection closed by 68.220.241.50 port 49106
Jan 23 17:57:25.840376 sshd-session[2311]: pam_unix(sshd:session): session closed for user core
Jan 23 17:57:25.848973 systemd-logind[1983]: Session 4 logged out. Waiting for processes to exit.
Jan 23 17:57:25.849480 systemd[1]: sshd@3-172.31.28.159:22-68.220.241.50:49106.service: Deactivated successfully.
Jan 23 17:57:25.854066 systemd[1]: session-4.scope: Deactivated successfully.
Jan 23 17:57:25.857988 systemd-logind[1983]: Removed session 4.
Jan 23 17:57:25.941964 systemd[1]: Started sshd@4-172.31.28.159:22-68.220.241.50:49116.service - OpenSSH per-connection server daemon (68.220.241.50:49116).
Jan 23 17:57:26.493241 sshd[2320]: Accepted publickey for core from 68.220.241.50 port 49116 ssh2: RSA SHA256:bT2W1VfOscVmSCRasYr+KxB4wnT28qHFQXmybiJGx88
Jan 23 17:57:26.495482 sshd-session[2320]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 17:57:26.503049 systemd-logind[1983]: New session 5 of user core.
Jan 23 17:57:26.511199 systemd[1]: Started session-5.scope - Session 5 of User core.
Jan 23 17:57:26.824721 sudo[2324]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Jan 23 17:57:26.826114 sudo[2324]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 23 17:57:26.857209 sudo[2324]: pam_unix(sudo:session): session closed for user root
Jan 23 17:57:26.940893 sshd[2323]: Connection closed by 68.220.241.50 port 49116
Jan 23 17:57:26.941017 sshd-session[2320]: pam_unix(sshd:session): session closed for user core
Jan 23 17:57:26.947702 systemd-logind[1983]: Session 5 logged out. Waiting for processes to exit.
Jan 23 17:57:26.948747 systemd[1]: sshd@4-172.31.28.159:22-68.220.241.50:49116.service: Deactivated successfully.
Jan 23 17:57:26.952331 systemd[1]: session-5.scope: Deactivated successfully.
Jan 23 17:57:26.957987 systemd-logind[1983]: Removed session 5.
Jan 23 17:57:26.753494 systemd-resolved[1831]: Clock change detected. Flushing caches.
Jan 23 17:57:26.760807 systemd-journald[1529]: Time jumped backwards, rotating.
Jan 23 17:57:26.818903 systemd[1]: Started sshd@5-172.31.28.159:22-68.220.241.50:49130.service - OpenSSH per-connection server daemon (68.220.241.50:49130).
Jan 23 17:57:27.341372 sshd[2331]: Accepted publickey for core from 68.220.241.50 port 49130 ssh2: RSA SHA256:bT2W1VfOscVmSCRasYr+KxB4wnT28qHFQXmybiJGx88
Jan 23 17:57:27.343758 sshd-session[2331]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 17:57:27.351265 systemd-logind[1983]: New session 6 of user core.
Jan 23 17:57:27.363448 systemd[1]: Started session-6.scope - Session 6 of User core.
Jan 23 17:57:27.618985 sudo[2336]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Jan 23 17:57:27.620453 sudo[2336]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 23 17:57:27.627451 sudo[2336]: pam_unix(sudo:session): session closed for user root
Jan 23 17:57:27.637007 sudo[2335]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Jan 23 17:57:27.638084 sudo[2335]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 23 17:57:27.655725 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jan 23 17:57:27.724275 augenrules[2358]: No rules
Jan 23 17:57:27.727082 systemd[1]: audit-rules.service: Deactivated successfully.
Jan 23 17:57:27.727662 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jan 23 17:57:27.730657 sudo[2335]: pam_unix(sudo:session): session closed for user root
Jan 23 17:57:27.807911 sshd[2334]: Connection closed by 68.220.241.50 port 49130
Jan 23 17:57:27.808381 sshd-session[2331]: pam_unix(sshd:session): session closed for user core
Jan 23 17:57:27.815820 systemd-logind[1983]: Session 6 logged out. Waiting for processes to exit.
Jan 23 17:57:27.817690 systemd[1]: sshd@5-172.31.28.159:22-68.220.241.50:49130.service: Deactivated successfully.
Jan 23 17:57:27.821854 systemd[1]: session-6.scope: Deactivated successfully.
Jan 23 17:57:27.825277 systemd-logind[1983]: Removed session 6.
Jan 23 17:57:27.901622 systemd[1]: Started sshd@6-172.31.28.159:22-68.220.241.50:49132.service - OpenSSH per-connection server daemon (68.220.241.50:49132).
Jan 23 17:57:28.421239 sshd[2367]: Accepted publickey for core from 68.220.241.50 port 49132 ssh2: RSA SHA256:bT2W1VfOscVmSCRasYr+KxB4wnT28qHFQXmybiJGx88
Jan 23 17:57:28.423162 sshd-session[2367]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 17:57:28.431049 systemd-logind[1983]: New session 7 of user core.
Jan 23 17:57:28.438451 systemd[1]: Started session-7.scope - Session 7 of User core.
Jan 23 17:57:28.698178 sudo[2371]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Jan 23 17:57:28.699496 sudo[2371]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 23 17:57:29.410346 systemd[1]: Starting docker.service - Docker Application Container Engine...
Jan 23 17:57:29.436676 (dockerd)[2389]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Jan 23 17:57:29.984552 dockerd[2389]: time="2026-01-23T17:57:29.984442628Z" level=info msg="Starting up"
Jan 23 17:57:29.986373 dockerd[2389]: time="2026-01-23T17:57:29.986287724Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
Jan 23 17:57:30.006501 dockerd[2389]: time="2026-01-23T17:57:30.006422944Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s
Jan 23 17:57:30.042933 systemd[1]: var-lib-docker-metacopy\x2dcheck1132897602-merged.mount: Deactivated successfully.
Jan 23 17:57:30.059269 dockerd[2389]: time="2026-01-23T17:57:30.059006068Z" level=info msg="Loading containers: start."
Jan 23 17:57:30.072246 kernel: Initializing XFRM netlink socket
Jan 23 17:57:30.414087 (udev-worker)[2411]: Network interface NamePolicy= disabled on kernel command line.
Jan 23 17:57:30.487388 systemd-networkd[1830]: docker0: Link UP
Jan 23 17:57:30.493541 dockerd[2389]: time="2026-01-23T17:57:30.492730938Z" level=info msg="Loading containers: done."
Jan 23 17:57:30.518908 dockerd[2389]: time="2026-01-23T17:57:30.518827530Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Jan 23 17:57:30.519070 dockerd[2389]: time="2026-01-23T17:57:30.518953902Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4
Jan 23 17:57:30.519386 dockerd[2389]: time="2026-01-23T17:57:30.519104562Z" level=info msg="Initializing buildkit"
Jan 23 17:57:30.558003 dockerd[2389]: time="2026-01-23T17:57:30.557947626Z" level=info msg="Completed buildkit initialization"
Jan 23 17:57:30.574852 dockerd[2389]: time="2026-01-23T17:57:30.574754334Z" level=info msg="Daemon has completed initialization"
Jan 23 17:57:30.575227 dockerd[2389]: time="2026-01-23T17:57:30.575035614Z" level=info msg="API listen on /run/docker.sock"
Jan 23 17:57:30.577712 systemd[1]: Started docker.service - Docker Application Container Engine.
Jan 23 17:57:31.027899 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3068080058-merged.mount: Deactivated successfully.
Jan 23 17:57:31.732099 containerd[2018]: time="2026-01-23T17:57:31.732039752Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.7\""
Jan 23 17:57:32.283714 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3509403399.mount: Deactivated successfully.
Jan 23 17:57:33.005263 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 23 17:57:33.009246 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 17:57:33.440625 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 17:57:33.455782 (kubelet)[2664]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 23 17:57:33.548682 kubelet[2664]: E0123 17:57:33.548606 2664 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 23 17:57:33.555318 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 23 17:57:33.555595 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 23 17:57:33.558400 systemd[1]: kubelet.service: Consumed 340ms CPU time, 105M memory peak. Jan 23 17:57:33.991623 containerd[2018]: time="2026-01-23T17:57:33.991543931Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 17:57:33.994960 containerd[2018]: time="2026-01-23T17:57:33.994893635Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.7: active requests=0, bytes read=27387281" Jan 23 17:57:33.995831 containerd[2018]: time="2026-01-23T17:57:33.995774435Z" level=info msg="ImageCreate event name:\"sha256:6d7bc8e445519fe4d49eee834f33f3e165eef4d3c0919ba08c67cdf8db905b7e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 17:57:34.002045 containerd[2018]: time="2026-01-23T17:57:34.001962032Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:9585226cb85d1dc0f0ef5f7a75f04e4bc91ddd82de249533bd293aa3cf958dab\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 17:57:34.005965 containerd[2018]: time="2026-01-23T17:57:34.004166288Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.7\" with image id \"sha256:6d7bc8e445519fe4d49eee834f33f3e165eef4d3c0919ba08c67cdf8db905b7e\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.7\", repo digest \"registry.k8s.io/kube-apiserver@sha256:9585226cb85d1dc0f0ef5f7a75f04e4bc91ddd82de249533bd293aa3cf958dab\", size \"27383880\" in 2.271429432s" Jan 23 17:57:34.005965 containerd[2018]: time="2026-01-23T17:57:34.005086016Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.7\" returns image reference \"sha256:6d7bc8e445519fe4d49eee834f33f3e165eef4d3c0919ba08c67cdf8db905b7e\"" Jan 23 17:57:34.007564 containerd[2018]: time="2026-01-23T17:57:34.007513136Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.7\"" Jan 23 17:57:35.778113 containerd[2018]: time="2026-01-23T17:57:35.776294064Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 17:57:35.778113 containerd[2018]: time="2026-01-23T17:57:35.778055592Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.7: active requests=0, bytes read=23553081" Jan 23 17:57:35.779048 containerd[2018]: time="2026-01-23T17:57:35.778996584Z" level=info msg="ImageCreate event 
name:\"sha256:a94595d0240bcc5e538b4b33bbc890512a731425be69643cbee284072f7d8f64\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 17:57:35.784898 containerd[2018]: time="2026-01-23T17:57:35.784817484Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:f69d77ca0626b5a4b7b432c18de0952941181db7341c80eb89731f46d1d0c230\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 17:57:35.787030 containerd[2018]: time="2026-01-23T17:57:35.786959688Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.7\" with image id \"sha256:a94595d0240bcc5e538b4b33bbc890512a731425be69643cbee284072f7d8f64\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.7\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:f69d77ca0626b5a4b7b432c18de0952941181db7341c80eb89731f46d1d0c230\", size \"25137562\" in 1.779235328s" Jan 23 17:57:35.787030 containerd[2018]: time="2026-01-23T17:57:35.787025172Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.7\" returns image reference \"sha256:a94595d0240bcc5e538b4b33bbc890512a731425be69643cbee284072f7d8f64\"" Jan 23 17:57:35.787890 containerd[2018]: time="2026-01-23T17:57:35.787687116Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.7\"" Jan 23 17:57:37.296850 containerd[2018]: time="2026-01-23T17:57:37.296797836Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 17:57:37.298576 containerd[2018]: time="2026-01-23T17:57:37.298505916Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.7: active requests=0, bytes read=18298067" Jan 23 17:57:37.300078 containerd[2018]: time="2026-01-23T17:57:37.300010956Z" level=info msg="ImageCreate event name:\"sha256:94005b6be50f054c8a4ef3f0d6976644e8b3c6a8bf15a9e8a2eeac3e8331b010\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 17:57:37.306206 containerd[2018]: time="2026-01-23T17:57:37.305426064Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:21bda321d8b4d48eb059fbc1593203d55d8b3bc7acd0584e04e55504796d78d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 17:57:37.307504 containerd[2018]: time="2026-01-23T17:57:37.307438836Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.7\" with image id \"sha256:94005b6be50f054c8a4ef3f0d6976644e8b3c6a8bf15a9e8a2eeac3e8331b010\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.7\", repo digest \"registry.k8s.io/kube-scheduler@sha256:21bda321d8b4d48eb059fbc1593203d55d8b3bc7acd0584e04e55504796d78d0\", size \"19882566\" in 1.519692812s" Jan 23 17:57:37.307668 containerd[2018]: time="2026-01-23T17:57:37.307640868Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.7\" returns image reference \"sha256:94005b6be50f054c8a4ef3f0d6976644e8b3c6a8bf15a9e8a2eeac3e8331b010\"" Jan 23 17:57:37.308894 containerd[2018]: time="2026-01-23T17:57:37.308814684Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.7\"" Jan 23 17:57:38.538355 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3646619486.mount: Deactivated successfully. 
Jan 23 17:57:39.071305 containerd[2018]: time="2026-01-23T17:57:39.071249209Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 17:57:39.073468 containerd[2018]: time="2026-01-23T17:57:39.073416553Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.7: active requests=0, bytes read=28258673" Jan 23 17:57:39.074809 containerd[2018]: time="2026-01-23T17:57:39.074427337Z" level=info msg="ImageCreate event name:\"sha256:78ccb937011a53894db229033fd54e237d478ec85315f8b08e5dcaa0f737111b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 17:57:39.077310 containerd[2018]: time="2026-01-23T17:57:39.077258485Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ec25702b19026e9c0d339bc1c3bd231435a59f28b5fccb21e1b1078a357380f5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 17:57:39.078545 containerd[2018]: time="2026-01-23T17:57:39.078482449Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.7\" with image id \"sha256:78ccb937011a53894db229033fd54e237d478ec85315f8b08e5dcaa0f737111b\", repo tag \"registry.k8s.io/kube-proxy:v1.33.7\", repo digest \"registry.k8s.io/kube-proxy@sha256:ec25702b19026e9c0d339bc1c3bd231435a59f28b5fccb21e1b1078a357380f5\", size \"28257692\" in 1.769407173s" Jan 23 17:57:39.078545 containerd[2018]: time="2026-01-23T17:57:39.078540889Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.7\" returns image reference \"sha256:78ccb937011a53894db229033fd54e237d478ec85315f8b08e5dcaa0f737111b\"" Jan 23 17:57:39.079293 containerd[2018]: time="2026-01-23T17:57:39.079107409Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Jan 23 17:57:39.559138 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1749514310.mount: Deactivated successfully. 
Jan 23 17:57:40.788512 containerd[2018]: time="2026-01-23T17:57:40.788430833Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 17:57:40.790930 containerd[2018]: time="2026-01-23T17:57:40.790478861Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=19152117" Jan 23 17:57:40.792149 containerd[2018]: time="2026-01-23T17:57:40.792089249Z" level=info msg="ImageCreate event name:\"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 17:57:40.797007 containerd[2018]: time="2026-01-23T17:57:40.796943513Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 17:57:40.799288 containerd[2018]: time="2026-01-23T17:57:40.799236401Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"19148915\" in 1.719774032s" Jan 23 17:57:40.799457 containerd[2018]: time="2026-01-23T17:57:40.799427765Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\"" Jan 23 17:57:40.800255 containerd[2018]: time="2026-01-23T17:57:40.800095277Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jan 23 17:57:41.244219 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount721244014.mount: Deactivated successfully. 
Jan 23 17:57:41.253215 containerd[2018]: time="2026-01-23T17:57:41.252171760Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 23 17:57:41.253683 containerd[2018]: time="2026-01-23T17:57:41.253642516Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268703" Jan 23 17:57:41.254220 containerd[2018]: time="2026-01-23T17:57:41.254160676Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 23 17:57:41.258878 containerd[2018]: time="2026-01-23T17:57:41.258803068Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 23 17:57:41.261938 containerd[2018]: time="2026-01-23T17:57:41.261625624Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 461.249115ms" Jan 23 17:57:41.261938 containerd[2018]: time="2026-01-23T17:57:41.261674032Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Jan 23 17:57:41.262577 containerd[2018]: time="2026-01-23T17:57:41.262542244Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Jan 23 17:57:41.807111 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2823553764.mount: Deactivated successfully. Jan 23 17:57:43.755623 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 23 17:57:43.761320 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 17:57:44.594038 containerd[2018]: time="2026-01-23T17:57:44.592430948Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 17:57:44.594588 containerd[2018]: time="2026-01-23T17:57:44.594229628Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=70013651" Jan 23 17:57:44.595564 containerd[2018]: time="2026-01-23T17:57:44.595499156Z" level=info msg="ImageCreate event name:\"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 17:57:44.603684 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 23 17:57:44.607228 containerd[2018]: time="2026-01-23T17:57:44.607131560Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 17:57:44.610223 containerd[2018]: time="2026-01-23T17:57:44.610001600Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"70026017\" in 3.347155156s" Jan 23 17:57:44.610223 containerd[2018]: time="2026-01-23T17:57:44.610067216Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\"" Jan 23 17:57:44.620746 (kubelet)[2808]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 23 17:57:44.715028 kubelet[2808]: E0123 17:57:44.714968 2808 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 23 17:57:44.722667 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 23 17:57:44.722987 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 23 17:57:44.726097 systemd[1]: kubelet.service: Consumed 326ms CPU time, 104.7M memory peak. Jan 23 17:57:50.120349 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Jan 23 17:57:51.585815 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 17:57:51.586147 systemd[1]: kubelet.service: Consumed 326ms CPU time, 104.7M memory peak. Jan 23 17:57:51.590374 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 17:57:51.643547 systemd[1]: Reload requested from client PID 2845 ('systemctl') (unit session-7.scope)... Jan 23 17:57:51.643744 systemd[1]: Reloading... Jan 23 17:57:51.922324 zram_generator::config[2898]: No configuration found. Jan 23 17:57:52.373407 systemd[1]: Reloading finished in 728 ms. Jan 23 17:57:52.482343 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 17:57:52.487677 systemd[1]: kubelet.service: Deactivated successfully. Jan 23 17:57:52.488237 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 17:57:52.488327 systemd[1]: kubelet.service: Consumed 238ms CPU time, 95M memory peak. Jan 23 17:57:52.492413 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 17:57:52.907491 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 17:57:52.924009 (kubelet)[2954]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 23 17:57:52.996246 kubelet[2954]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
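
Annotation: the kubelet exits here because /var/lib/kubelet/config.yaml does not exist yet — the expected state on a node that has booted but not yet been joined with kubeadm, which is what writes that file; the systemd restart job seen above (restart counter at 2) keeps retrying until it appears. A hedged sketch of a probe for that state (the path is the one named in the log; the helper is illustrative and not part of the kubelet):

```go
package main

import (
	"errors"
	"fmt"
	"io/fs"
	"os"
)

// kubeletConfigPresent reports whether the kubelet config that kubeadm
// writes exists yet, distinguishing "node not joined" from other errors.
func kubeletConfigPresent(path string) (bool, error) {
	_, err := os.Stat(path)
	if errors.Is(err, fs.ErrNotExist) {
		return false, nil // the same condition the kubelet fails on above
	}
	return err == nil, err
}

func main() {
	ok, err := kubeletConfigPresent("/var/lib/kubelet/config.yaml")
	switch {
	case err != nil:
		fmt.Println("probe error:", err)
	case ok:
		fmt.Println("config present; kubelet can start")
	default:
		fmt.Println("config missing; node has not been joined yet")
	}
}
```
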
Jan 23 17:57:52.997231 kubelet[2954]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 23 17:57:52.997231 kubelet[2954]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 23 17:57:52.997231 kubelet[2954]: I0123 17:57:52.996911 2954 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 23 17:57:54.896080 kubelet[2954]: I0123 17:57:54.896013 2954 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Jan 23 17:57:54.896080 kubelet[2954]: I0123 17:57:54.896059 2954 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 23 17:57:54.896834 kubelet[2954]: I0123 17:57:54.896498 2954 server.go:956] "Client rotation is on, will bootstrap in background" Jan 23 17:57:54.954879 kubelet[2954]: E0123 17:57:54.954400 2954 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://172.31.28.159:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.28.159:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jan 23 17:57:54.956326 kubelet[2954]: I0123 17:57:54.956098 2954 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 23 17:57:54.970574 kubelet[2954]: I0123 17:57:54.970526 2954 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 23 17:57:54.975995 kubelet[2954]: I0123 17:57:54.975944 2954 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 23 17:57:54.976629 kubelet[2954]: I0123 17:57:54.976577 2954 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 23 17:57:54.976884 kubelet[2954]: I0123 17:57:54.976629 2954 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-28-159","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 23 17:57:54.977047 kubelet[2954]: I0123 17:57:54.977021 2954 topology_manager.go:138] "Creating topology manager with none policy" Jan 23 17:57:54.977047 kubelet[2954]: I0123 17:57:54.977044 2954 container_manager_linux.go:303] "Creating device plugin manager" Jan 23 17:57:54.978813 kubelet[2954]: I0123 17:57:54.978761 2954 state_mem.go:36] "Initialized new in-memory state store" Jan 23 17:57:54.986018 kubelet[2954]: I0123 17:57:54.985817 2954 kubelet.go:480] "Attempting to sync node with API server" Jan 23 17:57:54.986018 kubelet[2954]: I0123 17:57:54.985858 2954 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 23 17:57:54.986018 kubelet[2954]: I0123 17:57:54.985905 2954 kubelet.go:386] "Adding apiserver pod source" Jan 23 17:57:54.988306 kubelet[2954]: I0123 17:57:54.988274 2954 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 23 17:57:54.989756 kubelet[2954]: E0123 17:57:54.989596 2954 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.31.28.159:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-28-159&limit=500&resourceVersion=0\": dial tcp 172.31.28.159:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 23 17:57:54.991092 kubelet[2954]: E0123 17:57:54.990696 2954 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.31.28.159:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.28.159:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" 
type="*v1.Service" Jan 23 17:57:54.991483 kubelet[2954]: I0123 17:57:54.991454 2954 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Jan 23 17:57:54.992783 kubelet[2954]: I0123 17:57:54.992749 2954 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jan 23 17:57:54.993103 kubelet[2954]: W0123 17:57:54.993083 2954 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 23 17:57:55.001959 kubelet[2954]: I0123 17:57:55.001927 2954 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 23 17:57:55.002275 kubelet[2954]: I0123 17:57:55.002254 2954 server.go:1289] "Started kubelet" Jan 23 17:57:55.004450 kubelet[2954]: I0123 17:57:55.004371 2954 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jan 23 17:57:55.006280 kubelet[2954]: I0123 17:57:55.005986 2954 server.go:317] "Adding debug handlers to kubelet server" Jan 23 17:57:55.008066 kubelet[2954]: I0123 17:57:55.007977 2954 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 23 17:57:55.013239 kubelet[2954]: I0123 17:57:55.011648 2954 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 23 17:57:55.015971 kubelet[2954]: E0123 17:57:55.013657 2954 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.28.159:6443/api/v1/namespaces/default/events\": dial tcp 172.31.28.159:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-28-159.188d6de8bd4bbacc default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-28-159,UID:ip-172-31-28-159,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-28-159,},FirstTimestamp:2026-01-23 17:57:55.00217006 +0000 UTC m=+2.070585323,LastTimestamp:2026-01-23 17:57:55.00217006 +0000 UTC m=+2.070585323,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-28-159,}" Jan 23 17:57:55.016969 kubelet[2954]: I0123 17:57:55.016910 2954 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 23 17:57:55.018431 kubelet[2954]: I0123 17:57:55.018391 2954 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 23 17:57:55.031785 kubelet[2954]: I0123 17:57:55.031745 2954 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 23 17:57:55.032635 kubelet[2954]: E0123 17:57:55.032590 2954 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-28-159\" not found" Jan 23 17:57:55.034270 kubelet[2954]: I0123 17:57:55.034234 2954 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 23 17:57:55.040520 kubelet[2954]: I0123 17:57:55.035054 2954 reconciler.go:26] "Reconciler: start to sync state" Jan 23 17:57:55.040520 kubelet[2954]: E0123 17:57:55.036016 2954 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.31.28.159:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.28.159:6443: connect: connection 
refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 23 17:57:55.040520 kubelet[2954]: E0123 17:57:55.036138 2954 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.28.159:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-28-159?timeout=10s\": dial tcp 172.31.28.159:6443: connect: connection refused" interval="200ms" Jan 23 17:57:55.040520 kubelet[2954]: I0123 17:57:55.039124 2954 factory.go:223] Registration of the systemd container factory successfully Jan 23 17:57:55.040520 kubelet[2954]: I0123 17:57:55.039480 2954 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 23 17:57:55.042673 kubelet[2954]: E0123 17:57:55.042637 2954 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 23 17:57:55.043164 kubelet[2954]: I0123 17:57:55.043141 2954 factory.go:223] Registration of the containerd container factory successfully Jan 23 17:57:55.067158 kubelet[2954]: I0123 17:57:55.067099 2954 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Jan 23 17:57:55.072661 kubelet[2954]: I0123 17:57:55.072606 2954 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Jan 23 17:57:55.076643 kubelet[2954]: I0123 17:57:55.076586 2954 status_manager.go:230] "Starting to sync pod status with apiserver" Jan 23 17:57:55.076643 kubelet[2954]: I0123 17:57:55.076649 2954 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jan 23 17:57:55.076817 kubelet[2954]: I0123 17:57:55.076666 2954 kubelet.go:2436] "Starting kubelet main sync loop" Jan 23 17:57:55.076817 kubelet[2954]: E0123 17:57:55.076739 2954 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 23 17:57:55.083433 kubelet[2954]: E0123 17:57:55.083385 2954 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.31.28.159:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.28.159:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 23 17:57:55.088419 kubelet[2954]: I0123 17:57:55.088384 2954 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 23 17:57:55.088419 kubelet[2954]: I0123 17:57:55.088415 2954 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 23 17:57:55.088771 kubelet[2954]: I0123 17:57:55.088449 2954 state_mem.go:36] "Initialized new in-memory state store" Jan 23 17:57:55.090882 kubelet[2954]: I0123 17:57:55.090843 2954 policy_none.go:49] "None policy: Start" Jan 23 17:57:55.090882 kubelet[2954]: I0123 17:57:55.090885 2954 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 23 17:57:55.091055 kubelet[2954]: I0123 17:57:55.090909 2954 state_mem.go:35] "Initializing new in-memory state store" Jan 23 17:57:55.100416 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 23 17:57:55.117762 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. 
Jan 23 17:57:55.124923 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 23 17:57:55.134499 kubelet[2954]: E0123 17:57:55.134327 2954 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-28-159\" not found" Jan 23 17:57:55.135476 kubelet[2954]: E0123 17:57:55.135148 2954 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jan 23 17:57:55.136334 kubelet[2954]: I0123 17:57:55.136299 2954 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 23 17:57:55.136441 kubelet[2954]: I0123 17:57:55.136333 2954 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 23 17:57:55.136747 kubelet[2954]: I0123 17:57:55.136709 2954 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 23 17:57:55.141771 kubelet[2954]: E0123 17:57:55.141723 2954 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jan 23 17:57:55.141882 kubelet[2954]: E0123 17:57:55.141798 2954 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-28-159\" not found" Jan 23 17:57:55.199890 systemd[1]: Created slice kubepods-burstable-pod5c61f09188af28640d1c5fb58d6bf1df.slice - libcontainer container kubepods-burstable-pod5c61f09188af28640d1c5fb58d6bf1df.slice. Jan 23 17:57:55.230552 kubelet[2954]: E0123 17:57:55.230496 2954 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-28-159\" not found" node="ip-172-31-28-159" Jan 23 17:57:55.237432 systemd[1]: Created slice kubepods-burstable-pod38c40454fdfb9291affedb4a23dcadbe.slice - libcontainer container kubepods-burstable-pod38c40454fdfb9291affedb4a23dcadbe.slice. 
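
Annotation: the slice names above encode pod QoS class and UID. kubepods.slice parents kubepods-burstable.slice and kubepods-besteffort.slice, and each pod then gets kubepods-<qos>-pod<uid>.slice, with dashes in the UID escaped to underscores for systemd (visible in later entries such as kubepods-besteffort-pod6e3da9a3_4054_4c15_97c7_e07ca45d41b5.slice). A sketch of that naming, assuming the Guaranteed class omits the QoS segment:

```go
package main

import (
	"fmt"
	"strings"
)

// podSlice builds the systemd slice name used for a pod's cgroup, given its
// QoS class ("burstable", "besteffort", or "" for guaranteed) and its UID.
// Dashes in the UID are escaped to underscores, since "-" separates path
// components in systemd slice names.
func podSlice(qos, uid string) string {
	prefix := "kubepods"
	if qos != "" {
		prefix += "-" + qos
	}
	return prefix + "-pod" + strings.ReplaceAll(uid, "-", "_") + ".slice"
}

func main() {
	// Matches the slice created for the kube-apiserver static pod above.
	fmt.Println(podSlice("burstable", "5c61f09188af28640d1c5fb58d6bf1df"))
	// Matches the kube-proxy pod slice seen later in the log.
	fmt.Println(podSlice("besteffort", "6e3da9a3-4054-4c15-97c7-e07ca45d41b5"))
}
```
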
Jan 23 17:57:55.240044 kubelet[2954]: I0123 17:57:55.239457 2954 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5c61f09188af28640d1c5fb58d6bf1df-ca-certs\") pod \"kube-apiserver-ip-172-31-28-159\" (UID: \"5c61f09188af28640d1c5fb58d6bf1df\") " pod="kube-system/kube-apiserver-ip-172-31-28-159" Jan 23 17:57:55.240044 kubelet[2954]: I0123 17:57:55.239527 2954 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/38c40454fdfb9291affedb4a23dcadbe-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-28-159\" (UID: \"38c40454fdfb9291affedb4a23dcadbe\") " pod="kube-system/kube-controller-manager-ip-172-31-28-159" Jan 23 17:57:55.240044 kubelet[2954]: I0123 17:57:55.239569 2954 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/38c40454fdfb9291affedb4a23dcadbe-k8s-certs\") pod \"kube-controller-manager-ip-172-31-28-159\" (UID: \"38c40454fdfb9291affedb4a23dcadbe\") " pod="kube-system/kube-controller-manager-ip-172-31-28-159" Jan 23 17:57:55.240044 kubelet[2954]: I0123 17:57:55.239606 2954 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/38c40454fdfb9291affedb4a23dcadbe-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-28-159\" (UID: \"38c40454fdfb9291affedb4a23dcadbe\") " pod="kube-system/kube-controller-manager-ip-172-31-28-159" Jan 23 17:57:55.240044 kubelet[2954]: I0123 17:57:55.239645 2954 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/60ea5260e15ae5662acb370f0ea100d8-kubeconfig\") pod \"kube-scheduler-ip-172-31-28-159\" (UID: \"60ea5260e15ae5662acb370f0ea100d8\") " pod="kube-system/kube-scheduler-ip-172-31-28-159" Jan 23 17:57:55.240396 kubelet[2954]: I0123 17:57:55.239685 2954 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5c61f09188af28640d1c5fb58d6bf1df-k8s-certs\") pod \"kube-apiserver-ip-172-31-28-159\" (UID: \"5c61f09188af28640d1c5fb58d6bf1df\") " pod="kube-system/kube-apiserver-ip-172-31-28-159" Jan 23 17:57:55.240396 kubelet[2954]: I0123 17:57:55.239719 2954 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5c61f09188af28640d1c5fb58d6bf1df-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-28-159\" (UID: \"5c61f09188af28640d1c5fb58d6bf1df\") " pod="kube-system/kube-apiserver-ip-172-31-28-159" Jan 23 17:57:55.240396 kubelet[2954]: I0123 17:57:55.239798 2954 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/38c40454fdfb9291affedb4a23dcadbe-ca-certs\") pod \"kube-controller-manager-ip-172-31-28-159\" (UID: \"38c40454fdfb9291affedb4a23dcadbe\") " pod="kube-system/kube-controller-manager-ip-172-31-28-159" Jan 23 17:57:55.240396 kubelet[2954]: I0123 17:57:55.239835 2954 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/38c40454fdfb9291affedb4a23dcadbe-kubeconfig\") pod \"kube-controller-manager-ip-172-31-28-159\" (UID: \"38c40454fdfb9291affedb4a23dcadbe\") " pod="kube-system/kube-controller-manager-ip-172-31-28-159" Jan 23 17:57:55.240733 kubelet[2954]: E0123 17:57:55.240656 2954 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.28.159:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-28-159?timeout=10s\": dial tcp 172.31.28.159:6443: connect: connection refused" interval="400ms" Jan 23 17:57:55.246224 kubelet[2954]: I0123 17:57:55.244609 2954 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-28-159" Jan 23 17:57:55.246224 kubelet[2954]: E0123 17:57:55.245444 2954 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.28.159:6443/api/v1/nodes\": dial tcp 172.31.28.159:6443: connect: connection refused" node="ip-172-31-28-159" Jan 23 17:57:55.246224 kubelet[2954]: E0123 17:57:55.245624 2954 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-28-159\" not found" node="ip-172-31-28-159" Jan 23 17:57:55.249491 systemd[1]: Created slice kubepods-burstable-pod60ea5260e15ae5662acb370f0ea100d8.slice - libcontainer container kubepods-burstable-pod60ea5260e15ae5662acb370f0ea100d8.slice. Jan 23 17:57:55.254577 kubelet[2954]: E0123 17:57:55.254519 2954 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-28-159\" not found" node="ip-172-31-28-159" Jan 23 17:57:55.448799 kubelet[2954]: I0123 17:57:55.448760 2954 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-28-159" Jan 23 17:57:55.449361 kubelet[2954]: E0123 17:57:55.449298 2954 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.28.159:6443/api/v1/nodes\": dial tcp 172.31.28.159:6443: connect: connection refused" node="ip-172-31-28-159" Jan 23 17:57:55.534030 containerd[2018]: time="2026-01-23T17:57:55.533974086Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-28-159,Uid:5c61f09188af28640d1c5fb58d6bf1df,Namespace:kube-system,Attempt:0,}" Jan 23 17:57:55.547770 containerd[2018]: time="2026-01-23T17:57:55.547376911Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-28-159,Uid:38c40454fdfb9291affedb4a23dcadbe,Namespace:kube-system,Attempt:0,}" Jan 23 17:57:55.561066 containerd[2018]: time="2026-01-23T17:57:55.560805667Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-28-159,Uid:60ea5260e15ae5662acb370f0ea100d8,Namespace:kube-system,Attempt:0,}" Jan 23 17:57:55.582429 containerd[2018]: time="2026-01-23T17:57:55.582366223Z" level=info msg="connecting to shim 1cb676b01f2c89fd84038dd7b73e6711388ae9713fd8cb3fa02bdf3f5164b188" address="unix:///run/containerd/s/72ba87f8159dbc85485d74a9ec3f9435c2ca61ddec00edda7e2664d4c10255d6" namespace=k8s.io protocol=ttrpc version=3 Jan 23 17:57:55.614366 containerd[2018]: time="2026-01-23T17:57:55.614269075Z" level=info msg="connecting to shim be4727d12724f7157055b67756c1415e20682ad0df5a886989756572bc8ce7ab" address="unix:///run/containerd/s/9f9c4bb8c5765282b16da1fcf6ce4e992d775c2038a95316930462986f03c34b" namespace=k8s.io protocol=ttrpc version=3 Jan 23 17:57:55.630711 containerd[2018]: time="2026-01-23T17:57:55.630655195Z" level=info msg="connecting to shim 
b0dc0a62db34907a52034271d5833a63058531a100a9aebc49a3aaa218be695f" address="unix:///run/containerd/s/314a039c93d48bbc40fa82885827050be3c4d03f94bd296a8019e9af48971a91" namespace=k8s.io protocol=ttrpc version=3 Jan 23 17:57:55.643363 kubelet[2954]: E0123 17:57:55.643300 2954 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.28.159:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-28-159?timeout=10s\": dial tcp 172.31.28.159:6443: connect: connection refused" interval="800ms" Jan 23 17:57:55.689806 systemd[1]: Started cri-containerd-1cb676b01f2c89fd84038dd7b73e6711388ae9713fd8cb3fa02bdf3f5164b188.scope - libcontainer container 1cb676b01f2c89fd84038dd7b73e6711388ae9713fd8cb3fa02bdf3f5164b188. Jan 23 17:57:55.705563 systemd[1]: Started cri-containerd-be4727d12724f7157055b67756c1415e20682ad0df5a886989756572bc8ce7ab.scope - libcontainer container be4727d12724f7157055b67756c1415e20682ad0df5a886989756572bc8ce7ab. Jan 23 17:57:55.724529 systemd[1]: Started cri-containerd-b0dc0a62db34907a52034271d5833a63058531a100a9aebc49a3aaa218be695f.scope - libcontainer container b0dc0a62db34907a52034271d5833a63058531a100a9aebc49a3aaa218be695f. Jan 23 17:57:55.822144 containerd[2018]: time="2026-01-23T17:57:55.821991260Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-28-159,Uid:5c61f09188af28640d1c5fb58d6bf1df,Namespace:kube-system,Attempt:0,} returns sandbox id \"1cb676b01f2c89fd84038dd7b73e6711388ae9713fd8cb3fa02bdf3f5164b188\"" Jan 23 17:57:55.837479 containerd[2018]: time="2026-01-23T17:57:55.837428168Z" level=info msg="CreateContainer within sandbox \"1cb676b01f2c89fd84038dd7b73e6711388ae9713fd8cb3fa02bdf3f5164b188\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 23 17:57:55.850374 containerd[2018]: time="2026-01-23T17:57:55.850320440Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-28-159,Uid:38c40454fdfb9291affedb4a23dcadbe,Namespace:kube-system,Attempt:0,} returns sandbox id \"be4727d12724f7157055b67756c1415e20682ad0df5a886989756572bc8ce7ab\"" Jan 23 17:57:55.855486 kubelet[2954]: I0123 17:57:55.855451 2954 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-28-159" Jan 23 17:57:55.858472 kubelet[2954]: E0123 17:57:55.858398 2954 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.28.159:6443/api/v1/nodes\": dial tcp 172.31.28.159:6443: connect: connection refused" node="ip-172-31-28-159" Jan 23 17:57:55.862885 containerd[2018]: time="2026-01-23T17:57:55.862750460Z" level=info msg="CreateContainer within sandbox \"be4727d12724f7157055b67756c1415e20682ad0df5a886989756572bc8ce7ab\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 23 17:57:55.867209 containerd[2018]: time="2026-01-23T17:57:55.866165972Z" level=info msg="Container 6c54ca219aeea2e89504341d24471c6eca1ac296512201e7e3dbe763b70eee82: CDI devices from CRI Config.CDIDevices: []" Jan 23 17:57:55.885199 containerd[2018]: time="2026-01-23T17:57:55.885126440Z" level=info msg="CreateContainer within sandbox \"1cb676b01f2c89fd84038dd7b73e6711388ae9713fd8cb3fa02bdf3f5164b188\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"6c54ca219aeea2e89504341d24471c6eca1ac296512201e7e3dbe763b70eee82\"" Jan 23 17:57:55.886660 containerd[2018]: time="2026-01-23T17:57:55.886587428Z" level=info msg="Container 755fca6040b64a02e109fb4903a00e8206f595793c5d3dcd10d8732e4747684f: CDI 
devices from CRI Config.CDIDevices: []" Jan 23 17:57:55.888885 containerd[2018]: time="2026-01-23T17:57:55.888622292Z" level=info msg="StartContainer for \"6c54ca219aeea2e89504341d24471c6eca1ac296512201e7e3dbe763b70eee82\"" Jan 23 17:57:55.895144 containerd[2018]: time="2026-01-23T17:57:55.895091612Z" level=info msg="connecting to shim 6c54ca219aeea2e89504341d24471c6eca1ac296512201e7e3dbe763b70eee82" address="unix:///run/containerd/s/72ba87f8159dbc85485d74a9ec3f9435c2ca61ddec00edda7e2664d4c10255d6" protocol=ttrpc version=3 Jan 23 17:57:55.906117 containerd[2018]: time="2026-01-23T17:57:55.904156772Z" level=info msg="CreateContainer within sandbox \"be4727d12724f7157055b67756c1415e20682ad0df5a886989756572bc8ce7ab\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"755fca6040b64a02e109fb4903a00e8206f595793c5d3dcd10d8732e4747684f\"" Jan 23 17:57:55.908606 containerd[2018]: time="2026-01-23T17:57:55.908163248Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-28-159,Uid:60ea5260e15ae5662acb370f0ea100d8,Namespace:kube-system,Attempt:0,} returns sandbox id \"b0dc0a62db34907a52034271d5833a63058531a100a9aebc49a3aaa218be695f\"" Jan 23 17:57:55.909145 containerd[2018]: time="2026-01-23T17:57:55.908380748Z" level=info msg="StartContainer for \"755fca6040b64a02e109fb4903a00e8206f595793c5d3dcd10d8732e4747684f\"" Jan 23 17:57:55.912982 containerd[2018]: time="2026-01-23T17:57:55.912871268Z" level=info msg="connecting to shim 755fca6040b64a02e109fb4903a00e8206f595793c5d3dcd10d8732e4747684f" address="unix:///run/containerd/s/9f9c4bb8c5765282b16da1fcf6ce4e992d775c2038a95316930462986f03c34b" protocol=ttrpc version=3 Jan 23 17:57:55.916222 containerd[2018]: time="2026-01-23T17:57:55.915459920Z" level=info msg="CreateContainer within sandbox \"b0dc0a62db34907a52034271d5833a63058531a100a9aebc49a3aaa218be695f\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 23 17:57:55.927844 containerd[2018]: time="2026-01-23T17:57:55.927791108Z" level=info msg="Container 76bb1ac7a7eb6621f37112253648079c5aa695832771e931644c836752fae340: CDI devices from CRI Config.CDIDevices: []" Jan 23 17:57:55.940716 containerd[2018]: time="2026-01-23T17:57:55.940661096Z" level=info msg="CreateContainer within sandbox \"b0dc0a62db34907a52034271d5833a63058531a100a9aebc49a3aaa218be695f\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"76bb1ac7a7eb6621f37112253648079c5aa695832771e931644c836752fae340\"" Jan 23 17:57:55.942255 containerd[2018]: time="2026-01-23T17:57:55.942208160Z" level=info msg="StartContainer for \"76bb1ac7a7eb6621f37112253648079c5aa695832771e931644c836752fae340\"" Jan 23 17:57:55.948402 containerd[2018]: time="2026-01-23T17:57:55.948315597Z" level=info msg="connecting to shim 76bb1ac7a7eb6621f37112253648079c5aa695832771e931644c836752fae340" address="unix:///run/containerd/s/314a039c93d48bbc40fa82885827050be3c4d03f94bd296a8019e9af48971a91" protocol=ttrpc version=3 Jan 23 17:57:55.960656 systemd[1]: Started cri-containerd-6c54ca219aeea2e89504341d24471c6eca1ac296512201e7e3dbe763b70eee82.scope - libcontainer container 6c54ca219aeea2e89504341d24471c6eca1ac296512201e7e3dbe763b70eee82. Jan 23 17:57:55.972592 systemd[1]: Started cri-containerd-755fca6040b64a02e109fb4903a00e8206f595793c5d3dcd10d8732e4747684f.scope - libcontainer container 755fca6040b64a02e109fb4903a00e8206f595793c5d3dcd10d8732e4747684f. 
Jan 23 17:57:56.027534 systemd[1]: Started cri-containerd-76bb1ac7a7eb6621f37112253648079c5aa695832771e931644c836752fae340.scope - libcontainer container 76bb1ac7a7eb6621f37112253648079c5aa695832771e931644c836752fae340. Jan 23 17:57:56.075958 kubelet[2954]: E0123 17:57:56.071902 2954 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.31.28.159:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.28.159:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 23 17:57:56.138766 kubelet[2954]: E0123 17:57:56.138596 2954 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.31.28.159:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.28.159:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 23 17:57:56.183301 containerd[2018]: time="2026-01-23T17:57:56.183171318Z" level=info msg="StartContainer for \"6c54ca219aeea2e89504341d24471c6eca1ac296512201e7e3dbe763b70eee82\" returns successfully" Jan 23 17:57:56.208547 containerd[2018]: time="2026-01-23T17:57:56.208404294Z" level=info msg="StartContainer for \"755fca6040b64a02e109fb4903a00e8206f595793c5d3dcd10d8732e4747684f\" returns successfully" Jan 23 17:57:56.221054 containerd[2018]: time="2026-01-23T17:57:56.220989438Z" level=info msg="StartContainer for \"76bb1ac7a7eb6621f37112253648079c5aa695832771e931644c836752fae340\" returns successfully" Jan 23 17:57:56.322359 kubelet[2954]: E0123 17:57:56.322266 2954 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.31.28.159:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.28.159:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 23 17:57:56.662231 kubelet[2954]: I0123 17:57:56.662071 2954 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-28-159" Jan 23 17:57:57.171236 kubelet[2954]: E0123 17:57:57.170264 2954 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-28-159\" not found" node="ip-172-31-28-159" Jan 23 17:57:57.180519 kubelet[2954]: E0123 17:57:57.180450 2954 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-28-159\" not found" node="ip-172-31-28-159" Jan 23 17:57:57.184719 kubelet[2954]: E0123 17:57:57.184681 2954 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-28-159\" not found" node="ip-172-31-28-159" Jan 23 17:57:58.187322 kubelet[2954]: E0123 17:57:58.185975 2954 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-28-159\" not found" node="ip-172-31-28-159" Jan 23 17:57:58.188161 kubelet[2954]: E0123 17:57:58.187914 2954 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-28-159\" not found" node="ip-172-31-28-159" Jan 23 17:57:58.188509 kubelet[2954]: E0123 17:57:58.188471 2954 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-28-159\" not found" node="ip-172-31-28-159" 
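
Annotation: the entries from RunPodSandbox through the three successful StartContainer returns trace the standard CRI sequence the kubelet drives against containerd: create the pod sandbox, create each container inside it, then start it. A minimal sketch of the same sequence over the CRI v1 gRPC API; the socket path and metadata values are illustrative, and error handling is kept minimal for brevity:

```go
package main

import (
	"context"
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	cri "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// Assumed CRI endpoint; the kubelet above talks to containerd this way.
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
	rt := cri.NewRuntimeServiceClient(conn)
	ctx := context.Background()

	sandboxCfg := &cri.PodSandboxConfig{
		Metadata: &cri.PodSandboxMetadata{
			Name:      "kube-apiserver-ip-172-31-28-159",
			Namespace: "kube-system",
			Uid:       "5c61f09188af28640d1c5fb58d6bf1df",
		},
	}
	// RunPodSandbox sets up the pause container and pod-level namespaces.
	sb, err := rt.RunPodSandbox(ctx, &cri.RunPodSandboxRequest{Config: sandboxCfg})
	if err != nil {
		log.Fatal(err)
	}
	// CreateContainer within the sandbox, then StartContainer -- the same
	// message names that appear in the containerd entries above.
	ctr, err := rt.CreateContainer(ctx, &cri.CreateContainerRequest{
		PodSandboxId:  sb.PodSandboxId,
		Config:        &cri.ContainerConfig{Metadata: &cri.ContainerMetadata{Name: "kube-apiserver"}},
		SandboxConfig: sandboxCfg,
	})
	if err != nil {
		log.Fatal(err)
	}
	if _, err := rt.StartContainer(ctx, &cri.StartContainerRequest{ContainerId: ctr.ContainerId}); err != nil {
		log.Fatal(err)
	}
}
```
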
Jan 23 17:57:59.188893 kubelet[2954]: E0123 17:57:59.188840 2954 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-28-159\" not found" node="ip-172-31-28-159" Jan 23 17:57:59.192218 kubelet[2954]: E0123 17:57:59.189687 2954 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-28-159\" not found" node="ip-172-31-28-159" Jan 23 17:57:59.192428 kubelet[2954]: E0123 17:57:59.191946 2954 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-28-159\" not found" node="ip-172-31-28-159" Jan 23 17:57:59.994734 kubelet[2954]: I0123 17:57:59.994399 2954 apiserver.go:52] "Watching apiserver" Jan 23 17:58:00.002476 kubelet[2954]: E0123 17:58:00.002433 2954 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-28-159\" not found" node="ip-172-31-28-159" Jan 23 17:58:00.039426 kubelet[2954]: I0123 17:58:00.039224 2954 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 23 17:58:00.071152 kubelet[2954]: E0123 17:58:00.070813 2954 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ip-172-31-28-159.188d6de8bd4bbacc default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-28-159,UID:ip-172-31-28-159,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-28-159,},FirstTimestamp:2026-01-23 17:57:55.00217006 +0000 UTC m=+2.070585323,LastTimestamp:2026-01-23 17:57:55.00217006 +0000 UTC m=+2.070585323,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-28-159,}" Jan 23 17:58:00.175796 kubelet[2954]: I0123 17:58:00.175737 2954 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-28-159" Jan 23 17:58:00.176062 kubelet[2954]: E0123 17:58:00.176011 2954 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ip-172-31-28-159\": node \"ip-172-31-28-159\" not found" Jan 23 17:58:00.236224 kubelet[2954]: I0123 17:58:00.234512 2954 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-28-159" Jan 23 17:58:00.271953 kubelet[2954]: E0123 17:58:00.271895 2954 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-28-159\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ip-172-31-28-159" Jan 23 17:58:00.271953 kubelet[2954]: I0123 17:58:00.271944 2954 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-28-159" Jan 23 17:58:00.281138 kubelet[2954]: E0123 17:58:00.280811 2954 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ip-172-31-28-159\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ip-172-31-28-159" Jan 23 17:58:00.282366 kubelet[2954]: I0123 17:58:00.282300 2954 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-28-159" Jan 23 17:58:00.286692 kubelet[2954]: E0123 17:58:00.286631 2954 kubelet.go:3311] "Failed creating a 
mirror pod" err="pods \"kube-scheduler-ip-172-31-28-159\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ip-172-31-28-159" Jan 23 17:58:00.291213 kubelet[2954]: I0123 17:58:00.290241 2954 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-28-159" Jan 23 17:58:00.301637 kubelet[2954]: E0123 17:58:00.301557 2954 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-28-159\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ip-172-31-28-159" Jan 23 17:58:03.483024 systemd[1]: Reload requested from client PID 3234 ('systemctl') (unit session-7.scope)... Jan 23 17:58:03.483054 systemd[1]: Reloading... Jan 23 17:58:03.591308 update_engine[1988]: I20260123 17:58:03.591238 1988 update_attempter.cc:509] Updating boot flags... Jan 23 17:58:03.726219 zram_generator::config[3289]: No configuration found. Jan 23 17:58:04.554239 systemd[1]: Reloading finished in 1070 ms. Jan 23 17:58:04.728794 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 17:58:04.768134 systemd[1]: kubelet.service: Deactivated successfully. Jan 23 17:58:04.768840 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 17:58:04.770301 systemd[1]: kubelet.service: Consumed 2.876s CPU time, 131.1M memory peak. Jan 23 17:58:04.775749 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 17:58:05.154436 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 17:58:05.174769 (kubelet)[3522]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 23 17:58:05.285396 kubelet[3522]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 23 17:58:05.287354 kubelet[3522]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 23 17:58:05.287973 kubelet[3522]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 23 17:58:05.288473 kubelet[3522]: I0123 17:58:05.288362 3522 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 23 17:58:05.304580 kubelet[3522]: I0123 17:58:05.304521 3522 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Jan 23 17:58:05.304580 kubelet[3522]: I0123 17:58:05.304570 3522 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 23 17:58:05.305233 kubelet[3522]: I0123 17:58:05.304987 3522 server.go:956] "Client rotation is on, will bootstrap in background" Jan 23 17:58:05.309322 kubelet[3522]: I0123 17:58:05.309268 3522 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Jan 23 17:58:05.317321 kubelet[3522]: I0123 17:58:05.317241 3522 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 23 17:58:05.339570 kubelet[3522]: I0123 17:58:05.338650 3522 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 23 17:58:05.352552 kubelet[3522]: I0123 17:58:05.352489 3522 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 23 17:58:05.354504 kubelet[3522]: I0123 17:58:05.354425 3522 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 23 17:58:05.354760 kubelet[3522]: I0123 17:58:05.354490 3522 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-28-159","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 23 17:58:05.354897 kubelet[3522]: I0123 17:58:05.354765 3522 topology_manager.go:138] "Creating topology manager with none policy" Jan 23 17:58:05.354897 kubelet[3522]: I0123 17:58:05.354787 3522 container_manager_linux.go:303] "Creating device plugin manager" Jan 23 17:58:05.354897 kubelet[3522]: I0123 17:58:05.354860 3522 state_mem.go:36] "Initialized new in-memory state store" Jan 23 17:58:05.355158 kubelet[3522]: I0123 
17:58:05.355129 3522 kubelet.go:480] "Attempting to sync node with API server" Jan 23 17:58:05.356201 kubelet[3522]: I0123 17:58:05.355165 3522 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 23 17:58:05.356201 kubelet[3522]: I0123 17:58:05.355723 3522 kubelet.go:386] "Adding apiserver pod source" Jan 23 17:58:05.356201 kubelet[3522]: I0123 17:58:05.355767 3522 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 23 17:58:05.364319 kubelet[3522]: I0123 17:58:05.364142 3522 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Jan 23 17:58:05.365142 kubelet[3522]: I0123 17:58:05.365093 3522 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jan 23 17:58:05.374712 kubelet[3522]: I0123 17:58:05.372964 3522 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 23 17:58:05.374712 kubelet[3522]: I0123 17:58:05.373041 3522 server.go:1289] "Started kubelet" Jan 23 17:58:05.374885 sudo[3537]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jan 23 17:58:05.376447 sudo[3537]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Jan 23 17:58:05.382636 kubelet[3522]: I0123 17:58:05.382399 3522 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 23 17:58:05.393287 kubelet[3522]: I0123 17:58:05.392519 3522 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jan 23 17:58:05.396604 kubelet[3522]: I0123 17:58:05.394263 3522 server.go:317] "Adding debug handlers to kubelet server" Jan 23 17:58:05.401879 kubelet[3522]: I0123 17:58:05.400791 3522 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 23 17:58:05.401879 kubelet[3522]: I0123 17:58:05.401140 3522 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 23 17:58:05.401879 kubelet[3522]: I0123 17:58:05.401803 3522 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 23 17:58:05.406400 kubelet[3522]: I0123 17:58:05.404961 3522 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 23 17:58:05.406970 kubelet[3522]: E0123 17:58:05.406405 3522 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-28-159\" not found" Jan 23 17:58:05.411556 kubelet[3522]: I0123 17:58:05.410785 3522 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 23 17:58:05.411556 kubelet[3522]: I0123 17:58:05.411010 3522 reconciler.go:26] "Reconciler: start to sync state" Jan 23 17:58:05.494632 kubelet[3522]: I0123 17:58:05.494562 3522 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv4" Jan 23 17:58:05.501065 kubelet[3522]: I0123 17:58:05.500825 3522 factory.go:223] Registration of the containerd container factory successfully Jan 23 17:58:05.501065 kubelet[3522]: I0123 17:58:05.500868 3522 factory.go:223] Registration of the systemd container factory successfully Jan 23 17:58:05.501065 kubelet[3522]: I0123 17:58:05.501013 3522 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 23 17:58:05.503777 kubelet[3522]: I0123 17:58:05.503368 3522 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Jan 23 17:58:05.503777 kubelet[3522]: I0123 17:58:05.503412 3522 status_manager.go:230] "Starting to sync pod status with apiserver" Jan 23 17:58:05.503777 kubelet[3522]: I0123 17:58:05.503458 3522 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jan 23 17:58:05.503777 kubelet[3522]: I0123 17:58:05.503473 3522 kubelet.go:2436] "Starting kubelet main sync loop" Jan 23 17:58:05.503777 kubelet[3522]: E0123 17:58:05.503549 3522 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 23 17:58:05.527379 kubelet[3522]: E0123 17:58:05.527303 3522 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 23 17:58:05.603747 kubelet[3522]: E0123 17:58:05.603673 3522 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 23 17:58:05.692616 kubelet[3522]: I0123 17:58:05.692485 3522 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 23 17:58:05.693312 kubelet[3522]: I0123 17:58:05.692883 3522 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 23 17:58:05.693312 kubelet[3522]: I0123 17:58:05.692929 3522 state_mem.go:36] "Initialized new in-memory state store" Jan 23 17:58:05.693661 kubelet[3522]: I0123 17:58:05.693142 3522 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 23 17:58:05.693661 kubelet[3522]: I0123 17:58:05.693483 3522 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 23 17:58:05.693661 kubelet[3522]: I0123 17:58:05.693535 3522 policy_none.go:49] "None policy: Start" Jan 23 17:58:05.693661 kubelet[3522]: I0123 17:58:05.693556 3522 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 23 17:58:05.694283 kubelet[3522]: I0123 17:58:05.694007 3522 state_mem.go:35] "Initializing new in-memory state store" Jan 23 17:58:05.694457 kubelet[3522]: I0123 17:58:05.694432 3522 state_mem.go:75] "Updated machine memory state" Jan 23 17:58:05.705730 kubelet[3522]: E0123 17:58:05.704311 3522 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jan 23 17:58:05.705730 kubelet[3522]: I0123 17:58:05.704992 3522 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 23 17:58:05.705730 kubelet[3522]: I0123 17:58:05.705015 3522 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 23 17:58:05.709678 kubelet[3522]: I0123 17:58:05.708922 3522 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 23 17:58:05.718970 kubelet[3522]: E0123 17:58:05.718364 3522 eviction_manager.go:267] "eviction manager: 
failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Jan 23 17:58:05.805377 kubelet[3522]: I0123 17:58:05.805022 3522 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-28-159"
Jan 23 17:58:05.807056 kubelet[3522]: I0123 17:58:05.806260 3522 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-28-159"
Jan 23 17:58:05.808227 kubelet[3522]: I0123 17:58:05.806793 3522 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-28-159"
Jan 23 17:58:05.817009 kubelet[3522]: I0123 17:58:05.814007 3522 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/38c40454fdfb9291affedb4a23dcadbe-ca-certs\") pod \"kube-controller-manager-ip-172-31-28-159\" (UID: \"38c40454fdfb9291affedb4a23dcadbe\") " pod="kube-system/kube-controller-manager-ip-172-31-28-159"
Jan 23 17:58:05.817009 kubelet[3522]: I0123 17:58:05.814087 3522 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/38c40454fdfb9291affedb4a23dcadbe-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-28-159\" (UID: \"38c40454fdfb9291affedb4a23dcadbe\") " pod="kube-system/kube-controller-manager-ip-172-31-28-159"
Jan 23 17:58:05.817009 kubelet[3522]: I0123 17:58:05.814128 3522 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/38c40454fdfb9291affedb4a23dcadbe-k8s-certs\") pod \"kube-controller-manager-ip-172-31-28-159\" (UID: \"38c40454fdfb9291affedb4a23dcadbe\") " pod="kube-system/kube-controller-manager-ip-172-31-28-159"
Jan 23 17:58:05.817009 kubelet[3522]: I0123 17:58:05.814169 3522 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/60ea5260e15ae5662acb370f0ea100d8-kubeconfig\") pod \"kube-scheduler-ip-172-31-28-159\" (UID: \"60ea5260e15ae5662acb370f0ea100d8\") " pod="kube-system/kube-scheduler-ip-172-31-28-159"
Jan 23 17:58:05.817009 kubelet[3522]: I0123 17:58:05.814232 3522 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/38c40454fdfb9291affedb4a23dcadbe-kubeconfig\") pod \"kube-controller-manager-ip-172-31-28-159\" (UID: \"38c40454fdfb9291affedb4a23dcadbe\") " pod="kube-system/kube-controller-manager-ip-172-31-28-159"
Jan 23 17:58:05.817421 kubelet[3522]: I0123 17:58:05.814272 3522 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/38c40454fdfb9291affedb4a23dcadbe-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-28-159\" (UID: \"38c40454fdfb9291affedb4a23dcadbe\") " pod="kube-system/kube-controller-manager-ip-172-31-28-159"
Jan 23 17:58:05.817421 kubelet[3522]: I0123 17:58:05.814314 3522 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5c61f09188af28640d1c5fb58d6bf1df-ca-certs\") pod \"kube-apiserver-ip-172-31-28-159\" (UID: \"5c61f09188af28640d1c5fb58d6bf1df\") " pod="kube-system/kube-apiserver-ip-172-31-28-159"
Jan 23 17:58:05.817421 kubelet[3522]: I0123 17:58:05.814352 3522 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5c61f09188af28640d1c5fb58d6bf1df-k8s-certs\") pod \"kube-apiserver-ip-172-31-28-159\" (UID: \"5c61f09188af28640d1c5fb58d6bf1df\") " pod="kube-system/kube-apiserver-ip-172-31-28-159"
Jan 23 17:58:05.817421 kubelet[3522]: I0123 17:58:05.814388 3522 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5c61f09188af28640d1c5fb58d6bf1df-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-28-159\" (UID: \"5c61f09188af28640d1c5fb58d6bf1df\") " pod="kube-system/kube-apiserver-ip-172-31-28-159"
Jan 23 17:58:05.831258 kubelet[3522]: I0123 17:58:05.831153 3522 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-28-159"
Jan 23 17:58:05.857997 kubelet[3522]: I0123 17:58:05.857942 3522 kubelet_node_status.go:124] "Node was previously registered" node="ip-172-31-28-159"
Jan 23 17:58:05.858160 kubelet[3522]: I0123 17:58:05.858063 3522 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-28-159"
Jan 23 17:58:06.152670 sudo[3537]: pam_unix(sudo:session): session closed for user root
Jan 23 17:58:06.357762 kubelet[3522]: I0123 17:58:06.357494 3522 apiserver.go:52] "Watching apiserver"
Jan 23 17:58:06.412029 kubelet[3522]: I0123 17:58:06.411822 3522 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Jan 23 17:58:06.652227 kubelet[3522]: I0123 17:58:06.650414 3522 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-28-159" podStartSLOduration=1.650390274 podStartE2EDuration="1.650390274s" podCreationTimestamp="2026-01-23 17:58:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 17:58:06.650357754 +0000 UTC m=+1.463478549" watchObservedRunningTime="2026-01-23 17:58:06.650390274 +0000 UTC m=+1.463511081"
Jan 23 17:58:06.688658 kubelet[3522]: I0123 17:58:06.688465 3522 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-28-159" podStartSLOduration=1.688443414 podStartE2EDuration="1.688443414s" podCreationTimestamp="2026-01-23 17:58:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 17:58:06.66841665 +0000 UTC m=+1.481537421" watchObservedRunningTime="2026-01-23 17:58:06.688443414 +0000 UTC m=+1.501564209"
Jan 23 17:58:06.719876 kubelet[3522]: I0123 17:58:06.719789 3522 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-28-159" podStartSLOduration=1.719769126 podStartE2EDuration="1.719769126s" podCreationTimestamp="2026-01-23 17:58:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 17:58:06.69009897 +0000 UTC m=+1.503219753" watchObservedRunningTime="2026-01-23 17:58:06.719769126 +0000 UTC m=+1.532889909"
Jan 23 17:58:08.856887 kubelet[3522]: I0123 17:58:08.856831 3522 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Jan 23 17:58:08.857875 containerd[2018]: time="2026-01-23T17:58:08.857817357Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Jan 23 17:58:08.860559 kubelet[3522]: I0123 17:58:08.859853 3522 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Jan 23 17:58:09.493148 sudo[2371]: pam_unix(sudo:session): session closed for user root
Jan 23 17:58:09.570234 sshd[2370]: Connection closed by 68.220.241.50 port 49132
Jan 23 17:58:09.571470 sshd-session[2367]: pam_unix(sshd:session): session closed for user core
Jan 23 17:58:09.582061 systemd[1]: sshd@6-172.31.28.159:22-68.220.241.50:49132.service: Deactivated successfully.
Jan 23 17:58:09.591114 systemd[1]: session-7.scope: Deactivated successfully.
Jan 23 17:58:09.593437 systemd[1]: session-7.scope: Consumed 11.018s CPU time, 265.9M memory peak.
Jan 23 17:58:09.603058 systemd-logind[1983]: Session 7 logged out. Waiting for processes to exit.
Jan 23 17:58:09.608210 systemd-logind[1983]: Removed session 7.
Jan 23 17:58:09.894127 systemd[1]: Created slice kubepods-besteffort-pod6e3da9a3_4054_4c15_97c7_e07ca45d41b5.slice - libcontainer container kubepods-besteffort-pod6e3da9a3_4054_4c15_97c7_e07ca45d41b5.slice.
Jan 23 17:58:09.930125 systemd[1]: Created slice kubepods-burstable-podab6a6388_c9ad_4a5f_b211_144c970915f9.slice - libcontainer container kubepods-burstable-podab6a6388_c9ad_4a5f_b211_144c970915f9.slice.
Jan 23 17:58:09.955342 kubelet[3522]: I0123 17:58:09.955260 3522 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/ab6a6388-c9ad-4a5f-b211-144c970915f9-cilium-cgroup\") pod \"cilium-95fqw\" (UID: \"ab6a6388-c9ad-4a5f-b211-144c970915f9\") " pod="kube-system/cilium-95fqw"
Jan 23 17:58:09.955880 kubelet[3522]: I0123 17:58:09.955390 3522 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ab6a6388-c9ad-4a5f-b211-144c970915f9-lib-modules\") pod \"cilium-95fqw\" (UID: \"ab6a6388-c9ad-4a5f-b211-144c970915f9\") " pod="kube-system/cilium-95fqw"
Jan 23 17:58:09.956456 kubelet[3522]: I0123 17:58:09.955439 3522 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/ab6a6388-c9ad-4a5f-b211-144c970915f9-clustermesh-secrets\") pod \"cilium-95fqw\" (UID: \"ab6a6388-c9ad-4a5f-b211-144c970915f9\") " pod="kube-system/cilium-95fqw"
Jan 23 17:58:09.956456 kubelet[3522]: I0123 17:58:09.956342 3522 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ab6a6388-c9ad-4a5f-b211-144c970915f9-cilium-config-path\") pod \"cilium-95fqw\" (UID: \"ab6a6388-c9ad-4a5f-b211-144c970915f9\") " pod="kube-system/cilium-95fqw"
Jan 23 17:58:09.956456 kubelet[3522]: I0123 17:58:09.956397 3522 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/ab6a6388-c9ad-4a5f-b211-144c970915f9-host-proc-sys-kernel\") pod \"cilium-95fqw\" (UID: \"ab6a6388-c9ad-4a5f-b211-144c970915f9\") " pod="kube-system/cilium-95fqw"
Jan 23 17:58:09.956938 kubelet[3522]: I0123 17:58:09.956512 3522 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/ab6a6388-c9ad-4a5f-b211-144c970915f9-hostproc\") pod \"cilium-95fqw\" (UID: \"ab6a6388-c9ad-4a5f-b211-144c970915f9\") " pod="kube-system/cilium-95fqw"
Jan 23 17:58:09.956938 kubelet[3522]: I0123 17:58:09.956561 3522 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ab6a6388-c9ad-4a5f-b211-144c970915f9-etc-cni-netd\") pod \"cilium-95fqw\" (UID: \"ab6a6388-c9ad-4a5f-b211-144c970915f9\") " pod="kube-system/cilium-95fqw"
Jan 23 17:58:09.956938 kubelet[3522]: I0123 17:58:09.956636 3522 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ab6a6388-c9ad-4a5f-b211-144c970915f9-xtables-lock\") pod \"cilium-95fqw\" (UID: \"ab6a6388-c9ad-4a5f-b211-144c970915f9\") " pod="kube-system/cilium-95fqw"
Jan 23 17:58:09.956938 kubelet[3522]: I0123 17:58:09.956690 3522 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-srgrv\" (UniqueName: \"kubernetes.io/projected/ab6a6388-c9ad-4a5f-b211-144c970915f9-kube-api-access-srgrv\") pod \"cilium-95fqw\" (UID: \"ab6a6388-c9ad-4a5f-b211-144c970915f9\") " pod="kube-system/cilium-95fqw"
Jan 23 17:58:09.956938 kubelet[3522]: I0123 17:58:09.956804 3522 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/6e3da9a3-4054-4c15-97c7-e07ca45d41b5-kube-proxy\") pod \"kube-proxy-krsn5\" (UID: \"6e3da9a3-4054-4c15-97c7-e07ca45d41b5\") " pod="kube-system/kube-proxy-krsn5"
Jan 23 17:58:09.956938 kubelet[3522]: I0123 17:58:09.956880 3522 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6e3da9a3-4054-4c15-97c7-e07ca45d41b5-lib-modules\") pod \"kube-proxy-krsn5\" (UID: \"6e3da9a3-4054-4c15-97c7-e07ca45d41b5\") " pod="kube-system/kube-proxy-krsn5"
Jan 23 17:58:09.957859 kubelet[3522]: I0123 17:58:09.956918 3522 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/ab6a6388-c9ad-4a5f-b211-144c970915f9-cni-path\") pod \"cilium-95fqw\" (UID: \"ab6a6388-c9ad-4a5f-b211-144c970915f9\") " pod="kube-system/cilium-95fqw"
Jan 23 17:58:09.957859 kubelet[3522]: I0123 17:58:09.956977 3522 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/ab6a6388-c9ad-4a5f-b211-144c970915f9-host-proc-sys-net\") pod \"cilium-95fqw\" (UID: \"ab6a6388-c9ad-4a5f-b211-144c970915f9\") " pod="kube-system/cilium-95fqw"
Jan 23 17:58:09.957859 kubelet[3522]: I0123 17:58:09.957032 3522 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/ab6a6388-c9ad-4a5f-b211-144c970915f9-hubble-tls\") pod \"cilium-95fqw\" (UID: \"ab6a6388-c9ad-4a5f-b211-144c970915f9\") " pod="kube-system/cilium-95fqw"
Jan 23 17:58:09.957859 kubelet[3522]: I0123 17:58:09.957083 3522 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6e3da9a3-4054-4c15-97c7-e07ca45d41b5-xtables-lock\") pod \"kube-proxy-krsn5\" (UID: \"6e3da9a3-4054-4c15-97c7-e07ca45d41b5\") " pod="kube-system/kube-proxy-krsn5"
Jan 23 17:58:09.957859 kubelet[3522]: I0123 17:58:09.957273 3522 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7w5cs\" (UniqueName: \"kubernetes.io/projected/6e3da9a3-4054-4c15-97c7-e07ca45d41b5-kube-api-access-7w5cs\") pod \"kube-proxy-krsn5\" (UID: \"6e3da9a3-4054-4c15-97c7-e07ca45d41b5\") " pod="kube-system/kube-proxy-krsn5"
Jan 23 17:58:09.957859 kubelet[3522]: I0123 17:58:09.957319 3522 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/ab6a6388-c9ad-4a5f-b211-144c970915f9-cilium-run\") pod \"cilium-95fqw\" (UID: \"ab6a6388-c9ad-4a5f-b211-144c970915f9\") " pod="kube-system/cilium-95fqw"
Jan 23 17:58:09.958175 kubelet[3522]: I0123 17:58:09.957371 3522 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/ab6a6388-c9ad-4a5f-b211-144c970915f9-bpf-maps\") pod \"cilium-95fqw\" (UID: \"ab6a6388-c9ad-4a5f-b211-144c970915f9\") " pod="kube-system/cilium-95fqw"
Jan 23 17:58:10.052997 systemd[1]: Created slice kubepods-besteffort-pod3f227a00_f91f_409f_977b_8bd136e53bf6.slice - libcontainer container kubepods-besteffort-pod3f227a00_f91f_409f_977b_8bd136e53bf6.slice.
Jan 23 17:58:10.059497 kubelet[3522]: I0123 17:58:10.059435 3522 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4qpqk\" (UniqueName: \"kubernetes.io/projected/3f227a00-f91f-409f-977b-8bd136e53bf6-kube-api-access-4qpqk\") pod \"cilium-operator-6c4d7847fc-wm2f6\" (UID: \"3f227a00-f91f-409f-977b-8bd136e53bf6\") " pod="kube-system/cilium-operator-6c4d7847fc-wm2f6"
Jan 23 17:58:10.060230 kubelet[3522]: I0123 17:58:10.059679 3522 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3f227a00-f91f-409f-977b-8bd136e53bf6-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-wm2f6\" (UID: \"3f227a00-f91f-409f-977b-8bd136e53bf6\") " pod="kube-system/cilium-operator-6c4d7847fc-wm2f6"
Jan 23 17:58:10.213318 containerd[2018]: time="2026-01-23T17:58:10.211476463Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-krsn5,Uid:6e3da9a3-4054-4c15-97c7-e07ca45d41b5,Namespace:kube-system,Attempt:0,}"
Jan 23 17:58:10.247231 containerd[2018]: time="2026-01-23T17:58:10.246425432Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-95fqw,Uid:ab6a6388-c9ad-4a5f-b211-144c970915f9,Namespace:kube-system,Attempt:0,}"
Jan 23 17:58:10.262612 containerd[2018]: time="2026-01-23T17:58:10.262532036Z" level=info msg="connecting to shim 8e3912cb5f2788c576b315f1d4d79b7a9c1f9db35be7dca086669b7494b73e72" address="unix:///run/containerd/s/82b98a3d3bce430002185a415482e117b962d62106e76db6eb41dcb48505952e" namespace=k8s.io protocol=ttrpc version=3
Jan 23 17:58:10.301513 containerd[2018]: time="2026-01-23T17:58:10.300909416Z" level=info msg="connecting to shim 55dec41bf242572bb948619680e8c07f894fb800374df6903da5e8e346bb557c" address="unix:///run/containerd/s/040c453234bbaa521e2fc4b39a2c8a0416c3b449b2e7844ee1e0ae9e58996cb7" namespace=k8s.io protocol=ttrpc version=3
Jan 23 17:58:10.317550 systemd[1]: Started cri-containerd-8e3912cb5f2788c576b315f1d4d79b7a9c1f9db35be7dca086669b7494b73e72.scope - libcontainer container 8e3912cb5f2788c576b315f1d4d79b7a9c1f9db35be7dca086669b7494b73e72.
Jan 23 17:58:10.357784 systemd[1]: Started cri-containerd-55dec41bf242572bb948619680e8c07f894fb800374df6903da5e8e346bb557c.scope - libcontainer container 55dec41bf242572bb948619680e8c07f894fb800374df6903da5e8e346bb557c.
Jan 23 17:58:10.362154 containerd[2018]: time="2026-01-23T17:58:10.361976756Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-wm2f6,Uid:3f227a00-f91f-409f-977b-8bd136e53bf6,Namespace:kube-system,Attempt:0,}"
Jan 23 17:58:10.392944 containerd[2018]: time="2026-01-23T17:58:10.392880332Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-krsn5,Uid:6e3da9a3-4054-4c15-97c7-e07ca45d41b5,Namespace:kube-system,Attempt:0,} returns sandbox id \"8e3912cb5f2788c576b315f1d4d79b7a9c1f9db35be7dca086669b7494b73e72\""
Jan 23 17:58:10.409900 containerd[2018]: time="2026-01-23T17:58:10.409429136Z" level=info msg="CreateContainer within sandbox \"8e3912cb5f2788c576b315f1d4d79b7a9c1f9db35be7dca086669b7494b73e72\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Jan 23 17:58:10.415074 containerd[2018]: time="2026-01-23T17:58:10.414949112Z" level=info msg="connecting to shim 4320c9fd233aec87bbedc0af4cbcca5ea990f235c02d48d8ef6d054074497288" address="unix:///run/containerd/s/410afe0149fe4972d81c0cbce4c64dc502df536f270101fd356e766a9e9af106" namespace=k8s.io protocol=ttrpc version=3
Jan 23 17:58:10.445581 containerd[2018]: time="2026-01-23T17:58:10.445501113Z" level=info msg="Container 2c5c6f8bc6a447044b187d27bdc987dd904e378cfc13199def166b37bc28ee71: CDI devices from CRI Config.CDIDevices: []"
Jan 23 17:58:10.468202 containerd[2018]: time="2026-01-23T17:58:10.467948913Z" level=info msg="CreateContainer within sandbox \"8e3912cb5f2788c576b315f1d4d79b7a9c1f9db35be7dca086669b7494b73e72\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"2c5c6f8bc6a447044b187d27bdc987dd904e378cfc13199def166b37bc28ee71\""
Jan 23 17:58:10.470851 containerd[2018]: time="2026-01-23T17:58:10.470786973Z" level=info msg="StartContainer for \"2c5c6f8bc6a447044b187d27bdc987dd904e378cfc13199def166b37bc28ee71\""
Jan 23 17:58:10.473668 containerd[2018]: time="2026-01-23T17:58:10.472644609Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-95fqw,Uid:ab6a6388-c9ad-4a5f-b211-144c970915f9,Namespace:kube-system,Attempt:0,} returns sandbox id \"55dec41bf242572bb948619680e8c07f894fb800374df6903da5e8e346bb557c\""
Jan 23 17:58:10.478127 containerd[2018]: time="2026-01-23T17:58:10.477914145Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Jan 23 17:58:10.482728 containerd[2018]: time="2026-01-23T17:58:10.482654037Z" level=info msg="connecting to shim 2c5c6f8bc6a447044b187d27bdc987dd904e378cfc13199def166b37bc28ee71" address="unix:///run/containerd/s/82b98a3d3bce430002185a415482e117b962d62106e76db6eb41dcb48505952e" protocol=ttrpc version=3
Jan 23 17:58:10.500785 systemd[1]: Started cri-containerd-4320c9fd233aec87bbedc0af4cbcca5ea990f235c02d48d8ef6d054074497288.scope - libcontainer container 4320c9fd233aec87bbedc0af4cbcca5ea990f235c02d48d8ef6d054074497288.
Jan 23 17:58:10.551961 systemd[1]: Started cri-containerd-2c5c6f8bc6a447044b187d27bdc987dd904e378cfc13199def166b37bc28ee71.scope - libcontainer container 2c5c6f8bc6a447044b187d27bdc987dd904e378cfc13199def166b37bc28ee71.
Jan 23 17:58:10.622043 containerd[2018]: time="2026-01-23T17:58:10.621955125Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-wm2f6,Uid:3f227a00-f91f-409f-977b-8bd136e53bf6,Namespace:kube-system,Attempt:0,} returns sandbox id \"4320c9fd233aec87bbedc0af4cbcca5ea990f235c02d48d8ef6d054074497288\""
Jan 23 17:58:10.672625 containerd[2018]: time="2026-01-23T17:58:10.672563638Z" level=info msg="StartContainer for \"2c5c6f8bc6a447044b187d27bdc987dd904e378cfc13199def166b37bc28ee71\" returns successfully"
Jan 23 17:58:11.807896 kubelet[3522]: I0123 17:58:11.807799 3522 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-krsn5" podStartSLOduration=2.807776111 podStartE2EDuration="2.807776111s" podCreationTimestamp="2026-01-23 17:58:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 17:58:11.661463255 +0000 UTC m=+6.474584050" watchObservedRunningTime="2026-01-23 17:58:11.807776111 +0000 UTC m=+6.620896894"
Jan 23 17:58:18.658612 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4171161161.mount: Deactivated successfully.
Jan 23 17:58:21.307330 containerd[2018]: time="2026-01-23T17:58:21.307242462Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 17:58:21.309702 containerd[2018]: time="2026-01-23T17:58:21.309636282Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710"
Jan 23 17:58:21.311232 containerd[2018]: time="2026-01-23T17:58:21.311123598Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 17:58:21.314142 containerd[2018]: time="2026-01-23T17:58:21.314080039Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 10.83609855s"
Jan 23 17:58:21.314142 containerd[2018]: time="2026-01-23T17:58:21.314146099Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\""
Jan 23 17:58:21.317355 containerd[2018]: time="2026-01-23T17:58:21.316629163Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Jan 23 17:58:21.323565 containerd[2018]: time="2026-01-23T17:58:21.323503147Z" level=info msg="CreateContainer within sandbox \"55dec41bf242572bb948619680e8c07f894fb800374df6903da5e8e346bb557c\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Jan 23 17:58:21.340232 containerd[2018]: time="2026-01-23T17:58:21.339413779Z" level=info msg="Container 6e8cf2d335ae4231c4eb38a3b8c88364e6ecda3e33358ae35dfb8f1bd69f8254: CDI devices from CRI Config.CDIDevices: []"
Jan 23 17:58:21.361097 containerd[2018]: time="2026-01-23T17:58:21.360943087Z" level=info msg="CreateContainer within sandbox \"55dec41bf242572bb948619680e8c07f894fb800374df6903da5e8e346bb557c\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"6e8cf2d335ae4231c4eb38a3b8c88364e6ecda3e33358ae35dfb8f1bd69f8254\""
Jan 23 17:58:21.362476 containerd[2018]: time="2026-01-23T17:58:21.362421391Z" level=info msg="StartContainer for \"6e8cf2d335ae4231c4eb38a3b8c88364e6ecda3e33358ae35dfb8f1bd69f8254\""
Jan 23 17:58:21.365939 containerd[2018]: time="2026-01-23T17:58:21.365862055Z" level=info msg="connecting to shim 6e8cf2d335ae4231c4eb38a3b8c88364e6ecda3e33358ae35dfb8f1bd69f8254" address="unix:///run/containerd/s/040c453234bbaa521e2fc4b39a2c8a0416c3b449b2e7844ee1e0ae9e58996cb7" protocol=ttrpc version=3
Jan 23 17:58:21.408570 systemd[1]: Started cri-containerd-6e8cf2d335ae4231c4eb38a3b8c88364e6ecda3e33358ae35dfb8f1bd69f8254.scope - libcontainer container 6e8cf2d335ae4231c4eb38a3b8c88364e6ecda3e33358ae35dfb8f1bd69f8254.
Jan 23 17:58:21.466350 containerd[2018]: time="2026-01-23T17:58:21.466266739Z" level=info msg="StartContainer for \"6e8cf2d335ae4231c4eb38a3b8c88364e6ecda3e33358ae35dfb8f1bd69f8254\" returns successfully"
Jan 23 17:58:21.495104 systemd[1]: cri-containerd-6e8cf2d335ae4231c4eb38a3b8c88364e6ecda3e33358ae35dfb8f1bd69f8254.scope: Deactivated successfully.
Jan 23 17:58:21.500230 containerd[2018]: time="2026-01-23T17:58:21.500119039Z" level=info msg="received container exit event container_id:\"6e8cf2d335ae4231c4eb38a3b8c88364e6ecda3e33358ae35dfb8f1bd69f8254\" id:\"6e8cf2d335ae4231c4eb38a3b8c88364e6ecda3e33358ae35dfb8f1bd69f8254\" pid:3941 exited_at:{seconds:1769191101 nanos:499029103}"
Jan 23 17:58:21.542270 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6e8cf2d335ae4231c4eb38a3b8c88364e6ecda3e33358ae35dfb8f1bd69f8254-rootfs.mount: Deactivated successfully.
Jan 23 17:58:22.691454 containerd[2018]: time="2026-01-23T17:58:22.691359273Z" level=info msg="CreateContainer within sandbox \"55dec41bf242572bb948619680e8c07f894fb800374df6903da5e8e346bb557c\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Jan 23 17:58:22.710772 containerd[2018]: time="2026-01-23T17:58:22.709606305Z" level=info msg="Container 92d80e5740af7f10984dba8db6572c141db8bfcb6deb4459f7768b20507f1240: CDI devices from CRI Config.CDIDevices: []"
Jan 23 17:58:22.721780 containerd[2018]: time="2026-01-23T17:58:22.721723197Z" level=info msg="CreateContainer within sandbox \"55dec41bf242572bb948619680e8c07f894fb800374df6903da5e8e346bb557c\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"92d80e5740af7f10984dba8db6572c141db8bfcb6deb4459f7768b20507f1240\""
Jan 23 17:58:22.724682 containerd[2018]: time="2026-01-23T17:58:22.724547230Z" level=info msg="StartContainer for \"92d80e5740af7f10984dba8db6572c141db8bfcb6deb4459f7768b20507f1240\""
Jan 23 17:58:22.729177 containerd[2018]: time="2026-01-23T17:58:22.729043258Z" level=info msg="connecting to shim 92d80e5740af7f10984dba8db6572c141db8bfcb6deb4459f7768b20507f1240" address="unix:///run/containerd/s/040c453234bbaa521e2fc4b39a2c8a0416c3b449b2e7844ee1e0ae9e58996cb7" protocol=ttrpc version=3
Jan 23 17:58:22.772502 systemd[1]: Started cri-containerd-92d80e5740af7f10984dba8db6572c141db8bfcb6deb4459f7768b20507f1240.scope - libcontainer container 92d80e5740af7f10984dba8db6572c141db8bfcb6deb4459f7768b20507f1240.
Jan 23 17:58:22.837506 containerd[2018]: time="2026-01-23T17:58:22.837437602Z" level=info msg="StartContainer for \"92d80e5740af7f10984dba8db6572c141db8bfcb6deb4459f7768b20507f1240\" returns successfully"
Jan 23 17:58:22.863233 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 23 17:58:22.864441 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jan 23 17:58:22.867676 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables...
Jan 23 17:58:22.871691 containerd[2018]: time="2026-01-23T17:58:22.870968878Z" level=info msg="received container exit event container_id:\"92d80e5740af7f10984dba8db6572c141db8bfcb6deb4459f7768b20507f1240\" id:\"92d80e5740af7f10984dba8db6572c141db8bfcb6deb4459f7768b20507f1240\" pid:3989 exited_at:{seconds:1769191102 nanos:870678358}"
Jan 23 17:58:22.872864 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 23 17:58:22.879110 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Jan 23 17:58:22.880140 systemd[1]: cri-containerd-92d80e5740af7f10984dba8db6572c141db8bfcb6deb4459f7768b20507f1240.scope: Deactivated successfully.
Jan 23 17:58:22.917090 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 23 17:58:23.697315 containerd[2018]: time="2026-01-23T17:58:23.697215538Z" level=info msg="CreateContainer within sandbox \"55dec41bf242572bb948619680e8c07f894fb800374df6903da5e8e346bb557c\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jan 23 17:58:23.711209 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-92d80e5740af7f10984dba8db6572c141db8bfcb6deb4459f7768b20507f1240-rootfs.mount: Deactivated successfully.
Jan 23 17:58:23.731489 containerd[2018]: time="2026-01-23T17:58:23.730600427Z" level=info msg="Container 54a0db95390a578e83b094f12d4850476377898095f3e2628c98e49d97a6e4d1: CDI devices from CRI Config.CDIDevices: []"
Jan 23 17:58:23.742056 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2303806635.mount: Deactivated successfully.
Jan 23 17:58:23.749411 containerd[2018]: time="2026-01-23T17:58:23.749331623Z" level=info msg="CreateContainer within sandbox \"55dec41bf242572bb948619680e8c07f894fb800374df6903da5e8e346bb557c\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"54a0db95390a578e83b094f12d4850476377898095f3e2628c98e49d97a6e4d1\""
Jan 23 17:58:23.751701 containerd[2018]: time="2026-01-23T17:58:23.751641995Z" level=info msg="StartContainer for \"54a0db95390a578e83b094f12d4850476377898095f3e2628c98e49d97a6e4d1\""
Jan 23 17:58:23.756056 containerd[2018]: time="2026-01-23T17:58:23.755937431Z" level=info msg="connecting to shim 54a0db95390a578e83b094f12d4850476377898095f3e2628c98e49d97a6e4d1" address="unix:///run/containerd/s/040c453234bbaa521e2fc4b39a2c8a0416c3b449b2e7844ee1e0ae9e58996cb7" protocol=ttrpc version=3
Jan 23 17:58:23.800727 systemd[1]: Started cri-containerd-54a0db95390a578e83b094f12d4850476377898095f3e2628c98e49d97a6e4d1.scope - libcontainer container 54a0db95390a578e83b094f12d4850476377898095f3e2628c98e49d97a6e4d1.
Jan 23 17:58:23.940547 containerd[2018]: time="2026-01-23T17:58:23.940482384Z" level=info msg="StartContainer for \"54a0db95390a578e83b094f12d4850476377898095f3e2628c98e49d97a6e4d1\" returns successfully"
Jan 23 17:58:23.946062 systemd[1]: cri-containerd-54a0db95390a578e83b094f12d4850476377898095f3e2628c98e49d97a6e4d1.scope: Deactivated successfully.
Jan 23 17:58:23.953005 containerd[2018]: time="2026-01-23T17:58:23.952817112Z" level=info msg="received container exit event container_id:\"54a0db95390a578e83b094f12d4850476377898095f3e2628c98e49d97a6e4d1\" id:\"54a0db95390a578e83b094f12d4850476377898095f3e2628c98e49d97a6e4d1\" pid:4043 exited_at:{seconds:1769191103 nanos:952427148}"
Jan 23 17:58:24.708878 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-54a0db95390a578e83b094f12d4850476377898095f3e2628c98e49d97a6e4d1-rootfs.mount: Deactivated successfully.
Jan 23 17:58:24.711173 containerd[2018]: time="2026-01-23T17:58:24.711101963Z" level=info msg="CreateContainer within sandbox \"55dec41bf242572bb948619680e8c07f894fb800374df6903da5e8e346bb557c\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jan 23 17:58:24.730428 containerd[2018]: time="2026-01-23T17:58:24.729376535Z" level=info msg="Container e3a8d01da2df1db85dec05af9ee37b5889f78e976acf6cb60ad03a83fab7364b: CDI devices from CRI Config.CDIDevices: []"
Jan 23 17:58:24.750733 containerd[2018]: time="2026-01-23T17:58:24.750535992Z" level=info msg="CreateContainer within sandbox \"55dec41bf242572bb948619680e8c07f894fb800374df6903da5e8e346bb557c\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"e3a8d01da2df1db85dec05af9ee37b5889f78e976acf6cb60ad03a83fab7364b\""
Jan 23 17:58:24.753670 containerd[2018]: time="2026-01-23T17:58:24.753234264Z" level=info msg="StartContainer for \"e3a8d01da2df1db85dec05af9ee37b5889f78e976acf6cb60ad03a83fab7364b\""
Jan 23 17:58:24.755971 containerd[2018]: time="2026-01-23T17:58:24.755880480Z" level=info msg="connecting to shim e3a8d01da2df1db85dec05af9ee37b5889f78e976acf6cb60ad03a83fab7364b" address="unix:///run/containerd/s/040c453234bbaa521e2fc4b39a2c8a0416c3b449b2e7844ee1e0ae9e58996cb7" protocol=ttrpc version=3
Jan 23 17:58:24.803641 systemd[1]: Started cri-containerd-e3a8d01da2df1db85dec05af9ee37b5889f78e976acf6cb60ad03a83fab7364b.scope - libcontainer container e3a8d01da2df1db85dec05af9ee37b5889f78e976acf6cb60ad03a83fab7364b.
Jan 23 17:58:24.858934 systemd[1]: cri-containerd-e3a8d01da2df1db85dec05af9ee37b5889f78e976acf6cb60ad03a83fab7364b.scope: Deactivated successfully.
Jan 23 17:58:24.866510 containerd[2018]: time="2026-01-23T17:58:24.866424636Z" level=info msg="received container exit event container_id:\"e3a8d01da2df1db85dec05af9ee37b5889f78e976acf6cb60ad03a83fab7364b\" id:\"e3a8d01da2df1db85dec05af9ee37b5889f78e976acf6cb60ad03a83fab7364b\" pid:4084 exited_at:{seconds:1769191104 nanos:865709820}"
Jan 23 17:58:24.868223 containerd[2018]: time="2026-01-23T17:58:24.868133244Z" level=info msg="StartContainer for \"e3a8d01da2df1db85dec05af9ee37b5889f78e976acf6cb60ad03a83fab7364b\" returns successfully"
Jan 23 17:58:24.907788 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e3a8d01da2df1db85dec05af9ee37b5889f78e976acf6cb60ad03a83fab7364b-rootfs.mount: Deactivated successfully.
Jan 23 17:58:25.719557 containerd[2018]: time="2026-01-23T17:58:25.718930224Z" level=info msg="CreateContainer within sandbox \"55dec41bf242572bb948619680e8c07f894fb800374df6903da5e8e346bb557c\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jan 23 17:58:25.752665 containerd[2018]: time="2026-01-23T17:58:25.749594029Z" level=info msg="Container e99bea2dbe4d6242ba124fff213dbfe608b1e416900c1d23808e0c94e9b5858c: CDI devices from CRI Config.CDIDevices: []"
Jan 23 17:58:25.775987 containerd[2018]: time="2026-01-23T17:58:25.775896109Z" level=info msg="CreateContainer within sandbox \"55dec41bf242572bb948619680e8c07f894fb800374df6903da5e8e346bb557c\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"e99bea2dbe4d6242ba124fff213dbfe608b1e416900c1d23808e0c94e9b5858c\""
Jan 23 17:58:25.776842 containerd[2018]: time="2026-01-23T17:58:25.776779441Z" level=info msg="StartContainer for \"e99bea2dbe4d6242ba124fff213dbfe608b1e416900c1d23808e0c94e9b5858c\""
Jan 23 17:58:25.780294 containerd[2018]: time="2026-01-23T17:58:25.780219277Z" level=info msg="connecting to shim e99bea2dbe4d6242ba124fff213dbfe608b1e416900c1d23808e0c94e9b5858c" address="unix:///run/containerd/s/040c453234bbaa521e2fc4b39a2c8a0416c3b449b2e7844ee1e0ae9e58996cb7" protocol=ttrpc version=3
Jan 23 17:58:25.842509 systemd[1]: Started cri-containerd-e99bea2dbe4d6242ba124fff213dbfe608b1e416900c1d23808e0c94e9b5858c.scope - libcontainer container e99bea2dbe4d6242ba124fff213dbfe608b1e416900c1d23808e0c94e9b5858c.
Jan 23 17:58:25.948494 containerd[2018]: time="2026-01-23T17:58:25.948425630Z" level=info msg="StartContainer for \"e99bea2dbe4d6242ba124fff213dbfe608b1e416900c1d23808e0c94e9b5858c\" returns successfully"
Jan 23 17:58:26.133223 kubelet[3522]: I0123 17:58:26.132967 3522 kubelet_node_status.go:501] "Fast updating node status as it just became ready"
Jan 23 17:58:26.209278 systemd[1]: Created slice kubepods-burstable-pod63f62d1d_b440_4311_9a1e_fb7799bf78d9.slice - libcontainer container kubepods-burstable-pod63f62d1d_b440_4311_9a1e_fb7799bf78d9.slice.
Jan 23 17:58:26.237167 systemd[1]: Created slice kubepods-burstable-pod6f43e5b9_ad0a_4b8a_8505_612322dcdb3a.slice - libcontainer container kubepods-burstable-pod6f43e5b9_ad0a_4b8a_8505_612322dcdb3a.slice.
Jan 23 17:58:26.296766 kubelet[3522]: I0123 17:58:26.296581 3522 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/63f62d1d-b440-4311-9a1e-fb7799bf78d9-config-volume\") pod \"coredns-674b8bbfcf-xsxvn\" (UID: \"63f62d1d-b440-4311-9a1e-fb7799bf78d9\") " pod="kube-system/coredns-674b8bbfcf-xsxvn"
Jan 23 17:58:26.296921 kubelet[3522]: I0123 17:58:26.296785 3522 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8tj48\" (UniqueName: \"kubernetes.io/projected/63f62d1d-b440-4311-9a1e-fb7799bf78d9-kube-api-access-8tj48\") pod \"coredns-674b8bbfcf-xsxvn\" (UID: \"63f62d1d-b440-4311-9a1e-fb7799bf78d9\") " pod="kube-system/coredns-674b8bbfcf-xsxvn"
Jan 23 17:58:26.296982 kubelet[3522]: I0123 17:58:26.296893 3522 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vvt2v\" (UniqueName: \"kubernetes.io/projected/6f43e5b9-ad0a-4b8a-8505-612322dcdb3a-kube-api-access-vvt2v\") pod \"coredns-674b8bbfcf-db995\" (UID: \"6f43e5b9-ad0a-4b8a-8505-612322dcdb3a\") " pod="kube-system/coredns-674b8bbfcf-db995"
Jan 23 17:58:26.297298 kubelet[3522]: I0123 17:58:26.297067 3522 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6f43e5b9-ad0a-4b8a-8505-612322dcdb3a-config-volume\") pod \"coredns-674b8bbfcf-db995\" (UID: \"6f43e5b9-ad0a-4b8a-8505-612322dcdb3a\") " pod="kube-system/coredns-674b8bbfcf-db995"
Jan 23 17:58:26.524025 containerd[2018]: time="2026-01-23T17:58:26.523960980Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-xsxvn,Uid:63f62d1d-b440-4311-9a1e-fb7799bf78d9,Namespace:kube-system,Attempt:0,}"
Jan 23 17:58:26.552581 containerd[2018]: time="2026-01-23T17:58:26.552463789Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-db995,Uid:6f43e5b9-ad0a-4b8a-8505-612322dcdb3a,Namespace:kube-system,Attempt:0,}"
Jan 23 17:58:26.764938 kubelet[3522]: I0123 17:58:26.764810 3522 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-95fqw" podStartSLOduration=6.925138228 podStartE2EDuration="17.764786294s" podCreationTimestamp="2026-01-23 17:58:09 +0000 UTC" firstStartedPulling="2026-01-23 17:58:10.476749689 +0000 UTC m=+5.289870472" lastFinishedPulling="2026-01-23 17:58:21.316397659 +0000 UTC m=+16.129518538" observedRunningTime="2026-01-23 17:58:26.762804206 +0000 UTC m=+21.575925073" watchObservedRunningTime="2026-01-23 17:58:26.764786294 +0000 UTC m=+21.577907089"
Jan 23 17:58:30.655049 containerd[2018]: time="2026-01-23T17:58:30.654911825Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 17:58:30.658059 containerd[2018]: time="2026-01-23T17:58:30.658016909Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306"
Jan 23 17:58:30.659152 containerd[2018]: time="2026-01-23T17:58:30.659115761Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 17:58:30.662915 containerd[2018]: time="2026-01-23T17:58:30.662870741Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 9.34618393s"
Jan 23 17:58:30.663105 containerd[2018]: time="2026-01-23T17:58:30.663075593Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\""
Jan 23 17:58:30.670029 containerd[2018]: time="2026-01-23T17:58:30.669981497Z" level=info msg="CreateContainer within sandbox \"4320c9fd233aec87bbedc0af4cbcca5ea990f235c02d48d8ef6d054074497288\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Jan 23 17:58:30.686062 containerd[2018]: time="2026-01-23T17:58:30.685993949Z" level=info msg="Container 6b19dc06a1a794ef91f694ab9347f2382b8bc0135ad107c9cbbdaa403cf2d1ff: CDI devices from CRI Config.CDIDevices: []"
Jan 23 17:58:30.694892 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2070051502.mount: Deactivated successfully.
Jan 23 17:58:30.702335 containerd[2018]: time="2026-01-23T17:58:30.702276245Z" level=info msg="CreateContainer within sandbox \"4320c9fd233aec87bbedc0af4cbcca5ea990f235c02d48d8ef6d054074497288\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"6b19dc06a1a794ef91f694ab9347f2382b8bc0135ad107c9cbbdaa403cf2d1ff\""
Jan 23 17:58:30.704340 containerd[2018]: time="2026-01-23T17:58:30.704274113Z" level=info msg="StartContainer for \"6b19dc06a1a794ef91f694ab9347f2382b8bc0135ad107c9cbbdaa403cf2d1ff\""
Jan 23 17:58:30.706537 containerd[2018]: time="2026-01-23T17:58:30.706427969Z" level=info msg="connecting to shim 6b19dc06a1a794ef91f694ab9347f2382b8bc0135ad107c9cbbdaa403cf2d1ff" address="unix:///run/containerd/s/410afe0149fe4972d81c0cbce4c64dc502df536f270101fd356e766a9e9af106" protocol=ttrpc version=3
Jan 23 17:58:30.749500 systemd[1]: Started cri-containerd-6b19dc06a1a794ef91f694ab9347f2382b8bc0135ad107c9cbbdaa403cf2d1ff.scope - libcontainer container 6b19dc06a1a794ef91f694ab9347f2382b8bc0135ad107c9cbbdaa403cf2d1ff.
Jan 23 17:58:30.810611 containerd[2018]: time="2026-01-23T17:58:30.810471750Z" level=info msg="StartContainer for \"6b19dc06a1a794ef91f694ab9347f2382b8bc0135ad107c9cbbdaa403cf2d1ff\" returns successfully"
Jan 23 17:58:34.996042 systemd-networkd[1830]: cilium_host: Link UP
Jan 23 17:58:34.997824 systemd-networkd[1830]: cilium_net: Link UP
Jan 23 17:58:34.998713 systemd-networkd[1830]: cilium_net: Gained carrier
Jan 23 17:58:34.999091 systemd-networkd[1830]: cilium_host: Gained carrier
Jan 23 17:58:35.011333 (udev-worker)[4295]: Network interface NamePolicy= disabled on kernel command line.
Jan 23 17:58:35.012298 (udev-worker)[4296]: Network interface NamePolicy= disabled on kernel command line.
Jan 23 17:58:35.152443 systemd-networkd[1830]: cilium_net: Gained IPv6LL
Jan 23 17:58:35.193365 systemd-networkd[1830]: cilium_vxlan: Link UP
Jan 23 17:58:35.193938 systemd-networkd[1830]: cilium_vxlan: Gained carrier
Jan 23 17:58:35.393662 systemd-networkd[1830]: cilium_host: Gained IPv6LL
Jan 23 17:58:35.773234 kernel: NET: Registered PF_ALG protocol family
Jan 23 17:58:37.139051 systemd-networkd[1830]: lxc_health: Link UP
Jan 23 17:58:37.147846 systemd-networkd[1830]: lxc_health: Gained carrier
Jan 23 17:58:37.202286 systemd-networkd[1830]: cilium_vxlan: Gained IPv6LL
Jan 23 17:58:37.633232 kernel: eth0: renamed from tmp85a7a
Jan 23 17:58:37.636349 (udev-worker)[4623]: Network interface NamePolicy= disabled on kernel command line.
Jan 23 17:58:37.636919 systemd-networkd[1830]: lxc4a515a1eb9a8: Link UP
Jan 23 17:58:37.637555 systemd-networkd[1830]: lxc4a515a1eb9a8: Gained carrier
Jan 23 17:58:37.655416 systemd-networkd[1830]: lxcb73406ab9a73: Link UP
Jan 23 17:58:37.666813 (udev-worker)[4304]: Network interface NamePolicy= disabled on kernel command line.
Jan 23 17:58:37.671898 kernel: eth0: renamed from tmp8364a
Jan 23 17:58:37.671288 systemd-networkd[1830]: lxcb73406ab9a73: Gained carrier
Jan 23 17:58:38.296869 kubelet[3522]: I0123 17:58:38.296757 3522 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-wm2f6" podStartSLOduration=9.257281419 podStartE2EDuration="29.296734295s" podCreationTimestamp="2026-01-23 17:58:09 +0000 UTC" firstStartedPulling="2026-01-23 17:58:10.624616545 +0000 UTC m=+5.437737316" lastFinishedPulling="2026-01-23 17:58:30.664069421 +0000 UTC m=+25.477190192" observedRunningTime="2026-01-23 17:58:31.857856631 +0000 UTC m=+26.670977450" watchObservedRunningTime="2026-01-23 17:58:38.296734295 +0000 UTC m=+33.109855078"
Jan 23 17:58:38.993288 systemd-networkd[1830]: lxc_health: Gained IPv6LL
Jan 23 17:58:39.632591 systemd-networkd[1830]: lxc4a515a1eb9a8: Gained IPv6LL
Jan 23 17:58:39.696618 systemd-networkd[1830]: lxcb73406ab9a73: Gained IPv6LL
Jan 23 17:58:41.753506 ntpd[2186]: Listen normally on 6 cilium_host 192.168.0.6:123
Jan 23 17:58:41.753596 ntpd[2186]: Listen normally on 7 cilium_net [fe80::28fd:8aff:fe55:500%4]:123
Jan 23 17:58:41.753661 ntpd[2186]: Listen normally on 8 cilium_host [fe80::c4b0:4ff:fe7c:9257%5]:123
Jan 23 17:58:41.753707 ntpd[2186]: Listen normally on 9 cilium_vxlan [fe80::c24:3cff:fe41:e87%6]:123
Jan 23 17:58:41.753751 ntpd[2186]: Listen normally on 10 lxc_health [fe80::b037:a9ff:febe:3e27%8]:123
Jan 23 17:58:41.753799 ntpd[2186]: Listen normally on 11 lxc4a515a1eb9a8 [fe80::9427:6aff:fea4:aff2%10]:123
Jan 23 17:58:41.753842 ntpd[2186]: Listen normally on 12 lxcb73406ab9a73 [fe80::2cbf:63ff:fed5:8324%12]:123
Jan 23 17:58:46.200529 containerd[2018]: time="2026-01-23T17:58:46.200164674Z" level=info msg="connecting to shim 8364a5cc67c5d7fd051c851194fc60877185dbc881191a4d6e77a6e428deed05" address="unix:///run/containerd/s/2540e75efa8c7ccc6953c9abf25ed4931c4570eb1b67a6e45e12c120be999ed6" namespace=k8s.io protocol=ttrpc version=3
Jan 23 17:58:46.238414 containerd[2018]: time="2026-01-23T17:58:46.238300278Z" level=info msg="connecting to shim 85a7a241b06bf5e771ecd9b1fca121ed9e77a881edbe61c3782bec83216320a1" address="unix:///run/containerd/s/aa524782f6a9b07f9b6d4d686716ae170405120573705f2e15ba5159d558564c" namespace=k8s.io protocol=ttrpc version=3
Jan 23 17:58:46.274501 systemd[1]: Started cri-containerd-8364a5cc67c5d7fd051c851194fc60877185dbc881191a4d6e77a6e428deed05.scope - libcontainer container 8364a5cc67c5d7fd051c851194fc60877185dbc881191a4d6e77a6e428deed05.
Jan 23 17:58:46.328632 systemd[1]: Started cri-containerd-85a7a241b06bf5e771ecd9b1fca121ed9e77a881edbe61c3782bec83216320a1.scope - libcontainer container 85a7a241b06bf5e771ecd9b1fca121ed9e77a881edbe61c3782bec83216320a1.
Jan 23 17:58:46.480802 containerd[2018]: time="2026-01-23T17:58:46.480064412Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-xsxvn,Uid:63f62d1d-b440-4311-9a1e-fb7799bf78d9,Namespace:kube-system,Attempt:0,} returns sandbox id \"85a7a241b06bf5e771ecd9b1fca121ed9e77a881edbe61c3782bec83216320a1\""
Jan 23 17:58:46.493914 containerd[2018]: time="2026-01-23T17:58:46.493840280Z" level=info msg="CreateContainer within sandbox \"85a7a241b06bf5e771ecd9b1fca121ed9e77a881edbe61c3782bec83216320a1\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Jan 23 17:58:46.499211 containerd[2018]: time="2026-01-23T17:58:46.498976424Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-db995,Uid:6f43e5b9-ad0a-4b8a-8505-612322dcdb3a,Namespace:kube-system,Attempt:0,} returns sandbox id \"8364a5cc67c5d7fd051c851194fc60877185dbc881191a4d6e77a6e428deed05\""
Jan 23 17:58:46.508606 containerd[2018]: time="2026-01-23T17:58:46.508534004Z" level=info msg="CreateContainer within sandbox \"8364a5cc67c5d7fd051c851194fc60877185dbc881191a4d6e77a6e428deed05\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Jan 23 17:58:46.521558 containerd[2018]: time="2026-01-23T17:58:46.521488064Z" level=info msg="Container 22f28548276b5b1195e9955d93404fcddf2dd63f1c7af3e8d8425e917973dbe5: CDI devices from CRI Config.CDIDevices: []"
Jan 23 17:58:46.532813 containerd[2018]: time="2026-01-23T17:58:46.532739540Z" level=info msg="CreateContainer within sandbox \"85a7a241b06bf5e771ecd9b1fca121ed9e77a881edbe61c3782bec83216320a1\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"22f28548276b5b1195e9955d93404fcddf2dd63f1c7af3e8d8425e917973dbe5\""
Jan 23 17:58:46.533879 containerd[2018]: time="2026-01-23T17:58:46.533817332Z" level=info msg="StartContainer for \"22f28548276b5b1195e9955d93404fcddf2dd63f1c7af3e8d8425e917973dbe5\""
Jan 23 17:58:46.536701 containerd[2018]: time="2026-01-23T17:58:46.536609312Z" level=info msg="Container e36a84fe026f123ecc016ec4aa04f049045a830a05330415893b1ba9c3c380bd: CDI devices from CRI Config.CDIDevices: []"
Jan 23 17:58:46.538342 containerd[2018]: time="2026-01-23T17:58:46.538282712Z" level=info msg="connecting to shim 22f28548276b5b1195e9955d93404fcddf2dd63f1c7af3e8d8425e917973dbe5" address="unix:///run/containerd/s/aa524782f6a9b07f9b6d4d686716ae170405120573705f2e15ba5159d558564c" protocol=ttrpc version=3
Jan 23 17:58:46.547212 containerd[2018]: time="2026-01-23T17:58:46.547047632Z" level=info msg="CreateContainer within sandbox \"8364a5cc67c5d7fd051c851194fc60877185dbc881191a4d6e77a6e428deed05\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"e36a84fe026f123ecc016ec4aa04f049045a830a05330415893b1ba9c3c380bd\""
Jan 23 17:58:46.550343 containerd[2018]: time="2026-01-23T17:58:46.548361752Z" level=info msg="StartContainer for \"e36a84fe026f123ecc016ec4aa04f049045a830a05330415893b1ba9c3c380bd\""
Jan 23 17:58:46.551078 containerd[2018]: time="2026-01-23T17:58:46.551004248Z" level=info msg="connecting to shim e36a84fe026f123ecc016ec4aa04f049045a830a05330415893b1ba9c3c380bd" address="unix:///run/containerd/s/2540e75efa8c7ccc6953c9abf25ed4931c4570eb1b67a6e45e12c120be999ed6" protocol=ttrpc version=3
Jan 23 17:58:46.585636 systemd[1]: Started cri-containerd-22f28548276b5b1195e9955d93404fcddf2dd63f1c7af3e8d8425e917973dbe5.scope - libcontainer container 22f28548276b5b1195e9955d93404fcddf2dd63f1c7af3e8d8425e917973dbe5.
Jan 23 17:58:46.601828 systemd[1]: Started cri-containerd-e36a84fe026f123ecc016ec4aa04f049045a830a05330415893b1ba9c3c380bd.scope - libcontainer container e36a84fe026f123ecc016ec4aa04f049045a830a05330415893b1ba9c3c380bd.
Jan 23 17:58:46.687861 containerd[2018]: time="2026-01-23T17:58:46.687802437Z" level=info msg="StartContainer for \"22f28548276b5b1195e9955d93404fcddf2dd63f1c7af3e8d8425e917973dbe5\" returns successfully"
Jan 23 17:58:46.701708 containerd[2018]: time="2026-01-23T17:58:46.701616477Z" level=info msg="StartContainer for \"e36a84fe026f123ecc016ec4aa04f049045a830a05330415893b1ba9c3c380bd\" returns successfully"
Jan 23 17:58:46.911570 kubelet[3522]: I0123 17:58:46.911370 3522 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-xsxvn" podStartSLOduration=37.91134085 podStartE2EDuration="37.91134085s" podCreationTimestamp="2026-01-23 17:58:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 17:58:46.876011613 +0000 UTC m=+41.689132408" watchObservedRunningTime="2026-01-23 17:58:46.91134085 +0000 UTC m=+41.724461729"
Jan 23 17:58:46.913756 kubelet[3522]: I0123 17:58:46.913423 3522 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-db995" podStartSLOduration=37.91339885 podStartE2EDuration="37.91339885s" podCreationTimestamp="2026-01-23 17:58:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 17:58:46.908919418 +0000 UTC m=+41.722040225" watchObservedRunningTime="2026-01-23 17:58:46.91339885 +0000 UTC m=+41.726519765"
Jan 23 17:58:51.612527 systemd[1]: Started sshd@7-172.31.28.159:22-68.220.241.50:48356.service - OpenSSH per-connection server daemon (68.220.241.50:48356).
Jan 23 17:58:52.171325 sshd[4843]: Accepted publickey for core from 68.220.241.50 port 48356 ssh2: RSA SHA256:bT2W1VfOscVmSCRasYr+KxB4wnT28qHFQXmybiJGx88
Jan 23 17:58:52.173611 sshd-session[4843]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 17:58:52.182134 systemd-logind[1983]: New session 8 of user core.
Jan 23 17:58:52.191468 systemd[1]: Started session-8.scope - Session 8 of User core.
Jan 23 17:58:52.695322 sshd[4846]: Connection closed by 68.220.241.50 port 48356
Jan 23 17:58:52.696133 sshd-session[4843]: pam_unix(sshd:session): session closed for user core
Jan 23 17:58:52.702848 systemd-logind[1983]: Session 8 logged out. Waiting for processes to exit.
Jan 23 17:58:52.704895 systemd[1]: sshd@7-172.31.28.159:22-68.220.241.50:48356.service: Deactivated successfully.
Jan 23 17:58:52.711333 systemd[1]: session-8.scope: Deactivated successfully.
Jan 23 17:58:52.714171 systemd-logind[1983]: Removed session 8.
Jan 23 17:58:57.781707 systemd[1]: Started sshd@8-172.31.28.159:22-68.220.241.50:56322.service - OpenSSH per-connection server daemon (68.220.241.50:56322).
Jan 23 17:58:58.320457 sshd[4860]: Accepted publickey for core from 68.220.241.50 port 56322 ssh2: RSA SHA256:bT2W1VfOscVmSCRasYr+KxB4wnT28qHFQXmybiJGx88
Jan 23 17:58:58.322939 sshd-session[4860]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 17:58:58.330681 systemd-logind[1983]: New session 9 of user core.
Jan 23 17:58:58.346468 systemd[1]: Started session-9.scope - Session 9 of User core.
Jan 23 17:58:58.797470 sshd[4863]: Connection closed by 68.220.241.50 port 56322
Jan 23 17:58:58.796619 sshd-session[4860]: pam_unix(sshd:session): session closed for user core
Jan 23 17:58:58.805977 systemd[1]: sshd@8-172.31.28.159:22-68.220.241.50:56322.service: Deactivated successfully.
Jan 23 17:58:58.811560 systemd[1]: session-9.scope: Deactivated successfully.
Jan 23 17:58:58.816136 systemd-logind[1983]: Session 9 logged out. Waiting for processes to exit.
Jan 23 17:58:58.818925 systemd-logind[1983]: Removed session 9.
Jan 23 17:59:03.895378 systemd[1]: Started sshd@9-172.31.28.159:22-68.220.241.50:50782.service - OpenSSH per-connection server daemon (68.220.241.50:50782).
Jan 23 17:59:04.413590 sshd[4876]: Accepted publickey for core from 68.220.241.50 port 50782 ssh2: RSA SHA256:bT2W1VfOscVmSCRasYr+KxB4wnT28qHFQXmybiJGx88
Jan 23 17:59:04.416028 sshd-session[4876]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 17:59:04.424820 systemd-logind[1983]: New session 10 of user core.
Jan 23 17:59:04.431498 systemd[1]: Started session-10.scope - Session 10 of User core.
Jan 23 17:59:04.885515 sshd[4879]: Connection closed by 68.220.241.50 port 50782
Jan 23 17:59:04.886414 sshd-session[4876]: pam_unix(sshd:session): session closed for user core
Jan 23 17:59:04.894105 systemd[1]: sshd@9-172.31.28.159:22-68.220.241.50:50782.service: Deactivated successfully.
Jan 23 17:59:04.900803 systemd[1]: session-10.scope: Deactivated successfully.
Jan 23 17:59:04.904695 systemd-logind[1983]: Session 10 logged out. Waiting for processes to exit.
Jan 23 17:59:04.907618 systemd-logind[1983]: Removed session 10.
Jan 23 17:59:09.993070 systemd[1]: Started sshd@10-172.31.28.159:22-68.220.241.50:50790.service - OpenSSH per-connection server daemon (68.220.241.50:50790).
Jan 23 17:59:10.564273 sshd[4894]: Accepted publickey for core from 68.220.241.50 port 50790 ssh2: RSA SHA256:bT2W1VfOscVmSCRasYr+KxB4wnT28qHFQXmybiJGx88
Jan 23 17:59:10.566791 sshd-session[4894]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 17:59:10.576284 systemd-logind[1983]: New session 11 of user core.
Jan 23 17:59:10.582508 systemd[1]: Started session-11.scope - Session 11 of User core.
Jan 23 17:59:11.063886 sshd[4898]: Connection closed by 68.220.241.50 port 50790
Jan 23 17:59:11.064743 sshd-session[4894]: pam_unix(sshd:session): session closed for user core
Jan 23 17:59:11.073265 systemd[1]: sshd@10-172.31.28.159:22-68.220.241.50:50790.service: Deactivated successfully.
Jan 23 17:59:11.078533 systemd[1]: session-11.scope: Deactivated successfully.
Jan 23 17:59:11.083292 systemd-logind[1983]: Session 11 logged out. Waiting for processes to exit.
Jan 23 17:59:11.087479 systemd-logind[1983]: Removed session 11.
Jan 23 17:59:11.168662 systemd[1]: Started sshd@11-172.31.28.159:22-68.220.241.50:50798.service - OpenSSH per-connection server daemon (68.220.241.50:50798).
Jan 23 17:59:11.727698 sshd[4914]: Accepted publickey for core from 68.220.241.50 port 50798 ssh2: RSA SHA256:bT2W1VfOscVmSCRasYr+KxB4wnT28qHFQXmybiJGx88
Jan 23 17:59:11.730662 sshd-session[4914]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 17:59:11.738701 systemd-logind[1983]: New session 12 of user core.
Jan 23 17:59:11.744474 systemd[1]: Started session-12.scope - Session 12 of User core.
Jan 23 17:59:12.317296 sshd[4917]: Connection closed by 68.220.241.50 port 50798
Jan 23 17:59:12.316441 sshd-session[4914]: pam_unix(sshd:session): session closed for user core
Jan 23 17:59:12.325314 systemd[1]: sshd@11-172.31.28.159:22-68.220.241.50:50798.service: Deactivated successfully.
Jan 23 17:59:12.326254 systemd-logind[1983]: Session 12 logged out. Waiting for processes to exit.
Jan 23 17:59:12.331257 systemd[1]: session-12.scope: Deactivated successfully.
Jan 23 17:59:12.336064 systemd-logind[1983]: Removed session 12.
Jan 23 17:59:12.417700 systemd[1]: Started sshd@12-172.31.28.159:22-68.220.241.50:50808.service - OpenSSH per-connection server daemon (68.220.241.50:50808).
Jan 23 17:59:12.980260 sshd[4927]: Accepted publickey for core from 68.220.241.50 port 50808 ssh2: RSA SHA256:bT2W1VfOscVmSCRasYr+KxB4wnT28qHFQXmybiJGx88
Jan 23 17:59:12.981022 sshd-session[4927]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 17:59:12.991482 systemd-logind[1983]: New session 13 of user core.
Jan 23 17:59:13.001472 systemd[1]: Started session-13.scope - Session 13 of User core.
Jan 23 17:59:13.477336 sshd[4930]: Connection closed by 68.220.241.50 port 50808
Jan 23 17:59:13.478404 sshd-session[4927]: pam_unix(sshd:session): session closed for user core
Jan 23 17:59:13.486916 systemd-logind[1983]: Session 13 logged out. Waiting for processes to exit.
Jan 23 17:59:13.487742 systemd[1]: sshd@12-172.31.28.159:22-68.220.241.50:50808.service: Deactivated successfully.
Jan 23 17:59:13.496345 systemd[1]: session-13.scope: Deactivated successfully.
Jan 23 17:59:13.500989 systemd-logind[1983]: Removed session 13.
Jan 23 17:59:18.565945 systemd[1]: Started sshd@13-172.31.28.159:22-68.220.241.50:55972.service - OpenSSH per-connection server daemon (68.220.241.50:55972).
Jan 23 17:59:19.100244 sshd[4943]: Accepted publickey for core from 68.220.241.50 port 55972 ssh2: RSA SHA256:bT2W1VfOscVmSCRasYr+KxB4wnT28qHFQXmybiJGx88
Jan 23 17:59:19.102329 sshd-session[4943]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 17:59:19.111271 systemd-logind[1983]: New session 14 of user core.
Jan 23 17:59:19.118461 systemd[1]: Started session-14.scope - Session 14 of User core.
Jan 23 17:59:19.578242 sshd[4946]: Connection closed by 68.220.241.50 port 55972
Jan 23 17:59:19.579502 sshd-session[4943]: pam_unix(sshd:session): session closed for user core
Jan 23 17:59:19.585673 systemd[1]: sshd@13-172.31.28.159:22-68.220.241.50:55972.service: Deactivated successfully.
Jan 23 17:59:19.590829 systemd[1]: session-14.scope: Deactivated successfully.
Jan 23 17:59:19.595169 systemd-logind[1983]: Session 14 logged out. Waiting for processes to exit.
Jan 23 17:59:19.597575 systemd-logind[1983]: Removed session 14.
Jan 23 17:59:24.675619 systemd[1]: Started sshd@14-172.31.28.159:22-68.220.241.50:39616.service - OpenSSH per-connection server daemon (68.220.241.50:39616).
Jan 23 17:59:25.196927 sshd[4958]: Accepted publickey for core from 68.220.241.50 port 39616 ssh2: RSA SHA256:bT2W1VfOscVmSCRasYr+KxB4wnT28qHFQXmybiJGx88
Jan 23 17:59:25.199608 sshd-session[4958]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 17:59:25.207254 systemd-logind[1983]: New session 15 of user core.
Jan 23 17:59:25.224455 systemd[1]: Started session-15.scope - Session 15 of User core.
Jan 23 17:59:25.674573 sshd[4961]: Connection closed by 68.220.241.50 port 39616
Jan 23 17:59:25.675437 sshd-session[4958]: pam_unix(sshd:session): session closed for user core
Jan 23 17:59:25.685031 systemd-logind[1983]: Session 15 logged out. Waiting for processes to exit.
Jan 23 17:59:25.686749 systemd[1]: sshd@14-172.31.28.159:22-68.220.241.50:39616.service: Deactivated successfully.
Jan 23 17:59:25.693036 systemd[1]: session-15.scope: Deactivated successfully.
Jan 23 17:59:25.699599 systemd-logind[1983]: Removed session 15.
Jan 23 17:59:30.780734 systemd[1]: Started sshd@15-172.31.28.159:22-68.220.241.50:39624.service - OpenSSH per-connection server daemon (68.220.241.50:39624).
Jan 23 17:59:31.337710 sshd[4975]: Accepted publickey for core from 68.220.241.50 port 39624 ssh2: RSA SHA256:bT2W1VfOscVmSCRasYr+KxB4wnT28qHFQXmybiJGx88
Jan 23 17:59:31.339943 sshd-session[4975]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 17:59:31.347741 systemd-logind[1983]: New session 16 of user core.
Jan 23 17:59:31.360440 systemd[1]: Started session-16.scope - Session 16 of User core.
Jan 23 17:59:31.834105 sshd[4978]: Connection closed by 68.220.241.50 port 39624
Jan 23 17:59:31.834612 sshd-session[4975]: pam_unix(sshd:session): session closed for user core
Jan 23 17:59:31.842390 systemd-logind[1983]: Session 16 logged out. Waiting for processes to exit.
Jan 23 17:59:31.842777 systemd[1]: sshd@15-172.31.28.159:22-68.220.241.50:39624.service: Deactivated successfully.
Jan 23 17:59:31.847849 systemd[1]: session-16.scope: Deactivated successfully.
Jan 23 17:59:31.854289 systemd-logind[1983]: Removed session 16.
Jan 23 17:59:31.930127 systemd[1]: Started sshd@16-172.31.28.159:22-68.220.241.50:39628.service - OpenSSH per-connection server daemon (68.220.241.50:39628).
Jan 23 17:59:32.463885 sshd[4990]: Accepted publickey for core from 68.220.241.50 port 39628 ssh2: RSA SHA256:bT2W1VfOscVmSCRasYr+KxB4wnT28qHFQXmybiJGx88
Jan 23 17:59:32.466247 sshd-session[4990]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 17:59:32.474117 systemd-logind[1983]: New session 17 of user core.
Jan 23 17:59:32.483452 systemd[1]: Started session-17.scope - Session 17 of User core.
Jan 23 17:59:33.025081 sshd[4993]: Connection closed by 68.220.241.50 port 39628 Jan 23 17:59:33.025914 sshd-session[4990]: pam_unix(sshd:session): session closed for user core Jan 23 17:59:33.034149 systemd[1]: sshd@16-172.31.28.159:22-68.220.241.50:39628.service: Deactivated successfully. Jan 23 17:59:33.039877 systemd[1]: session-17.scope: Deactivated successfully. Jan 23 17:59:33.044844 systemd-logind[1983]: Session 17 logged out. Waiting for processes to exit. Jan 23 17:59:33.047040 systemd-logind[1983]: Removed session 17. Jan 23 17:59:33.141807 systemd[1]: Started sshd@17-172.31.28.159:22-68.220.241.50:49088.service - OpenSSH per-connection server daemon (68.220.241.50:49088). Jan 23 17:59:33.710740 sshd[5003]: Accepted publickey for core from 68.220.241.50 port 49088 ssh2: RSA SHA256:bT2W1VfOscVmSCRasYr+KxB4wnT28qHFQXmybiJGx88 Jan 23 17:59:33.713775 sshd-session[5003]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 17:59:33.723114 systemd-logind[1983]: New session 18 of user core. Jan 23 17:59:33.731483 systemd[1]: Started session-18.scope - Session 18 of User core. Jan 23 17:59:35.090053 sshd[5006]: Connection closed by 68.220.241.50 port 49088 Jan 23 17:59:35.088978 sshd-session[5003]: pam_unix(sshd:session): session closed for user core Jan 23 17:59:35.097975 systemd[1]: sshd@17-172.31.28.159:22-68.220.241.50:49088.service: Deactivated successfully. Jan 23 17:59:35.105820 systemd[1]: session-18.scope: Deactivated successfully. Jan 23 17:59:35.111314 systemd-logind[1983]: Session 18 logged out. Waiting for processes to exit. Jan 23 17:59:35.115810 systemd-logind[1983]: Removed session 18. Jan 23 17:59:35.186005 systemd[1]: Started sshd@18-172.31.28.159:22-68.220.241.50:49092.service - OpenSSH per-connection server daemon (68.220.241.50:49092). Jan 23 17:59:35.742489 sshd[5023]: Accepted publickey for core from 68.220.241.50 port 49092 ssh2: RSA SHA256:bT2W1VfOscVmSCRasYr+KxB4wnT28qHFQXmybiJGx88 Jan 23 17:59:35.746170 sshd-session[5023]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 17:59:35.754299 systemd-logind[1983]: New session 19 of user core. Jan 23 17:59:35.762433 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 23 17:59:36.505303 sshd[5026]: Connection closed by 68.220.241.50 port 49092 Jan 23 17:59:36.506398 sshd-session[5023]: pam_unix(sshd:session): session closed for user core Jan 23 17:59:36.513533 systemd[1]: sshd@18-172.31.28.159:22-68.220.241.50:49092.service: Deactivated successfully. Jan 23 17:59:36.518293 systemd[1]: session-19.scope: Deactivated successfully. Jan 23 17:59:36.523479 systemd-logind[1983]: Session 19 logged out. Waiting for processes to exit. Jan 23 17:59:36.526816 systemd-logind[1983]: Removed session 19. Jan 23 17:59:36.591814 systemd[1]: Started sshd@19-172.31.28.159:22-68.220.241.50:49094.service - OpenSSH per-connection server daemon (68.220.241.50:49094). Jan 23 17:59:37.118543 sshd[5036]: Accepted publickey for core from 68.220.241.50 port 49094 ssh2: RSA SHA256:bT2W1VfOscVmSCRasYr+KxB4wnT28qHFQXmybiJGx88 Jan 23 17:59:37.122437 sshd-session[5036]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 17:59:37.132093 systemd-logind[1983]: New session 20 of user core. Jan 23 17:59:37.139473 systemd[1]: Started session-20.scope - Session 20 of User core. 
Jan 23 17:59:37.586246 sshd[5039]: Connection closed by 68.220.241.50 port 49094 Jan 23 17:59:37.587064 sshd-session[5036]: pam_unix(sshd:session): session closed for user core Jan 23 17:59:37.593901 systemd[1]: sshd@19-172.31.28.159:22-68.220.241.50:49094.service: Deactivated successfully. Jan 23 17:59:37.599034 systemd[1]: session-20.scope: Deactivated successfully. Jan 23 17:59:37.603090 systemd-logind[1983]: Session 20 logged out. Waiting for processes to exit. Jan 23 17:59:37.606414 systemd-logind[1983]: Removed session 20. Jan 23 17:59:42.677574 systemd[1]: Started sshd@20-172.31.28.159:22-68.220.241.50:34998.service - OpenSSH per-connection server daemon (68.220.241.50:34998). Jan 23 17:59:43.196113 sshd[5056]: Accepted publickey for core from 68.220.241.50 port 34998 ssh2: RSA SHA256:bT2W1VfOscVmSCRasYr+KxB4wnT28qHFQXmybiJGx88 Jan 23 17:59:43.197761 sshd-session[5056]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 17:59:43.205376 systemd-logind[1983]: New session 21 of user core. Jan 23 17:59:43.210451 systemd[1]: Started session-21.scope - Session 21 of User core. Jan 23 17:59:43.664440 sshd[5059]: Connection closed by 68.220.241.50 port 34998 Jan 23 17:59:43.665310 sshd-session[5056]: pam_unix(sshd:session): session closed for user core Jan 23 17:59:43.672018 systemd[1]: sshd@20-172.31.28.159:22-68.220.241.50:34998.service: Deactivated successfully. Jan 23 17:59:43.680253 systemd[1]: session-21.scope: Deactivated successfully. Jan 23 17:59:43.683495 systemd-logind[1983]: Session 21 logged out. Waiting for processes to exit. Jan 23 17:59:43.687034 systemd-logind[1983]: Removed session 21. Jan 23 17:59:46.586862 update_engine[1988]: I20260123 17:59:46.586777 1988 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Jan 23 17:59:46.586862 update_engine[1988]: I20260123 17:59:46.586850 1988 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Jan 23 17:59:46.587441 update_engine[1988]: I20260123 17:59:46.587270 1988 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Jan 23 17:59:46.589812 update_engine[1988]: I20260123 17:59:46.588910 1988 omaha_request_params.cc:62] Current group set to stable Jan 23 17:59:46.589812 update_engine[1988]: I20260123 17:59:46.589066 1988 update_attempter.cc:499] Already updated boot flags. Skipping. Jan 23 17:59:46.589812 update_engine[1988]: I20260123 17:59:46.589098 1988 update_attempter.cc:643] Scheduling an action processor start. 
Jan 23 17:59:46.589812 update_engine[1988]: I20260123 17:59:46.589130 1988 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jan 23 17:59:46.589812 update_engine[1988]: I20260123 17:59:46.589214 1988 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Jan 23 17:59:46.589812 update_engine[1988]: I20260123 17:59:46.589315 1988 omaha_request_action.cc:271] Posting an Omaha request to disabled Jan 23 17:59:46.589812 update_engine[1988]: I20260123 17:59:46.589334 1988 omaha_request_action.cc:272] Request: Jan 23 17:59:46.589812 update_engine[1988]: Jan 23 17:59:46.589812 update_engine[1988]: Jan 23 17:59:46.589812 update_engine[1988]: Jan 23 17:59:46.589812 update_engine[1988]: Jan 23 17:59:46.589812 update_engine[1988]: Jan 23 17:59:46.589812 update_engine[1988]: Jan 23 17:59:46.589812 update_engine[1988]: Jan 23 17:59:46.589812 update_engine[1988]: Jan 23 17:59:46.589812 update_engine[1988]: I20260123 17:59:46.589349 1988 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 23 17:59:46.590528 locksmithd[2035]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Jan 23 17:59:46.593316 update_engine[1988]: I20260123 17:59:46.593271 1988 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 23 17:59:46.594632 update_engine[1988]: I20260123 17:59:46.594587 1988 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jan 23 17:59:46.603485 update_engine[1988]: E20260123 17:59:46.603343 1988 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 23 17:59:46.603485 update_engine[1988]: I20260123 17:59:46.603444 1988 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Jan 23 17:59:48.760063 systemd[1]: Started sshd@21-172.31.28.159:22-68.220.241.50:35006.service - OpenSSH per-connection server daemon (68.220.241.50:35006). Jan 23 17:59:49.285541 sshd[5071]: Accepted publickey for core from 68.220.241.50 port 35006 ssh2: RSA SHA256:bT2W1VfOscVmSCRasYr+KxB4wnT28qHFQXmybiJGx88 Jan 23 17:59:49.287915 sshd-session[5071]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 17:59:49.297534 systemd-logind[1983]: New session 22 of user core. Jan 23 17:59:49.303500 systemd[1]: Started session-22.scope - Session 22 of User core. Jan 23 17:59:49.752333 sshd[5074]: Connection closed by 68.220.241.50 port 35006 Jan 23 17:59:49.753617 sshd-session[5071]: pam_unix(sshd:session): session closed for user core Jan 23 17:59:49.759494 systemd[1]: sshd@21-172.31.28.159:22-68.220.241.50:35006.service: Deactivated successfully. Jan 23 17:59:49.765017 systemd[1]: session-22.scope: Deactivated successfully. Jan 23 17:59:49.770106 systemd-logind[1983]: Session 22 logged out. Waiting for processes to exit. Jan 23 17:59:49.772070 systemd-logind[1983]: Removed session 22. Jan 23 17:59:49.850178 systemd[1]: Started sshd@22-172.31.28.159:22-68.220.241.50:35014.service - OpenSSH per-connection server daemon (68.220.241.50:35014). Jan 23 17:59:50.375518 sshd[5086]: Accepted publickey for core from 68.220.241.50 port 35014 ssh2: RSA SHA256:bT2W1VfOscVmSCRasYr+KxB4wnT28qHFQXmybiJGx88 Jan 23 17:59:50.377973 sshd-session[5086]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 17:59:50.388070 systemd-logind[1983]: New session 23 of user core. Jan 23 17:59:50.393484 systemd[1]: Started session-23.scope - Session 23 of User core. 
Jan 23 17:59:53.966487 containerd[2018]: time="2026-01-23T17:59:53.966408831Z" level=info msg="StopContainer for \"6b19dc06a1a794ef91f694ab9347f2382b8bc0135ad107c9cbbdaa403cf2d1ff\" with timeout 30 (s)" Jan 23 17:59:53.969597 containerd[2018]: time="2026-01-23T17:59:53.969394011Z" level=info msg="Stop container \"6b19dc06a1a794ef91f694ab9347f2382b8bc0135ad107c9cbbdaa403cf2d1ff\" with signal terminated" Jan 23 17:59:54.005721 systemd[1]: cri-containerd-6b19dc06a1a794ef91f694ab9347f2382b8bc0135ad107c9cbbdaa403cf2d1ff.scope: Deactivated successfully. Jan 23 17:59:54.012175 containerd[2018]: time="2026-01-23T17:59:54.012008687Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 23 17:59:54.013467 containerd[2018]: time="2026-01-23T17:59:54.013393127Z" level=info msg="received container exit event container_id:\"6b19dc06a1a794ef91f694ab9347f2382b8bc0135ad107c9cbbdaa403cf2d1ff\" id:\"6b19dc06a1a794ef91f694ab9347f2382b8bc0135ad107c9cbbdaa403cf2d1ff\" pid:4268 exited_at:{seconds:1769191194 nanos:12720695}" Jan 23 17:59:54.028484 containerd[2018]: time="2026-01-23T17:59:54.028432127Z" level=info msg="StopContainer for \"e99bea2dbe4d6242ba124fff213dbfe608b1e416900c1d23808e0c94e9b5858c\" with timeout 2 (s)" Jan 23 17:59:54.030103 containerd[2018]: time="2026-01-23T17:59:54.029931563Z" level=info msg="Stop container \"e99bea2dbe4d6242ba124fff213dbfe608b1e416900c1d23808e0c94e9b5858c\" with signal terminated" Jan 23 17:59:54.047956 systemd-networkd[1830]: lxc_health: Link DOWN Jan 23 17:59:54.047977 systemd-networkd[1830]: lxc_health: Lost carrier Jan 23 17:59:54.088089 systemd[1]: cri-containerd-e99bea2dbe4d6242ba124fff213dbfe608b1e416900c1d23808e0c94e9b5858c.scope: Deactivated successfully. Jan 23 17:59:54.089960 systemd[1]: cri-containerd-e99bea2dbe4d6242ba124fff213dbfe608b1e416900c1d23808e0c94e9b5858c.scope: Consumed 14.417s CPU time, 124.6M memory peak, 120K read from disk, 12.9M written to disk. Jan 23 17:59:54.096531 containerd[2018]: time="2026-01-23T17:59:54.096384299Z" level=info msg="received container exit event container_id:\"e99bea2dbe4d6242ba124fff213dbfe608b1e416900c1d23808e0c94e9b5858c\" id:\"e99bea2dbe4d6242ba124fff213dbfe608b1e416900c1d23808e0c94e9b5858c\" pid:4124 exited_at:{seconds:1769191194 nanos:94529723}" Jan 23 17:59:54.103106 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6b19dc06a1a794ef91f694ab9347f2382b8bc0135ad107c9cbbdaa403cf2d1ff-rootfs.mount: Deactivated successfully. Jan 23 17:59:54.127013 containerd[2018]: time="2026-01-23T17:59:54.126938256Z" level=info msg="StopContainer for \"6b19dc06a1a794ef91f694ab9347f2382b8bc0135ad107c9cbbdaa403cf2d1ff\" returns successfully" Jan 23 17:59:54.128771 containerd[2018]: time="2026-01-23T17:59:54.127982220Z" level=info msg="StopPodSandbox for \"4320c9fd233aec87bbedc0af4cbcca5ea990f235c02d48d8ef6d054074497288\"" Jan 23 17:59:54.128771 containerd[2018]: time="2026-01-23T17:59:54.128456916Z" level=info msg="Container to stop \"6b19dc06a1a794ef91f694ab9347f2382b8bc0135ad107c9cbbdaa403cf2d1ff\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 23 17:59:54.149011 systemd[1]: cri-containerd-4320c9fd233aec87bbedc0af4cbcca5ea990f235c02d48d8ef6d054074497288.scope: Deactivated successfully. 
Jan 23 17:59:54.153739 containerd[2018]: time="2026-01-23T17:59:54.153686412Z" level=info msg="received sandbox exit event container_id:\"4320c9fd233aec87bbedc0af4cbcca5ea990f235c02d48d8ef6d054074497288\" id:\"4320c9fd233aec87bbedc0af4cbcca5ea990f235c02d48d8ef6d054074497288\" exit_status:137 exited_at:{seconds:1769191194 nanos:153382740}" monitor_name=podsandbox Jan 23 17:59:54.176573 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e99bea2dbe4d6242ba124fff213dbfe608b1e416900c1d23808e0c94e9b5858c-rootfs.mount: Deactivated successfully. Jan 23 17:59:54.192080 containerd[2018]: time="2026-01-23T17:59:54.191956644Z" level=info msg="StopContainer for \"e99bea2dbe4d6242ba124fff213dbfe608b1e416900c1d23808e0c94e9b5858c\" returns successfully" Jan 23 17:59:54.193004 containerd[2018]: time="2026-01-23T17:59:54.192636660Z" level=info msg="StopPodSandbox for \"55dec41bf242572bb948619680e8c07f894fb800374df6903da5e8e346bb557c\"" Jan 23 17:59:54.193236 containerd[2018]: time="2026-01-23T17:59:54.192731292Z" level=info msg="Container to stop \"54a0db95390a578e83b094f12d4850476377898095f3e2628c98e49d97a6e4d1\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 23 17:59:54.193236 containerd[2018]: time="2026-01-23T17:59:54.193130940Z" level=info msg="Container to stop \"e99bea2dbe4d6242ba124fff213dbfe608b1e416900c1d23808e0c94e9b5858c\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 23 17:59:54.193236 containerd[2018]: time="2026-01-23T17:59:54.193156008Z" level=info msg="Container to stop \"6e8cf2d335ae4231c4eb38a3b8c88364e6ecda3e33358ae35dfb8f1bd69f8254\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 23 17:59:54.193815 containerd[2018]: time="2026-01-23T17:59:54.193417404Z" level=info msg="Container to stop \"92d80e5740af7f10984dba8db6572c141db8bfcb6deb4459f7768b20507f1240\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 23 17:59:54.193815 containerd[2018]: time="2026-01-23T17:59:54.193452636Z" level=info msg="Container to stop \"e3a8d01da2df1db85dec05af9ee37b5889f78e976acf6cb60ad03a83fab7364b\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 23 17:59:54.217861 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4320c9fd233aec87bbedc0af4cbcca5ea990f235c02d48d8ef6d054074497288-rootfs.mount: Deactivated successfully. Jan 23 17:59:54.222943 containerd[2018]: time="2026-01-23T17:59:54.222894624Z" level=info msg="shim disconnected" id=4320c9fd233aec87bbedc0af4cbcca5ea990f235c02d48d8ef6d054074497288 namespace=k8s.io Jan 23 17:59:54.223384 containerd[2018]: time="2026-01-23T17:59:54.223063944Z" level=warning msg="cleaning up after shim disconnected" id=4320c9fd233aec87bbedc0af4cbcca5ea990f235c02d48d8ef6d054074497288 namespace=k8s.io Jan 23 17:59:54.223384 containerd[2018]: time="2026-01-23T17:59:54.223115400Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 23 17:59:54.224306 systemd[1]: cri-containerd-55dec41bf242572bb948619680e8c07f894fb800374df6903da5e8e346bb557c.scope: Deactivated successfully. 
Jan 23 17:59:54.230674 containerd[2018]: time="2026-01-23T17:59:54.230589528Z" level=info msg="received sandbox exit event container_id:\"55dec41bf242572bb948619680e8c07f894fb800374df6903da5e8e346bb557c\" id:\"55dec41bf242572bb948619680e8c07f894fb800374df6903da5e8e346bb557c\" exit_status:137 exited_at:{seconds:1769191194 nanos:229507932}" monitor_name=podsandbox Jan 23 17:59:54.261749 containerd[2018]: time="2026-01-23T17:59:54.261680604Z" level=info msg="received sandbox container exit event sandbox_id:\"4320c9fd233aec87bbedc0af4cbcca5ea990f235c02d48d8ef6d054074497288\" exit_status:137 exited_at:{seconds:1769191194 nanos:153382740}" monitor_name=criService Jan 23 17:59:54.266314 containerd[2018]: time="2026-01-23T17:59:54.264445152Z" level=info msg="TearDown network for sandbox \"4320c9fd233aec87bbedc0af4cbcca5ea990f235c02d48d8ef6d054074497288\" successfully" Jan 23 17:59:54.266314 containerd[2018]: time="2026-01-23T17:59:54.264499980Z" level=info msg="StopPodSandbox for \"4320c9fd233aec87bbedc0af4cbcca5ea990f235c02d48d8ef6d054074497288\" returns successfully" Jan 23 17:59:54.267922 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-4320c9fd233aec87bbedc0af4cbcca5ea990f235c02d48d8ef6d054074497288-shm.mount: Deactivated successfully. Jan 23 17:59:54.294276 containerd[2018]: time="2026-01-23T17:59:54.293098440Z" level=info msg="shim disconnected" id=55dec41bf242572bb948619680e8c07f894fb800374df6903da5e8e346bb557c namespace=k8s.io Jan 23 17:59:54.294276 containerd[2018]: time="2026-01-23T17:59:54.293156940Z" level=warning msg="cleaning up after shim disconnected" id=55dec41bf242572bb948619680e8c07f894fb800374df6903da5e8e346bb557c namespace=k8s.io Jan 23 17:59:54.294276 containerd[2018]: time="2026-01-23T17:59:54.293860908Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 23 17:59:54.293839 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-55dec41bf242572bb948619680e8c07f894fb800374df6903da5e8e346bb557c-rootfs.mount: Deactivated successfully. 
Jan 23 17:59:54.325152 containerd[2018]: time="2026-01-23T17:59:54.325092900Z" level=info msg="TearDown network for sandbox \"55dec41bf242572bb948619680e8c07f894fb800374df6903da5e8e346bb557c\" successfully" Jan 23 17:59:54.325152 containerd[2018]: time="2026-01-23T17:59:54.325146096Z" level=info msg="StopPodSandbox for \"55dec41bf242572bb948619680e8c07f894fb800374df6903da5e8e346bb557c\" returns successfully" Jan 23 17:59:54.325821 containerd[2018]: time="2026-01-23T17:59:54.325703004Z" level=info msg="received sandbox container exit event sandbox_id:\"55dec41bf242572bb948619680e8c07f894fb800374df6903da5e8e346bb557c\" exit_status:137 exited_at:{seconds:1769191194 nanos:229507932}" monitor_name=criService Jan 23 17:59:54.403022 kubelet[3522]: I0123 17:59:54.402961 3522 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4qpqk\" (UniqueName: \"kubernetes.io/projected/3f227a00-f91f-409f-977b-8bd136e53bf6-kube-api-access-4qpqk\") pod \"3f227a00-f91f-409f-977b-8bd136e53bf6\" (UID: \"3f227a00-f91f-409f-977b-8bd136e53bf6\") " Jan 23 17:59:54.403614 kubelet[3522]: I0123 17:59:54.403033 3522 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3f227a00-f91f-409f-977b-8bd136e53bf6-cilium-config-path\") pod \"3f227a00-f91f-409f-977b-8bd136e53bf6\" (UID: \"3f227a00-f91f-409f-977b-8bd136e53bf6\") " Jan 23 17:59:54.408152 kubelet[3522]: I0123 17:59:54.408057 3522 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3f227a00-f91f-409f-977b-8bd136e53bf6-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "3f227a00-f91f-409f-977b-8bd136e53bf6" (UID: "3f227a00-f91f-409f-977b-8bd136e53bf6"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 23 17:59:54.409721 kubelet[3522]: I0123 17:59:54.409641 3522 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3f227a00-f91f-409f-977b-8bd136e53bf6-kube-api-access-4qpqk" (OuterVolumeSpecName: "kube-api-access-4qpqk") pod "3f227a00-f91f-409f-977b-8bd136e53bf6" (UID: "3f227a00-f91f-409f-977b-8bd136e53bf6"). InnerVolumeSpecName "kube-api-access-4qpqk". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 23 17:59:54.503673 kubelet[3522]: I0123 17:59:54.503538 3522 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/ab6a6388-c9ad-4a5f-b211-144c970915f9-cilium-cgroup\") pod \"ab6a6388-c9ad-4a5f-b211-144c970915f9\" (UID: \"ab6a6388-c9ad-4a5f-b211-144c970915f9\") " Jan 23 17:59:54.503673 kubelet[3522]: I0123 17:59:54.503609 3522 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/ab6a6388-c9ad-4a5f-b211-144c970915f9-host-proc-sys-net\") pod \"ab6a6388-c9ad-4a5f-b211-144c970915f9\" (UID: \"ab6a6388-c9ad-4a5f-b211-144c970915f9\") " Jan 23 17:59:54.503673 kubelet[3522]: I0123 17:59:54.503655 3522 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/ab6a6388-c9ad-4a5f-b211-144c970915f9-hostproc\") pod \"ab6a6388-c9ad-4a5f-b211-144c970915f9\" (UID: \"ab6a6388-c9ad-4a5f-b211-144c970915f9\") " Jan 23 17:59:54.503919 kubelet[3522]: I0123 17:59:54.503700 3522 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/ab6a6388-c9ad-4a5f-b211-144c970915f9-hubble-tls\") pod \"ab6a6388-c9ad-4a5f-b211-144c970915f9\" (UID: \"ab6a6388-c9ad-4a5f-b211-144c970915f9\") " Jan 23 17:59:54.503919 kubelet[3522]: I0123 17:59:54.503740 3522 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ab6a6388-c9ad-4a5f-b211-144c970915f9-cilium-config-path\") pod \"ab6a6388-c9ad-4a5f-b211-144c970915f9\" (UID: \"ab6a6388-c9ad-4a5f-b211-144c970915f9\") " Jan 23 17:59:54.503919 kubelet[3522]: I0123 17:59:54.503772 3522 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ab6a6388-c9ad-4a5f-b211-144c970915f9-etc-cni-netd\") pod \"ab6a6388-c9ad-4a5f-b211-144c970915f9\" (UID: \"ab6a6388-c9ad-4a5f-b211-144c970915f9\") " Jan 23 17:59:54.503919 kubelet[3522]: I0123 17:59:54.503806 3522 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ab6a6388-c9ad-4a5f-b211-144c970915f9-xtables-lock\") pod \"ab6a6388-c9ad-4a5f-b211-144c970915f9\" (UID: \"ab6a6388-c9ad-4a5f-b211-144c970915f9\") " Jan 23 17:59:54.503919 kubelet[3522]: I0123 17:59:54.503871 3522 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ab6a6388-c9ad-4a5f-b211-144c970915f9-lib-modules\") pod \"ab6a6388-c9ad-4a5f-b211-144c970915f9\" (UID: \"ab6a6388-c9ad-4a5f-b211-144c970915f9\") " Jan 23 17:59:54.503919 kubelet[3522]: I0123 17:59:54.503911 3522 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/ab6a6388-c9ad-4a5f-b211-144c970915f9-clustermesh-secrets\") pod \"ab6a6388-c9ad-4a5f-b211-144c970915f9\" (UID: \"ab6a6388-c9ad-4a5f-b211-144c970915f9\") " Jan 23 17:59:54.504287 kubelet[3522]: I0123 17:59:54.503954 3522 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-srgrv\" (UniqueName: \"kubernetes.io/projected/ab6a6388-c9ad-4a5f-b211-144c970915f9-kube-api-access-srgrv\") pod \"ab6a6388-c9ad-4a5f-b211-144c970915f9\" (UID: 
\"ab6a6388-c9ad-4a5f-b211-144c970915f9\") " Jan 23 17:59:54.504287 kubelet[3522]: I0123 17:59:54.503986 3522 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/ab6a6388-c9ad-4a5f-b211-144c970915f9-bpf-maps\") pod \"ab6a6388-c9ad-4a5f-b211-144c970915f9\" (UID: \"ab6a6388-c9ad-4a5f-b211-144c970915f9\") " Jan 23 17:59:54.504287 kubelet[3522]: I0123 17:59:54.504022 3522 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/ab6a6388-c9ad-4a5f-b211-144c970915f9-host-proc-sys-kernel\") pod \"ab6a6388-c9ad-4a5f-b211-144c970915f9\" (UID: \"ab6a6388-c9ad-4a5f-b211-144c970915f9\") " Jan 23 17:59:54.504287 kubelet[3522]: I0123 17:59:54.504057 3522 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/ab6a6388-c9ad-4a5f-b211-144c970915f9-cni-path\") pod \"ab6a6388-c9ad-4a5f-b211-144c970915f9\" (UID: \"ab6a6388-c9ad-4a5f-b211-144c970915f9\") " Jan 23 17:59:54.504287 kubelet[3522]: I0123 17:59:54.504091 3522 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/ab6a6388-c9ad-4a5f-b211-144c970915f9-cilium-run\") pod \"ab6a6388-c9ad-4a5f-b211-144c970915f9\" (UID: \"ab6a6388-c9ad-4a5f-b211-144c970915f9\") " Jan 23 17:59:54.504287 kubelet[3522]: I0123 17:59:54.504163 3522 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-4qpqk\" (UniqueName: \"kubernetes.io/projected/3f227a00-f91f-409f-977b-8bd136e53bf6-kube-api-access-4qpqk\") on node \"ip-172-31-28-159\" DevicePath \"\"" Jan 23 17:59:54.507115 kubelet[3522]: I0123 17:59:54.504226 3522 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3f227a00-f91f-409f-977b-8bd136e53bf6-cilium-config-path\") on node \"ip-172-31-28-159\" DevicePath \"\"" Jan 23 17:59:54.507115 kubelet[3522]: I0123 17:59:54.504293 3522 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ab6a6388-c9ad-4a5f-b211-144c970915f9-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "ab6a6388-c9ad-4a5f-b211-144c970915f9" (UID: "ab6a6388-c9ad-4a5f-b211-144c970915f9"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 23 17:59:54.507115 kubelet[3522]: I0123 17:59:54.504352 3522 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ab6a6388-c9ad-4a5f-b211-144c970915f9-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "ab6a6388-c9ad-4a5f-b211-144c970915f9" (UID: "ab6a6388-c9ad-4a5f-b211-144c970915f9"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 23 17:59:54.507115 kubelet[3522]: I0123 17:59:54.504388 3522 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ab6a6388-c9ad-4a5f-b211-144c970915f9-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "ab6a6388-c9ad-4a5f-b211-144c970915f9" (UID: "ab6a6388-c9ad-4a5f-b211-144c970915f9"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 23 17:59:54.507115 kubelet[3522]: I0123 17:59:54.504424 3522 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ab6a6388-c9ad-4a5f-b211-144c970915f9-hostproc" (OuterVolumeSpecName: "hostproc") pod "ab6a6388-c9ad-4a5f-b211-144c970915f9" (UID: "ab6a6388-c9ad-4a5f-b211-144c970915f9"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 23 17:59:54.507442 kubelet[3522]: I0123 17:59:54.506453 3522 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ab6a6388-c9ad-4a5f-b211-144c970915f9-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "ab6a6388-c9ad-4a5f-b211-144c970915f9" (UID: "ab6a6388-c9ad-4a5f-b211-144c970915f9"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 23 17:59:54.507442 kubelet[3522]: I0123 17:59:54.506578 3522 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ab6a6388-c9ad-4a5f-b211-144c970915f9-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "ab6a6388-c9ad-4a5f-b211-144c970915f9" (UID: "ab6a6388-c9ad-4a5f-b211-144c970915f9"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 23 17:59:54.507442 kubelet[3522]: I0123 17:59:54.506637 3522 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ab6a6388-c9ad-4a5f-b211-144c970915f9-cni-path" (OuterVolumeSpecName: "cni-path") pod "ab6a6388-c9ad-4a5f-b211-144c970915f9" (UID: "ab6a6388-c9ad-4a5f-b211-144c970915f9"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 23 17:59:54.507442 kubelet[3522]: I0123 17:59:54.506933 3522 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ab6a6388-c9ad-4a5f-b211-144c970915f9-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "ab6a6388-c9ad-4a5f-b211-144c970915f9" (UID: "ab6a6388-c9ad-4a5f-b211-144c970915f9"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 23 17:59:54.507442 kubelet[3522]: I0123 17:59:54.506984 3522 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ab6a6388-c9ad-4a5f-b211-144c970915f9-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "ab6a6388-c9ad-4a5f-b211-144c970915f9" (UID: "ab6a6388-c9ad-4a5f-b211-144c970915f9"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 23 17:59:54.507960 kubelet[3522]: I0123 17:59:54.507023 3522 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ab6a6388-c9ad-4a5f-b211-144c970915f9-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "ab6a6388-c9ad-4a5f-b211-144c970915f9" (UID: "ab6a6388-c9ad-4a5f-b211-144c970915f9"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 23 17:59:54.514096 kubelet[3522]: I0123 17:59:54.514020 3522 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab6a6388-c9ad-4a5f-b211-144c970915f9-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "ab6a6388-c9ad-4a5f-b211-144c970915f9" (UID: "ab6a6388-c9ad-4a5f-b211-144c970915f9"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 23 17:59:54.515039 kubelet[3522]: I0123 17:59:54.514897 3522 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ab6a6388-c9ad-4a5f-b211-144c970915f9-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "ab6a6388-c9ad-4a5f-b211-144c970915f9" (UID: "ab6a6388-c9ad-4a5f-b211-144c970915f9"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 23 17:59:54.517639 kubelet[3522]: I0123 17:59:54.517430 3522 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ab6a6388-c9ad-4a5f-b211-144c970915f9-kube-api-access-srgrv" (OuterVolumeSpecName: "kube-api-access-srgrv") pod "ab6a6388-c9ad-4a5f-b211-144c970915f9" (UID: "ab6a6388-c9ad-4a5f-b211-144c970915f9"). InnerVolumeSpecName "kube-api-access-srgrv". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 23 17:59:54.517787 kubelet[3522]: I0123 17:59:54.517658 3522 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ab6a6388-c9ad-4a5f-b211-144c970915f9-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "ab6a6388-c9ad-4a5f-b211-144c970915f9" (UID: "ab6a6388-c9ad-4a5f-b211-144c970915f9"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 23 17:59:54.605625 kubelet[3522]: I0123 17:59:54.605293 3522 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/ab6a6388-c9ad-4a5f-b211-144c970915f9-hubble-tls\") on node \"ip-172-31-28-159\" DevicePath \"\"" Jan 23 17:59:54.605625 kubelet[3522]: I0123 17:59:54.605341 3522 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ab6a6388-c9ad-4a5f-b211-144c970915f9-cilium-config-path\") on node \"ip-172-31-28-159\" DevicePath \"\"" Jan 23 17:59:54.605625 kubelet[3522]: I0123 17:59:54.605367 3522 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ab6a6388-c9ad-4a5f-b211-144c970915f9-etc-cni-netd\") on node \"ip-172-31-28-159\" DevicePath \"\"" Jan 23 17:59:54.605625 kubelet[3522]: I0123 17:59:54.605388 3522 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ab6a6388-c9ad-4a5f-b211-144c970915f9-xtables-lock\") on node \"ip-172-31-28-159\" DevicePath \"\"" Jan 23 17:59:54.605625 kubelet[3522]: I0123 17:59:54.605407 3522 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ab6a6388-c9ad-4a5f-b211-144c970915f9-lib-modules\") on node \"ip-172-31-28-159\" DevicePath \"\"" Jan 23 17:59:54.605625 kubelet[3522]: I0123 17:59:54.605428 3522 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/ab6a6388-c9ad-4a5f-b211-144c970915f9-clustermesh-secrets\") on node \"ip-172-31-28-159\" DevicePath \"\"" Jan 23 17:59:54.605625 kubelet[3522]: I0123 17:59:54.605447 3522 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-srgrv\" (UniqueName: \"kubernetes.io/projected/ab6a6388-c9ad-4a5f-b211-144c970915f9-kube-api-access-srgrv\") on node \"ip-172-31-28-159\" DevicePath \"\"" Jan 23 17:59:54.605625 kubelet[3522]: I0123 17:59:54.605468 3522 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: 
\"kubernetes.io/host-path/ab6a6388-c9ad-4a5f-b211-144c970915f9-bpf-maps\") on node \"ip-172-31-28-159\" DevicePath \"\"" Jan 23 17:59:54.606083 kubelet[3522]: I0123 17:59:54.605490 3522 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/ab6a6388-c9ad-4a5f-b211-144c970915f9-host-proc-sys-kernel\") on node \"ip-172-31-28-159\" DevicePath \"\"" Jan 23 17:59:54.606083 kubelet[3522]: I0123 17:59:54.605510 3522 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/ab6a6388-c9ad-4a5f-b211-144c970915f9-cni-path\") on node \"ip-172-31-28-159\" DevicePath \"\"" Jan 23 17:59:54.606083 kubelet[3522]: I0123 17:59:54.605530 3522 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/ab6a6388-c9ad-4a5f-b211-144c970915f9-cilium-run\") on node \"ip-172-31-28-159\" DevicePath \"\"" Jan 23 17:59:54.606083 kubelet[3522]: I0123 17:59:54.605551 3522 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/ab6a6388-c9ad-4a5f-b211-144c970915f9-cilium-cgroup\") on node \"ip-172-31-28-159\" DevicePath \"\"" Jan 23 17:59:54.606083 kubelet[3522]: I0123 17:59:54.605573 3522 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/ab6a6388-c9ad-4a5f-b211-144c970915f9-host-proc-sys-net\") on node \"ip-172-31-28-159\" DevicePath \"\"" Jan 23 17:59:54.606083 kubelet[3522]: I0123 17:59:54.605594 3522 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/ab6a6388-c9ad-4a5f-b211-144c970915f9-hostproc\") on node \"ip-172-31-28-159\" DevicePath \"\"" Jan 23 17:59:55.010256 kubelet[3522]: I0123 17:59:55.010094 3522 scope.go:117] "RemoveContainer" containerID="6b19dc06a1a794ef91f694ab9347f2382b8bc0135ad107c9cbbdaa403cf2d1ff" Jan 23 17:59:55.018205 containerd[2018]: time="2026-01-23T17:59:55.017469168Z" level=info msg="RemoveContainer for \"6b19dc06a1a794ef91f694ab9347f2382b8bc0135ad107c9cbbdaa403cf2d1ff\"" Jan 23 17:59:55.034297 systemd[1]: Removed slice kubepods-besteffort-pod3f227a00_f91f_409f_977b_8bd136e53bf6.slice - libcontainer container kubepods-besteffort-pod3f227a00_f91f_409f_977b_8bd136e53bf6.slice. Jan 23 17:59:55.041195 containerd[2018]: time="2026-01-23T17:59:55.041083896Z" level=info msg="RemoveContainer for \"6b19dc06a1a794ef91f694ab9347f2382b8bc0135ad107c9cbbdaa403cf2d1ff\" returns successfully" Jan 23 17:59:55.043600 systemd[1]: Removed slice kubepods-burstable-podab6a6388_c9ad_4a5f_b211_144c970915f9.slice - libcontainer container kubepods-burstable-podab6a6388_c9ad_4a5f_b211_144c970915f9.slice. Jan 23 17:59:55.043837 systemd[1]: kubepods-burstable-podab6a6388_c9ad_4a5f_b211_144c970915f9.slice: Consumed 14.613s CPU time, 125M memory peak, 120K read from disk, 12.9M written to disk. 
Jan 23 17:59:55.044330 kubelet[3522]: I0123 17:59:55.044281 3522 scope.go:117] "RemoveContainer" containerID="6b19dc06a1a794ef91f694ab9347f2382b8bc0135ad107c9cbbdaa403cf2d1ff" Jan 23 17:59:55.046255 containerd[2018]: time="2026-01-23T17:59:55.045729192Z" level=error msg="ContainerStatus for \"6b19dc06a1a794ef91f694ab9347f2382b8bc0135ad107c9cbbdaa403cf2d1ff\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"6b19dc06a1a794ef91f694ab9347f2382b8bc0135ad107c9cbbdaa403cf2d1ff\": not found" Jan 23 17:59:55.046482 kubelet[3522]: E0123 17:59:55.046066 3522 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"6b19dc06a1a794ef91f694ab9347f2382b8bc0135ad107c9cbbdaa403cf2d1ff\": not found" containerID="6b19dc06a1a794ef91f694ab9347f2382b8bc0135ad107c9cbbdaa403cf2d1ff" Jan 23 17:59:55.046482 kubelet[3522]: I0123 17:59:55.046117 3522 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"6b19dc06a1a794ef91f694ab9347f2382b8bc0135ad107c9cbbdaa403cf2d1ff"} err="failed to get container status \"6b19dc06a1a794ef91f694ab9347f2382b8bc0135ad107c9cbbdaa403cf2d1ff\": rpc error: code = NotFound desc = an error occurred when try to find container \"6b19dc06a1a794ef91f694ab9347f2382b8bc0135ad107c9cbbdaa403cf2d1ff\": not found" Jan 23 17:59:55.046785 kubelet[3522]: I0123 17:59:55.046607 3522 scope.go:117] "RemoveContainer" containerID="e99bea2dbe4d6242ba124fff213dbfe608b1e416900c1d23808e0c94e9b5858c" Jan 23 17:59:55.075256 containerd[2018]: time="2026-01-23T17:59:55.052209708Z" level=info msg="RemoveContainer for \"e99bea2dbe4d6242ba124fff213dbfe608b1e416900c1d23808e0c94e9b5858c\"" Jan 23 17:59:55.075256 containerd[2018]: time="2026-01-23T17:59:55.060385512Z" level=info msg="RemoveContainer for \"e99bea2dbe4d6242ba124fff213dbfe608b1e416900c1d23808e0c94e9b5858c\" returns successfully" Jan 23 17:59:55.075256 containerd[2018]: time="2026-01-23T17:59:55.063409584Z" level=info msg="RemoveContainer for \"e3a8d01da2df1db85dec05af9ee37b5889f78e976acf6cb60ad03a83fab7364b\"" Jan 23 17:59:55.075256 containerd[2018]: time="2026-01-23T17:59:55.070080444Z" level=info msg="RemoveContainer for \"e3a8d01da2df1db85dec05af9ee37b5889f78e976acf6cb60ad03a83fab7364b\" returns successfully" Jan 23 17:59:55.075256 containerd[2018]: time="2026-01-23T17:59:55.074792148Z" level=info msg="RemoveContainer for \"54a0db95390a578e83b094f12d4850476377898095f3e2628c98e49d97a6e4d1\"" Jan 23 17:59:55.075618 kubelet[3522]: I0123 17:59:55.060767 3522 scope.go:117] "RemoveContainer" containerID="e3a8d01da2df1db85dec05af9ee37b5889f78e976acf6cb60ad03a83fab7364b" Jan 23 17:59:55.075618 kubelet[3522]: I0123 17:59:55.070452 3522 scope.go:117] "RemoveContainer" containerID="54a0db95390a578e83b094f12d4850476377898095f3e2628c98e49d97a6e4d1" Jan 23 17:59:55.084611 containerd[2018]: time="2026-01-23T17:59:55.084537528Z" level=info msg="RemoveContainer for \"54a0db95390a578e83b094f12d4850476377898095f3e2628c98e49d97a6e4d1\" returns successfully" Jan 23 17:59:55.085034 kubelet[3522]: I0123 17:59:55.084860 3522 scope.go:117] "RemoveContainer" containerID="92d80e5740af7f10984dba8db6572c141db8bfcb6deb4459f7768b20507f1240" Jan 23 17:59:55.089401 containerd[2018]: time="2026-01-23T17:59:55.089322144Z" level=info msg="RemoveContainer for \"92d80e5740af7f10984dba8db6572c141db8bfcb6deb4459f7768b20507f1240\"" Jan 23 17:59:55.100341 systemd[1]: 
run-containerd-io.containerd.grpc.v1.cri-sandboxes-55dec41bf242572bb948619680e8c07f894fb800374df6903da5e8e346bb557c-shm.mount: Deactivated successfully. Jan 23 17:59:55.100562 systemd[1]: var-lib-kubelet-pods-3f227a00\x2df91f\x2d409f\x2d977b\x2d8bd136e53bf6-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d4qpqk.mount: Deactivated successfully. Jan 23 17:59:55.100729 systemd[1]: var-lib-kubelet-pods-ab6a6388\x2dc9ad\x2d4a5f\x2db211\x2d144c970915f9-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dsrgrv.mount: Deactivated successfully. Jan 23 17:59:55.100881 systemd[1]: var-lib-kubelet-pods-ab6a6388\x2dc9ad\x2d4a5f\x2db211\x2d144c970915f9-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jan 23 17:59:55.101026 systemd[1]: var-lib-kubelet-pods-ab6a6388\x2dc9ad\x2d4a5f\x2db211\x2d144c970915f9-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jan 23 17:59:55.109964 containerd[2018]: time="2026-01-23T17:59:55.107955252Z" level=info msg="RemoveContainer for \"92d80e5740af7f10984dba8db6572c141db8bfcb6deb4459f7768b20507f1240\" returns successfully" Jan 23 17:59:55.110111 kubelet[3522]: I0123 17:59:55.108570 3522 scope.go:117] "RemoveContainer" containerID="6e8cf2d335ae4231c4eb38a3b8c88364e6ecda3e33358ae35dfb8f1bd69f8254" Jan 23 17:59:55.118217 containerd[2018]: time="2026-01-23T17:59:55.118118748Z" level=info msg="RemoveContainer for \"6e8cf2d335ae4231c4eb38a3b8c88364e6ecda3e33358ae35dfb8f1bd69f8254\"" Jan 23 17:59:55.124535 containerd[2018]: time="2026-01-23T17:59:55.124448232Z" level=info msg="RemoveContainer for \"6e8cf2d335ae4231c4eb38a3b8c88364e6ecda3e33358ae35dfb8f1bd69f8254\" returns successfully" Jan 23 17:59:55.124869 kubelet[3522]: I0123 17:59:55.124809 3522 scope.go:117] "RemoveContainer" containerID="e99bea2dbe4d6242ba124fff213dbfe608b1e416900c1d23808e0c94e9b5858c" Jan 23 17:59:55.125315 containerd[2018]: time="2026-01-23T17:59:55.125258928Z" level=error msg="ContainerStatus for \"e99bea2dbe4d6242ba124fff213dbfe608b1e416900c1d23808e0c94e9b5858c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e99bea2dbe4d6242ba124fff213dbfe608b1e416900c1d23808e0c94e9b5858c\": not found" Jan 23 17:59:55.125756 kubelet[3522]: E0123 17:59:55.125710 3522 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e99bea2dbe4d6242ba124fff213dbfe608b1e416900c1d23808e0c94e9b5858c\": not found" containerID="e99bea2dbe4d6242ba124fff213dbfe608b1e416900c1d23808e0c94e9b5858c" Jan 23 17:59:55.125845 kubelet[3522]: I0123 17:59:55.125765 3522 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e99bea2dbe4d6242ba124fff213dbfe608b1e416900c1d23808e0c94e9b5858c"} err="failed to get container status \"e99bea2dbe4d6242ba124fff213dbfe608b1e416900c1d23808e0c94e9b5858c\": rpc error: code = NotFound desc = an error occurred when try to find container \"e99bea2dbe4d6242ba124fff213dbfe608b1e416900c1d23808e0c94e9b5858c\": not found" Jan 23 17:59:55.125845 kubelet[3522]: I0123 17:59:55.125799 3522 scope.go:117] "RemoveContainer" containerID="e3a8d01da2df1db85dec05af9ee37b5889f78e976acf6cb60ad03a83fab7364b" Jan 23 17:59:55.126256 containerd[2018]: time="2026-01-23T17:59:55.126165816Z" level=error msg="ContainerStatus for \"e3a8d01da2df1db85dec05af9ee37b5889f78e976acf6cb60ad03a83fab7364b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to 
find container \"e3a8d01da2df1db85dec05af9ee37b5889f78e976acf6cb60ad03a83fab7364b\": not found" Jan 23 17:59:55.126680 kubelet[3522]: E0123 17:59:55.126646 3522 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e3a8d01da2df1db85dec05af9ee37b5889f78e976acf6cb60ad03a83fab7364b\": not found" containerID="e3a8d01da2df1db85dec05af9ee37b5889f78e976acf6cb60ad03a83fab7364b" Jan 23 17:59:55.126861 kubelet[3522]: I0123 17:59:55.126816 3522 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e3a8d01da2df1db85dec05af9ee37b5889f78e976acf6cb60ad03a83fab7364b"} err="failed to get container status \"e3a8d01da2df1db85dec05af9ee37b5889f78e976acf6cb60ad03a83fab7364b\": rpc error: code = NotFound desc = an error occurred when try to find container \"e3a8d01da2df1db85dec05af9ee37b5889f78e976acf6cb60ad03a83fab7364b\": not found" Jan 23 17:59:55.126984 kubelet[3522]: I0123 17:59:55.126950 3522 scope.go:117] "RemoveContainer" containerID="54a0db95390a578e83b094f12d4850476377898095f3e2628c98e49d97a6e4d1" Jan 23 17:59:55.127794 containerd[2018]: time="2026-01-23T17:59:55.127722156Z" level=error msg="ContainerStatus for \"54a0db95390a578e83b094f12d4850476377898095f3e2628c98e49d97a6e4d1\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"54a0db95390a578e83b094f12d4850476377898095f3e2628c98e49d97a6e4d1\": not found" Jan 23 17:59:55.128915 kubelet[3522]: E0123 17:59:55.128858 3522 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"54a0db95390a578e83b094f12d4850476377898095f3e2628c98e49d97a6e4d1\": not found" containerID="54a0db95390a578e83b094f12d4850476377898095f3e2628c98e49d97a6e4d1" Jan 23 17:59:55.130099 kubelet[3522]: I0123 17:59:55.128921 3522 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"54a0db95390a578e83b094f12d4850476377898095f3e2628c98e49d97a6e4d1"} err="failed to get container status \"54a0db95390a578e83b094f12d4850476377898095f3e2628c98e49d97a6e4d1\": rpc error: code = NotFound desc = an error occurred when try to find container \"54a0db95390a578e83b094f12d4850476377898095f3e2628c98e49d97a6e4d1\": not found" Jan 23 17:59:55.130099 kubelet[3522]: I0123 17:59:55.128958 3522 scope.go:117] "RemoveContainer" containerID="92d80e5740af7f10984dba8db6572c141db8bfcb6deb4459f7768b20507f1240" Jan 23 17:59:55.130099 kubelet[3522]: E0123 17:59:55.130070 3522 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"92d80e5740af7f10984dba8db6572c141db8bfcb6deb4459f7768b20507f1240\": not found" containerID="92d80e5740af7f10984dba8db6572c141db8bfcb6deb4459f7768b20507f1240" Jan 23 17:59:55.130552 containerd[2018]: time="2026-01-23T17:59:55.129440076Z" level=error msg="ContainerStatus for \"92d80e5740af7f10984dba8db6572c141db8bfcb6deb4459f7768b20507f1240\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"92d80e5740af7f10984dba8db6572c141db8bfcb6deb4459f7768b20507f1240\": not found" Jan 23 17:59:55.131276 kubelet[3522]: I0123 17:59:55.130123 3522 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"92d80e5740af7f10984dba8db6572c141db8bfcb6deb4459f7768b20507f1240"} err="failed to get container status 
\"92d80e5740af7f10984dba8db6572c141db8bfcb6deb4459f7768b20507f1240\": rpc error: code = NotFound desc = an error occurred when try to find container \"92d80e5740af7f10984dba8db6572c141db8bfcb6deb4459f7768b20507f1240\": not found" Jan 23 17:59:55.131276 kubelet[3522]: I0123 17:59:55.130158 3522 scope.go:117] "RemoveContainer" containerID="6e8cf2d335ae4231c4eb38a3b8c88364e6ecda3e33358ae35dfb8f1bd69f8254" Jan 23 17:59:55.131276 kubelet[3522]: E0123 17:59:55.130827 3522 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"6e8cf2d335ae4231c4eb38a3b8c88364e6ecda3e33358ae35dfb8f1bd69f8254\": not found" containerID="6e8cf2d335ae4231c4eb38a3b8c88364e6ecda3e33358ae35dfb8f1bd69f8254" Jan 23 17:59:55.131276 kubelet[3522]: I0123 17:59:55.130869 3522 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"6e8cf2d335ae4231c4eb38a3b8c88364e6ecda3e33358ae35dfb8f1bd69f8254"} err="failed to get container status \"6e8cf2d335ae4231c4eb38a3b8c88364e6ecda3e33358ae35dfb8f1bd69f8254\": rpc error: code = NotFound desc = an error occurred when try to find container \"6e8cf2d335ae4231c4eb38a3b8c88364e6ecda3e33358ae35dfb8f1bd69f8254\": not found" Jan 23 17:59:55.131538 containerd[2018]: time="2026-01-23T17:59:55.130549740Z" level=error msg="ContainerStatus for \"6e8cf2d335ae4231c4eb38a3b8c88364e6ecda3e33358ae35dfb8f1bd69f8254\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"6e8cf2d335ae4231c4eb38a3b8c88364e6ecda3e33358ae35dfb8f1bd69f8254\": not found" Jan 23 17:59:55.511426 kubelet[3522]: I0123 17:59:55.511345 3522 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3f227a00-f91f-409f-977b-8bd136e53bf6" path="/var/lib/kubelet/pods/3f227a00-f91f-409f-977b-8bd136e53bf6/volumes" Jan 23 17:59:55.512471 kubelet[3522]: I0123 17:59:55.512429 3522 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ab6a6388-c9ad-4a5f-b211-144c970915f9" path="/var/lib/kubelet/pods/ab6a6388-c9ad-4a5f-b211-144c970915f9/volumes" Jan 23 17:59:55.746082 kubelet[3522]: E0123 17:59:55.746013 3522 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 23 17:59:55.942692 sshd[5089]: Connection closed by 68.220.241.50 port 35014 Jan 23 17:59:55.944494 sshd-session[5086]: pam_unix(sshd:session): session closed for user core Jan 23 17:59:55.952671 systemd[1]: sshd@22-172.31.28.159:22-68.220.241.50:35014.service: Deactivated successfully. Jan 23 17:59:55.957596 systemd[1]: session-23.scope: Deactivated successfully. Jan 23 17:59:55.960168 systemd[1]: session-23.scope: Consumed 2.594s CPU time, 23.6M memory peak. Jan 23 17:59:55.961619 systemd-logind[1983]: Session 23 logged out. Waiting for processes to exit. Jan 23 17:59:55.965844 systemd-logind[1983]: Removed session 23. Jan 23 17:59:56.038538 systemd[1]: Started sshd@23-172.31.28.159:22-68.220.241.50:42460.service - OpenSSH per-connection server daemon (68.220.241.50:42460). Jan 23 17:59:56.567471 sshd[5232]: Accepted publickey for core from 68.220.241.50 port 42460 ssh2: RSA SHA256:bT2W1VfOscVmSCRasYr+KxB4wnT28qHFQXmybiJGx88 Jan 23 17:59:56.569925 sshd-session[5232]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 17:59:56.578032 systemd-logind[1983]: New session 24 of user core. 
Jan 23 17:59:56.584943 update_engine[1988]: I20260123 17:59:56.584230 1988 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 23 17:59:56.584943 update_engine[1988]: I20260123 17:59:56.584342 1988 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 23 17:59:56.584943 update_engine[1988]: I20260123 17:59:56.584883 1988 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jan 23 17:59:56.587456 systemd[1]: Started session-24.scope - Session 24 of User core. Jan 23 17:59:56.591790 update_engine[1988]: E20260123 17:59:56.591584 1988 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 23 17:59:56.592015 update_engine[1988]: I20260123 17:59:56.591979 1988 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Jan 23 17:59:56.753373 ntpd[2186]: Deleting interface #10 lxc_health, fe80::b037:a9ff:febe:3e27%8#123, interface stats: received=0, sent=0, dropped=0, active_time=75 secs Jan 23 17:59:56.753830 ntpd[2186]: 23 Jan 17:59:56 ntpd[2186]: Deleting interface #10 lxc_health, fe80::b037:a9ff:febe:3e27%8#123, interface stats: received=0, sent=0, dropped=0, active_time=75 secs Jan 23 17:59:58.255172 kubelet[3522]: I0123 17:59:58.253810 3522 setters.go:618] "Node became not ready" node="ip-172-31-28-159" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T17:59:58Z","lastTransitionTime":"2026-01-23T17:59:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Jan 23 17:59:58.605037 sshd[5235]: Connection closed by 68.220.241.50 port 42460 Jan 23 17:59:58.605401 sshd-session[5232]: pam_unix(sshd:session): session closed for user core Jan 23 17:59:58.619693 systemd[1]: sshd@23-172.31.28.159:22-68.220.241.50:42460.service: Deactivated successfully. Jan 23 17:59:58.634704 systemd[1]: session-24.scope: Deactivated successfully. Jan 23 17:59:58.636139 systemd[1]: session-24.scope: Consumed 1.587s CPU time, 25.6M memory peak. Jan 23 17:59:58.638268 systemd-logind[1983]: Session 24 logged out. Waiting for processes to exit. Jan 23 17:59:58.648776 systemd-logind[1983]: Removed session 24. Jan 23 17:59:58.668284 systemd[1]: Created slice kubepods-burstable-pod80aa36e7_a93a_4811_9ffe_dc7146761a0e.slice - libcontainer container kubepods-burstable-pod80aa36e7_a93a_4811_9ffe_dc7146761a0e.slice. Jan 23 17:59:58.708468 systemd[1]: Started sshd@24-172.31.28.159:22-68.220.241.50:42476.service - OpenSSH per-connection server daemon (68.220.241.50:42476).
Jan 23 17:59:58.740221 kubelet[3522]: I0123 17:59:58.739983 3522 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/80aa36e7-a93a-4811-9ffe-dc7146761a0e-xtables-lock\") pod \"cilium-lb67d\" (UID: \"80aa36e7-a93a-4811-9ffe-dc7146761a0e\") " pod="kube-system/cilium-lb67d" Jan 23 17:59:58.740221 kubelet[3522]: I0123 17:59:58.740053 3522 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/80aa36e7-a93a-4811-9ffe-dc7146761a0e-cilium-ipsec-secrets\") pod \"cilium-lb67d\" (UID: \"80aa36e7-a93a-4811-9ffe-dc7146761a0e\") " pod="kube-system/cilium-lb67d" Jan 23 17:59:58.740221 kubelet[3522]: I0123 17:59:58.740102 3522 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/80aa36e7-a93a-4811-9ffe-dc7146761a0e-host-proc-sys-kernel\") pod \"cilium-lb67d\" (UID: \"80aa36e7-a93a-4811-9ffe-dc7146761a0e\") " pod="kube-system/cilium-lb67d" Jan 23 17:59:58.740221 kubelet[3522]: I0123 17:59:58.740139 3522 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/80aa36e7-a93a-4811-9ffe-dc7146761a0e-hostproc\") pod \"cilium-lb67d\" (UID: \"80aa36e7-a93a-4811-9ffe-dc7146761a0e\") " pod="kube-system/cilium-lb67d" Jan 23 17:59:58.741429 kubelet[3522]: I0123 17:59:58.740178 3522 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mwkcd\" (UniqueName: \"kubernetes.io/projected/80aa36e7-a93a-4811-9ffe-dc7146761a0e-kube-api-access-mwkcd\") pod \"cilium-lb67d\" (UID: \"80aa36e7-a93a-4811-9ffe-dc7146761a0e\") " pod="kube-system/cilium-lb67d" Jan 23 17:59:58.742404 kubelet[3522]: I0123 17:59:58.741913 3522 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/80aa36e7-a93a-4811-9ffe-dc7146761a0e-cilium-run\") pod \"cilium-lb67d\" (UID: \"80aa36e7-a93a-4811-9ffe-dc7146761a0e\") " pod="kube-system/cilium-lb67d" Jan 23 17:59:58.742971 kubelet[3522]: I0123 17:59:58.742592 3522 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/80aa36e7-a93a-4811-9ffe-dc7146761a0e-etc-cni-netd\") pod \"cilium-lb67d\" (UID: \"80aa36e7-a93a-4811-9ffe-dc7146761a0e\") " pod="kube-system/cilium-lb67d" Jan 23 17:59:58.744389 kubelet[3522]: I0123 17:59:58.744306 3522 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/80aa36e7-a93a-4811-9ffe-dc7146761a0e-clustermesh-secrets\") pod \"cilium-lb67d\" (UID: \"80aa36e7-a93a-4811-9ffe-dc7146761a0e\") " pod="kube-system/cilium-lb67d" Jan 23 17:59:58.744626 kubelet[3522]: I0123 17:59:58.744553 3522 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/80aa36e7-a93a-4811-9ffe-dc7146761a0e-host-proc-sys-net\") pod \"cilium-lb67d\" (UID: \"80aa36e7-a93a-4811-9ffe-dc7146761a0e\") " pod="kube-system/cilium-lb67d" Jan 23 17:59:58.744757 kubelet[3522]: I0123 17:59:58.744728 3522 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"hubble-tls\" (UniqueName: \"kubernetes.io/projected/80aa36e7-a93a-4811-9ffe-dc7146761a0e-hubble-tls\") pod \"cilium-lb67d\" (UID: \"80aa36e7-a93a-4811-9ffe-dc7146761a0e\") " pod="kube-system/cilium-lb67d" Jan 23 17:59:58.745205 kubelet[3522]: I0123 17:59:58.744855 3522 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/80aa36e7-a93a-4811-9ffe-dc7146761a0e-lib-modules\") pod \"cilium-lb67d\" (UID: \"80aa36e7-a93a-4811-9ffe-dc7146761a0e\") " pod="kube-system/cilium-lb67d" Jan 23 17:59:58.745767 kubelet[3522]: I0123 17:59:58.745360 3522 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/80aa36e7-a93a-4811-9ffe-dc7146761a0e-cilium-config-path\") pod \"cilium-lb67d\" (UID: \"80aa36e7-a93a-4811-9ffe-dc7146761a0e\") " pod="kube-system/cilium-lb67d" Jan 23 17:59:58.746042 kubelet[3522]: I0123 17:59:58.745594 3522 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/80aa36e7-a93a-4811-9ffe-dc7146761a0e-cni-path\") pod \"cilium-lb67d\" (UID: \"80aa36e7-a93a-4811-9ffe-dc7146761a0e\") " pod="kube-system/cilium-lb67d" Jan 23 17:59:58.747038 kubelet[3522]: I0123 17:59:58.746979 3522 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/80aa36e7-a93a-4811-9ffe-dc7146761a0e-bpf-maps\") pod \"cilium-lb67d\" (UID: \"80aa36e7-a93a-4811-9ffe-dc7146761a0e\") " pod="kube-system/cilium-lb67d" Jan 23 17:59:58.747905 kubelet[3522]: I0123 17:59:58.747261 3522 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/80aa36e7-a93a-4811-9ffe-dc7146761a0e-cilium-cgroup\") pod \"cilium-lb67d\" (UID: \"80aa36e7-a93a-4811-9ffe-dc7146761a0e\") " pod="kube-system/cilium-lb67d" Jan 23 17:59:58.983984 containerd[2018]: time="2026-01-23T17:59:58.983572292Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-lb67d,Uid:80aa36e7-a93a-4811-9ffe-dc7146761a0e,Namespace:kube-system,Attempt:0,}" Jan 23 17:59:59.023206 containerd[2018]: time="2026-01-23T17:59:59.022953160Z" level=info msg="connecting to shim f234f52231a04541cb4922ab6e7ca106a3145463e9a7481899b18a6dcef229e7" address="unix:///run/containerd/s/7ec2bee7273409abfca08aebbcf72c5136007975a95f89e1a0a5ca8f1bb0271e" namespace=k8s.io protocol=ttrpc version=3 Jan 23 17:59:59.074670 systemd[1]: Started cri-containerd-f234f52231a04541cb4922ab6e7ca106a3145463e9a7481899b18a6dcef229e7.scope - libcontainer container f234f52231a04541cb4922ab6e7ca106a3145463e9a7481899b18a6dcef229e7. 
Jan 23 17:59:59.143684 containerd[2018]: time="2026-01-23T17:59:59.143607088Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-lb67d,Uid:80aa36e7-a93a-4811-9ffe-dc7146761a0e,Namespace:kube-system,Attempt:0,} returns sandbox id \"f234f52231a04541cb4922ab6e7ca106a3145463e9a7481899b18a6dcef229e7\"" Jan 23 17:59:59.157575 containerd[2018]: time="2026-01-23T17:59:59.157519744Z" level=info msg="CreateContainer within sandbox \"f234f52231a04541cb4922ab6e7ca106a3145463e9a7481899b18a6dcef229e7\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 23 17:59:59.171619 containerd[2018]: time="2026-01-23T17:59:59.171303137Z" level=info msg="Container 42fd9b9ab35db47b7d6788b6711d145f67103f7846b68b1d498c224ccb67b7da: CDI devices from CRI Config.CDIDevices: []" Jan 23 17:59:59.182946 containerd[2018]: time="2026-01-23T17:59:59.182866169Z" level=info msg="CreateContainer within sandbox \"f234f52231a04541cb4922ab6e7ca106a3145463e9a7481899b18a6dcef229e7\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"42fd9b9ab35db47b7d6788b6711d145f67103f7846b68b1d498c224ccb67b7da\"" Jan 23 17:59:59.186023 containerd[2018]: time="2026-01-23T17:59:59.185001197Z" level=info msg="StartContainer for \"42fd9b9ab35db47b7d6788b6711d145f67103f7846b68b1d498c224ccb67b7da\"" Jan 23 17:59:59.189287 containerd[2018]: time="2026-01-23T17:59:59.189157109Z" level=info msg="connecting to shim 42fd9b9ab35db47b7d6788b6711d145f67103f7846b68b1d498c224ccb67b7da" address="unix:///run/containerd/s/7ec2bee7273409abfca08aebbcf72c5136007975a95f89e1a0a5ca8f1bb0271e" protocol=ttrpc version=3 Jan 23 17:59:59.225494 systemd[1]: Started cri-containerd-42fd9b9ab35db47b7d6788b6711d145f67103f7846b68b1d498c224ccb67b7da.scope - libcontainer container 42fd9b9ab35db47b7d6788b6711d145f67103f7846b68b1d498c224ccb67b7da. Jan 23 17:59:59.254501 sshd[5247]: Accepted publickey for core from 68.220.241.50 port 42476 ssh2: RSA SHA256:bT2W1VfOscVmSCRasYr+KxB4wnT28qHFQXmybiJGx88 Jan 23 17:59:59.261897 sshd-session[5247]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 17:59:59.277264 systemd-logind[1983]: New session 25 of user core. Jan 23 17:59:59.283595 systemd[1]: Started session-25.scope - Session 25 of User core. Jan 23 17:59:59.303479 containerd[2018]: time="2026-01-23T17:59:59.303131225Z" level=info msg="StartContainer for \"42fd9b9ab35db47b7d6788b6711d145f67103f7846b68b1d498c224ccb67b7da\" returns successfully" Jan 23 17:59:59.318899 systemd[1]: cri-containerd-42fd9b9ab35db47b7d6788b6711d145f67103f7846b68b1d498c224ccb67b7da.scope: Deactivated successfully. 
Jan 23 17:59:59.326842 containerd[2018]: time="2026-01-23T17:59:59.326782001Z" level=info msg="received container exit event container_id:\"42fd9b9ab35db47b7d6788b6711d145f67103f7846b68b1d498c224ccb67b7da\" id:\"42fd9b9ab35db47b7d6788b6711d145f67103f7846b68b1d498c224ccb67b7da\" pid:5314 exited_at:{seconds:1769191199 nanos:326384033}" Jan 23 17:59:59.504975 kubelet[3522]: E0123 17:59:59.504783 3522 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-674b8bbfcf-xsxvn" podUID="63f62d1d-b440-4311-9a1e-fb7799bf78d9" Jan 23 17:59:59.609232 sshd[5327]: Connection closed by 68.220.241.50 port 42476 Jan 23 17:59:59.609520 sshd-session[5247]: pam_unix(sshd:session): session closed for user core Jan 23 17:59:59.616762 systemd[1]: sshd@24-172.31.28.159:22-68.220.241.50:42476.service: Deactivated successfully. Jan 23 17:59:59.621327 systemd[1]: session-25.scope: Deactivated successfully. Jan 23 17:59:59.624962 systemd-logind[1983]: Session 25 logged out. Waiting for processes to exit. Jan 23 17:59:59.628425 systemd-logind[1983]: Removed session 25. Jan 23 17:59:59.712094 systemd[1]: Started sshd@25-172.31.28.159:22-68.220.241.50:42486.service - OpenSSH per-connection server daemon (68.220.241.50:42486). Jan 23 18:00:00.060505 containerd[2018]: time="2026-01-23T18:00:00.060434153Z" level=info msg="CreateContainer within sandbox \"f234f52231a04541cb4922ab6e7ca106a3145463e9a7481899b18a6dcef229e7\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 23 18:00:00.084346 containerd[2018]: time="2026-01-23T18:00:00.084027665Z" level=info msg="Container 05bc9a9bd17d8112647f3aef800b5e18c850293a23e0292fd25ea8dad9d95128: CDI devices from CRI Config.CDIDevices: []" Jan 23 18:00:00.107739 containerd[2018]: time="2026-01-23T18:00:00.106792337Z" level=info msg="CreateContainer within sandbox \"f234f52231a04541cb4922ab6e7ca106a3145463e9a7481899b18a6dcef229e7\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"05bc9a9bd17d8112647f3aef800b5e18c850293a23e0292fd25ea8dad9d95128\"" Jan 23 18:00:00.110965 containerd[2018]: time="2026-01-23T18:00:00.110898917Z" level=info msg="StartContainer for \"05bc9a9bd17d8112647f3aef800b5e18c850293a23e0292fd25ea8dad9d95128\"" Jan 23 18:00:00.114206 containerd[2018]: time="2026-01-23T18:00:00.114098273Z" level=info msg="connecting to shim 05bc9a9bd17d8112647f3aef800b5e18c850293a23e0292fd25ea8dad9d95128" address="unix:///run/containerd/s/7ec2bee7273409abfca08aebbcf72c5136007975a95f89e1a0a5ca8f1bb0271e" protocol=ttrpc version=3 Jan 23 18:00:00.168536 systemd[1]: Started cri-containerd-05bc9a9bd17d8112647f3aef800b5e18c850293a23e0292fd25ea8dad9d95128.scope - libcontainer container 05bc9a9bd17d8112647f3aef800b5e18c850293a23e0292fd25ea8dad9d95128. Jan 23 18:00:00.287363 sshd[5353]: Accepted publickey for core from 68.220.241.50 port 42486 ssh2: RSA SHA256:bT2W1VfOscVmSCRasYr+KxB4wnT28qHFQXmybiJGx88 Jan 23 18:00:00.291769 containerd[2018]: time="2026-01-23T18:00:00.291618642Z" level=info msg="StartContainer for \"05bc9a9bd17d8112647f3aef800b5e18c850293a23e0292fd25ea8dad9d95128\" returns successfully" Jan 23 18:00:00.294024 sshd-session[5353]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 18:00:00.309342 systemd-logind[1983]: New session 26 of user core. 
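containerd's exit events carry the container's end time as epoch seconds plus nanoseconds. A quick conversion (the helper name is an assumption) confirms the mount-cgroup exit above, exited_at:{seconds:1769191199 nanos:326384033}, lands at 17:59:59.326 UTC, a few hundred microseconds before the journal logged receipt of the event:

```python
from datetime import datetime, timezone

def exited_at_utc(seconds: int, nanos: int) -> str:
    """Render a containerd exited_at{seconds,nanos} pair as an RFC 3339 UTC string."""
    ts = datetime.fromtimestamp(seconds, tz=timezone.utc)
    return ts.strftime("%Y-%m-%dT%H:%M:%S") + f".{nanos:09d}Z"

# Exit event for the mount-cgroup container (42fd9b9a...):
print(exited_at_utc(1769191199, 326384033))  # 2026-01-23T17:59:59.326384033Z
```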
Jan 23 18:00:00.312820 containerd[2018]: time="2026-01-23T18:00:00.312668910Z" level=info msg="received container exit event container_id:\"05bc9a9bd17d8112647f3aef800b5e18c850293a23e0292fd25ea8dad9d95128\" id:\"05bc9a9bd17d8112647f3aef800b5e18c850293a23e0292fd25ea8dad9d95128\" pid:5369 exited_at:{seconds:1769191200 nanos:311865594}" Jan 23 18:00:00.314631 systemd[1]: Started session-26.scope - Session 26 of User core. Jan 23 18:00:00.315438 systemd[1]: cri-containerd-05bc9a9bd17d8112647f3aef800b5e18c850293a23e0292fd25ea8dad9d95128.scope: Deactivated successfully. Jan 23 18:00:00.747793 kubelet[3522]: E0123 18:00:00.747554 3522 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 23 18:00:00.860300 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-05bc9a9bd17d8112647f3aef800b5e18c850293a23e0292fd25ea8dad9d95128-rootfs.mount: Deactivated successfully. Jan 23 18:00:01.067695 containerd[2018]: time="2026-01-23T18:00:01.067631934Z" level=info msg="CreateContainer within sandbox \"f234f52231a04541cb4922ab6e7ca106a3145463e9a7481899b18a6dcef229e7\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jan 23 18:00:01.091498 containerd[2018]: time="2026-01-23T18:00:01.088723866Z" level=info msg="Container d0044a8a027b4a2ddb3a404c113451a427ed5d3f68e7e4c507d6ca40a5c4e1af: CDI devices from CRI Config.CDIDevices: []" Jan 23 18:00:01.113406 containerd[2018]: time="2026-01-23T18:00:01.113352486Z" level=info msg="CreateContainer within sandbox \"f234f52231a04541cb4922ab6e7ca106a3145463e9a7481899b18a6dcef229e7\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"d0044a8a027b4a2ddb3a404c113451a427ed5d3f68e7e4c507d6ca40a5c4e1af\"" Jan 23 18:00:01.115239 containerd[2018]: time="2026-01-23T18:00:01.114597510Z" level=info msg="StartContainer for \"d0044a8a027b4a2ddb3a404c113451a427ed5d3f68e7e4c507d6ca40a5c4e1af\"" Jan 23 18:00:01.117867 containerd[2018]: time="2026-01-23T18:00:01.117727818Z" level=info msg="connecting to shim d0044a8a027b4a2ddb3a404c113451a427ed5d3f68e7e4c507d6ca40a5c4e1af" address="unix:///run/containerd/s/7ec2bee7273409abfca08aebbcf72c5136007975a95f89e1a0a5ca8f1bb0271e" protocol=ttrpc version=3 Jan 23 18:00:01.159503 systemd[1]: Started cri-containerd-d0044a8a027b4a2ddb3a404c113451a427ed5d3f68e7e4c507d6ca40a5c4e1af.scope - libcontainer container d0044a8a027b4a2ddb3a404c113451a427ed5d3f68e7e4c507d6ca40a5c4e1af. Jan 23 18:00:01.298509 containerd[2018]: time="2026-01-23T18:00:01.298421683Z" level=info msg="StartContainer for \"d0044a8a027b4a2ddb3a404c113451a427ed5d3f68e7e4c507d6ca40a5c4e1af\" returns successfully" Jan 23 18:00:01.299438 systemd[1]: cri-containerd-d0044a8a027b4a2ddb3a404c113451a427ed5d3f68e7e4c507d6ca40a5c4e1af.scope: Deactivated successfully. Jan 23 18:00:01.308353 containerd[2018]: time="2026-01-23T18:00:01.307527295Z" level=info msg="received container exit event container_id:\"d0044a8a027b4a2ddb3a404c113451a427ed5d3f68e7e4c507d6ca40a5c4e1af\" id:\"d0044a8a027b4a2ddb3a404c113451a427ed5d3f68e7e4c507d6ca40a5c4e1af\" pid:5420 exited_at:{seconds:1769191201 nanos:307154587}" Jan 23 18:00:01.356620 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d0044a8a027b4a2ddb3a404c113451a427ed5d3f68e7e4c507d6ca40a5c4e1af-rootfs.mount: Deactivated successfully. 
Jan 23 18:00:01.504893 kubelet[3522]: E0123 18:00:01.504825 3522 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-674b8bbfcf-xsxvn" podUID="63f62d1d-b440-4311-9a1e-fb7799bf78d9" Jan 23 18:00:02.077328 containerd[2018]: time="2026-01-23T18:00:02.077230195Z" level=info msg="CreateContainer within sandbox \"f234f52231a04541cb4922ab6e7ca106a3145463e9a7481899b18a6dcef229e7\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jan 23 18:00:02.105246 containerd[2018]: time="2026-01-23T18:00:02.103584859Z" level=info msg="Container 38809898974d55d914c0016d8ba4fb0b83c7f596686c69aa63e434d4b8e14d34: CDI devices from CRI Config.CDIDevices: []" Jan 23 18:00:02.127753 containerd[2018]: time="2026-01-23T18:00:02.127691551Z" level=info msg="CreateContainer within sandbox \"f234f52231a04541cb4922ab6e7ca106a3145463e9a7481899b18a6dcef229e7\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"38809898974d55d914c0016d8ba4fb0b83c7f596686c69aa63e434d4b8e14d34\"" Jan 23 18:00:02.128523 containerd[2018]: time="2026-01-23T18:00:02.128485819Z" level=info msg="StartContainer for \"38809898974d55d914c0016d8ba4fb0b83c7f596686c69aa63e434d4b8e14d34\"" Jan 23 18:00:02.130715 containerd[2018]: time="2026-01-23T18:00:02.130639135Z" level=info msg="connecting to shim 38809898974d55d914c0016d8ba4fb0b83c7f596686c69aa63e434d4b8e14d34" address="unix:///run/containerd/s/7ec2bee7273409abfca08aebbcf72c5136007975a95f89e1a0a5ca8f1bb0271e" protocol=ttrpc version=3 Jan 23 18:00:02.172481 systemd[1]: Started cri-containerd-38809898974d55d914c0016d8ba4fb0b83c7f596686c69aa63e434d4b8e14d34.scope - libcontainer container 38809898974d55d914c0016d8ba4fb0b83c7f596686c69aa63e434d4b8e14d34. Jan 23 18:00:02.242346 systemd[1]: cri-containerd-38809898974d55d914c0016d8ba4fb0b83c7f596686c69aa63e434d4b8e14d34.scope: Deactivated successfully. Jan 23 18:00:02.246210 containerd[2018]: time="2026-01-23T18:00:02.246016796Z" level=info msg="received container exit event container_id:\"38809898974d55d914c0016d8ba4fb0b83c7f596686c69aa63e434d4b8e14d34\" id:\"38809898974d55d914c0016d8ba4fb0b83c7f596686c69aa63e434d4b8e14d34\" pid:5462 exited_at:{seconds:1769191202 nanos:245737892}" Jan 23 18:00:02.261041 containerd[2018]: time="2026-01-23T18:00:02.260969672Z" level=info msg="StartContainer for \"38809898974d55d914c0016d8ba4fb0b83c7f596686c69aa63e434d4b8e14d34\" returns successfully" Jan 23 18:00:02.287634 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-38809898974d55d914c0016d8ba4fb0b83c7f596686c69aa63e434d4b8e14d34-rootfs.mount: Deactivated successfully. 
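The same CreateContainer, StartContainer, scope-deactivated cycle repeats for each Cilium init step in this sandbox: mount-cgroup, apply-sysctl-overwrites, mount-bpf-fs, clean-cilium-state, and finally the long-running cilium-agent below. A sketch, with an assumed helper name, that recovers that order from the &ContainerMetadata fragments containerd prints:

```python
import re

# Matches the &ContainerMetadata{Name:...,Attempt:...,} fragments in containerd messages.
META_RE = re.compile(r"&ContainerMetadata\{Name:(?P<name>[-\w]+),Attempt:(?P<attempt>\d+),\}")

def container_sequence(journal_text: str):
    """Return (name, attempt) pairs in first-seen order."""
    seen, order = set(), []
    for m in META_RE.finditer(journal_text):
        key = (m.group("name"), int(m.group("attempt")))
        if key not in seen:
            seen.add(key)
            order.append(key)
    return order

text = ("&ContainerMetadata{Name:mount-cgroup,Attempt:0,} "
        "&ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} "
        "&ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}")
print(container_sequence(text))
# [('mount-cgroup', 0), ('apply-sysctl-overwrites', 0), ('mount-bpf-fs', 0)]
```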
Jan 23 18:00:03.089309 containerd[2018]: time="2026-01-23T18:00:03.089247440Z" level=info msg="CreateContainer within sandbox \"f234f52231a04541cb4922ab6e7ca106a3145463e9a7481899b18a6dcef229e7\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jan 23 18:00:03.114520 containerd[2018]: time="2026-01-23T18:00:03.114458636Z" level=info msg="Container 141c68b4eb476962d913985d34d19d0db4085c8bc90d2ef00d119d87d87f66d1: CDI devices from CRI Config.CDIDevices: []" Jan 23 18:00:03.139793 containerd[2018]: time="2026-01-23T18:00:03.139645040Z" level=info msg="CreateContainer within sandbox \"f234f52231a04541cb4922ab6e7ca106a3145463e9a7481899b18a6dcef229e7\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"141c68b4eb476962d913985d34d19d0db4085c8bc90d2ef00d119d87d87f66d1\"" Jan 23 18:00:03.142733 containerd[2018]: time="2026-01-23T18:00:03.142576448Z" level=info msg="StartContainer for \"141c68b4eb476962d913985d34d19d0db4085c8bc90d2ef00d119d87d87f66d1\"" Jan 23 18:00:03.145820 containerd[2018]: time="2026-01-23T18:00:03.145763240Z" level=info msg="connecting to shim 141c68b4eb476962d913985d34d19d0db4085c8bc90d2ef00d119d87d87f66d1" address="unix:///run/containerd/s/7ec2bee7273409abfca08aebbcf72c5136007975a95f89e1a0a5ca8f1bb0271e" protocol=ttrpc version=3 Jan 23 18:00:03.185527 systemd[1]: Started cri-containerd-141c68b4eb476962d913985d34d19d0db4085c8bc90d2ef00d119d87d87f66d1.scope - libcontainer container 141c68b4eb476962d913985d34d19d0db4085c8bc90d2ef00d119d87d87f66d1. Jan 23 18:00:03.268520 containerd[2018]: time="2026-01-23T18:00:03.268260093Z" level=info msg="StartContainer for \"141c68b4eb476962d913985d34d19d0db4085c8bc90d2ef00d119d87d87f66d1\" returns successfully" Jan 23 18:00:03.507399 kubelet[3522]: E0123 18:00:03.506878 3522 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-674b8bbfcf-xsxvn" podUID="63f62d1d-b440-4311-9a1e-fb7799bf78d9" Jan 23 18:00:04.101254 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce)) Jan 23 18:00:04.153943 kubelet[3522]: I0123 18:00:04.153731 3522 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-lb67d" podStartSLOduration=6.153706029 podStartE2EDuration="6.153706029s" podCreationTimestamp="2026-01-23 17:59:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:00:04.150771309 +0000 UTC m=+118.963892548" watchObservedRunningTime="2026-01-23 18:00:04.153706029 +0000 UTC m=+118.966826812" Jan 23 18:00:05.503030 containerd[2018]: time="2026-01-23T18:00:05.502966308Z" level=info msg="StopPodSandbox for \"55dec41bf242572bb948619680e8c07f894fb800374df6903da5e8e346bb557c\"" Jan 23 18:00:05.503771 containerd[2018]: time="2026-01-23T18:00:05.503154636Z" level=info msg="TearDown network for sandbox \"55dec41bf242572bb948619680e8c07f894fb800374df6903da5e8e346bb557c\" successfully" Jan 23 18:00:05.503771 containerd[2018]: time="2026-01-23T18:00:05.503200260Z" level=info msg="StopPodSandbox for \"55dec41bf242572bb948619680e8c07f894fb800374df6903da5e8e346bb557c\" returns successfully" Jan 23 18:00:05.506206 containerd[2018]: time="2026-01-23T18:00:05.505974516Z" level=info msg="RemovePodSandbox for \"55dec41bf242572bb948619680e8c07f894fb800374df6903da5e8e346bb557c\"" Jan 23 
18:00:05.506206 containerd[2018]: time="2026-01-23T18:00:05.506146032Z" level=info msg="Forcibly stopping sandbox \"55dec41bf242572bb948619680e8c07f894fb800374df6903da5e8e346bb557c\"" Jan 23 18:00:05.506974 containerd[2018]: time="2026-01-23T18:00:05.506895948Z" level=info msg="TearDown network for sandbox \"55dec41bf242572bb948619680e8c07f894fb800374df6903da5e8e346bb557c\" successfully" Jan 23 18:00:05.509794 kubelet[3522]: E0123 18:00:05.509708 3522 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-674b8bbfcf-xsxvn" podUID="63f62d1d-b440-4311-9a1e-fb7799bf78d9" Jan 23 18:00:05.511276 containerd[2018]: time="2026-01-23T18:00:05.510728112Z" level=info msg="Ensure that sandbox 55dec41bf242572bb948619680e8c07f894fb800374df6903da5e8e346bb557c in task-service has been cleanup successfully" Jan 23 18:00:05.518200 containerd[2018]: time="2026-01-23T18:00:05.518085612Z" level=info msg="RemovePodSandbox \"55dec41bf242572bb948619680e8c07f894fb800374df6903da5e8e346bb557c\" returns successfully" Jan 23 18:00:05.519671 containerd[2018]: time="2026-01-23T18:00:05.519481896Z" level=info msg="StopPodSandbox for \"4320c9fd233aec87bbedc0af4cbcca5ea990f235c02d48d8ef6d054074497288\"" Jan 23 18:00:05.520066 containerd[2018]: time="2026-01-23T18:00:05.519998496Z" level=info msg="TearDown network for sandbox \"4320c9fd233aec87bbedc0af4cbcca5ea990f235c02d48d8ef6d054074497288\" successfully" Jan 23 18:00:05.520066 containerd[2018]: time="2026-01-23T18:00:05.520047780Z" level=info msg="StopPodSandbox for \"4320c9fd233aec87bbedc0af4cbcca5ea990f235c02d48d8ef6d054074497288\" returns successfully" Jan 23 18:00:05.520951 containerd[2018]: time="2026-01-23T18:00:05.520903104Z" level=info msg="RemovePodSandbox for \"4320c9fd233aec87bbedc0af4cbcca5ea990f235c02d48d8ef6d054074497288\"" Jan 23 18:00:05.521049 containerd[2018]: time="2026-01-23T18:00:05.520964016Z" level=info msg="Forcibly stopping sandbox \"4320c9fd233aec87bbedc0af4cbcca5ea990f235c02d48d8ef6d054074497288\"" Jan 23 18:00:05.521153 containerd[2018]: time="2026-01-23T18:00:05.521114292Z" level=info msg="TearDown network for sandbox \"4320c9fd233aec87bbedc0af4cbcca5ea990f235c02d48d8ef6d054074497288\" successfully" Jan 23 18:00:05.523062 containerd[2018]: time="2026-01-23T18:00:05.522990528Z" level=info msg="Ensure that sandbox 4320c9fd233aec87bbedc0af4cbcca5ea990f235c02d48d8ef6d054074497288 in task-service has been cleanup successfully" Jan 23 18:00:05.529890 containerd[2018]: time="2026-01-23T18:00:05.529808448Z" level=info msg="RemovePodSandbox \"4320c9fd233aec87bbedc0af4cbcca5ea990f235c02d48d8ef6d054074497288\" returns successfully" Jan 23 18:00:06.587677 update_engine[1988]: I20260123 18:00:06.587594 1988 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 23 18:00:06.589490 update_engine[1988]: I20260123 18:00:06.588353 1988 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 23 18:00:06.589490 update_engine[1988]: I20260123 18:00:06.588968 1988 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
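The pod_startup_latency_tracker entry above reports podStartSLOduration=6.153706029s for cilium-lb67d, which is exactly watchObservedRunningTime (18:00:04.153706029) minus podCreationTimestamp (17:59:58); both pull timestamps are zero because the images were already present. A quick arithmetic check; note Python's datetime is microsecond-precision, so the last three nanosecond digits are truncated:

```python
from datetime import datetime, timezone

created = datetime(2026, 1, 23, 17, 59, 58, 0, tzinfo=timezone.utc)
watched = datetime(2026, 1, 23, 18, 0, 4, 153706, tzinfo=timezone.utc)
print((watched - created).total_seconds())  # 6.153706  (log: 6.153706029)
```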
Jan 23 18:00:06.616690 update_engine[1988]: E20260123 18:00:06.616618 1988 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 23 18:00:06.617040 update_engine[1988]: I20260123 18:00:06.616986 1988 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Jan 23 18:00:08.518991 systemd-networkd[1830]: lxc_health: Link UP Jan 23 18:00:08.541708 (udev-worker)[6047]: Network interface NamePolicy= disabled on kernel command line. Jan 23 18:00:08.552126 systemd-networkd[1830]: lxc_health: Gained carrier Jan 23 18:00:10.001321 systemd-networkd[1830]: lxc_health: Gained IPv6LL Jan 23 18:00:12.753693 ntpd[2186]: Listen normally on 13 lxc_health [fe80::14c5:32ff:fee6:2f72%14]:123 Jan 23 18:00:12.754203 ntpd[2186]: 23 Jan 18:00:12 ntpd[2186]: Listen normally on 13 lxc_health [fe80::14c5:32ff:fee6:2f72%14]:123 Jan 23 18:00:14.299749 sshd[5389]: Connection closed by 68.220.241.50 port 42486 Jan 23 18:00:14.301500 sshd-session[5353]: pam_unix(sshd:session): session closed for user core Jan 23 18:00:14.312495 systemd[1]: sshd@25-172.31.28.159:22-68.220.241.50:42486.service: Deactivated successfully. Jan 23 18:00:14.319139 systemd[1]: session-26.scope: Deactivated successfully. Jan 23 18:00:14.322629 systemd-logind[1983]: Session 26 logged out. Waiting for processes to exit. Jan 23 18:00:14.326909 systemd-logind[1983]: Removed session 26. Jan 23 18:00:16.588236 update_engine[1988]: I20260123 18:00:16.586226 1988 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 23 18:00:16.588236 update_engine[1988]: I20260123 18:00:16.586330 1988 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 23 18:00:16.588236 update_engine[1988]: I20260123 18:00:16.586859 1988 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jan 23 18:00:16.594317 update_engine[1988]: E20260123 18:00:16.594231 1988 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 23 18:00:16.594472 update_engine[1988]: I20260123 18:00:16.594385 1988 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Jan 23 18:00:16.594472 update_engine[1988]: I20260123 18:00:16.594409 1988 omaha_request_action.cc:617] Omaha request response: Jan 23 18:00:16.594589 update_engine[1988]: E20260123 18:00:16.594527 1988 omaha_request_action.cc:636] Omaha request network transfer failed. Jan 23 18:00:16.594589 update_engine[1988]: I20260123 18:00:16.594559 1988 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Jan 23 18:00:16.594589 update_engine[1988]: I20260123 18:00:16.594575 1988 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jan 23 18:00:16.594740 update_engine[1988]: I20260123 18:00:16.594589 1988 update_attempter.cc:306] Processing Done. Jan 23 18:00:16.594740 update_engine[1988]: E20260123 18:00:16.594615 1988 update_attempter.cc:619] Update failed. Jan 23 18:00:16.594740 update_engine[1988]: I20260123 18:00:16.594630 1988 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Jan 23 18:00:16.594740 update_engine[1988]: I20260123 18:00:16.594642 1988 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Jan 23 18:00:16.594740 update_engine[1988]: I20260123 18:00:16.594656 1988 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. 
Jan 23 18:00:16.594964 update_engine[1988]: I20260123 18:00:16.594762 1988 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jan 23 18:00:16.594964 update_engine[1988]: I20260123 18:00:16.594800 1988 omaha_request_action.cc:271] Posting an Omaha request to disabled Jan 23 18:00:16.594964 update_engine[1988]: I20260123 18:00:16.594817 1988 omaha_request_action.cc:272] Request: Jan 23 18:00:16.594964 update_engine[1988]: [Omaha request XML body not captured: angle-bracket markup was stripped from this log] Jan 23 18:00:16.594964 update_engine[1988]: I20260123 18:00:16.594833 1988 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 23 18:00:16.594964 update_engine[1988]: I20260123 18:00:16.594871 1988 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 23 18:00:16.595589 update_engine[1988]: I20260123 18:00:16.595358 1988 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jan 23 18:00:16.597214 locksmithd[2035]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Jan 23 18:00:16.616861 update_engine[1988]: E20260123 18:00:16.616772 1988 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 23 18:00:16.617009 update_engine[1988]: I20260123 18:00:16.616923 1988 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Jan 23 18:00:16.617009 update_engine[1988]: I20260123 18:00:16.616945 1988 omaha_request_action.cc:617] Omaha request response: Jan 23 18:00:16.617009 update_engine[1988]: I20260123 18:00:16.616961 1988 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jan 23 18:00:16.617009 update_engine[1988]: I20260123 18:00:16.616981 1988 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jan 23 18:00:16.617009 update_engine[1988]: I20260123 18:00:16.616995 1988 update_attempter.cc:306] Processing Done. Jan 23 18:00:16.617282 update_engine[1988]: I20260123 18:00:16.617009 1988 update_attempter.cc:310] Error event sent. Jan 23 18:00:16.617282 update_engine[1988]: I20260123 18:00:16.617029 1988 update_check_scheduler.cc:74] Next update check in 49m2s Jan 23 18:00:16.618227 locksmithd[2035]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Jan 23 18:00:28.201397 kubelet[3522]: E0123 18:00:28.201300 3522 controller.go:195] "Failed to update lease" err="Put \"https://172.31.28.159:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-28-159?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 23 18:00:28.588981 systemd[1]: cri-containerd-755fca6040b64a02e109fb4903a00e8206f595793c5d3dcd10d8732e4747684f.scope: Deactivated successfully. Jan 23 18:00:28.592000 systemd[1]: cri-containerd-755fca6040b64a02e109fb4903a00e8206f595793c5d3dcd10d8732e4747684f.scope: Consumed 4.720s CPU time, 55M memory peak.
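systemd's per-unit accounting line ("Consumed 4.720s CPU time, 55M memory peak") summarizes the kube-controller-manager container that just exited. A small assumed helper for lifting those figures out of journal lines as numbers, with the units left as systemd prints them:

```python
import re

ACCT_RE = re.compile(r"Consumed (?P<cpu>[\d.]+)s CPU time, (?P<mem>[\d.]+)M memory peak")

def scope_usage(line: str):
    """Return (cpu_seconds, mem_peak_in_M) from a systemd accounting line, or None."""
    m = ACCT_RE.search(line)
    return (float(m["cpu"]), float(m["mem"])) if m else None

line = ("systemd[1]: cri-containerd-755fca6040b64a02e109fb4903a00e8206f595793c5d3"
        "dcd10d8732e4747684f.scope: Consumed 4.720s CPU time, 55M memory peak.")
print(scope_usage(line))  # (4.72, 55.0)
```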
Jan 23 18:00:28.597663 containerd[2018]: time="2026-01-23T18:00:28.597156419Z" level=info msg="received container exit event container_id:\"755fca6040b64a02e109fb4903a00e8206f595793c5d3dcd10d8732e4747684f\" id:\"755fca6040b64a02e109fb4903a00e8206f595793c5d3dcd10d8732e4747684f\" pid:3162 exit_status:1 exited_at:{seconds:1769191228 nanos:596566355}" Jan 23 18:00:28.637702 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-755fca6040b64a02e109fb4903a00e8206f595793c5d3dcd10d8732e4747684f-rootfs.mount: Deactivated successfully. Jan 23 18:00:29.194790 kubelet[3522]: I0123 18:00:29.194741 3522 scope.go:117] "RemoveContainer" containerID="755fca6040b64a02e109fb4903a00e8206f595793c5d3dcd10d8732e4747684f" Jan 23 18:00:29.199301 containerd[2018]: time="2026-01-23T18:00:29.199241554Z" level=info msg="CreateContainer within sandbox \"be4727d12724f7157055b67756c1415e20682ad0df5a886989756572bc8ce7ab\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Jan 23 18:00:29.214918 containerd[2018]: time="2026-01-23T18:00:29.214598374Z" level=info msg="Container 2c56ddab744b022b855d3126970c24e36ce21747967bd06ce0fd65ea86e68f0c: CDI devices from CRI Config.CDIDevices: []" Jan 23 18:00:29.232266 containerd[2018]: time="2026-01-23T18:00:29.232216906Z" level=info msg="CreateContainer within sandbox \"be4727d12724f7157055b67756c1415e20682ad0df5a886989756572bc8ce7ab\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"2c56ddab744b022b855d3126970c24e36ce21747967bd06ce0fd65ea86e68f0c\"" Jan 23 18:00:29.234234 containerd[2018]: time="2026-01-23T18:00:29.233549566Z" level=info msg="StartContainer for \"2c56ddab744b022b855d3126970c24e36ce21747967bd06ce0fd65ea86e68f0c\"" Jan 23 18:00:29.235793 containerd[2018]: time="2026-01-23T18:00:29.235746898Z" level=info msg="connecting to shim 2c56ddab744b022b855d3126970c24e36ce21747967bd06ce0fd65ea86e68f0c" address="unix:///run/containerd/s/9f9c4bb8c5765282b16da1fcf6ce4e992d775c2038a95316930462986f03c34b" protocol=ttrpc version=3 Jan 23 18:00:29.276484 systemd[1]: Started cri-containerd-2c56ddab744b022b855d3126970c24e36ce21747967bd06ce0fd65ea86e68f0c.scope - libcontainer container 2c56ddab744b022b855d3126970c24e36ce21747967bd06ce0fd65ea86e68f0c. Jan 23 18:00:29.362008 containerd[2018]: time="2026-01-23T18:00:29.361957223Z" level=info msg="StartContainer for \"2c56ddab744b022b855d3126970c24e36ce21747967bd06ce0fd65ea86e68f0c\" returns successfully" Jan 23 18:00:34.658071 systemd[1]: cri-containerd-76bb1ac7a7eb6621f37112253648079c5aa695832771e931644c836752fae340.scope: Deactivated successfully. Jan 23 18:00:34.659710 systemd[1]: cri-containerd-76bb1ac7a7eb6621f37112253648079c5aa695832771e931644c836752fae340.scope: Consumed 6.233s CPU time, 20.7M memory peak. Jan 23 18:00:34.664248 containerd[2018]: time="2026-01-23T18:00:34.664071101Z" level=info msg="received container exit event container_id:\"76bb1ac7a7eb6621f37112253648079c5aa695832771e931644c836752fae340\" id:\"76bb1ac7a7eb6621f37112253648079c5aa695832771e931644c836752fae340\" pid:3179 exit_status:1 exited_at:{seconds:1769191234 nanos:662826269}" Jan 23 18:00:34.713112 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-76bb1ac7a7eb6621f37112253648079c5aa695832771e931644c836752fae340-rootfs.mount: Deactivated successfully. 
Jan 23 18:00:35.217751 kubelet[3522]: I0123 18:00:35.217689 3522 scope.go:117] "RemoveContainer" containerID="76bb1ac7a7eb6621f37112253648079c5aa695832771e931644c836752fae340" Jan 23 18:00:35.221228 containerd[2018]: time="2026-01-23T18:00:35.220760404Z" level=info msg="CreateContainer within sandbox \"b0dc0a62db34907a52034271d5833a63058531a100a9aebc49a3aaa218be695f\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}" Jan 23 18:00:35.236013 containerd[2018]: time="2026-01-23T18:00:35.234018892Z" level=info msg="Container e2af956d92356a4d0b769d94c6411e8fc4c40a98a0be372540c7ca2723f33e0c: CDI devices from CRI Config.CDIDevices: []" Jan 23 18:00:35.253639 containerd[2018]: time="2026-01-23T18:00:35.253566076Z" level=info msg="CreateContainer within sandbox \"b0dc0a62db34907a52034271d5833a63058531a100a9aebc49a3aaa218be695f\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"e2af956d92356a4d0b769d94c6411e8fc4c40a98a0be372540c7ca2723f33e0c\"" Jan 23 18:00:35.255224 containerd[2018]: time="2026-01-23T18:00:35.254267812Z" level=info msg="StartContainer for \"e2af956d92356a4d0b769d94c6411e8fc4c40a98a0be372540c7ca2723f33e0c\"" Jan 23 18:00:35.256484 containerd[2018]: time="2026-01-23T18:00:35.256431820Z" level=info msg="connecting to shim e2af956d92356a4d0b769d94c6411e8fc4c40a98a0be372540c7ca2723f33e0c" address="unix:///run/containerd/s/314a039c93d48bbc40fa82885827050be3c4d03f94bd296a8019e9af48971a91" protocol=ttrpc version=3 Jan 23 18:00:35.295514 systemd[1]: Started cri-containerd-e2af956d92356a4d0b769d94c6411e8fc4c40a98a0be372540c7ca2723f33e0c.scope - libcontainer container e2af956d92356a4d0b769d94c6411e8fc4c40a98a0be372540c7ca2723f33e0c. Jan 23 18:00:35.378506 containerd[2018]: time="2026-01-23T18:00:35.378443608Z" level=info msg="StartContainer for \"e2af956d92356a4d0b769d94c6411e8fc4c40a98a0be372540c7ca2723f33e0c\" returns successfully" Jan 23 18:00:38.202781 kubelet[3522]: E0123 18:00:38.201858 3522 controller.go:195] "Failed to update lease" err="Put \"https://172.31.28.159:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-28-159?timeout=10s\": context deadline exceeded"
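The two "Failed to update lease" errors bracket the controller-manager and scheduler restarts: the kubelet periodically renews a coordination.k8s.io Lease named after the node, and a 10s Put timing out twice points at a briefly unresponsive apiserver rather than a dead node. A sketch of inspecting that lease with the official Python client; the lease name and namespace come from the URL in the error above, while the kubeconfig loading and cluster access are assumptions about the operator's environment:

```python
from kubernetes import client, config  # pip install kubernetes

config.load_kube_config()  # or config.load_incluster_config() inside a pod
coord = client.CoordinationV1Api()

# The node lease the kubelet was failing to renew above.
lease = coord.read_namespaced_lease("ip-172-31-28-159", "kube-node-lease")
print(lease.spec.holder_identity, lease.spec.renew_time)
```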