Jan 23 23:56:05.235580 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083]
Jan 23 23:56:05.235627 kernel: Linux version 6.6.119-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Fri Jan 23 22:26:47 -00 2026
Jan 23 23:56:05.235653 kernel: KASLR disabled due to lack of seed
Jan 23 23:56:05.235670 kernel: efi: EFI v2.7 by EDK II
Jan 23 23:56:05.235688 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7b001a98 MEMRESERVE=0x7852ee18
Jan 23 23:56:05.235704 kernel: ACPI: Early table checksum verification disabled
Jan 23 23:56:05.235722 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON)
Jan 23 23:56:05.235737 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001 01000013)
Jan 23 23:56:05.235754 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001)
Jan 23 23:56:05.235769 kernel: ACPI: DSDT 0x0000000078640000 0013D2 (v02 AMAZON AMZNDSDT 00000001 AMZN 00000001)
Jan 23 23:56:05.235790 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Jan 23 23:56:05.235805 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001)
Jan 23 23:56:05.235822 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001)
Jan 23 23:56:05.235838 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001)
Jan 23 23:56:05.235856 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Jan 23 23:56:05.235878 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001)
Jan 23 23:56:05.235895 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001)
Jan 23 23:56:05.235911 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200
Jan 23 23:56:05.235928 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200')
Jan 23 23:56:05.235944 kernel: printk: bootconsole [uart0] enabled
Jan 23 23:56:05.235961 kernel: NUMA: Failed to initialise from firmware
Jan 23 23:56:05.235978 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff]
Jan 23 23:56:05.235995 kernel: NUMA: NODE_DATA [mem 0x4b583f800-0x4b5844fff]
Jan 23 23:56:05.236081 kernel: Zone ranges:
Jan 23 23:56:05.236101 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff]
Jan 23 23:56:05.236117 kernel: DMA32 empty
Jan 23 23:56:05.236141 kernel: Normal [mem 0x0000000100000000-0x00000004b5ffffff]
Jan 23 23:56:05.236158 kernel: Movable zone start for each node
Jan 23 23:56:05.236175 kernel: Early memory node ranges
Jan 23 23:56:05.236191 kernel: node 0: [mem 0x0000000040000000-0x000000007862ffff]
Jan 23 23:56:05.236208 kernel: node 0: [mem 0x0000000078630000-0x000000007863ffff]
Jan 23 23:56:05.236224 kernel: node 0: [mem 0x0000000078640000-0x00000000786effff]
Jan 23 23:56:05.236240 kernel: node 0: [mem 0x00000000786f0000-0x000000007872ffff]
Jan 23 23:56:05.236257 kernel: node 0: [mem 0x0000000078730000-0x000000007bbfffff]
Jan 23 23:56:05.236273 kernel: node 0: [mem 0x000000007bc00000-0x000000007bfdffff]
Jan 23 23:56:05.236289 kernel: node 0: [mem 0x000000007bfe0000-0x000000007fffffff]
Jan 23 23:56:05.236306 kernel: node 0: [mem 0x0000000400000000-0x00000004b5ffffff]
Jan 23 23:56:05.236322 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000004b5ffffff]
Jan 23 23:56:05.236344 kernel: On node 0, zone Normal: 8192 pages in unavailable ranges
Jan 23 23:56:05.236361 kernel: psci: probing for conduit method from ACPI.
Jan 23 23:56:05.236386 kernel: psci: PSCIv1.0 detected in firmware.
Jan 23 23:56:05.236405 kernel: psci: Using standard PSCI v0.2 function IDs
Jan 23 23:56:05.236422 kernel: psci: Trusted OS migration not required
Jan 23 23:56:05.236444 kernel: psci: SMC Calling Convention v1.1
Jan 23 23:56:05.236462 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000001)
Jan 23 23:56:05.236480 kernel: percpu: Embedded 30 pages/cpu s85672 r8192 d29016 u122880
Jan 23 23:56:05.236497 kernel: pcpu-alloc: s85672 r8192 d29016 u122880 alloc=30*4096
Jan 23 23:56:05.236515 kernel: pcpu-alloc: [0] 0 [0] 1
Jan 23 23:56:05.236532 kernel: Detected PIPT I-cache on CPU0
Jan 23 23:56:05.236571 kernel: CPU features: detected: GIC system register CPU interface
Jan 23 23:56:05.236590 kernel: CPU features: detected: Spectre-v2
Jan 23 23:56:05.236607 kernel: CPU features: detected: Spectre-v3a
Jan 23 23:56:05.236625 kernel: CPU features: detected: Spectre-BHB
Jan 23 23:56:05.236642 kernel: CPU features: detected: ARM erratum 1742098
Jan 23 23:56:05.236666 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923
Jan 23 23:56:05.236684 kernel: alternatives: applying boot alternatives
Jan 23 23:56:05.236704 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=a01f25d0714a86cf8b897276230b4ac71c04b1d69bd03a1f6d2ef96f59ef0f09
Jan 23 23:56:05.236722 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 23 23:56:05.236740 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 23 23:56:05.236758 kernel: Fallback order for Node 0: 0
Jan 23 23:56:05.236776 kernel: Built 1 zonelists, mobility grouping on. Total pages: 991872
Jan 23 23:56:05.236793 kernel: Policy zone: Normal
Jan 23 23:56:05.236810 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 23 23:56:05.236828 kernel: software IO TLB: area num 2.
Jan 23 23:56:05.236845 kernel: software IO TLB: mapped [mem 0x000000007c000000-0x0000000080000000] (64MB)
Jan 23 23:56:05.236869 kernel: Memory: 3820096K/4030464K available (10304K kernel code, 2180K rwdata, 8112K rodata, 39424K init, 897K bss, 210368K reserved, 0K cma-reserved)
Jan 23 23:56:05.236887 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jan 23 23:56:05.236904 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 23 23:56:05.236923 kernel: rcu: RCU event tracing is enabled.
Jan 23 23:56:05.236940 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jan 23 23:56:05.236958 kernel: Trampoline variant of Tasks RCU enabled.
Jan 23 23:56:05.236976 kernel: Tracing variant of Tasks RCU enabled.
Jan 23 23:56:05.236993 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 23 23:56:05.237034 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jan 23 23:56:05.237054 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Jan 23 23:56:05.237072 kernel: GICv3: 96 SPIs implemented
Jan 23 23:56:05.237096 kernel: GICv3: 0 Extended SPIs implemented
Jan 23 23:56:05.237114 kernel: Root IRQ handler: gic_handle_irq
Jan 23 23:56:05.237132 kernel: GICv3: GICv3 features: 16 PPIs
Jan 23 23:56:05.237150 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000
Jan 23 23:56:05.237168 kernel: ITS [mem 0x10080000-0x1009ffff]
Jan 23 23:56:05.237185 kernel: ITS@0x0000000010080000: allocated 8192 Devices @4000b0000 (indirect, esz 8, psz 64K, shr 1)
Jan 23 23:56:05.237204 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @4000c0000 (flat, esz 8, psz 64K, shr 1)
Jan 23 23:56:05.237221 kernel: GICv3: using LPI property table @0x00000004000d0000
Jan 23 23:56:05.237238 kernel: ITS: Using hypervisor restricted LPI range [128]
Jan 23 23:56:05.237256 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000004000e0000
Jan 23 23:56:05.237273 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 23 23:56:05.237291 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt).
Jan 23 23:56:05.237314 kernel: clocksource: arch_sys_counter: mask: 0x1ffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns
Jan 23 23:56:05.237332 kernel: sched_clock: 57 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns
Jan 23 23:56:05.237350 kernel: Console: colour dummy device 80x25
Jan 23 23:56:05.237368 kernel: printk: console [tty1] enabled
Jan 23 23:56:05.237387 kernel: ACPI: Core revision 20230628
Jan 23 23:56:05.237405 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 166.66 BogoMIPS (lpj=83333)
Jan 23 23:56:05.237423 kernel: pid_max: default: 32768 minimum: 301
Jan 23 23:56:05.237441 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 23 23:56:05.237459 kernel: landlock: Up and running.
Jan 23 23:56:05.237482 kernel: SELinux: Initializing.
Jan 23 23:56:05.237501 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 23 23:56:05.237521 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 23 23:56:05.237562 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 23 23:56:05.237607 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 23 23:56:05.237646 kernel: rcu: Hierarchical SRCU implementation.
Jan 23 23:56:05.237682 kernel: rcu: Max phase no-delay instances is 400.
Jan 23 23:56:05.237702 kernel: Platform MSI: ITS@0x10080000 domain created
Jan 23 23:56:05.237720 kernel: PCI/MSI: ITS@0x10080000 domain created
Jan 23 23:56:05.237745 kernel: Remapping and enabling EFI services.
Jan 23 23:56:05.237764 kernel: smp: Bringing up secondary CPUs ...
Jan 23 23:56:05.237781 kernel: Detected PIPT I-cache on CPU1
Jan 23 23:56:05.237799 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000
Jan 23 23:56:05.237817 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000004000f0000
Jan 23 23:56:05.237835 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083]
Jan 23 23:56:05.237853 kernel: smp: Brought up 1 node, 2 CPUs
Jan 23 23:56:05.237870 kernel: SMP: Total of 2 processors activated.
Jan 23 23:56:05.237888 kernel: CPU features: detected: 32-bit EL0 Support
Jan 23 23:56:05.237910 kernel: CPU features: detected: 32-bit EL1 Support
Jan 23 23:56:05.237928 kernel: CPU features: detected: CRC32 instructions
Jan 23 23:56:05.237945 kernel: CPU: All CPU(s) started at EL1
Jan 23 23:56:05.237975 kernel: alternatives: applying system-wide alternatives
Jan 23 23:56:05.237998 kernel: devtmpfs: initialized
Jan 23 23:56:05.238041 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 23 23:56:05.238060 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jan 23 23:56:05.238079 kernel: pinctrl core: initialized pinctrl subsystem
Jan 23 23:56:05.238098 kernel: SMBIOS 3.0.0 present.
Jan 23 23:56:05.238124 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018
Jan 23 23:56:05.238143 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 23 23:56:05.238161 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Jan 23 23:56:05.238180 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jan 23 23:56:05.238199 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jan 23 23:56:05.238218 kernel: audit: initializing netlink subsys (disabled)
Jan 23 23:56:05.238236 kernel: audit: type=2000 audit(0.286:1): state=initialized audit_enabled=0 res=1
Jan 23 23:56:05.238255 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 23 23:56:05.238278 kernel: cpuidle: using governor menu
Jan 23 23:56:05.238297 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Jan 23 23:56:05.238315 kernel: ASID allocator initialised with 65536 entries
Jan 23 23:56:05.238333 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 23 23:56:05.238352 kernel: Serial: AMBA PL011 UART driver
Jan 23 23:56:05.238370 kernel: Modules: 17488 pages in range for non-PLT usage
Jan 23 23:56:05.238388 kernel: Modules: 509008 pages in range for PLT usage
Jan 23 23:56:05.238407 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 23 23:56:05.238425 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Jan 23 23:56:05.238448 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Jan 23 23:56:05.238466 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Jan 23 23:56:05.238485 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 23 23:56:05.238503 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Jan 23 23:56:05.238521 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Jan 23 23:56:05.238539 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Jan 23 23:56:05.238557 kernel: ACPI: Added _OSI(Module Device)
Jan 23 23:56:05.238576 kernel: ACPI: Added _OSI(Processor Device)
Jan 23 23:56:05.238594 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 23 23:56:05.238617 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 23 23:56:05.238635 kernel: ACPI: Interpreter enabled
Jan 23 23:56:05.238653 kernel: ACPI: Using GIC for interrupt routing
Jan 23 23:56:05.238671 kernel: ACPI: MCFG table detected, 1 entries
Jan 23 23:56:05.238689 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00])
Jan 23 23:56:05.239025 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jan 23 23:56:05.239248 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Jan 23 23:56:05.239449 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Jan 23 23:56:05.239661 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x200fffff] reserved by PNP0C02:00
Jan 23 23:56:05.239937 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x200fffff] for [bus 00]
Jan 23 23:56:05.239965 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io 0x0000-0xffff window]
Jan 23 23:56:05.239984 kernel: acpiphp: Slot [1] registered
Jan 23 23:56:05.240035 kernel: acpiphp: Slot [2] registered
Jan 23 23:56:05.240060 kernel: acpiphp: Slot [3] registered
Jan 23 23:56:05.240079 kernel: acpiphp: Slot [4] registered
Jan 23 23:56:05.240098 kernel: acpiphp: Slot [5] registered
Jan 23 23:56:05.240124 kernel: acpiphp: Slot [6] registered
Jan 23 23:56:05.240143 kernel: acpiphp: Slot [7] registered
Jan 23 23:56:05.240161 kernel: acpiphp: Slot [8] registered
Jan 23 23:56:05.240180 kernel: acpiphp: Slot [9] registered
Jan 23 23:56:05.240198 kernel: acpiphp: Slot [10] registered
Jan 23 23:56:05.240216 kernel: acpiphp: Slot [11] registered
Jan 23 23:56:05.240235 kernel: acpiphp: Slot [12] registered
Jan 23 23:56:05.240254 kernel: acpiphp: Slot [13] registered
Jan 23 23:56:05.240272 kernel: acpiphp: Slot [14] registered
Jan 23 23:56:05.240290 kernel: acpiphp: Slot [15] registered
Jan 23 23:56:05.240314 kernel: acpiphp: Slot [16] registered
Jan 23 23:56:05.240332 kernel: acpiphp: Slot [17] registered
Jan 23 23:56:05.240351 kernel: acpiphp: Slot [18] registered
Jan 23 23:56:05.242072 kernel: acpiphp: Slot [19] registered
Jan 23 23:56:05.242115 kernel: acpiphp: Slot [20] registered
Jan 23 23:56:05.242134 kernel: acpiphp: Slot [21] registered
Jan 23 23:56:05.242153 kernel: acpiphp: Slot [22] registered
Jan 23 23:56:05.242171 kernel: acpiphp: Slot [23] registered
Jan 23 23:56:05.242190 kernel: acpiphp: Slot [24] registered
Jan 23 23:56:05.242219 kernel: acpiphp: Slot [25] registered
Jan 23 23:56:05.242238 kernel: acpiphp: Slot [26] registered
Jan 23 23:56:05.242256 kernel: acpiphp: Slot [27] registered
Jan 23 23:56:05.242275 kernel: acpiphp: Slot [28] registered
Jan 23 23:56:05.242294 kernel: acpiphp: Slot [29] registered
Jan 23 23:56:05.242312 kernel: acpiphp: Slot [30] registered
Jan 23 23:56:05.242331 kernel: acpiphp: Slot [31] registered
Jan 23 23:56:05.242349 kernel: PCI host bridge to bus 0000:00
Jan 23 23:56:05.242620 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window]
Jan 23 23:56:05.242816 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Jan 23 23:56:05.244095 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window]
Jan 23 23:56:05.244429 kernel: pci_bus 0000:00: root bus resource [bus 00]
Jan 23 23:56:05.244717 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000
Jan 23 23:56:05.244962 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003
Jan 23 23:56:05.246302 kernel: pci 0000:00:01.0: reg 0x10: [mem 0x80118000-0x80118fff]
Jan 23 23:56:05.246565 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
Jan 23 23:56:05.246773 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80114000-0x80117fff]
Jan 23 23:56:05.246979 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold
Jan 23 23:56:05.249340 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
Jan 23 23:56:05.249564 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80110000-0x80113fff]
Jan 23 23:56:05.249769 kernel: pci 0000:00:05.0: reg 0x18: [mem 0x80000000-0x800fffff pref]
Jan 23 23:56:05.249971 kernel: pci 0000:00:05.0: reg 0x20: [mem 0x80100000-0x8010ffff]
Jan 23 23:56:05.250211 kernel: pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold
Jan 23 23:56:05.250411 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window]
Jan 23 23:56:05.250593 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Jan 23 23:56:05.250781 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window]
Jan 23 23:56:05.250807 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Jan 23 23:56:05.250826 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Jan 23 23:56:05.250845 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Jan 23 23:56:05.250864 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Jan 23 23:56:05.250889 kernel: iommu: Default domain type: Translated
Jan 23 23:56:05.250909 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Jan 23 23:56:05.250927 kernel: efivars: Registered efivars operations
Jan 23 23:56:05.250945 kernel: vgaarb: loaded
Jan 23 23:56:05.250964 kernel: clocksource: Switched to clocksource arch_sys_counter
Jan 23 23:56:05.250983 kernel: VFS: Disk quotas dquot_6.6.0
Jan 23 23:56:05.254737 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 23 23:56:05.254789 kernel: pnp: PnP ACPI init
Jan 23 23:56:05.255167 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved
Jan 23 23:56:05.255210 kernel: pnp: PnP ACPI: found 1 devices
Jan 23 23:56:05.255230 kernel: NET: Registered PF_INET protocol family
Jan 23 23:56:05.255250 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 23 23:56:05.255270 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jan 23 23:56:05.255289 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 23 23:56:05.255308 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 23 23:56:05.255327 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jan 23 23:56:05.255346 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jan 23 23:56:05.255369 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 23 23:56:05.255389 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 23 23:56:05.255408 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 23 23:56:05.255427 kernel: PCI: CLS 0 bytes, default 64
Jan 23 23:56:05.255445 kernel: kvm [1]: HYP mode not available
Jan 23 23:56:05.255464 kernel: Initialise system trusted keyrings
Jan 23 23:56:05.255482 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jan 23 23:56:05.255501 kernel: Key type asymmetric registered
Jan 23 23:56:05.255521 kernel: Asymmetric key parser 'x509' registered
Jan 23 23:56:05.255544 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jan 23 23:56:05.255565 kernel: io scheduler mq-deadline registered
Jan 23 23:56:05.255584 kernel: io scheduler kyber registered
Jan 23 23:56:05.255603 kernel: io scheduler bfq registered
Jan 23 23:56:05.255837 kernel: pl061_gpio ARMH0061:00: PL061 GPIO chip registered
Jan 23 23:56:05.255867 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Jan 23 23:56:05.255887 kernel: ACPI: button: Power Button [PWRB]
Jan 23 23:56:05.255906 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input1
Jan 23 23:56:05.255925 kernel: ACPI: button: Sleep Button [SLPB]
Jan 23 23:56:05.255951 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 23 23:56:05.255971 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37
Jan 23 23:56:05.256218 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012)
Jan 23 23:56:05.256247 kernel: printk: console [ttyS0] disabled
Jan 23 23:56:05.256266 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A
Jan 23 23:56:05.256286 kernel: printk: console [ttyS0] enabled
Jan 23 23:56:05.256305 kernel: printk: bootconsole [uart0] disabled
Jan 23 23:56:05.256323 kernel: thunder_xcv, ver 1.0
Jan 23 23:56:05.256342 kernel: thunder_bgx, ver 1.0
Jan 23 23:56:05.256369 kernel: nicpf, ver 1.0
Jan 23 23:56:05.256388 kernel: nicvf, ver 1.0
Jan 23 23:56:05.256641 kernel: rtc-efi rtc-efi.0: registered as rtc0
Jan 23 23:56:05.256841 kernel: rtc-efi rtc-efi.0: setting system clock to 2026-01-23T23:56:04 UTC (1769212564)
Jan 23 23:56:05.256867 kernel: hid: raw HID events driver (C) Jiri Kosina
Jan 23 23:56:05.256887 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 counters available
Jan 23 23:56:05.256906 kernel: watchdog: Delayed init of the lockup detector failed: -19
Jan 23 23:56:05.256925 kernel: watchdog: Hard watchdog permanently disabled
Jan 23 23:56:05.256951 kernel: NET: Registered PF_INET6 protocol family
Jan 23 23:56:05.256970 kernel: Segment Routing with IPv6
Jan 23 23:56:05.256989 kernel: In-situ OAM (IOAM) with IPv6
Jan 23 23:56:05.261073 kernel: NET: Registered PF_PACKET protocol family
Jan 23 23:56:05.261105 kernel: Key type dns_resolver registered
Jan 23 23:56:05.261126 kernel: registered taskstats version 1
Jan 23 23:56:05.261145 kernel: Loading compiled-in X.509 certificates
Jan 23 23:56:05.261164 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.119-flatcar: e1080b1efd8e2d5332b6814128fba42796535445'
Jan 23 23:56:05.261183 kernel: Key type .fscrypt registered
Jan 23 23:56:05.261211 kernel: Key type fscrypt-provisioning registered
Jan 23 23:56:05.261230 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 23 23:56:05.261249 kernel: ima: Allocated hash algorithm: sha1
Jan 23 23:56:05.261269 kernel: ima: No architecture policies found
Jan 23 23:56:05.261288 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Jan 23 23:56:05.261306 kernel: clk: Disabling unused clocks
Jan 23 23:56:05.261325 kernel: Freeing unused kernel memory: 39424K
Jan 23 23:56:05.261343 kernel: Run /init as init process
Jan 23 23:56:05.261362 kernel: with arguments:
Jan 23 23:56:05.261385 kernel: /init
Jan 23 23:56:05.261403 kernel: with environment:
Jan 23 23:56:05.261421 kernel: HOME=/
Jan 23 23:56:05.261440 kernel: TERM=linux
Jan 23 23:56:05.261464 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 23 23:56:05.261489 systemd[1]: Detected virtualization amazon.
Jan 23 23:56:05.261510 systemd[1]: Detected architecture arm64.
Jan 23 23:56:05.261530 systemd[1]: Running in initrd.
Jan 23 23:56:05.261555 systemd[1]: No hostname configured, using default hostname.
Jan 23 23:56:05.261575 systemd[1]: Hostname set to .
Jan 23 23:56:05.261596 systemd[1]: Initializing machine ID from VM UUID.
Jan 23 23:56:05.261616 systemd[1]: Queued start job for default target initrd.target.
Jan 23 23:56:05.261637 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 23 23:56:05.261657 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 23 23:56:05.261679 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 23 23:56:05.261700 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 23 23:56:05.261726 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 23 23:56:05.261747 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 23 23:56:05.261770 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 23 23:56:05.261792 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 23 23:56:05.261812 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 23 23:56:05.261833 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 23 23:56:05.261858 systemd[1]: Reached target paths.target - Path Units.
Jan 23 23:56:05.261880 systemd[1]: Reached target slices.target - Slice Units.
Jan 23 23:56:05.261900 systemd[1]: Reached target swap.target - Swaps.
Jan 23 23:56:05.261920 systemd[1]: Reached target timers.target - Timer Units.
Jan 23 23:56:05.261941 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 23 23:56:05.261962 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 23 23:56:05.261982 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 23 23:56:05.262023 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 23 23:56:05.262050 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 23 23:56:05.262078 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 23 23:56:05.262099 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 23 23:56:05.262119 systemd[1]: Reached target sockets.target - Socket Units.
Jan 23 23:56:05.262140 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 23 23:56:05.262161 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 23 23:56:05.262181 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 23 23:56:05.262202 systemd[1]: Starting systemd-fsck-usr.service...
Jan 23 23:56:05.262222 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 23 23:56:05.262242 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 23 23:56:05.262268 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 23 23:56:05.262288 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 23 23:56:05.262309 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 23 23:56:05.262379 systemd-journald[251]: Collecting audit messages is disabled.
Jan 23 23:56:05.262430 systemd[1]: Finished systemd-fsck-usr.service.
Jan 23 23:56:05.262453 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 23 23:56:05.262473 systemd-journald[251]: Journal started
Jan 23 23:56:05.262515 systemd-journald[251]: Runtime Journal (/run/log/journal/ec2ba0574a90fd77e10a8b38051f68b2) is 8.0M, max 75.3M, 67.3M free.
Jan 23 23:56:05.238138 systemd-modules-load[252]: Inserted module 'overlay'
Jan 23 23:56:05.271832 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 23 23:56:05.281274 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 23 23:56:05.288873 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 23 23:56:05.304755 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 23 23:56:05.305067 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 23 23:56:05.313133 systemd-modules-load[252]: Inserted module 'br_netfilter'
Jan 23 23:56:05.315340 kernel: Bridge firewalling registered
Jan 23 23:56:05.319965 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 23 23:56:05.328261 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 23 23:56:05.340468 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 23 23:56:05.356564 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 23 23:56:05.367089 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 23 23:56:05.393422 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 23 23:56:05.404387 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 23 23:56:05.407041 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 23 23:56:05.425259 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 23 23:56:05.433812 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 23 23:56:05.468720 dracut-cmdline[290]: dracut-dracut-053
Jan 23 23:56:05.478497 dracut-cmdline[290]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=a01f25d0714a86cf8b897276230b4ac71c04b1d69bd03a1f6d2ef96f59ef0f09
Jan 23 23:56:05.510040 systemd-resolved[283]: Positive Trust Anchors:
Jan 23 23:56:05.510073 systemd-resolved[283]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 23 23:56:05.510135 systemd-resolved[283]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 23 23:56:05.624028 kernel: SCSI subsystem initialized
Jan 23 23:56:05.629035 kernel: Loading iSCSI transport class v2.0-870.
Jan 23 23:56:05.642043 kernel: iscsi: registered transport (tcp)
Jan 23 23:56:05.665046 kernel: iscsi: registered transport (qla4xxx)
Jan 23 23:56:05.665115 kernel: QLogic iSCSI HBA Driver
Jan 23 23:56:05.741046 kernel: random: crng init done
Jan 23 23:56:05.741569 systemd-resolved[283]: Defaulting to hostname 'linux'.
Jan 23 23:56:05.745814 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 23 23:56:05.748549 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 23 23:56:05.776096 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 23 23:56:05.786349 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 23 23:56:05.823359 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 23 23:56:05.823437 kernel: device-mapper: uevent: version 1.0.3
Jan 23 23:56:05.823465 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jan 23 23:56:05.891051 kernel: raid6: neonx8 gen() 6637 MB/s
Jan 23 23:56:05.908045 kernel: raid6: neonx4 gen() 6490 MB/s
Jan 23 23:56:05.926048 kernel: raid6: neonx2 gen() 5408 MB/s
Jan 23 23:56:05.942040 kernel: raid6: neonx1 gen() 3950 MB/s
Jan 23 23:56:05.959038 kernel: raid6: int64x8 gen() 3796 MB/s
Jan 23 23:56:05.976040 kernel: raid6: int64x4 gen() 3719 MB/s
Jan 23 23:56:05.993040 kernel: raid6: int64x2 gen() 3588 MB/s
Jan 23 23:56:06.011105 kernel: raid6: int64x1 gen() 2765 MB/s
Jan 23 23:56:06.011139 kernel: raid6: using algorithm neonx8 gen() 6637 MB/s
Jan 23 23:56:06.030070 kernel: raid6: .... xor() 4869 MB/s, rmw enabled
Jan 23 23:56:06.030118 kernel: raid6: using neon recovery algorithm
Jan 23 23:56:06.038041 kernel: xor: measuring software checksum speed
Jan 23 23:56:06.040362 kernel: 8regs : 10278 MB/sec
Jan 23 23:56:06.040400 kernel: 32regs : 11916 MB/sec
Jan 23 23:56:06.041700 kernel: arm64_neon : 9061 MB/sec
Jan 23 23:56:06.041743 kernel: xor: using function: 32regs (11916 MB/sec)
Jan 23 23:56:06.126052 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 23 23:56:06.145526 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 23 23:56:06.159430 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 23 23:56:06.200133 systemd-udevd[471]: Using default interface naming scheme 'v255'.
Jan 23 23:56:06.208094 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 23 23:56:06.230257 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 23 23:56:06.256562 dracut-pre-trigger[482]: rd.md=0: removing MD RAID activation
Jan 23 23:56:06.315512 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 23 23:56:06.335282 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 23 23:56:06.447209 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 23 23:56:06.462372 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 23 23:56:06.509870 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 23 23:56:06.515500 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 23 23:56:06.515706 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 23 23:56:06.528669 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 23 23:56:06.543293 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 23 23:56:06.577792 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 23 23:56:06.650408 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Jan 23 23:56:06.650487 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012)
Jan 23 23:56:06.659328 kernel: ena 0000:00:05.0: ENA device version: 0.10
Jan 23 23:56:06.659677 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
Jan 23 23:56:06.661141 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 23 23:56:06.661271 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 23 23:56:06.669508 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 23 23:56:06.672166 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 23 23:56:06.675229 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 23 23:56:06.678056 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 23 23:56:06.701821 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80110000, mac addr 06:38:00:0d:ef:45
Jan 23 23:56:06.703193 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35
Jan 23 23:56:06.703226 kernel: nvme nvme0: pci function 0000:00:04.0
Jan 23 23:56:06.700430 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 23 23:56:06.716049 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Jan 23 23:56:06.729081 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jan 23 23:56:06.729142 kernel: GPT:9289727 != 33554431
Jan 23 23:56:06.729169 kernel: GPT:Alternate GPT header not at the end of the disk.
Jan 23 23:56:06.729205 kernel: GPT:9289727 != 33554431
Jan 23 23:56:06.729231 kernel: GPT: Use GNU Parted to correct GPT errors.
Jan 23 23:56:06.729054 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 23 23:56:06.731897 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jan 23 23:56:06.741514 (udev-worker)[532]: Network interface NamePolicy= disabled on kernel command line.
Jan 23 23:56:06.745037 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 23 23:56:06.788143 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 23 23:56:06.847085 kernel: BTRFS: device fsid 6d31cc5b-4da2-4320-9991-d4bd2fc0f7fe devid 1 transid 34 /dev/nvme0n1p3 scanned by (udev-worker) (538)
Jan 23 23:56:06.884041 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/nvme0n1p6 scanned by (udev-worker) (530)
Jan 23 23:56:06.928951 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM.
Jan 23 23:56:06.978383 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT.
Jan 23 23:56:06.996953 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Jan 23 23:56:07.013578 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A.
Jan 23 23:56:07.013734 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A.
Jan 23 23:56:07.035267 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 23 23:56:07.050825 disk-uuid[664]: Primary Header is updated.
Jan 23 23:56:07.050825 disk-uuid[664]: Secondary Entries is updated.
Jan 23 23:56:07.050825 disk-uuid[664]: Secondary Header is updated.
Jan 23 23:56:07.064026 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jan 23 23:56:07.070783 kernel: GPT:disk_guids don't match.
Jan 23 23:56:07.070841 kernel: GPT: Use GNU Parted to correct GPT errors.
Jan 23 23:56:07.071923 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jan 23 23:56:07.082031 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jan 23 23:56:08.082065 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jan 23 23:56:08.084484 disk-uuid[665]: The operation has completed successfully.
Jan 23 23:56:08.275270 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 23 23:56:08.278155 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 23 23:56:08.332476 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 23 23:56:08.342884 sh[1011]: Success
Jan 23 23:56:08.370079 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Jan 23 23:56:08.498424 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 23 23:56:08.505222 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 23 23:56:08.511078 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 23 23:56:08.560630 kernel: BTRFS info (device dm-0): first mount of filesystem 6d31cc5b-4da2-4320-9991-d4bd2fc0f7fe
Jan 23 23:56:08.560693 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Jan 23 23:56:08.560720 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jan 23 23:56:08.562570 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 23 23:56:08.563969 kernel: BTRFS info (device dm-0): using free space tree
Jan 23 23:56:08.596041 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Jan 23 23:56:08.611501 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 23 23:56:08.616282 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 23 23:56:08.629249 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 23 23:56:08.638332 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 23 23:56:08.664117 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 821118c6-9a33-48e6-be55-657b51b768c7
Jan 23 23:56:08.664200 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Jan 23 23:56:08.665770 kernel: BTRFS info (device nvme0n1p6): using free space tree
Jan 23 23:56:08.690038 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Jan 23 23:56:08.704656 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jan 23 23:56:08.708247 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 821118c6-9a33-48e6-be55-657b51b768c7
Jan 23 23:56:08.718533 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 23 23:56:08.730958 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 23 23:56:08.824041 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 23 23:56:08.847465 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 23 23:56:08.911422 systemd-networkd[1212]: lo: Link UP
Jan 23 23:56:08.911442 systemd-networkd[1212]: lo: Gained carrier
Jan 23 23:56:08.916788 systemd-networkd[1212]: Enumeration completed
Jan 23 23:56:08.918762 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 23 23:56:08.920650 systemd-networkd[1212]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 23 23:56:08.920657 systemd-networkd[1212]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 23 23:56:08.921396 systemd[1]: Reached target network.target - Network.
Jan 23 23:56:08.936967 systemd-networkd[1212]: eth0: Link UP
Jan 23 23:56:08.936979 systemd-networkd[1212]: eth0: Gained carrier
Jan 23 23:56:08.936997 systemd-networkd[1212]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 23 23:56:08.953136 systemd-networkd[1212]: eth0: DHCPv4 address 172.31.30.184/20, gateway 172.31.16.1 acquired from 172.31.16.1
Jan 23 23:56:08.984497 ignition[1130]: Ignition 2.19.0
Jan 23 23:56:08.985050 ignition[1130]: Stage: fetch-offline
Jan 23 23:56:08.987232 ignition[1130]: no configs at "/usr/lib/ignition/base.d"
Jan 23 23:56:08.991253 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 23 23:56:08.987258 ignition[1130]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 23 23:56:08.987854 ignition[1130]: Ignition finished successfully
Jan 23 23:56:09.009442 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jan 23 23:56:09.037817 ignition[1226]: Ignition 2.19.0
Jan 23 23:56:09.037846 ignition[1226]: Stage: fetch
Jan 23 23:56:09.038540 ignition[1226]: no configs at "/usr/lib/ignition/base.d"
Jan 23 23:56:09.038566 ignition[1226]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 23 23:56:09.038724 ignition[1226]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 23 23:56:09.057901 ignition[1226]: PUT result: OK
Jan 23 23:56:09.060866 ignition[1226]: parsed url from cmdline: ""
Jan 23 23:56:09.060881 ignition[1226]: no config URL provided
Jan 23 23:56:09.060898 ignition[1226]: reading system config file "/usr/lib/ignition/user.ign"
Jan 23 23:56:09.060923 ignition[1226]: no config at "/usr/lib/ignition/user.ign"
Jan 23 23:56:09.060954 ignition[1226]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 23 23:56:09.070909 ignition[1226]: PUT result: OK
Jan 23 23:56:09.073349 ignition[1226]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1
Jan 23 23:56:09.077442 ignition[1226]: GET result: OK
Jan 23 23:56:09.078841 ignition[1226]: parsing config with SHA512: c40fce1e4d3d24627b32a32ffecdf584a6026194a49bd6be158e2d1d2b71f01b289276db8de278197a5959df5e223d6a1a290692e82f6e4b80c99e13c2486a11
Jan 23 23:56:09.088603 unknown[1226]: fetched base config from "system"
Jan 23 23:56:09.088658 unknown[1226]: fetched base config from "system"
Jan 23 23:56:09.088677 unknown[1226]: fetched user config from "aws"
Jan 23 23:56:09.095367 ignition[1226]: fetch: fetch complete
Jan 23 23:56:09.095862 ignition[1226]: fetch: fetch passed
Jan 23 23:56:09.095969 ignition[1226]: Ignition finished successfully
Jan 23 23:56:09.103369 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jan 23 23:56:09.114303 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 23 23:56:09.142099 ignition[1233]: Ignition 2.19.0
Jan 23 23:56:09.142605 ignition[1233]: Stage: kargs
Jan 23 23:56:09.143295 ignition[1233]: no configs at "/usr/lib/ignition/base.d"
Jan 23 23:56:09.143320 ignition[1233]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 23 23:56:09.143474 ignition[1233]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 23 23:56:09.152952 ignition[1233]: PUT result: OK
Jan 23 23:56:09.157645 ignition[1233]: kargs: kargs passed
Jan 23 23:56:09.157753 ignition[1233]: Ignition finished successfully
Jan 23 23:56:09.163411 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 23 23:56:09.173296 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 23 23:56:09.201856 ignition[1239]: Ignition 2.19.0
Jan 23 23:56:09.202397 ignition[1239]: Stage: disks
Jan 23 23:56:09.203427 ignition[1239]: no configs at "/usr/lib/ignition/base.d"
Jan 23 23:56:09.203452 ignition[1239]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 23 23:56:09.203901 ignition[1239]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 23 23:56:09.212693 ignition[1239]: PUT result: OK
Jan 23 23:56:09.217364 ignition[1239]: disks: disks passed
Jan 23 23:56:09.217465 ignition[1239]: Ignition finished successfully
Jan 23 23:56:09.220921 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 23 23:56:09.227574 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 23 23:56:09.232787 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 23 23:56:09.235571 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 23 23:56:09.238157 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 23 23:56:09.240904 systemd[1]: Reached target basic.target - Basic System.
Jan 23 23:56:09.251350 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 23 23:56:09.302672 systemd-fsck[1247]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Jan 23 23:56:09.306718 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 23 23:56:09.320330 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 23 23:56:09.399063 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 4f5f6971-6639-4171-835a-63d34aadb0e5 r/w with ordered data mode. Quota mode: none.
Jan 23 23:56:09.400606 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 23 23:56:09.404814 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 23 23:56:09.426246 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 23 23:56:09.435407 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 23 23:56:09.442737 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jan 23 23:56:09.443168 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 23 23:56:09.443224 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 23 23:56:09.460814 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/nvme0n1p6 scanned by mount (1266)
Jan 23 23:56:09.466408 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 821118c6-9a33-48e6-be55-657b51b768c7
Jan 23 23:56:09.466456 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Jan 23 23:56:09.468498 kernel: BTRFS info (device nvme0n1p6): using free space tree
Jan 23 23:56:09.476842 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 23 23:56:09.483224 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 23 23:56:09.495392 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Jan 23 23:56:09.497960 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 23 23:56:09.588298 initrd-setup-root[1290]: cut: /sysroot/etc/passwd: No such file or directory
Jan 23 23:56:09.598260 initrd-setup-root[1297]: cut: /sysroot/etc/group: No such file or directory
Jan 23 23:56:09.608449 initrd-setup-root[1304]: cut: /sysroot/etc/shadow: No such file or directory
Jan 23 23:56:09.617043 initrd-setup-root[1311]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 23 23:56:09.783038 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 23 23:56:09.796210 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 23 23:56:09.812442 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 23 23:56:09.830833 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 23 23:56:09.836737 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 821118c6-9a33-48e6-be55-657b51b768c7
Jan 23 23:56:09.868781 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 23 23:56:09.885727 ignition[1379]: INFO : Ignition 2.19.0
Jan 23 23:56:09.889504 ignition[1379]: INFO : Stage: mount
Jan 23 23:56:09.889504 ignition[1379]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 23 23:56:09.889504 ignition[1379]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 23 23:56:09.889504 ignition[1379]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 23 23:56:09.899995 ignition[1379]: INFO : PUT result: OK
Jan 23 23:56:09.906869 ignition[1379]: INFO : mount: mount passed
Jan 23 23:56:09.908720 ignition[1379]: INFO : Ignition finished successfully
Jan 23 23:56:09.913162 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 23 23:56:09.924197 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 23 23:56:09.949466 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 23 23:56:09.983943 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 scanned by mount (1390)
Jan 23 23:56:09.984059 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 821118c6-9a33-48e6-be55-657b51b768c7
Jan 23 23:56:09.984091 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Jan 23 23:56:09.987023 kernel: BTRFS info (device nvme0n1p6): using free space tree
Jan 23 23:56:09.995045 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Jan 23 23:56:09.995980 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 23 23:56:10.036735 ignition[1407]: INFO : Ignition 2.19.0
Jan 23 23:56:10.036735 ignition[1407]: INFO : Stage: files
Jan 23 23:56:10.042552 ignition[1407]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 23 23:56:10.042552 ignition[1407]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 23 23:56:10.042552 ignition[1407]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 23 23:56:10.050714 ignition[1407]: INFO : PUT result: OK
Jan 23 23:56:10.055758 ignition[1407]: DEBUG : files: compiled without relabeling support, skipping
Jan 23 23:56:10.058772 ignition[1407]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jan 23 23:56:10.058772 ignition[1407]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 23 23:56:10.070489 ignition[1407]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 23 23:56:10.074053 ignition[1407]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jan 23 23:56:10.077454 unknown[1407]: wrote ssh authorized keys file for user: core
Jan 23 23:56:10.080088 ignition[1407]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 23 23:56:10.084175 ignition[1407]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
Jan 23 23:56:10.088588 ignition[1407]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz: attempt #1
Jan 23 23:56:10.530176 systemd-networkd[1212]: eth0: Gained IPv6LL
Jan 23 23:56:11.205148 ignition[1407]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jan 23 23:56:11.403448 ignition[1407]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
Jan 23 23:56:11.408269 ignition[1407]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jan 23 23:56:11.408269 ignition[1407]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1
Jan 23 23:56:11.499247 ignition[1407]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Jan 23 23:56:11.644693 ignition[1407]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jan 23 23:56:11.648961 ignition[1407]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Jan 23 23:56:11.648961 ignition[1407]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Jan 23 23:56:11.648961 ignition[1407]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Jan 23 23:56:11.648961 ignition[1407]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jan 23 23:56:11.648961 ignition[1407]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 23 23:56:11.648961 ignition[1407]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 23 23:56:11.648961 ignition[1407]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 23 23:56:11.648961 ignition[1407]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 23 23:56:11.648961 ignition[1407]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jan 23 23:56:11.648961 ignition[1407]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 23 23:56:11.648961 ignition[1407]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Jan 23 23:56:11.648961 ignition[1407]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Jan 23 23:56:11.648961 ignition[1407]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Jan 23 23:56:11.648961 ignition[1407]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-arm64.raw: attempt #1
Jan 23 23:56:11.934684 ignition[1407]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Jan 23 23:56:12.302634 ignition[1407]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Jan 23 23:56:12.302634 ignition[1407]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Jan 23 23:56:12.310895 ignition[1407]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 23 23:56:12.310895 ignition[1407]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 23 23:56:12.310895 ignition[1407]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Jan 23 23:56:12.310895 ignition[1407]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service"
Jan 23 23:56:12.310895 ignition[1407]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service"
Jan 23 23:56:12.310895 ignition[1407]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json"
Jan 23 23:56:12.310895 ignition[1407]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jan 23 23:56:12.310895 ignition[1407]: INFO : files: files passed
Jan 23 23:56:12.310895 ignition[1407]: INFO : Ignition finished successfully
Jan 23 23:56:12.346844 systemd[1]: Finished ignition-files.service - Ignition (files).
Jan 23 23:56:12.356264 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jan 23 23:56:12.369635 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jan 23 23:56:12.378597 systemd[1]: ignition-quench.service: Deactivated successfully.
Jan 23 23:56:12.382204 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jan 23 23:56:12.414882 initrd-setup-root-after-ignition[1435]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 23 23:56:12.414882 initrd-setup-root-after-ignition[1435]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jan 23 23:56:12.423595 initrd-setup-root-after-ignition[1439]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 23 23:56:12.431084 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 23 23:56:12.437725 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jan 23 23:56:12.450299 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jan 23 23:56:12.505258 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 23 23:56:12.505650 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jan 23 23:56:12.514532 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jan 23 23:56:12.516897 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jan 23 23:56:12.519251 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jan 23 23:56:12.534319 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jan 23 23:56:12.565197 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 23 23:56:12.585320 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jan 23 23:56:12.610123 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jan 23 23:56:12.613180 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 23 23:56:12.617355 systemd[1]: Stopped target timers.target - Timer Units.
Jan 23 23:56:12.626653 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 23 23:56:12.626937 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 23 23:56:12.635972 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jan 23 23:56:12.638659 systemd[1]: Stopped target basic.target - Basic System. Jan 23 23:56:12.641138 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 23 23:56:12.646644 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 23 23:56:12.653723 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 23 23:56:12.656511 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 23 23:56:12.660999 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 23 23:56:12.665431 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 23 23:56:12.667987 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 23 23:56:12.672507 systemd[1]: Stopped target swap.target - Swaps. Jan 23 23:56:12.678513 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 23 23:56:12.679304 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 23 23:56:12.686652 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 23 23:56:12.691823 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 23 23:56:12.696954 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 23 23:56:12.699184 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 23 23:56:12.699557 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 23 23:56:12.699794 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 23 23:56:12.707857 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 23 23:56:12.708158 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 23 23:56:12.711415 systemd[1]: ignition-files.service: Deactivated successfully. Jan 23 23:56:12.711640 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 23 23:56:12.732467 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 23 23:56:12.736569 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 23 23:56:12.740172 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 23 23:56:12.751501 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 23 23:56:12.762188 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 23 23:56:12.762720 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 23 23:56:12.767847 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 23 23:56:12.768147 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 23 23:56:12.792476 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 23 23:56:12.797104 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 23 23:56:12.822103 ignition[1459]: INFO : Ignition 2.19.0 Jan 23 23:56:12.822103 ignition[1459]: INFO : Stage: umount Jan 23 23:56:12.828158 ignition[1459]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 23 23:56:12.828158 ignition[1459]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 23 23:56:12.828158 ignition[1459]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 23 23:56:12.832896 systemd[1]: sysroot-boot.mount: Deactivated successfully. 
Jan 23 23:56:12.839388 ignition[1459]: INFO : PUT result: OK Jan 23 23:56:12.849062 ignition[1459]: INFO : umount: umount passed Jan 23 23:56:12.851283 ignition[1459]: INFO : Ignition finished successfully Jan 23 23:56:12.857272 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 23 23:56:12.859078 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 23 23:56:12.865851 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 23 23:56:12.866106 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 23 23:56:12.873847 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 23 23:56:12.874329 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 23 23:56:12.881058 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 23 23:56:12.881190 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 23 23:56:12.883712 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 23 23:56:12.883824 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 23 23:56:12.886440 systemd[1]: Stopped target network.target - Network. Jan 23 23:56:12.888560 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 23 23:56:12.888672 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 23 23:56:12.891587 systemd[1]: Stopped target paths.target - Path Units. Jan 23 23:56:12.893727 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 23 23:56:12.898489 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 23 23:56:12.901383 systemd[1]: Stopped target slices.target - Slice Units. Jan 23 23:56:12.903550 systemd[1]: Stopped target sockets.target - Socket Units. Jan 23 23:56:12.907828 systemd[1]: iscsid.socket: Deactivated successfully. Jan 23 23:56:12.908065 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 23 23:56:12.932748 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 23 23:56:12.932841 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 23 23:56:12.935261 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 23 23:56:12.935365 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 23 23:56:12.938299 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 23 23:56:12.938413 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 23 23:56:12.942324 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 23 23:56:12.942428 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 23 23:56:12.947215 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 23 23:56:12.952353 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 23 23:56:12.954081 systemd-networkd[1212]: eth0: DHCPv6 lease lost Jan 23 23:56:12.960916 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 23 23:56:12.961446 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 23 23:56:12.967674 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 23 23:56:12.967807 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 23 23:56:12.999264 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 23 23:56:13.003878 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. 
Jan 23 23:56:13.004603 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 23 23:56:13.014848 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 23 23:56:13.018300 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 23 23:56:13.018524 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 23 23:56:13.043431 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 23 23:56:13.043822 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 23 23:56:13.051329 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 23 23:56:13.051623 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 23 23:56:13.059658 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 23 23:56:13.061160 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 23 23:56:13.074910 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 23 23:56:13.075290 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 23 23:56:13.079798 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 23 23:56:13.080046 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 23 23:56:13.087655 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 23 23:56:13.087781 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 23 23:56:13.091516 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 23 23:56:13.091600 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 23 23:56:13.092289 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 23 23:56:13.092387 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 23 23:56:13.093148 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 23 23:56:13.093252 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 23 23:56:13.093866 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 23 23:56:13.093953 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 23 23:56:13.111433 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 23 23:56:13.123793 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 23 23:56:13.123933 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 23 23:56:13.145562 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 23 23:56:13.145678 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 23 23:56:13.174409 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 23 23:56:13.176218 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 23 23:56:13.184798 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 23 23:56:13.195513 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 23 23:56:13.217618 systemd[1]: Switching root. Jan 23 23:56:13.257721 systemd-journald[251]: Journal stopped Jan 23 23:56:15.175808 systemd-journald[251]: Received SIGTERM from PID 1 (systemd). 
Jan 23 23:56:15.175949 kernel: SELinux: policy capability network_peer_controls=1 Jan 23 23:56:15.175993 kernel: SELinux: policy capability open_perms=1 Jan 23 23:56:15.178292 kernel: SELinux: policy capability extended_socket_class=1 Jan 23 23:56:15.178352 kernel: SELinux: policy capability always_check_network=0 Jan 23 23:56:15.178386 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 23 23:56:15.178419 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 23 23:56:15.178451 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 23 23:56:15.178481 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 23 23:56:15.178512 kernel: audit: type=1403 audit(1769212573.480:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 23 23:56:15.178547 systemd[1]: Successfully loaded SELinux policy in 52.271ms. Jan 23 23:56:15.178599 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 23.512ms. Jan 23 23:56:15.178636 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 23 23:56:15.178673 systemd[1]: Detected virtualization amazon. Jan 23 23:56:15.178704 systemd[1]: Detected architecture arm64. Jan 23 23:56:15.178735 systemd[1]: Detected first boot. Jan 23 23:56:15.178767 systemd[1]: Initializing machine ID from VM UUID. Jan 23 23:56:15.178800 zram_generator::config[1501]: No configuration found. Jan 23 23:56:15.178834 systemd[1]: Populated /etc with preset unit settings. Jan 23 23:56:15.178866 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 23 23:56:15.178897 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 23 23:56:15.178932 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 23 23:56:15.178965 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 23 23:56:15.181083 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 23 23:56:15.181146 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 23 23:56:15.181180 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 23 23:56:15.181213 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 23 23:56:15.181252 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 23 23:56:15.181283 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 23 23:56:15.181316 systemd[1]: Created slice user.slice - User and Session Slice. Jan 23 23:56:15.181358 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 23 23:56:15.181389 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 23 23:56:15.181419 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 23 23:56:15.181451 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 23 23:56:15.181485 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. 
Jan 23 23:56:15.181515 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 23 23:56:15.181546 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 23 23:56:15.181576 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 23 23:56:15.181605 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 23 23:56:15.181640 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 23 23:56:15.181670 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 23 23:56:15.181701 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 23 23:56:15.181731 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 23 23:56:15.181764 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 23 23:56:15.181794 systemd[1]: Reached target slices.target - Slice Units. Jan 23 23:56:15.181835 systemd[1]: Reached target swap.target - Swaps. Jan 23 23:56:15.181865 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 23 23:56:15.181903 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 23 23:56:15.181941 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 23 23:56:15.181973 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 23 23:56:15.184178 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 23 23:56:15.184234 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 23 23:56:15.184268 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 23 23:56:15.184300 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 23 23:56:15.184332 systemd[1]: Mounting media.mount - External Media Directory... Jan 23 23:56:15.184364 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 23 23:56:15.184402 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 23 23:56:15.184434 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 23 23:56:15.184465 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 23 23:56:15.184495 systemd[1]: Reached target machines.target - Containers. Jan 23 23:56:15.184617 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 23 23:56:15.185111 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 23 23:56:15.185148 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 23 23:56:15.185178 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 23 23:56:15.185229 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 23 23:56:15.185260 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 23 23:56:15.185291 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 23 23:56:15.185323 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 23 23:56:15.185357 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... 
Jan 23 23:56:15.185387 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 23 23:56:15.185420 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 23 23:56:15.185450 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 23 23:56:15.185484 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 23 23:56:15.185514 systemd[1]: Stopped systemd-fsck-usr.service. Jan 23 23:56:15.185546 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 23 23:56:15.185574 kernel: fuse: init (API version 7.39) Jan 23 23:56:15.185606 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 23 23:56:15.185636 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 23 23:56:15.185666 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 23 23:56:15.185696 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 23 23:56:15.185728 systemd[1]: verity-setup.service: Deactivated successfully. Jan 23 23:56:15.185757 systemd[1]: Stopped verity-setup.service. Jan 23 23:56:15.185791 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 23 23:56:15.185821 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 23 23:56:15.185853 systemd[1]: Mounted media.mount - External Media Directory. Jan 23 23:56:15.185883 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 23 23:56:15.185912 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 23 23:56:15.185947 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 23 23:56:15.185978 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 23 23:56:15.189177 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 23 23:56:15.189237 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 23 23:56:15.189268 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 23 23:56:15.189299 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 23 23:56:15.189329 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 23 23:56:15.189363 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 23 23:56:15.189402 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 23 23:56:15.189432 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 23 23:56:15.189462 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 23 23:56:15.189492 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 23 23:56:15.189522 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 23 23:56:15.189555 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 23 23:56:15.189592 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 23 23:56:15.189623 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 23 23:56:15.189653 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 23 23:56:15.189729 systemd-journald[1583]: Collecting audit messages is disabled. 
Jan 23 23:56:15.189788 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jan 23 23:56:15.189823 kernel: loop: module loaded Jan 23 23:56:15.189854 kernel: ACPI: bus type drm_connector registered Jan 23 23:56:15.189883 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 23 23:56:15.189913 systemd-journald[1583]: Journal started Jan 23 23:56:15.189964 systemd-journald[1583]: Runtime Journal (/run/log/journal/ec2ba0574a90fd77e10a8b38051f68b2) is 8.0M, max 75.3M, 67.3M free. Jan 23 23:56:14.514953 systemd[1]: Queued start job for default target multi-user.target. Jan 23 23:56:14.540938 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. Jan 23 23:56:14.541699 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 23 23:56:15.211056 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 23 23:56:15.211140 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 23 23:56:15.223097 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 23 23:56:15.229986 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 23 23:56:15.239542 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 23 23:56:15.265171 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 23 23:56:15.265262 systemd[1]: Started systemd-journald.service - Journal Service. Jan 23 23:56:15.271405 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 23 23:56:15.273500 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 23 23:56:15.276619 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 23 23:56:15.276918 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 23 23:56:15.280102 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 23 23:56:15.283018 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 23 23:56:15.286715 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 23 23:56:15.292131 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 23 23:56:15.369079 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 23 23:56:15.375147 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 23 23:56:15.384078 kernel: loop0: detected capacity change from 0 to 207008 Jan 23 23:56:15.392729 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 23 23:56:15.406745 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 23 23:56:15.421283 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jan 23 23:56:15.424376 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 23 23:56:15.429165 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 23 23:56:15.435885 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 23 23:56:15.468216 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. 
Jan 23 23:56:15.481443 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jan 23 23:56:15.518046 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 23 23:56:15.529309 systemd-journald[1583]: Time spent on flushing to /var/log/journal/ec2ba0574a90fd77e10a8b38051f68b2 is 113.203ms for 912 entries. Jan 23 23:56:15.529309 systemd-journald[1583]: System Journal (/var/log/journal/ec2ba0574a90fd77e10a8b38051f68b2) is 8.0M, max 195.6M, 187.6M free. Jan 23 23:56:15.673290 systemd-journald[1583]: Received client request to flush runtime journal. Jan 23 23:56:15.673392 kernel: loop1: detected capacity change from 0 to 114432 Jan 23 23:56:15.673428 kernel: loop2: detected capacity change from 0 to 114328 Jan 23 23:56:15.539471 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 23 23:56:15.541916 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jan 23 23:56:15.568095 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 23 23:56:15.581972 udevadm[1639]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Jan 23 23:56:15.679725 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 23 23:56:15.687286 kernel: loop3: detected capacity change from 0 to 52536 Jan 23 23:56:15.688212 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 23 23:56:15.705422 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 23 23:56:15.793980 systemd-tmpfiles[1651]: ACLs are not supported, ignoring. Jan 23 23:56:15.794071 systemd-tmpfiles[1651]: ACLs are not supported, ignoring. Jan 23 23:56:15.799183 kernel: loop4: detected capacity change from 0 to 207008 Jan 23 23:56:15.816578 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 23 23:56:15.844724 kernel: loop5: detected capacity change from 0 to 114432 Jan 23 23:56:15.872046 kernel: loop6: detected capacity change from 0 to 114328 Jan 23 23:56:15.900050 kernel: loop7: detected capacity change from 0 to 52536 Jan 23 23:56:15.924504 (sd-merge)[1654]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'. Jan 23 23:56:15.925521 (sd-merge)[1654]: Merged extensions into '/usr'. Jan 23 23:56:15.941443 systemd[1]: Reloading requested from client PID 1608 ('systemd-sysext') (unit systemd-sysext.service)... Jan 23 23:56:15.941655 systemd[1]: Reloading... Jan 23 23:56:16.161219 zram_generator::config[1680]: No configuration found. Jan 23 23:56:16.270055 ldconfig[1601]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 23 23:56:16.462803 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 23 23:56:16.580196 systemd[1]: Reloading finished in 636 ms. Jan 23 23:56:16.624057 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 23 23:56:16.627516 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 23 23:56:16.630902 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 23 23:56:16.648365 systemd[1]: Starting ensure-sysext.service... 
Jan 23 23:56:16.659638 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 23 23:56:16.668379 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 23 23:56:16.693290 systemd[1]: Reloading requested from client PID 1734 ('systemctl') (unit ensure-sysext.service)... Jan 23 23:56:16.693455 systemd[1]: Reloading... Jan 23 23:56:16.712479 systemd-tmpfiles[1735]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 23 23:56:16.713191 systemd-tmpfiles[1735]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 23 23:56:16.719932 systemd-tmpfiles[1735]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 23 23:56:16.722766 systemd-tmpfiles[1735]: ACLs are not supported, ignoring. Jan 23 23:56:16.723039 systemd-tmpfiles[1735]: ACLs are not supported, ignoring. Jan 23 23:56:16.744542 systemd-tmpfiles[1735]: Detected autofs mount point /boot during canonicalization of boot. Jan 23 23:56:16.744837 systemd-tmpfiles[1735]: Skipping /boot Jan 23 23:56:16.761268 systemd-udevd[1736]: Using default interface naming scheme 'v255'. Jan 23 23:56:16.776300 systemd-tmpfiles[1735]: Detected autofs mount point /boot during canonicalization of boot. Jan 23 23:56:16.776320 systemd-tmpfiles[1735]: Skipping /boot Jan 23 23:56:16.881115 zram_generator::config[1768]: No configuration found. Jan 23 23:56:17.038719 (udev-worker)[1791]: Network interface NamePolicy= disabled on kernel command line. Jan 23 23:56:17.203534 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 23 23:56:17.366240 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jan 23 23:56:17.367837 systemd[1]: Reloading finished in 673 ms. Jan 23 23:56:17.413657 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 34 scanned by (udev-worker) (1833) Jan 23 23:56:17.417260 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 23 23:56:17.446207 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 23 23:56:17.512117 systemd[1]: Finished ensure-sysext.service. Jan 23 23:56:17.532347 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 23 23:56:17.539543 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 23 23:56:17.542664 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 23 23:56:17.547357 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 23 23:56:17.553735 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 23 23:56:17.562346 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 23 23:56:17.568447 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 23 23:56:17.571263 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 23 23:56:17.581334 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... 
Jan 23 23:56:17.588905 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 23 23:56:17.603912 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 23 23:56:17.606424 systemd[1]: Reached target time-set.target - System Time Set. Jan 23 23:56:17.614311 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 23 23:56:17.621722 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 23 23:56:17.688116 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 23 23:56:17.690326 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 23 23:56:17.730377 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 23 23:56:17.793845 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 23 23:56:17.794682 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 23 23:56:17.799087 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 23 23:56:17.825157 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 23 23:56:17.825481 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 23 23:56:17.835732 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 23 23:56:17.839693 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 23 23:56:17.841117 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 23 23:56:17.847835 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 23 23:56:17.847975 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 23 23:56:17.848930 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 23 23:56:17.866753 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 23 23:56:17.887456 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 23 23:56:17.909973 augenrules[1966]: No rules Jan 23 23:56:17.915164 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 23 23:56:17.925326 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 23 23:56:17.935400 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Jan 23 23:56:17.941427 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 23 23:56:17.955297 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 23 23:56:17.961335 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 23 23:56:17.964251 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 23 23:56:18.028953 lvm[1974]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 23 23:56:18.032734 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 23 23:56:18.080997 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
Jan 23 23:56:18.096629 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 23 23:56:18.100187 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 23 23:56:18.113496 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 23 23:56:18.134664 systemd-networkd[1917]: lo: Link UP Jan 23 23:56:18.134673 systemd-networkd[1917]: lo: Gained carrier Jan 23 23:56:18.139586 systemd-networkd[1917]: Enumeration completed Jan 23 23:56:18.139784 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 23 23:56:18.144738 systemd-networkd[1917]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 23 23:56:18.144762 systemd-networkd[1917]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 23 23:56:18.150791 lvm[1988]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 23 23:56:18.153350 systemd-networkd[1917]: eth0: Link UP Jan 23 23:56:18.153749 systemd-networkd[1917]: eth0: Gained carrier Jan 23 23:56:18.153783 systemd-networkd[1917]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 23 23:56:18.154298 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 23 23:56:18.176122 systemd-networkd[1917]: eth0: DHCPv4 address 172.31.30.184/20, gateway 172.31.16.1 acquired from 172.31.16.1 Jan 23 23:56:18.201881 systemd-resolved[1920]: Positive Trust Anchors: Jan 23 23:56:18.201921 systemd-resolved[1920]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 23 23:56:18.201985 systemd-resolved[1920]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 23 23:56:18.208098 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 23 23:56:18.220111 systemd-resolved[1920]: Defaulting to hostname 'linux'. Jan 23 23:56:18.223602 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 23 23:56:18.226355 systemd[1]: Reached target network.target - Network. Jan 23 23:56:18.228402 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 23 23:56:18.231872 systemd[1]: Reached target sysinit.target - System Initialization. Jan 23 23:56:18.234500 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 23 23:56:18.237397 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 23 23:56:18.240531 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 23 23:56:18.243930 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 23 23:56:18.246848 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. 
Jan 23 23:56:18.249735 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 23 23:56:18.249792 systemd[1]: Reached target paths.target - Path Units. Jan 23 23:56:18.251922 systemd[1]: Reached target timers.target - Timer Units. Jan 23 23:56:18.255611 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 23 23:56:18.261236 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 23 23:56:18.275329 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 23 23:56:18.278792 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 23 23:56:18.281750 systemd[1]: Reached target sockets.target - Socket Units. Jan 23 23:56:18.284017 systemd[1]: Reached target basic.target - Basic System. Jan 23 23:56:18.286212 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 23 23:56:18.286270 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 23 23:56:18.288987 systemd[1]: Starting containerd.service - containerd container runtime... Jan 23 23:56:18.303157 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 23 23:56:18.309342 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 23 23:56:18.316287 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 23 23:56:18.328404 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 23 23:56:18.331198 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 23 23:56:18.340321 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 23 23:56:18.349357 systemd[1]: Started ntpd.service - Network Time Service. Jan 23 23:56:18.358246 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 23 23:56:18.365525 systemd[1]: Starting setup-oem.service - Setup OEM... Jan 23 23:56:18.371263 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 23 23:56:18.384339 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 23 23:56:18.397100 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 23 23:56:18.401138 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 23 23:56:18.403310 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 23 23:56:18.407063 systemd[1]: Starting update-engine.service - Update Engine... Jan 23 23:56:18.415921 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 23 23:56:18.466649 jq[1997]: false Jan 23 23:56:18.463289 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 23 23:56:18.466155 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 23 23:56:18.483279 dbus-daemon[1996]: [system] SELinux support is enabled Jan 23 23:56:18.485370 systemd[1]: Started dbus.service - D-Bus System Message Bus. 
Jan 23 23:56:18.505048 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 23 23:56:18.505102 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 23 23:56:18.508181 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 23 23:56:18.537689 jq[2009]: true Jan 23 23:56:18.508217 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 23 23:56:18.515867 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 23 23:56:18.518126 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 23 23:56:18.546518 dbus-daemon[1996]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1917 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Jan 23 23:56:18.562931 dbus-daemon[1996]: [system] Successfully activated service 'org.freedesktop.systemd1' Jan 23 23:56:18.580264 ntpd[2000]: ntpd 4.2.8p17@1.4004-o Fri Jan 23 21:53:23 UTC 2026 (1): Starting Jan 23 23:56:18.582547 ntpd[2000]: 23 Jan 23:56:18 ntpd[2000]: ntpd 4.2.8p17@1.4004-o Fri Jan 23 21:53:23 UTC 2026 (1): Starting Jan 23 23:56:18.582547 ntpd[2000]: 23 Jan 23:56:18 ntpd[2000]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jan 23 23:56:18.582547 ntpd[2000]: 23 Jan 23:56:18 ntpd[2000]: ---------------------------------------------------- Jan 23 23:56:18.582547 ntpd[2000]: 23 Jan 23:56:18 ntpd[2000]: ntp-4 is maintained by Network Time Foundation, Jan 23 23:56:18.582547 ntpd[2000]: 23 Jan 23:56:18 ntpd[2000]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jan 23 23:56:18.582547 ntpd[2000]: 23 Jan 23:56:18 ntpd[2000]: corporation. Support and training for ntp-4 are Jan 23 23:56:18.582547 ntpd[2000]: 23 Jan 23:56:18 ntpd[2000]: available at https://www.nwtime.org/support Jan 23 23:56:18.582547 ntpd[2000]: 23 Jan 23:56:18 ntpd[2000]: ---------------------------------------------------- Jan 23 23:56:18.580318 ntpd[2000]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jan 23 23:56:18.580340 ntpd[2000]: ---------------------------------------------------- Jan 23 23:56:18.580359 ntpd[2000]: ntp-4 is maintained by Network Time Foundation, Jan 23 23:56:18.580379 ntpd[2000]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jan 23 23:56:18.580397 ntpd[2000]: corporation. Support and training for ntp-4 are Jan 23 23:56:18.580416 ntpd[2000]: available at https://www.nwtime.org/support Jan 23 23:56:18.580434 ntpd[2000]: ---------------------------------------------------- Jan 23 23:56:18.589703 ntpd[2000]: proto: precision = 0.108 usec (-23) Jan 23 23:56:18.594265 ntpd[2000]: 23 Jan 23:56:18 ntpd[2000]: proto: precision = 0.108 usec (-23) Jan 23 23:56:18.594868 ntpd[2000]: basedate set to 2026-01-11 Jan 23 23:56:18.596162 ntpd[2000]: 23 Jan 23:56:18 ntpd[2000]: basedate set to 2026-01-11 Jan 23 23:56:18.596162 ntpd[2000]: 23 Jan 23:56:18 ntpd[2000]: gps base set to 2026-01-11 (week 2401) Jan 23 23:56:18.594905 ntpd[2000]: gps base set to 2026-01-11 (week 2401) Jan 23 23:56:18.598859 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... 
Jan 23 23:56:18.607740 ntpd[2000]: Listen and drop on 0 v6wildcard [::]:123 Jan 23 23:56:18.611173 ntpd[2000]: 23 Jan 23:56:18 ntpd[2000]: Listen and drop on 0 v6wildcard [::]:123 Jan 23 23:56:18.611173 ntpd[2000]: 23 Jan 23:56:18 ntpd[2000]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jan 23 23:56:18.611173 ntpd[2000]: 23 Jan 23:56:18 ntpd[2000]: Listen normally on 2 lo 127.0.0.1:123 Jan 23 23:56:18.611173 ntpd[2000]: 23 Jan 23:56:18 ntpd[2000]: Listen normally on 3 eth0 172.31.30.184:123 Jan 23 23:56:18.611173 ntpd[2000]: 23 Jan 23:56:18 ntpd[2000]: Listen normally on 4 lo [::1]:123 Jan 23 23:56:18.611173 ntpd[2000]: 23 Jan 23:56:18 ntpd[2000]: bind(21) AF_INET6 fe80::438:ff:fe0d:ef45%2#123 flags 0x11 failed: Cannot assign requested address Jan 23 23:56:18.611173 ntpd[2000]: 23 Jan 23:56:18 ntpd[2000]: unable to create socket on eth0 (5) for fe80::438:ff:fe0d:ef45%2#123 Jan 23 23:56:18.611173 ntpd[2000]: 23 Jan 23:56:18 ntpd[2000]: failed to init interface for address fe80::438:ff:fe0d:ef45%2 Jan 23 23:56:18.611173 ntpd[2000]: 23 Jan 23:56:18 ntpd[2000]: Listening on routing socket on fd #21 for interface updates Jan 23 23:56:18.607820 ntpd[2000]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jan 23 23:56:18.608132 ntpd[2000]: Listen normally on 2 lo 127.0.0.1:123 Jan 23 23:56:18.608197 ntpd[2000]: Listen normally on 3 eth0 172.31.30.184:123 Jan 23 23:56:18.608267 ntpd[2000]: Listen normally on 4 lo [::1]:123 Jan 23 23:56:18.608341 ntpd[2000]: bind(21) AF_INET6 fe80::438:ff:fe0d:ef45%2#123 flags 0x11 failed: Cannot assign requested address Jan 23 23:56:18.608379 ntpd[2000]: unable to create socket on eth0 (5) for fe80::438:ff:fe0d:ef45%2#123 Jan 23 23:56:18.608407 ntpd[2000]: failed to init interface for address fe80::438:ff:fe0d:ef45%2 Jan 23 23:56:18.608461 ntpd[2000]: Listening on routing socket on fd #21 for interface updates Jan 23 23:56:18.632087 ntpd[2000]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 23 23:56:18.632700 ntpd[2000]: 23 Jan 23:56:18 ntpd[2000]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 23 23:56:18.632700 ntpd[2000]: 23 Jan 23:56:18 ntpd[2000]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 23 23:56:18.632147 ntpd[2000]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 23 23:56:18.633845 jq[2028]: true Jan 23 23:56:18.637515 update_engine[2008]: I20260123 23:56:18.628989 2008 main.cc:92] Flatcar Update Engine starting Jan 23 23:56:18.658629 systemd[1]: motdgen.service: Deactivated successfully. Jan 23 23:56:18.661152 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 23 23:56:18.674352 systemd[1]: Started update-engine.service - Update Engine. Jan 23 23:56:18.683821 systemd[1]: Started locksmithd.service - Cluster reboot manager. 
Jan 23 23:56:18.689846 tar[2032]: linux-arm64/LICENSE Jan 23 23:56:18.689846 tar[2032]: linux-arm64/helm Jan 23 23:56:18.694367 update_engine[2008]: I20260123 23:56:18.689769 2008 update_check_scheduler.cc:74] Next update check in 4m47s Jan 23 23:56:18.694767 (ntainerd)[2034]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 23 23:56:18.706088 extend-filesystems[1998]: Found loop4 Jan 23 23:56:18.706088 extend-filesystems[1998]: Found loop5 Jan 23 23:56:18.706088 extend-filesystems[1998]: Found loop6 Jan 23 23:56:18.706088 extend-filesystems[1998]: Found loop7 Jan 23 23:56:18.706088 extend-filesystems[1998]: Found nvme0n1 Jan 23 23:56:18.706088 extend-filesystems[1998]: Found nvme0n1p1 Jan 23 23:56:18.706088 extend-filesystems[1998]: Found nvme0n1p2 Jan 23 23:56:18.717061 systemd[1]: Finished setup-oem.service - Setup OEM. Jan 23 23:56:18.733442 extend-filesystems[1998]: Found nvme0n1p3 Jan 23 23:56:18.733442 extend-filesystems[1998]: Found usr Jan 23 23:56:18.733442 extend-filesystems[1998]: Found nvme0n1p4 Jan 23 23:56:18.733442 extend-filesystems[1998]: Found nvme0n1p6 Jan 23 23:56:18.733442 extend-filesystems[1998]: Found nvme0n1p7 Jan 23 23:56:18.733442 extend-filesystems[1998]: Found nvme0n1p9 Jan 23 23:56:18.733442 extend-filesystems[1998]: Checking size of /dev/nvme0n1p9 Jan 23 23:56:18.788766 coreos-metadata[1995]: Jan 23 23:56:18.788 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Jan 23 23:56:18.793717 coreos-metadata[1995]: Jan 23 23:56:18.793 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Jan 23 23:56:18.795283 coreos-metadata[1995]: Jan 23 23:56:18.795 INFO Fetch successful Jan 23 23:56:18.795283 coreos-metadata[1995]: Jan 23 23:56:18.795 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Jan 23 23:56:18.799333 coreos-metadata[1995]: Jan 23 23:56:18.799 INFO Fetch successful Jan 23 23:56:18.799333 coreos-metadata[1995]: Jan 23 23:56:18.799 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Jan 23 23:56:18.804429 coreos-metadata[1995]: Jan 23 23:56:18.804 INFO Fetch successful Jan 23 23:56:18.804429 coreos-metadata[1995]: Jan 23 23:56:18.804 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Jan 23 23:56:18.808348 coreos-metadata[1995]: Jan 23 23:56:18.808 INFO Fetch successful Jan 23 23:56:18.808348 coreos-metadata[1995]: Jan 23 23:56:18.808 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Jan 23 23:56:18.811055 coreos-metadata[1995]: Jan 23 23:56:18.810 INFO Fetch failed with 404: resource not found Jan 23 23:56:18.811607 coreos-metadata[1995]: Jan 23 23:56:18.811 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Jan 23 23:56:18.813064 coreos-metadata[1995]: Jan 23 23:56:18.812 INFO Fetch successful Jan 23 23:56:18.814082 coreos-metadata[1995]: Jan 23 23:56:18.813 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Jan 23 23:56:18.820756 coreos-metadata[1995]: Jan 23 23:56:18.820 INFO Fetch successful Jan 23 23:56:18.826883 extend-filesystems[1998]: Resized partition /dev/nvme0n1p9 Jan 23 23:56:18.829640 coreos-metadata[1995]: Jan 23 23:56:18.826 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Jan 23 23:56:18.829640 coreos-metadata[1995]: Jan 23 23:56:18.826 INFO Fetch successful Jan 23 23:56:18.829640 
coreos-metadata[1995]: Jan 23 23:56:18.826 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Jan 23 23:56:18.829640 coreos-metadata[1995]: Jan 23 23:56:18.829 INFO Fetch successful Jan 23 23:56:18.829640 coreos-metadata[1995]: Jan 23 23:56:18.829 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Jan 23 23:56:18.830861 coreos-metadata[1995]: Jan 23 23:56:18.830 INFO Fetch successful Jan 23 23:56:18.838814 extend-filesystems[2058]: resize2fs 1.47.1 (20-May-2024) Jan 23 23:56:18.849046 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 3587067 blocks Jan 23 23:56:19.052798 bash[2072]: Updated "/home/core/.ssh/authorized_keys" Jan 23 23:56:19.094187 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 34 scanned by (udev-worker) (1791) Jan 23 23:56:19.094272 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 3587067 Jan 23 23:56:19.130135 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 23 23:56:19.152041 extend-filesystems[2058]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Jan 23 23:56:19.152041 extend-filesystems[2058]: old_desc_blocks = 1, new_desc_blocks = 2 Jan 23 23:56:19.152041 extend-filesystems[2058]: The filesystem on /dev/nvme0n1p9 is now 3587067 (4k) blocks long. Jan 23 23:56:19.163140 extend-filesystems[1998]: Resized filesystem in /dev/nvme0n1p9 Jan 23 23:56:19.172350 systemd[1]: Starting sshkeys.service... Jan 23 23:56:19.174734 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 23 23:56:19.175715 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 23 23:56:19.195189 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 23 23:56:19.198882 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 23 23:56:19.204085 systemd-logind[2007]: Watching system buttons on /dev/input/event0 (Power Button) Jan 23 23:56:19.204168 systemd-logind[2007]: Watching system buttons on /dev/input/event1 (Sleep Button) Jan 23 23:56:19.208455 systemd-logind[2007]: New seat seat0. Jan 23 23:56:19.230736 dbus-daemon[1996]: [system] Successfully activated service 'org.freedesktop.hostname1' Jan 23 23:56:19.231619 dbus-daemon[1996]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.5' (uid=0 pid=2035 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Jan 23 23:56:19.238172 systemd[1]: Started systemd-logind.service - User Login Management. Jan 23 23:56:19.242194 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Jan 23 23:56:19.254504 systemd[1]: Starting polkit.service - Authorization Manager... Jan 23 23:56:19.290742 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jan 23 23:56:19.301545 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... 
Jan 23 23:56:19.329223 polkitd[2101]: Started polkitd version 121 Jan 23 23:56:19.349960 polkitd[2101]: Loading rules from directory /etc/polkit-1/rules.d Jan 23 23:56:19.350780 polkitd[2101]: Loading rules from directory /usr/share/polkit-1/rules.d Jan 23 23:56:19.352754 polkitd[2101]: Finished loading, compiling and executing 2 rules Jan 23 23:56:19.358624 dbus-daemon[1996]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Jan 23 23:56:19.358933 systemd[1]: Started polkit.service - Authorization Manager. Jan 23 23:56:19.364131 systemd-networkd[1917]: eth0: Gained IPv6LL Jan 23 23:56:19.369937 polkitd[2101]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Jan 23 23:56:19.377914 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 23 23:56:19.385947 systemd[1]: Reached target network-online.target - Network is Online. Jan 23 23:56:19.394264 locksmithd[2045]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 23 23:56:19.399757 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. Jan 23 23:56:19.412568 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 23:56:19.437702 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 23 23:56:19.498074 systemd-hostnamed[2035]: Hostname set to (transient) Jan 23 23:56:19.506176 systemd-resolved[1920]: System hostname changed to 'ip-172-31-30-184'. Jan 23 23:56:19.528630 containerd[2034]: time="2026-01-23T23:56:19.528477418Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jan 23 23:56:19.603841 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 23 23:56:19.644789 coreos-metadata[2102]: Jan 23 23:56:19.644 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Jan 23 23:56:19.653077 coreos-metadata[2102]: Jan 23 23:56:19.649 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Jan 23 23:56:19.654117 coreos-metadata[2102]: Jan 23 23:56:19.653 INFO Fetch successful Jan 23 23:56:19.654117 coreos-metadata[2102]: Jan 23 23:56:19.653 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Jan 23 23:56:19.656628 coreos-metadata[2102]: Jan 23 23:56:19.655 INFO Fetch successful Jan 23 23:56:19.664249 unknown[2102]: wrote ssh authorized keys file for user: core Jan 23 23:56:19.682029 amazon-ssm-agent[2125]: Initializing new seelog logger Jan 23 23:56:19.689054 amazon-ssm-agent[2125]: New Seelog Logger Creation Complete Jan 23 23:56:19.689054 amazon-ssm-agent[2125]: 2026/01/23 23:56:19 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 23 23:56:19.689054 amazon-ssm-agent[2125]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 23 23:56:19.689054 amazon-ssm-agent[2125]: 2026/01/23 23:56:19 processing appconfig overrides Jan 23 23:56:19.689054 amazon-ssm-agent[2125]: 2026/01/23 23:56:19 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 23 23:56:19.689054 amazon-ssm-agent[2125]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 23 23:56:19.689054 amazon-ssm-agent[2125]: 2026/01/23 23:56:19 processing appconfig overrides Jan 23 23:56:19.689054 amazon-ssm-agent[2125]: 2026/01/23 23:56:19 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 23 23:56:19.689054 amazon-ssm-agent[2125]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. 
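The coreos-metadata-sshkeys unit started above fetches public-keys/0/openssh-key from IMDS and rewrites the core user's authorized_keys ("wrote ssh authorized keys file for user: core"). A sketch of the file-handling half, assuming the permissions sshd insists on (0700 directory, 0600 file); the helper name is mine:

```python
import os

def write_authorized_keys(home, keys):
    """Atomically rewrite ~/.ssh/authorized_keys with sshd-friendly modes."""
    ssh_dir = os.path.join(home, ".ssh")
    os.makedirs(ssh_dir, mode=0o700, exist_ok=True)
    path = os.path.join(ssh_dir, "authorized_keys")
    tmp = path + ".tmp"
    # create the temp file with mode 0600 from the start
    fd = os.open(tmp, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o600)
    with os.fdopen(fd, "w") as f:
        f.write("\n".join(keys) + "\n")
    os.rename(tmp, path)  # atomic replace on the same filesystem

# e.g. write_authorized_keys("/home/core", ["ssh-rsa AAAA... core@host"])
```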
Jan 23 23:56:19.689054 amazon-ssm-agent[2125]: 2026/01/23 23:56:19 processing appconfig overrides Jan 23 23:56:19.694755 amazon-ssm-agent[2125]: 2026-01-23 23:56:19 INFO Proxy environment variables: Jan 23 23:56:19.712438 containerd[2034]: time="2026-01-23T23:56:19.712070687Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 23 23:56:19.712998 amazon-ssm-agent[2125]: 2026/01/23 23:56:19 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 23 23:56:19.715032 amazon-ssm-agent[2125]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 23 23:56:19.715032 amazon-ssm-agent[2125]: 2026/01/23 23:56:19 processing appconfig overrides Jan 23 23:56:19.719943 containerd[2034]: time="2026-01-23T23:56:19.719822651Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.119-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 23 23:56:19.720546 containerd[2034]: time="2026-01-23T23:56:19.720204827Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 23 23:56:19.720546 containerd[2034]: time="2026-01-23T23:56:19.720368675Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 23 23:56:19.723446 containerd[2034]: time="2026-01-23T23:56:19.723079727Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 23 23:56:19.723446 containerd[2034]: time="2026-01-23T23:56:19.723143015Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 23 23:56:19.723446 containerd[2034]: time="2026-01-23T23:56:19.723280895Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 23 23:56:19.723446 containerd[2034]: time="2026-01-23T23:56:19.723313163Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 23 23:56:19.726083 containerd[2034]: time="2026-01-23T23:56:19.724595039Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 23 23:56:19.726083 containerd[2034]: time="2026-01-23T23:56:19.724649579Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 23 23:56:19.726083 containerd[2034]: time="2026-01-23T23:56:19.724684055Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 23 23:56:19.726083 containerd[2034]: time="2026-01-23T23:56:19.724709867Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 23 23:56:19.730245 containerd[2034]: time="2026-01-23T23:56:19.730128911Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." 
type=io.containerd.snapshotter.v1 Jan 23 23:56:19.733394 containerd[2034]: time="2026-01-23T23:56:19.732485099Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 23 23:56:19.733394 containerd[2034]: time="2026-01-23T23:56:19.732761603Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 23 23:56:19.733394 containerd[2034]: time="2026-01-23T23:56:19.732794411Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 23 23:56:19.733394 containerd[2034]: time="2026-01-23T23:56:19.732966371Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 23 23:56:19.733394 containerd[2034]: time="2026-01-23T23:56:19.733092707Z" level=info msg="metadata content store policy set" policy=shared Jan 23 23:56:19.749068 update-ssh-keys[2177]: Updated "/home/core/.ssh/authorized_keys" Jan 23 23:56:19.746776 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jan 23 23:56:19.760112 systemd[1]: Finished sshkeys.service. Jan 23 23:56:19.771526 containerd[2034]: time="2026-01-23T23:56:19.770201171Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 23 23:56:19.771526 containerd[2034]: time="2026-01-23T23:56:19.770315699Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 23 23:56:19.771526 containerd[2034]: time="2026-01-23T23:56:19.770353547Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 23 23:56:19.771526 containerd[2034]: time="2026-01-23T23:56:19.770392955Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 23 23:56:19.771526 containerd[2034]: time="2026-01-23T23:56:19.770426819Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 23 23:56:19.771526 containerd[2034]: time="2026-01-23T23:56:19.770696447Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 23 23:56:19.773360 containerd[2034]: time="2026-01-23T23:56:19.771576035Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 23 23:56:19.773360 containerd[2034]: time="2026-01-23T23:56:19.771855935Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 23 23:56:19.773360 containerd[2034]: time="2026-01-23T23:56:19.771898967Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 23 23:56:19.773360 containerd[2034]: time="2026-01-23T23:56:19.771930959Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 23 23:56:19.773360 containerd[2034]: time="2026-01-23T23:56:19.771965711Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 23 23:56:19.778469 containerd[2034]: time="2026-01-23T23:56:19.771995555Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." 
type=io.containerd.service.v1 Jan 23 23:56:19.778469 containerd[2034]: time="2026-01-23T23:56:19.776127467Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 23 23:56:19.778469 containerd[2034]: time="2026-01-23T23:56:19.776182031Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 23 23:56:19.778469 containerd[2034]: time="2026-01-23T23:56:19.776230703Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 23 23:56:19.778469 containerd[2034]: time="2026-01-23T23:56:19.776279831Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 23 23:56:19.778469 containerd[2034]: time="2026-01-23T23:56:19.776313119Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 23 23:56:19.778469 containerd[2034]: time="2026-01-23T23:56:19.776344127Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 23 23:56:19.778469 containerd[2034]: time="2026-01-23T23:56:19.776387603Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 23 23:56:19.778469 containerd[2034]: time="2026-01-23T23:56:19.776421683Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 23 23:56:19.778469 containerd[2034]: time="2026-01-23T23:56:19.776474063Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 23 23:56:19.778469 containerd[2034]: time="2026-01-23T23:56:19.776529503Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 23 23:56:19.778469 containerd[2034]: time="2026-01-23T23:56:19.776577227Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 23 23:56:19.778469 containerd[2034]: time="2026-01-23T23:56:19.776610311Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 23 23:56:19.778469 containerd[2034]: time="2026-01-23T23:56:19.776639255Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 23 23:56:19.779452 containerd[2034]: time="2026-01-23T23:56:19.776669459Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 23 23:56:19.779452 containerd[2034]: time="2026-01-23T23:56:19.776705111Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 23 23:56:19.779452 containerd[2034]: time="2026-01-23T23:56:19.776750111Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 23 23:56:19.779452 containerd[2034]: time="2026-01-23T23:56:19.776779631Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 23 23:56:19.779452 containerd[2034]: time="2026-01-23T23:56:19.776809727Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 23 23:56:19.779452 containerd[2034]: time="2026-01-23T23:56:19.776840987Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." 
type=io.containerd.grpc.v1 Jan 23 23:56:19.779452 containerd[2034]: time="2026-01-23T23:56:19.776875727Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 23 23:56:19.779452 containerd[2034]: time="2026-01-23T23:56:19.776922779Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 23 23:56:19.779452 containerd[2034]: time="2026-01-23T23:56:19.776951855Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 23 23:56:19.779452 containerd[2034]: time="2026-01-23T23:56:19.776979179Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 23 23:56:19.779452 containerd[2034]: time="2026-01-23T23:56:19.777238163Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 23 23:56:19.779452 containerd[2034]: time="2026-01-23T23:56:19.777277643Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 23 23:56:19.779452 containerd[2034]: time="2026-01-23T23:56:19.777304379Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 23 23:56:19.779987 containerd[2034]: time="2026-01-23T23:56:19.777335819Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 23 23:56:19.779987 containerd[2034]: time="2026-01-23T23:56:19.777360155Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 23 23:56:19.779987 containerd[2034]: time="2026-01-23T23:56:19.777389795Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 23 23:56:19.779987 containerd[2034]: time="2026-01-23T23:56:19.777427127Z" level=info msg="NRI interface is disabled by configuration." Jan 23 23:56:19.779987 containerd[2034]: time="2026-01-23T23:56:19.777454271Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jan 23 23:56:19.789679 containerd[2034]: time="2026-01-23T23:56:19.777951059Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 23 23:56:19.789679 containerd[2034]: time="2026-01-23T23:56:19.786152891Z" level=info msg="Connect containerd service" Jan 23 23:56:19.789679 containerd[2034]: time="2026-01-23T23:56:19.786244415Z" level=info msg="using legacy CRI server" Jan 23 23:56:19.789679 containerd[2034]: time="2026-01-23T23:56:19.786264047Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 23 23:56:19.789679 containerd[2034]: time="2026-01-23T23:56:19.786446891Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 23 23:56:19.799244 containerd[2034]: time="2026-01-23T23:56:19.795264143Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 23 23:56:19.799244 
containerd[2034]: time="2026-01-23T23:56:19.795609287Z" level=info msg="Start subscribing containerd event" Jan 23 23:56:19.799244 containerd[2034]: time="2026-01-23T23:56:19.795695543Z" level=info msg="Start recovering state" Jan 23 23:56:19.799244 containerd[2034]: time="2026-01-23T23:56:19.795813407Z" level=info msg="Start event monitor" Jan 23 23:56:19.799244 containerd[2034]: time="2026-01-23T23:56:19.795837419Z" level=info msg="Start snapshots syncer" Jan 23 23:56:19.799244 containerd[2034]: time="2026-01-23T23:56:19.795860147Z" level=info msg="Start cni network conf syncer for default" Jan 23 23:56:19.799244 containerd[2034]: time="2026-01-23T23:56:19.795882743Z" level=info msg="Start streaming server" Jan 23 23:56:19.799244 containerd[2034]: time="2026-01-23T23:56:19.796132487Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 23 23:56:19.799244 containerd[2034]: time="2026-01-23T23:56:19.796221491Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 23 23:56:19.796450 systemd[1]: Started containerd.service - containerd container runtime. Jan 23 23:56:19.801702 containerd[2034]: time="2026-01-23T23:56:19.801641004Z" level=info msg="containerd successfully booted in 0.294364s" Jan 23 23:56:19.808678 amazon-ssm-agent[2125]: 2026-01-23 23:56:19 INFO https_proxy: Jan 23 23:56:19.912087 amazon-ssm-agent[2125]: 2026-01-23 23:56:19 INFO http_proxy: Jan 23 23:56:20.011301 amazon-ssm-agent[2125]: 2026-01-23 23:56:19 INFO no_proxy: Jan 23 23:56:20.110723 amazon-ssm-agent[2125]: 2026-01-23 23:56:19 INFO Checking if agent identity type OnPrem can be assumed Jan 23 23:56:20.211859 amazon-ssm-agent[2125]: 2026-01-23 23:56:19 INFO Checking if agent identity type EC2 can be assumed Jan 23 23:56:20.270216 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 23 23:56:20.312469 amazon-ssm-agent[2125]: 2026-01-23 23:56:19 INFO Agent will take identity from EC2 Jan 23 23:56:20.410481 amazon-ssm-agent[2125]: 2026-01-23 23:56:19 INFO [amazon-ssm-agent] using named pipe channel for IPC Jan 23 23:56:20.512099 amazon-ssm-agent[2125]: 2026-01-23 23:56:19 INFO [amazon-ssm-agent] using named pipe channel for IPC Jan 23 23:56:20.609459 amazon-ssm-agent[2125]: 2026-01-23 23:56:19 INFO [amazon-ssm-agent] using named pipe channel for IPC Jan 23 23:56:20.710109 amazon-ssm-agent[2125]: 2026-01-23 23:56:19 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0 Jan 23 23:56:20.811615 amazon-ssm-agent[2125]: 2026-01-23 23:56:19 INFO [amazon-ssm-agent] OS: linux, Arch: arm64 Jan 23 23:56:20.911900 amazon-ssm-agent[2125]: 2026-01-23 23:56:19 INFO [amazon-ssm-agent] Starting Core Agent Jan 23 23:56:20.966315 tar[2032]: linux-arm64/README.md Jan 23 23:56:21.012604 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 23 23:56:21.016675 amazon-ssm-agent[2125]: 2026-01-23 23:56:19 INFO [amazon-ssm-agent] registrar detected. Attempting registration Jan 23 23:56:21.118156 amazon-ssm-agent[2125]: 2026-01-23 23:56:19 INFO [Registrar] Starting registrar module Jan 23 23:56:21.165906 sshd_keygen[2036]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 23 23:56:21.219030 amazon-ssm-agent[2125]: 2026-01-23 23:56:19 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration Jan 23 23:56:21.227122 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 23 23:56:21.242646 systemd[1]: Starting issuegen.service - Generate /run/issue... 
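containerd reports itself serving on /run/containerd/containerd.sock (gRPC) plus the companion ttrpc socket. The Python stdlib has no gRPC client, but a bare AF_UNIX connect is enough to verify the daemon is accepting connections; a minimal liveness-probe sketch:

```python
import socket

def unix_socket_alive(path="/run/containerd/containerd.sock", timeout=1.0):
    """True if something is accepting connections on the Unix socket."""
    s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    s.settimeout(timeout)
    try:
        s.connect(path)  # typically requires root, matching the socket's mode
        return True
    except OSError:
        return False
    finally:
        s.close()

print("containerd up:", unix_socket_alive())
```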
Jan 23 23:56:21.256141 systemd[1]: Started sshd@0-172.31.30.184:22-4.153.228.146:47908.service - OpenSSH per-connection server daemon (4.153.228.146:47908). Jan 23 23:56:21.300289 systemd[1]: issuegen.service: Deactivated successfully. Jan 23 23:56:21.300893 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 23 23:56:21.311735 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 23 23:56:21.344816 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 23 23:56:21.358188 amazon-ssm-agent[2125]: 2026-01-23 23:56:21 INFO [EC2Identity] EC2 registration was successful. Jan 23 23:56:21.358815 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 23 23:56:21.367650 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 23 23:56:21.370905 systemd[1]: Reached target getty.target - Login Prompts. Jan 23 23:56:21.422244 amazon-ssm-agent[2125]: 2026-01-23 23:56:21 INFO [CredentialRefresher] credentialRefresher has started Jan 23 23:56:21.422244 amazon-ssm-agent[2125]: 2026-01-23 23:56:21 INFO [CredentialRefresher] Starting credentials refresher loop Jan 23 23:56:21.422244 amazon-ssm-agent[2125]: 2026-01-23 23:56:21 INFO EC2RoleProvider Successfully connected with instance profile role credentials Jan 23 23:56:21.458659 amazon-ssm-agent[2125]: 2026-01-23 23:56:21 INFO [CredentialRefresher] Next credential rotation will be in 30.2749924074 minutes Jan 23 23:56:21.581407 ntpd[2000]: Listen normally on 6 eth0 [fe80::438:ff:fe0d:ef45%2]:123 Jan 23 23:56:21.582662 ntpd[2000]: 23 Jan 23:56:21 ntpd[2000]: Listen normally on 6 eth0 [fe80::438:ff:fe0d:ef45%2]:123 Jan 23 23:56:21.830943 sshd[2233]: Accepted publickey for core from 4.153.228.146 port 47908 ssh2: RSA SHA256:5AacvNrSqCkKxn2Zg0NJQDqu4zeDYsnVAvlYY2DAYUI Jan 23 23:56:21.833990 sshd[2233]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:56:21.854935 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 23 23:56:21.868144 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 23 23:56:21.877875 systemd-logind[2007]: New session 1 of user core. Jan 23 23:56:21.899125 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 23 23:56:21.911687 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 23 23:56:21.930123 (systemd)[2244]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 23 23:56:22.135374 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 23:56:22.144161 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 23 23:56:22.156933 (kubelet)[2255]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 23 23:56:22.173860 systemd[2244]: Queued start job for default target default.target. Jan 23 23:56:22.179211 systemd[2244]: Created slice app.slice - User Application Slice. Jan 23 23:56:22.179280 systemd[2244]: Reached target paths.target - Paths. Jan 23 23:56:22.179313 systemd[2244]: Reached target timers.target - Timers. Jan 23 23:56:22.181956 systemd[2244]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 23 23:56:22.212855 systemd[2244]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 23 23:56:22.213121 systemd[2244]: Reached target sockets.target - Sockets. 
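The sshd records above ("Accepted publickey for core from 4.153.228.146 port 47908 ssh2: RSA SHA256:...") have a fixed shape, so the user, source address, port, and key fingerprint can be pulled out with one regex. A sketch over the line exactly as it appears in this log; the group names are mine:

```python
import re

ACCEPTED = re.compile(
    r"Accepted publickey for (?P<user>\S+) from (?P<addr>\S+) "
    r"port (?P<port>\d+) ssh2: (?P<keytype>\S+) (?P<fingerprint>\S+)"
)

line = ("sshd[2233]: Accepted publickey for core from 4.153.228.146 "
        "port 47908 ssh2: RSA SHA256:5AacvNrSqCkKxn2Zg0NJQDqu4zeDYsnVAvlYY2DAYUI")
m = ACCEPTED.search(line)
if m:
    print(m.group("user"), m.group("addr"), m.group("port"),
          m.group("fingerprint"))
```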
Jan 23 23:56:22.213154 systemd[2244]: Reached target basic.target - Basic System. Jan 23 23:56:22.213250 systemd[2244]: Reached target default.target - Main User Target. Jan 23 23:56:22.213315 systemd[2244]: Startup finished in 265ms. Jan 23 23:56:22.214213 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 23 23:56:22.228283 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 23 23:56:22.234174 systemd[1]: Startup finished in 1.176s (kernel) + 8.664s (initrd) + 8.806s (userspace) = 18.647s. Jan 23 23:56:22.452645 amazon-ssm-agent[2125]: 2026-01-23 23:56:22 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Jan 23 23:56:22.555747 amazon-ssm-agent[2125]: 2026-01-23 23:56:22 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2268) started Jan 23 23:56:22.627714 systemd[1]: Started sshd@1-172.31.30.184:22-4.153.228.146:47912.service - OpenSSH per-connection server daemon (4.153.228.146:47912). Jan 23 23:56:22.655080 amazon-ssm-agent[2125]: 2026-01-23 23:56:22 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Jan 23 23:56:23.135800 sshd[2276]: Accepted publickey for core from 4.153.228.146 port 47912 ssh2: RSA SHA256:5AacvNrSqCkKxn2Zg0NJQDqu4zeDYsnVAvlYY2DAYUI Jan 23 23:56:23.138258 sshd[2276]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:56:23.146482 systemd-logind[2007]: New session 2 of user core. Jan 23 23:56:23.154292 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 23 23:56:23.317658 kubelet[2255]: E0123 23:56:23.317588 2255 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 23 23:56:23.322549 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 23 23:56:23.323169 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 23 23:56:23.324083 systemd[1]: kubelet.service: Consumed 1.378s CPU time. Jan 23 23:56:23.494840 sshd[2276]: pam_unix(sshd:session): session closed for user core Jan 23 23:56:23.501172 systemd[1]: sshd@1-172.31.30.184:22-4.153.228.146:47912.service: Deactivated successfully. Jan 23 23:56:23.501388 systemd-logind[2007]: Session 2 logged out. Waiting for processes to exit. Jan 23 23:56:23.505169 systemd[1]: session-2.scope: Deactivated successfully. Jan 23 23:56:23.508283 systemd-logind[2007]: Removed session 2. Jan 23 23:56:23.597431 systemd[1]: Started sshd@2-172.31.30.184:22-4.153.228.146:47924.service - OpenSSH per-connection server daemon (4.153.228.146:47924). Jan 23 23:56:24.136894 sshd[2288]: Accepted publickey for core from 4.153.228.146 port 47924 ssh2: RSA SHA256:5AacvNrSqCkKxn2Zg0NJQDqu4zeDYsnVAvlYY2DAYUI Jan 23 23:56:24.140110 sshd[2288]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:56:24.149317 systemd-logind[2007]: New session 3 of user core. Jan 23 23:56:24.157280 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 23 23:56:24.507542 sshd[2288]: pam_unix(sshd:session): session closed for user core Jan 23 23:56:24.514409 systemd[1]: sshd@2-172.31.30.184:22-4.153.228.146:47924.service: Deactivated successfully. 
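The kubelet failure above is the usual pre-kubeadm state: the unit starts, finds no /var/lib/kubelet/config.yaml (kubeadm writes that file during init/join), exits, and systemd restarts it on a timer. A preflight sketch that makes the dependency explicit before starting the unit; it checks presence and readability only, not the KubeletConfiguration schema, and the helper name is mine:

```python
import os
import sys

CONFIG = "/var/lib/kubelet/config.yaml"

def kubelet_config_ready(path=CONFIG):
    if not os.path.isfile(path):
        # the condition kubelet reports: open ...: no such file or directory
        return False, f"{path}: no such file or directory"
    try:
        with open(path) as f:
            head = f.read(256)
    except OSError as e:
        return False, str(e)
    return bool(head.strip()), "ok" if head.strip() else "file is empty"

ok, why = kubelet_config_ready()
print("ready" if ok else "not ready", "-", why)
sys.exit(0 if ok else 1)
```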
Jan 23 23:56:24.519360 systemd[1]: session-3.scope: Deactivated successfully. Jan 23 23:56:24.521288 systemd-logind[2007]: Session 3 logged out. Waiting for processes to exit. Jan 23 23:56:24.523343 systemd-logind[2007]: Removed session 3. Jan 23 23:56:24.603102 systemd[1]: Started sshd@3-172.31.30.184:22-4.153.228.146:58476.service - OpenSSH per-connection server daemon (4.153.228.146:58476). Jan 23 23:56:25.093141 sshd[2295]: Accepted publickey for core from 4.153.228.146 port 58476 ssh2: RSA SHA256:5AacvNrSqCkKxn2Zg0NJQDqu4zeDYsnVAvlYY2DAYUI Jan 23 23:56:25.095731 sshd[2295]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:56:25.103274 systemd-logind[2007]: New session 4 of user core. Jan 23 23:56:25.115328 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 23 23:56:25.448876 sshd[2295]: pam_unix(sshd:session): session closed for user core Jan 23 23:56:25.454495 systemd[1]: sshd@3-172.31.30.184:22-4.153.228.146:58476.service: Deactivated successfully. Jan 23 23:56:25.455426 systemd-logind[2007]: Session 4 logged out. Waiting for processes to exit. Jan 23 23:56:25.458516 systemd[1]: session-4.scope: Deactivated successfully. Jan 23 23:56:25.462222 systemd-logind[2007]: Removed session 4. Jan 23 23:56:25.558543 systemd[1]: Started sshd@4-172.31.30.184:22-4.153.228.146:58482.service - OpenSSH per-connection server daemon (4.153.228.146:58482). Jan 23 23:56:26.106669 sshd[2303]: Accepted publickey for core from 4.153.228.146 port 58482 ssh2: RSA SHA256:5AacvNrSqCkKxn2Zg0NJQDqu4zeDYsnVAvlYY2DAYUI Jan 23 23:56:26.109479 sshd[2303]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:56:26.119366 systemd-logind[2007]: New session 5 of user core. Jan 23 23:56:26.131281 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 23 23:56:26.427576 sudo[2306]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 23 23:56:26.428285 sudo[2306]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 23 23:56:26.446168 sudo[2306]: pam_unix(sudo:session): session closed for user root Jan 23 23:56:26.531555 sshd[2303]: pam_unix(sshd:session): session closed for user core Jan 23 23:56:26.537537 systemd-logind[2007]: Session 5 logged out. Waiting for processes to exit. Jan 23 23:56:26.538153 systemd[1]: sshd@4-172.31.30.184:22-4.153.228.146:58482.service: Deactivated successfully. Jan 23 23:56:26.541783 systemd[1]: session-5.scope: Deactivated successfully. Jan 23 23:56:26.545712 systemd-logind[2007]: Removed session 5. Jan 23 23:56:26.637543 systemd[1]: Started sshd@5-172.31.30.184:22-4.153.228.146:58498.service - OpenSSH per-connection server daemon (4.153.228.146:58498). Jan 23 23:56:27.172793 sshd[2311]: Accepted publickey for core from 4.153.228.146 port 58498 ssh2: RSA SHA256:5AacvNrSqCkKxn2Zg0NJQDqu4zeDYsnVAvlYY2DAYUI Jan 23 23:56:27.175530 sshd[2311]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:56:27.184322 systemd-logind[2007]: New session 6 of user core. Jan 23 23:56:27.191305 systemd[1]: Started session-6.scope - Session 6 of User core. 
Jan 23 23:56:27.474795 sudo[2315]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 23 23:56:27.475499 sudo[2315]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 23 23:56:27.481429 sudo[2315]: pam_unix(sudo:session): session closed for user root Jan 23 23:56:27.492273 sudo[2314]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jan 23 23:56:27.492937 sudo[2314]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 23 23:56:27.520850 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jan 23 23:56:27.523824 auditctl[2318]: No rules Jan 23 23:56:27.524545 systemd[1]: audit-rules.service: Deactivated successfully. Jan 23 23:56:27.524901 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jan 23 23:56:27.537855 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 23 23:56:27.580674 augenrules[2336]: No rules Jan 23 23:56:27.583548 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 23 23:56:27.585682 sudo[2314]: pam_unix(sudo:session): session closed for user root Jan 23 23:56:27.670932 sshd[2311]: pam_unix(sshd:session): session closed for user core Jan 23 23:56:27.678556 systemd[1]: sshd@5-172.31.30.184:22-4.153.228.146:58498.service: Deactivated successfully. Jan 23 23:56:27.682264 systemd[1]: session-6.scope: Deactivated successfully. Jan 23 23:56:27.683784 systemd-logind[2007]: Session 6 logged out. Waiting for processes to exit. Jan 23 23:56:27.685609 systemd-logind[2007]: Removed session 6. Jan 23 23:56:27.765657 systemd[1]: Started sshd@6-172.31.30.184:22-4.153.228.146:58506.service - OpenSSH per-connection server daemon (4.153.228.146:58506). Jan 23 23:56:28.304570 sshd[2344]: Accepted publickey for core from 4.153.228.146 port 58506 ssh2: RSA SHA256:5AacvNrSqCkKxn2Zg0NJQDqu4zeDYsnVAvlYY2DAYUI Jan 23 23:56:28.307162 sshd[2344]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:56:28.315634 systemd-logind[2007]: New session 7 of user core. Jan 23 23:56:28.324271 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 23 23:56:28.602645 sudo[2347]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 23 23:56:28.603329 sudo[2347]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 23 23:56:29.113499 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 23 23:56:29.115765 (dockerd)[2362]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 23 23:56:29.521299 dockerd[2362]: time="2026-01-23T23:56:29.521208776Z" level=info msg="Starting up" Jan 23 23:56:29.657910 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport1663965863-merged.mount: Deactivated successfully. Jan 23 23:56:29.711087 dockerd[2362]: time="2026-01-23T23:56:29.710974617Z" level=info msg="Loading containers: start." Jan 23 23:56:29.870263 kernel: Initializing XFRM netlink socket Jan 23 23:56:29.903552 (udev-worker)[2386]: Network interface NamePolicy= disabled on kernel command line. Jan 23 23:56:29.998732 systemd-networkd[1917]: docker0: Link UP Jan 23 23:56:30.029589 dockerd[2362]: time="2026-01-23T23:56:30.029439976Z" level=info msg="Loading containers: done." 
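Once dockerd logs "API listen on /run/docker.sock", the Engine API is reachable over that Unix socket. http.client cannot dial Unix sockets directly, but the documented /_ping route is simple enough to speak by hand; a stdlib-only sketch that expects HTTP 200 with body "OK":

```python
import socket

def docker_ping(sock_path="/run/docker.sock"):
    """Raw HTTP/1.1 GET /_ping against the Docker Engine API socket."""
    s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    s.settimeout(2.0)
    s.connect(sock_path)
    s.sendall(b"GET /_ping HTTP/1.1\r\nHost: docker\r\nConnection: close\r\n\r\n")
    data = b""
    while chunk := s.recv(4096):  # read until the daemon closes the connection
        data += chunk
    s.close()
    head, _, body = data.partition(b"\r\n\r\n")
    return head.split(b"\r\n")[0].decode(), body.decode()

print(docker_ping())  # e.g. ('HTTP/1.1 200 OK', 'OK')
```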
Jan 23 23:56:30.051637 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1339505080-merged.mount: Deactivated successfully. Jan 23 23:56:30.060995 dockerd[2362]: time="2026-01-23T23:56:30.060929842Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 23 23:56:30.061331 dockerd[2362]: time="2026-01-23T23:56:30.061110388Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Jan 23 23:56:30.061331 dockerd[2362]: time="2026-01-23T23:56:30.061315102Z" level=info msg="Daemon has completed initialization" Jan 23 23:56:30.136924 dockerd[2362]: time="2026-01-23T23:56:30.136271406Z" level=info msg="API listen on /run/docker.sock" Jan 23 23:56:30.138197 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 23 23:56:31.465935 containerd[2034]: time="2026-01-23T23:56:31.465874433Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.11\"" Jan 23 23:56:32.081858 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3088038256.mount: Deactivated successfully. Jan 23 23:56:33.515894 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 23 23:56:33.524054 containerd[2034]: time="2026-01-23T23:56:33.520166108Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:56:33.526311 containerd[2034]: time="2026-01-23T23:56:33.524449260Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.11: active requests=0, bytes read=26441982" Jan 23 23:56:33.526311 containerd[2034]: time="2026-01-23T23:56:33.525853458Z" level=info msg="ImageCreate event name:\"sha256:58951ea1a0b5de44646ea292c94b9350f33f22d147fccfd84bdc405eaabc442c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:56:33.524840 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 23:56:33.532197 containerd[2034]: time="2026-01-23T23:56:33.532140052Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:41eaecaed9af0ca8ab36d7794819c7df199e68c6c6ee0649114d713c495f8bd5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:56:33.534847 containerd[2034]: time="2026-01-23T23:56:33.534759989Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.11\" with image id \"sha256:58951ea1a0b5de44646ea292c94b9350f33f22d147fccfd84bdc405eaabc442c\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:41eaecaed9af0ca8ab36d7794819c7df199e68c6c6ee0649114d713c495f8bd5\", size \"26438581\" in 2.068820171s" Jan 23 23:56:33.534847 containerd[2034]: time="2026-01-23T23:56:33.534832529Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.11\" returns image reference \"sha256:58951ea1a0b5de44646ea292c94b9350f33f22d147fccfd84bdc405eaabc442c\"" Jan 23 23:56:33.535873 containerd[2034]: time="2026-01-23T23:56:33.535800069Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.11\"" Jan 23 23:56:33.890058 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
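Each pull record pairs a byte count with a wall-clock duration (kube-apiserver above: size "26438581" in 2.068820171s), so effective registry throughput falls straight out; the same arithmetic applies to the later pulls:

```python
# figures copied from the kube-apiserver pull record above
size_bytes = 26_438_581
seconds = 2.068820171
print(f"{size_bytes / seconds / 2**20:.1f} MiB/s")  # ~12.2 MiB/s
```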
Jan 23 23:56:33.906611 (kubelet)[2568]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 23 23:56:33.989665 kubelet[2568]: E0123 23:56:33.989571 2568 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 23 23:56:33.997252 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 23 23:56:33.997643 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 23 23:56:34.973068 containerd[2034]: time="2026-01-23T23:56:34.972676958Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:56:34.976054 containerd[2034]: time="2026-01-23T23:56:34.975949386Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.11: active requests=0, bytes read=22622086" Jan 23 23:56:34.979038 containerd[2034]: time="2026-01-23T23:56:34.977946224Z" level=info msg="ImageCreate event name:\"sha256:82766e5f2d560b930b7069c03ec1366dc8fdb4a490c3005266d2fdc4ca21c2fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:56:34.983390 containerd[2034]: time="2026-01-23T23:56:34.983319077Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:ce7b2ead5eef1a1554ef28b2b79596c6a8c6d506a87a7ab1381e77fe3d72f55f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:56:34.986591 containerd[2034]: time="2026-01-23T23:56:34.986509059Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.11\" with image id \"sha256:82766e5f2d560b930b7069c03ec1366dc8fdb4a490c3005266d2fdc4ca21c2fc\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.11\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:ce7b2ead5eef1a1554ef28b2b79596c6a8c6d506a87a7ab1381e77fe3d72f55f\", size \"24206567\" in 1.450647075s" Jan 23 23:56:34.986787 containerd[2034]: time="2026-01-23T23:56:34.986756082Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.11\" returns image reference \"sha256:82766e5f2d560b930b7069c03ec1366dc8fdb4a490c3005266d2fdc4ca21c2fc\"" Jan 23 23:56:34.988343 containerd[2034]: time="2026-01-23T23:56:34.988290869Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.11\"" Jan 23 23:56:36.150936 containerd[2034]: time="2026-01-23T23:56:36.150879125Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:56:36.154641 containerd[2034]: time="2026-01-23T23:56:36.154588115Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.11: active requests=0, bytes read=17616747" Jan 23 23:56:36.157205 containerd[2034]: time="2026-01-23T23:56:36.157135788Z" level=info msg="ImageCreate event name:\"sha256:cfa17ff3d66343f03eadbc235264b0615de49cc1f43da12cddba27d80c61f2c6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:56:36.164919 containerd[2034]: time="2026-01-23T23:56:36.163083512Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:b3039587bbe70e61a6aeaff56c21fdeeef104524a31f835bcc80887d40b8e6b2\" labels:{key:\"io.cri-containerd.image\" 
value:\"managed\"}" Jan 23 23:56:36.165609 containerd[2034]: time="2026-01-23T23:56:36.165557769Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.11\" with image id \"sha256:cfa17ff3d66343f03eadbc235264b0615de49cc1f43da12cddba27d80c61f2c6\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:b3039587bbe70e61a6aeaff56c21fdeeef104524a31f835bcc80887d40b8e6b2\", size \"19201246\" in 1.176994267s" Jan 23 23:56:36.165737 containerd[2034]: time="2026-01-23T23:56:36.165707868Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.11\" returns image reference \"sha256:cfa17ff3d66343f03eadbc235264b0615de49cc1f43da12cddba27d80c61f2c6\"" Jan 23 23:56:36.166620 containerd[2034]: time="2026-01-23T23:56:36.166579539Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.11\"" Jan 23 23:56:37.501771 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4073576983.mount: Deactivated successfully. Jan 23 23:56:38.125136 containerd[2034]: time="2026-01-23T23:56:38.124139184Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:56:38.126293 containerd[2034]: time="2026-01-23T23:56:38.126224374Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.11: active requests=0, bytes read=27558724" Jan 23 23:56:38.128976 containerd[2034]: time="2026-01-23T23:56:38.128904233Z" level=info msg="ImageCreate event name:\"sha256:dcdb790dc2bfe6e0b86f702c7f336a38eaef34f6370eb6ff68f4e5b03ed4d425\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:56:38.134751 containerd[2034]: time="2026-01-23T23:56:38.134671459Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:4204f9136c23a867929d32046032fe069b49ad94cf168042405e7d0ec88bdba9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:56:38.136420 containerd[2034]: time="2026-01-23T23:56:38.136061862Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.11\" with image id \"sha256:dcdb790dc2bfe6e0b86f702c7f336a38eaef34f6370eb6ff68f4e5b03ed4d425\", repo tag \"registry.k8s.io/kube-proxy:v1.32.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:4204f9136c23a867929d32046032fe069b49ad94cf168042405e7d0ec88bdba9\", size \"27557743\" in 1.969268327s" Jan 23 23:56:38.136420 containerd[2034]: time="2026-01-23T23:56:38.136122661Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.11\" returns image reference \"sha256:dcdb790dc2bfe6e0b86f702c7f336a38eaef34f6370eb6ff68f4e5b03ed4d425\"" Jan 23 23:56:38.137267 containerd[2034]: time="2026-01-23T23:56:38.136978424Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jan 23 23:56:38.694335 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount473880589.mount: Deactivated successfully. 
Jan 23 23:56:40.000046 containerd[2034]: time="2026-01-23T23:56:39.998788669Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:56:40.001787 containerd[2034]: time="2026-01-23T23:56:40.001738891Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951622" Jan 23 23:56:40.002313 containerd[2034]: time="2026-01-23T23:56:40.002275007Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:56:40.008501 containerd[2034]: time="2026-01-23T23:56:40.008431660Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:56:40.011315 containerd[2034]: time="2026-01-23T23:56:40.011256720Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.874200989s" Jan 23 23:56:40.011539 containerd[2034]: time="2026-01-23T23:56:40.011506493Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" Jan 23 23:56:40.013797 containerd[2034]: time="2026-01-23T23:56:40.013529348Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jan 23 23:56:40.466835 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2808147147.mount: Deactivated successfully. 
Jan 23 23:56:40.475083 containerd[2034]: time="2026-01-23T23:56:40.474997538Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:56:40.477678 containerd[2034]: time="2026-01-23T23:56:40.477544263Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268703" Jan 23 23:56:40.480027 containerd[2034]: time="2026-01-23T23:56:40.478624118Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:56:40.483493 containerd[2034]: time="2026-01-23T23:56:40.483447049Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:56:40.485531 containerd[2034]: time="2026-01-23T23:56:40.485437787Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 471.850931ms" Jan 23 23:56:40.485722 containerd[2034]: time="2026-01-23T23:56:40.485523174Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Jan 23 23:56:40.487579 containerd[2034]: time="2026-01-23T23:56:40.487535488Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Jan 23 23:56:41.093347 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount591523857.mount: Deactivated successfully. Jan 23 23:56:43.860202 containerd[2034]: time="2026-01-23T23:56:43.859307623Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:56:43.861723 containerd[2034]: time="2026-01-23T23:56:43.861479640Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=67943165" Jan 23 23:56:43.865301 containerd[2034]: time="2026-01-23T23:56:43.865231407Z" level=info msg="ImageCreate event name:\"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:56:43.877142 containerd[2034]: time="2026-01-23T23:56:43.877071100Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:56:43.881906 containerd[2034]: time="2026-01-23T23:56:43.881520524Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"67941650\" in 3.393628974s" Jan 23 23:56:43.881906 containerd[2034]: time="2026-01-23T23:56:43.881586797Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\"" Jan 23 23:56:44.015906 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. 
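The gap between each kubelet failure and the next "Scheduled restart job" record is systemd's restart delay at work; diffing the journal timestamps makes it visible. The unit's actual RestartSec is not shown in this log, so the 10 s figure is inferred from the spacing:

```python
from datetime import datetime

def ts(stamp):  # journal-style "Jan 23 23:56:33.997643"; year assumed
    return datetime.strptime("2026 " + stamp, "%Y %b %d %H:%M:%S.%f")

failed    = ts("Jan 23 23:56:33.997643")  # kubelet.service: Failed with result 'exit-code'.
restarted = ts("Jan 23 23:56:44.015906")  # Scheduled restart job, counter is at 2.
print((restarted - failed).total_seconds())  # ~10.02 -> consistent with RestartSec=10s
```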
Jan 23 23:56:44.027367 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 23:56:44.729486 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 23:56:44.733494 (kubelet)[2729]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 23 23:56:44.819073 kubelet[2729]: E0123 23:56:44.818163 2729 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 23 23:56:44.824323 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 23 23:56:44.824834 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 23 23:56:49.514561 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Jan 23 23:56:51.954524 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 23:56:51.966515 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 23:56:52.023253 systemd[1]: Reloading requested from client PID 2746 ('systemctl') (unit session-7.scope)... Jan 23 23:56:52.023287 systemd[1]: Reloading... Jan 23 23:56:52.247044 zram_generator::config[2786]: No configuration found. Jan 23 23:56:52.496989 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 23 23:56:52.670405 systemd[1]: Reloading finished in 646 ms. Jan 23 23:56:52.764996 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 23 23:56:52.765276 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 23 23:56:52.765830 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 23:56:52.773663 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 23:56:53.105852 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 23:56:53.123708 (kubelet)[2850]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 23 23:56:53.198420 kubelet[2850]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 23 23:56:53.200030 kubelet[2850]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 23 23:56:53.200030 kubelet[2850]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
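The docker.socket warning during the reload ("updating /var/run/docker.sock → /run/docker.sock") is plain path normalization: on current systemd layouts /var/run is a symlink to /run, so both names reach the same socket. Resolving the symlink shows the same rewrite (assumes a host with that symlink):

```python
from pathlib import Path

print(Path("/var/run").is_symlink())           # True on current systemd layouts
print(Path("/var/run/docker.sock").resolve())  # -> /run/docker.sock
```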
Jan 23 23:56:53.200030 kubelet[2850]: I0123 23:56:53.199073 2850 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 23 23:56:53.578063 kubelet[2850]: I0123 23:56:53.577738 2850 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jan 23 23:56:53.578063 kubelet[2850]: I0123 23:56:53.577783 2850 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 23 23:56:53.578328 kubelet[2850]: I0123 23:56:53.578287 2850 server.go:954] "Client rotation is on, will bootstrap in background" Jan 23 23:56:53.626348 kubelet[2850]: I0123 23:56:53.626098 2850 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 23 23:56:53.626525 kubelet[2850]: E0123 23:56:53.626473 2850 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.31.30.184:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.30.184:6443: connect: connection refused" logger="UnhandledError" Jan 23 23:56:53.643039 kubelet[2850]: E0123 23:56:53.642320 2850 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 23 23:56:53.643039 kubelet[2850]: I0123 23:56:53.642370 2850 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 23 23:56:53.647711 kubelet[2850]: I0123 23:56:53.647625 2850 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 23 23:56:53.649382 kubelet[2850]: I0123 23:56:53.649295 2850 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 23 23:56:53.649672 kubelet[2850]: I0123 23:56:53.649370 2850 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-30-184","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 23 23:56:53.649851 kubelet[2850]: I0123 23:56:53.649823 2850 topology_manager.go:138] "Creating topology manager with none policy" Jan 23 23:56:53.649922 kubelet[2850]: I0123 23:56:53.649853 2850 container_manager_linux.go:304] "Creating device plugin manager" Jan 23 23:56:53.650351 kubelet[2850]: I0123 23:56:53.650292 2850 state_mem.go:36] "Initialized new in-memory state store" Jan 23 23:56:53.656952 kubelet[2850]: I0123 23:56:53.656889 2850 kubelet.go:446] "Attempting to sync node with API server" Jan 23 23:56:53.657186 kubelet[2850]: I0123 23:56:53.657142 2850 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 23 23:56:53.657258 kubelet[2850]: I0123 23:56:53.657192 2850 kubelet.go:352] "Adding apiserver pod source" Jan 23 23:56:53.657258 kubelet[2850]: I0123 23:56:53.657214 2850 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 23 23:56:53.661855 kubelet[2850]: W0123 23:56:53.660115 2850 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.30.184:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-30-184&limit=500&resourceVersion=0": dial tcp 172.31.30.184:6443: connect: connection refused Jan 23 23:56:53.661855 kubelet[2850]: E0123 23:56:53.660214 2850 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.30.184:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-30-184&limit=500&resourceVersion=0\": dial tcp 172.31.30.184:6443: connect: connection refused" logger="UnhandledError" Jan 23 23:56:53.664776 kubelet[2850]: I0123 
23:56:53.664737 2850 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 23 23:56:53.666107 kubelet[2850]: I0123 23:56:53.666075 2850 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 23 23:56:53.666494 kubelet[2850]: W0123 23:56:53.666473 2850 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 23 23:56:53.668298 kubelet[2850]: I0123 23:56:53.668266 2850 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 23 23:56:53.668461 kubelet[2850]: I0123 23:56:53.668444 2850 server.go:1287] "Started kubelet" Jan 23 23:56:53.668829 kubelet[2850]: W0123 23:56:53.668749 2850 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.30.184:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.30.184:6443: connect: connection refused Jan 23 23:56:53.668997 kubelet[2850]: E0123 23:56:53.668961 2850 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.30.184:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.30.184:6443: connect: connection refused" logger="UnhandledError" Jan 23 23:56:53.679186 kubelet[2850]: I0123 23:56:53.679150 2850 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 23 23:56:53.687633 kubelet[2850]: I0123 23:56:53.687120 2850 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jan 23 23:56:53.688882 kubelet[2850]: I0123 23:56:53.688809 2850 server.go:479] "Adding debug handlers to kubelet server" Jan 23 23:56:53.690202 kubelet[2850]: I0123 23:56:53.690166 2850 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 23 23:56:53.691908 kubelet[2850]: E0123 23:56:53.690891 2850 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-30-184\" not found" Jan 23 23:56:53.691908 kubelet[2850]: I0123 23:56:53.691432 2850 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 23 23:56:53.691908 kubelet[2850]: I0123 23:56:53.691517 2850 reconciler.go:26] "Reconciler: start to sync state" Jan 23 23:56:53.692317 kubelet[2850]: I0123 23:56:53.692223 2850 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 23 23:56:53.692700 kubelet[2850]: I0123 23:56:53.692648 2850 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 23 23:56:53.693307 kubelet[2850]: I0123 23:56:53.693211 2850 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 23 23:56:53.695598 kubelet[2850]: E0123 23:56:53.695093 2850 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.30.184:6443/api/v1/namespaces/default/events\": dial tcp 172.31.30.184:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-30-184.188d817f99d89620 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-30-184,UID:ip-172-31-30-184,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting 
kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-30-184,},FirstTimestamp:2026-01-23 23:56:53.668410912 +0000 UTC m=+0.538463678,LastTimestamp:2026-01-23 23:56:53.668410912 +0000 UTC m=+0.538463678,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-30-184,}" Jan 23 23:56:53.695831 kubelet[2850]: W0123 23:56:53.695739 2850 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.30.184:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.30.184:6443: connect: connection refused Jan 23 23:56:53.695831 kubelet[2850]: E0123 23:56:53.695822 2850 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.30.184:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.30.184:6443: connect: connection refused" logger="UnhandledError" Jan 23 23:56:53.696045 kubelet[2850]: E0123 23:56:53.695949 2850 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.30.184:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-30-184?timeout=10s\": dial tcp 172.31.30.184:6443: connect: connection refused" interval="200ms" Jan 23 23:56:53.699746 kubelet[2850]: E0123 23:56:53.699595 2850 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 23 23:56:53.702096 kubelet[2850]: I0123 23:56:53.701194 2850 factory.go:221] Registration of the containerd container factory successfully Jan 23 23:56:53.702096 kubelet[2850]: I0123 23:56:53.701229 2850 factory.go:221] Registration of the systemd container factory successfully Jan 23 23:56:53.702096 kubelet[2850]: I0123 23:56:53.701397 2850 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 23 23:56:53.733455 kubelet[2850]: I0123 23:56:53.733397 2850 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 23 23:56:53.740317 kubelet[2850]: I0123 23:56:53.740253 2850 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 23 23:56:53.740317 kubelet[2850]: I0123 23:56:53.740309 2850 status_manager.go:227] "Starting to sync pod status with apiserver" Jan 23 23:56:53.740509 kubelet[2850]: I0123 23:56:53.740347 2850 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
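Every client-go reflector above, along with the bootstrap certificate signing request and the event write, fails with "connect: connection refused" against https://172.31.30.184:6443 because the static kube-apiserver pod has not been started yet. A small stdlib probe of that endpoint (address copied from the log) that reports the same condition while the control plane is still coming up; just a sketch, not how the kubelet itself checks reachability.

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    // Control-plane endpoint as it appears in the log entries above.
    const apiServerAddr = "172.31.30.184:6443"

    func main() {
        conn, err := net.DialTimeout("tcp", apiServerAddr, 2*time.Second)
        if err != nil {
            // While the kube-apiserver static pod is still starting, this prints
            // the same "connect: connection refused" the reflectors log.
            fmt.Println("apiserver not reachable yet:", err)
            return
        }
        conn.Close()
        fmt.Println("apiserver port is accepting connections")
    }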
Jan 23 23:56:53.740509 kubelet[2850]: I0123 23:56:53.740363 2850 kubelet.go:2382] "Starting kubelet main sync loop" Jan 23 23:56:53.740509 kubelet[2850]: E0123 23:56:53.740440 2850 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 23 23:56:53.753111 kubelet[2850]: I0123 23:56:53.752335 2850 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 23 23:56:53.753111 kubelet[2850]: I0123 23:56:53.752379 2850 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 23 23:56:53.753111 kubelet[2850]: I0123 23:56:53.752415 2850 state_mem.go:36] "Initialized new in-memory state store" Jan 23 23:56:53.755777 kubelet[2850]: W0123 23:56:53.755651 2850 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.30.184:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.30.184:6443: connect: connection refused Jan 23 23:56:53.755777 kubelet[2850]: E0123 23:56:53.755767 2850 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.30.184:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.30.184:6443: connect: connection refused" logger="UnhandledError" Jan 23 23:56:53.758609 kubelet[2850]: I0123 23:56:53.758560 2850 policy_none.go:49] "None policy: Start" Jan 23 23:56:53.758700 kubelet[2850]: I0123 23:56:53.758616 2850 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 23 23:56:53.758700 kubelet[2850]: I0123 23:56:53.758642 2850 state_mem.go:35] "Initializing new in-memory state store" Jan 23 23:56:53.771316 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 23 23:56:53.790945 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 23 23:56:53.791897 kubelet[2850]: E0123 23:56:53.791854 2850 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-30-184\" not found" Jan 23 23:56:53.799159 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 23 23:56:53.809959 kubelet[2850]: I0123 23:56:53.809911 2850 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 23 23:56:53.810584 kubelet[2850]: I0123 23:56:53.810554 2850 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 23 23:56:53.810698 kubelet[2850]: I0123 23:56:53.810587 2850 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 23 23:56:53.811349 kubelet[2850]: I0123 23:56:53.811061 2850 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 23 23:56:53.814665 kubelet[2850]: E0123 23:56:53.814476 2850 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jan 23 23:56:53.814665 kubelet[2850]: E0123 23:56:53.814543 2850 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-30-184\" not found" Jan 23 23:56:53.859665 systemd[1]: Created slice kubepods-burstable-poda3cc6defa2cf33786d1bba0d9787e0c4.slice - libcontainer container kubepods-burstable-poda3cc6defa2cf33786d1bba0d9787e0c4.slice. 
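The eviction manager that starts its control loop above enforces the HardEvictionThresholds dumped with the container-manager config earlier in this log: memory.available < 100Mi, nodefs.available < 10%, nodefs.inodesFree < 5%, imagefs.available < 15%, imagefs.inodesFree < 5%. A rough sketch of how absolute and percentage thresholds of that shape compare against observed usage; the sample usage figures below are invented for illustration, and the real kubelet works from cAdvisor summary stats with richer quantity types.

    package main

    import "fmt"

    // threshold mirrors the two forms seen in HardEvictionThresholds above:
    // an absolute quantity (e.g. memory.available: 100Mi) or a percentage of
    // capacity (e.g. nodefs.available: 10%).
    type threshold struct {
        signal     string
        minAbs     float64 // bytes; 0 when the threshold is percentage-based
        minPercent float64 // fraction of capacity; 0 when absolute
    }

    func underPressure(t threshold, observed, capacity float64) bool {
        if t.minAbs > 0 {
            return observed < t.minAbs
        }
        return observed/capacity < t.minPercent
    }

    func main() {
        checks := []struct {
            t                  threshold
            observed, capacity float64 // sample values, not from this node
        }{
            {threshold{"memory.available", 100 << 20, 0}, 512 << 20, 4 << 30},
            {threshold{"nodefs.available", 0, 0.10}, 2 << 30, 30 << 30},
            {threshold{"imagefs.available", 0, 0.15}, 10 << 30, 30 << 30},
        }
        for _, c := range checks {
            fmt.Printf("%-20s pressure=%v\n", c.t.signal, underPressure(c.t, c.observed, c.capacity))
        }
    }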
Jan 23 23:56:53.878707 kubelet[2850]: E0123 23:56:53.878615 2850 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-30-184\" not found" node="ip-172-31-30-184" Jan 23 23:56:53.886615 systemd[1]: Created slice kubepods-burstable-podf8630126ce9b770b7d2ac9364c34d383.slice - libcontainer container kubepods-burstable-podf8630126ce9b770b7d2ac9364c34d383.slice. Jan 23 23:56:53.891591 kubelet[2850]: E0123 23:56:53.891548 2850 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-30-184\" not found" node="ip-172-31-30-184" Jan 23 23:56:53.897837 kubelet[2850]: E0123 23:56:53.897744 2850 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.30.184:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-30-184?timeout=10s\": dial tcp 172.31.30.184:6443: connect: connection refused" interval="400ms" Jan 23 23:56:53.900075 systemd[1]: Created slice kubepods-burstable-pod7acbdb4f8310fc79033a7af81f0b4769.slice - libcontainer container kubepods-burstable-pod7acbdb4f8310fc79033a7af81f0b4769.slice. Jan 23 23:56:53.904068 kubelet[2850]: E0123 23:56:53.903860 2850 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-30-184\" not found" node="ip-172-31-30-184" Jan 23 23:56:53.914129 kubelet[2850]: I0123 23:56:53.914080 2850 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-30-184" Jan 23 23:56:53.914820 kubelet[2850]: E0123 23:56:53.914752 2850 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.30.184:6443/api/v1/nodes\": dial tcp 172.31.30.184:6443: connect: connection refused" node="ip-172-31-30-184" Jan 23 23:56:53.992397 kubelet[2850]: I0123 23:56:53.992319 2850 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a3cc6defa2cf33786d1bba0d9787e0c4-ca-certs\") pod \"kube-apiserver-ip-172-31-30-184\" (UID: \"a3cc6defa2cf33786d1bba0d9787e0c4\") " pod="kube-system/kube-apiserver-ip-172-31-30-184" Jan 23 23:56:53.992397 kubelet[2850]: I0123 23:56:53.992392 2850 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a3cc6defa2cf33786d1bba0d9787e0c4-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-30-184\" (UID: \"a3cc6defa2cf33786d1bba0d9787e0c4\") " pod="kube-system/kube-apiserver-ip-172-31-30-184" Jan 23 23:56:53.992596 kubelet[2850]: I0123 23:56:53.992436 2850 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/f8630126ce9b770b7d2ac9364c34d383-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-30-184\" (UID: \"f8630126ce9b770b7d2ac9364c34d383\") " pod="kube-system/kube-controller-manager-ip-172-31-30-184" Jan 23 23:56:53.992596 kubelet[2850]: I0123 23:56:53.992471 2850 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f8630126ce9b770b7d2ac9364c34d383-kubeconfig\") pod \"kube-controller-manager-ip-172-31-30-184\" (UID: \"f8630126ce9b770b7d2ac9364c34d383\") " pod="kube-system/kube-controller-manager-ip-172-31-30-184" Jan 23 23:56:53.992596 kubelet[2850]: I0123 23:56:53.992509 
2850 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f8630126ce9b770b7d2ac9364c34d383-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-30-184\" (UID: \"f8630126ce9b770b7d2ac9364c34d383\") " pod="kube-system/kube-controller-manager-ip-172-31-30-184" Jan 23 23:56:53.992596 kubelet[2850]: I0123 23:56:53.992545 2850 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7acbdb4f8310fc79033a7af81f0b4769-kubeconfig\") pod \"kube-scheduler-ip-172-31-30-184\" (UID: \"7acbdb4f8310fc79033a7af81f0b4769\") " pod="kube-system/kube-scheduler-ip-172-31-30-184" Jan 23 23:56:53.992596 kubelet[2850]: I0123 23:56:53.992578 2850 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f8630126ce9b770b7d2ac9364c34d383-ca-certs\") pod \"kube-controller-manager-ip-172-31-30-184\" (UID: \"f8630126ce9b770b7d2ac9364c34d383\") " pod="kube-system/kube-controller-manager-ip-172-31-30-184" Jan 23 23:56:53.992847 kubelet[2850]: I0123 23:56:53.992612 2850 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f8630126ce9b770b7d2ac9364c34d383-k8s-certs\") pod \"kube-controller-manager-ip-172-31-30-184\" (UID: \"f8630126ce9b770b7d2ac9364c34d383\") " pod="kube-system/kube-controller-manager-ip-172-31-30-184" Jan 23 23:56:53.992847 kubelet[2850]: I0123 23:56:53.992649 2850 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a3cc6defa2cf33786d1bba0d9787e0c4-k8s-certs\") pod \"kube-apiserver-ip-172-31-30-184\" (UID: \"a3cc6defa2cf33786d1bba0d9787e0c4\") " pod="kube-system/kube-apiserver-ip-172-31-30-184" Jan 23 23:56:54.117820 kubelet[2850]: I0123 23:56:54.117679 2850 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-30-184" Jan 23 23:56:54.118273 kubelet[2850]: E0123 23:56:54.118203 2850 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.30.184:6443/api/v1/nodes\": dial tcp 172.31.30.184:6443: connect: connection refused" node="ip-172-31-30-184" Jan 23 23:56:54.181469 containerd[2034]: time="2026-01-23T23:56:54.181096422Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-30-184,Uid:a3cc6defa2cf33786d1bba0d9787e0c4,Namespace:kube-system,Attempt:0,}" Jan 23 23:56:54.193078 containerd[2034]: time="2026-01-23T23:56:54.192813221Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-30-184,Uid:f8630126ce9b770b7d2ac9364c34d383,Namespace:kube-system,Attempt:0,}" Jan 23 23:56:54.205682 containerd[2034]: time="2026-01-23T23:56:54.205609937Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-30-184,Uid:7acbdb4f8310fc79033a7af81f0b4769,Namespace:kube-system,Attempt:0,}" Jan 23 23:56:54.299084 kubelet[2850]: E0123 23:56:54.298956 2850 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.30.184:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-30-184?timeout=10s\": dial tcp 172.31.30.184:6443: connect: connection refused" interval="800ms" Jan 23 23:56:54.521877 kubelet[2850]: I0123 23:56:54.521408 2850 
kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-30-184" Jan 23 23:56:54.522354 kubelet[2850]: E0123 23:56:54.522277 2850 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.30.184:6443/api/v1/nodes\": dial tcp 172.31.30.184:6443: connect: connection refused" node="ip-172-31-30-184" Jan 23 23:56:54.695180 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3034666355.mount: Deactivated successfully. Jan 23 23:56:54.706085 containerd[2034]: time="2026-01-23T23:56:54.705997869Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 23 23:56:54.713381 containerd[2034]: time="2026-01-23T23:56:54.713313629Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173" Jan 23 23:56:54.714575 kubelet[2850]: W0123 23:56:54.714431 2850 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.30.184:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.30.184:6443: connect: connection refused Jan 23 23:56:54.714575 kubelet[2850]: E0123 23:56:54.714520 2850 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.30.184:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.30.184:6443: connect: connection refused" logger="UnhandledError" Jan 23 23:56:54.715247 containerd[2034]: time="2026-01-23T23:56:54.715181787Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 23 23:56:54.718386 containerd[2034]: time="2026-01-23T23:56:54.718319867Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 23 23:56:54.720974 containerd[2034]: time="2026-01-23T23:56:54.720909225Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 23 23:56:54.723324 containerd[2034]: time="2026-01-23T23:56:54.723269437Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 23 23:56:54.724695 containerd[2034]: time="2026-01-23T23:56:54.724638649Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 23 23:56:54.727038 kubelet[2850]: W0123 23:56:54.726802 2850 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.30.184:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-30-184&limit=500&resourceVersion=0": dial tcp 172.31.30.184:6443: connect: connection refused Jan 23 23:56:54.727038 kubelet[2850]: E0123 23:56:54.726961 2850 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.30.184:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-30-184&limit=500&resourceVersion=0\": dial tcp 172.31.30.184:6443: connect: 
connection refused" logger="UnhandledError" Jan 23 23:56:54.728844 containerd[2034]: time="2026-01-23T23:56:54.728658630Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 23 23:56:54.733286 containerd[2034]: time="2026-01-23T23:56:54.733223035Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 552.014837ms" Jan 23 23:56:54.738977 containerd[2034]: time="2026-01-23T23:56:54.738893001Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 545.961184ms" Jan 23 23:56:54.748052 containerd[2034]: time="2026-01-23T23:56:54.747567468Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 541.820363ms" Jan 23 23:56:54.802255 kubelet[2850]: W0123 23:56:54.802045 2850 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.30.184:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.30.184:6443: connect: connection refused Jan 23 23:56:54.802255 kubelet[2850]: E0123 23:56:54.802113 2850 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.30.184:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.30.184:6443: connect: connection refused" logger="UnhandledError" Jan 23 23:56:54.947427 containerd[2034]: time="2026-01-23T23:56:54.947246651Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 23 23:56:54.947427 containerd[2034]: time="2026-01-23T23:56:54.947348618Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 23 23:56:54.947923 containerd[2034]: time="2026-01-23T23:56:54.947414639Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 23 23:56:54.947923 containerd[2034]: time="2026-01-23T23:56:54.947614815Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 23 23:56:54.950932 containerd[2034]: time="2026-01-23T23:56:54.950694198Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 23 23:56:54.950932 containerd[2034]: time="2026-01-23T23:56:54.950780401Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 23 23:56:54.951842 containerd[2034]: time="2026-01-23T23:56:54.951356725Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 23 23:56:54.952487 containerd[2034]: time="2026-01-23T23:56:54.952336006Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 23 23:56:54.954633 containerd[2034]: time="2026-01-23T23:56:54.953414001Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 23 23:56:54.954633 containerd[2034]: time="2026-01-23T23:56:54.953563884Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 23 23:56:54.954633 containerd[2034]: time="2026-01-23T23:56:54.953605845Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 23 23:56:54.954633 containerd[2034]: time="2026-01-23T23:56:54.953852028Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 23 23:56:55.006412 systemd[1]: Started cri-containerd-1e4ca058ddcb5229531710960239502550f624d015ee79d504ae346d441552f7.scope - libcontainer container 1e4ca058ddcb5229531710960239502550f624d015ee79d504ae346d441552f7. Jan 23 23:56:55.011417 systemd[1]: Started cri-containerd-5682a7b0fac1f0c560e91959fae60823b53813b25f286e22495a3a7b1cea6046.scope - libcontainer container 5682a7b0fac1f0c560e91959fae60823b53813b25f286e22495a3a7b1cea6046. Jan 23 23:56:55.021501 systemd[1]: Started cri-containerd-6114e646b06ef10949d53fc031965cdb5f386a4c407a2c8fbbf0ce6f07e3bb94.scope - libcontainer container 6114e646b06ef10949d53fc031965cdb5f386a4c407a2c8fbbf0ce6f07e3bb94. 
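Each "Pulled image registry.k8s.io/pause:3.8" entry above ends with a Go-formatted duration: 552.014837ms, 545.961184ms, 541.820363ms, one per sandbox. Those strings parse directly with time.ParseDuration, which is convenient when summarizing pull latency out of logs like this one; a small sketch using the three values copied from the entries above.

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // Durations as printed by containerd in the entries above.
        pulls := []string{"552.014837ms", "545.961184ms", "541.820363ms"}

        var total time.Duration
        for i, raw := range pulls {
            d, err := time.ParseDuration(raw)
            if err != nil {
                panic(err)
            }
            total += d
            fmt.Printf("pull %d: %v\n", i+1, d)
        }
        fmt.Printf("mean: %v\n", total/time.Duration(len(pulls)))
    }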
Jan 23 23:56:55.101384 kubelet[2850]: E0123 23:56:55.100476 2850 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.30.184:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-30-184?timeout=10s\": dial tcp 172.31.30.184:6443: connect: connection refused" interval="1.6s" Jan 23 23:56:55.124411 containerd[2034]: time="2026-01-23T23:56:55.123278372Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-30-184,Uid:f8630126ce9b770b7d2ac9364c34d383,Namespace:kube-system,Attempt:0,} returns sandbox id \"6114e646b06ef10949d53fc031965cdb5f386a4c407a2c8fbbf0ce6f07e3bb94\"" Jan 23 23:56:55.140055 containerd[2034]: time="2026-01-23T23:56:55.138255833Z" level=info msg="CreateContainer within sandbox \"6114e646b06ef10949d53fc031965cdb5f386a4c407a2c8fbbf0ce6f07e3bb94\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 23 23:56:55.159678 containerd[2034]: time="2026-01-23T23:56:55.159624112Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-30-184,Uid:a3cc6defa2cf33786d1bba0d9787e0c4,Namespace:kube-system,Attempt:0,} returns sandbox id \"1e4ca058ddcb5229531710960239502550f624d015ee79d504ae346d441552f7\"" Jan 23 23:56:55.168842 containerd[2034]: time="2026-01-23T23:56:55.168789973Z" level=info msg="CreateContainer within sandbox \"1e4ca058ddcb5229531710960239502550f624d015ee79d504ae346d441552f7\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 23 23:56:55.171264 containerd[2034]: time="2026-01-23T23:56:55.171208173Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-30-184,Uid:7acbdb4f8310fc79033a7af81f0b4769,Namespace:kube-system,Attempt:0,} returns sandbox id \"5682a7b0fac1f0c560e91959fae60823b53813b25f286e22495a3a7b1cea6046\"" Jan 23 23:56:55.178900 containerd[2034]: time="2026-01-23T23:56:55.178841204Z" level=info msg="CreateContainer within sandbox \"5682a7b0fac1f0c560e91959fae60823b53813b25f286e22495a3a7b1cea6046\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 23 23:56:55.188730 containerd[2034]: time="2026-01-23T23:56:55.188519120Z" level=info msg="CreateContainer within sandbox \"6114e646b06ef10949d53fc031965cdb5f386a4c407a2c8fbbf0ce6f07e3bb94\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"263147a498887f38e84adeec4e87ea97ff7c18268b388c396a35ff8a3f9de6aa\"" Jan 23 23:56:55.190719 containerd[2034]: time="2026-01-23T23:56:55.190646175Z" level=info msg="StartContainer for \"263147a498887f38e84adeec4e87ea97ff7c18268b388c396a35ff8a3f9de6aa\"" Jan 23 23:56:55.221744 containerd[2034]: time="2026-01-23T23:56:55.221659763Z" level=info msg="CreateContainer within sandbox \"1e4ca058ddcb5229531710960239502550f624d015ee79d504ae346d441552f7\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"6b30b430dea499af4e8e106639b3cebd41600abd52fc1026632bf98cb90bc394\"" Jan 23 23:56:55.222726 containerd[2034]: time="2026-01-23T23:56:55.222466362Z" level=info msg="StartContainer for \"6b30b430dea499af4e8e106639b3cebd41600abd52fc1026632bf98cb90bc394\"" Jan 23 23:56:55.228797 kubelet[2850]: W0123 23:56:55.228617 2850 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.30.184:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.30.184:6443: connect: connection refused Jan 23 23:56:55.228797 kubelet[2850]: E0123 23:56:55.228754 
2850 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.30.184:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.30.184:6443: connect: connection refused" logger="UnhandledError" Jan 23 23:56:55.232978 containerd[2034]: time="2026-01-23T23:56:55.232805893Z" level=info msg="CreateContainer within sandbox \"5682a7b0fac1f0c560e91959fae60823b53813b25f286e22495a3a7b1cea6046\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"d65d3f550defcb81906162ed696cc4fdee7f90376cc573a6439161d3a260ffa6\"" Jan 23 23:56:55.234277 containerd[2034]: time="2026-01-23T23:56:55.234231161Z" level=info msg="StartContainer for \"d65d3f550defcb81906162ed696cc4fdee7f90376cc573a6439161d3a260ffa6\"" Jan 23 23:56:55.259696 systemd[1]: Started cri-containerd-263147a498887f38e84adeec4e87ea97ff7c18268b388c396a35ff8a3f9de6aa.scope - libcontainer container 263147a498887f38e84adeec4e87ea97ff7c18268b388c396a35ff8a3f9de6aa. Jan 23 23:56:55.298962 systemd[1]: Started cri-containerd-6b30b430dea499af4e8e106639b3cebd41600abd52fc1026632bf98cb90bc394.scope - libcontainer container 6b30b430dea499af4e8e106639b3cebd41600abd52fc1026632bf98cb90bc394. Jan 23 23:56:55.331337 kubelet[2850]: I0123 23:56:55.330890 2850 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-30-184" Jan 23 23:56:55.331874 kubelet[2850]: E0123 23:56:55.331668 2850 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.30.184:6443/api/v1/nodes\": dial tcp 172.31.30.184:6443: connect: connection refused" node="ip-172-31-30-184" Jan 23 23:56:55.337342 systemd[1]: Started cri-containerd-d65d3f550defcb81906162ed696cc4fdee7f90376cc573a6439161d3a260ffa6.scope - libcontainer container d65d3f550defcb81906162ed696cc4fdee7f90376cc573a6439161d3a260ffa6. 
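The "Failed to ensure lease exists, will retry" entries track the apiserver outage with a doubling retry interval: 200ms, then 400ms, then 800ms, and now 1.6s. A minimal sketch of that backoff shape, assuming plain doubling with no cap or jitter (a real client would usually add both).

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // Starting interval as reported in the first lease-controller retry above.
        interval := 200 * time.Millisecond
        for attempt := 1; attempt <= 4; attempt++ {
            fmt.Printf("attempt %d failed, retrying in %v\n", attempt, interval)
            interval *= 2 // 200ms -> 400ms -> 800ms -> 1.6s, matching the log
        }
    }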
Jan 23 23:56:55.387882 containerd[2034]: time="2026-01-23T23:56:55.385464801Z" level=info msg="StartContainer for \"263147a498887f38e84adeec4e87ea97ff7c18268b388c396a35ff8a3f9de6aa\" returns successfully" Jan 23 23:56:55.443768 containerd[2034]: time="2026-01-23T23:56:55.443693470Z" level=info msg="StartContainer for \"6b30b430dea499af4e8e106639b3cebd41600abd52fc1026632bf98cb90bc394\" returns successfully" Jan 23 23:56:55.507055 containerd[2034]: time="2026-01-23T23:56:55.506936480Z" level=info msg="StartContainer for \"d65d3f550defcb81906162ed696cc4fdee7f90376cc573a6439161d3a260ffa6\" returns successfully" Jan 23 23:56:55.764972 kubelet[2850]: E0123 23:56:55.764839 2850 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-30-184\" not found" node="ip-172-31-30-184" Jan 23 23:56:55.772982 kubelet[2850]: E0123 23:56:55.772937 2850 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-30-184\" not found" node="ip-172-31-30-184" Jan 23 23:56:55.778261 kubelet[2850]: E0123 23:56:55.777023 2850 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-30-184\" not found" node="ip-172-31-30-184" Jan 23 23:56:56.785051 kubelet[2850]: E0123 23:56:56.784331 2850 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-30-184\" not found" node="ip-172-31-30-184" Jan 23 23:56:56.785622 kubelet[2850]: E0123 23:56:56.785097 2850 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-30-184\" not found" node="ip-172-31-30-184" Jan 23 23:56:56.933710 kubelet[2850]: I0123 23:56:56.933673 2850 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-30-184" Jan 23 23:57:00.666122 kubelet[2850]: I0123 23:57:00.666063 2850 apiserver.go:52] "Watching apiserver" Jan 23 23:57:00.791586 kubelet[2850]: I0123 23:57:00.791506 2850 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 23 23:57:00.802034 kubelet[2850]: E0123 23:57:00.801329 2850 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-30-184\" not found" node="ip-172-31-30-184" Jan 23 23:57:00.860051 kubelet[2850]: E0123 23:57:00.859980 2850 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-30-184\" not found" node="ip-172-31-30-184" Jan 23 23:57:00.965366 kubelet[2850]: I0123 23:57:00.964130 2850 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-30-184" Jan 23 23:57:00.965366 kubelet[2850]: E0123 23:57:00.964192 2850 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ip-172-31-30-184\": node \"ip-172-31-30-184\" not found" Jan 23 23:57:00.992022 kubelet[2850]: I0123 23:57:00.991927 2850 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-30-184" Jan 23 23:57:01.039858 kubelet[2850]: E0123 23:57:01.039796 2850 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-30-184\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ip-172-31-30-184" Jan 23 23:57:01.039858 kubelet[2850]: I0123 23:57:01.039848 2850 kubelet.go:3194] "Creating a mirror pod for static pod" 
pod="kube-system/kube-controller-manager-ip-172-31-30-184" Jan 23 23:57:01.043594 kubelet[2850]: E0123 23:57:01.043340 2850 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ip-172-31-30-184\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ip-172-31-30-184" Jan 23 23:57:01.043594 kubelet[2850]: I0123 23:57:01.043390 2850 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-30-184" Jan 23 23:57:01.049875 kubelet[2850]: E0123 23:57:01.049789 2850 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-30-184\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ip-172-31-30-184" Jan 23 23:57:01.917480 kubelet[2850]: I0123 23:57:01.917158 2850 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-30-184" Jan 23 23:57:02.813704 kubelet[2850]: I0123 23:57:02.813647 2850 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-30-184" Jan 23 23:57:03.595070 update_engine[2008]: I20260123 23:57:03.594134 2008 update_attempter.cc:509] Updating boot flags... Jan 23 23:57:03.603146 systemd[1]: Reloading requested from client PID 3125 ('systemctl') (unit session-7.scope)... Jan 23 23:57:03.603647 systemd[1]: Reloading... Jan 23 23:57:03.827121 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 34 scanned by (udev-worker) (3156) Jan 23 23:57:03.859834 kubelet[2850]: I0123 23:57:03.858532 2850 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-30-184" podStartSLOduration=2.858508501 podStartE2EDuration="2.858508501s" podCreationTimestamp="2026-01-23 23:57:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 23:57:03.835235182 +0000 UTC m=+10.705287924" watchObservedRunningTime="2026-01-23 23:57:03.858508501 +0000 UTC m=+10.728561231" Jan 23 23:57:03.964149 zram_generator::config[3204]: No configuration found. Jan 23 23:57:04.187188 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 34 scanned by (udev-worker) (3132) Jan 23 23:57:04.324777 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 23 23:57:04.480213 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 34 scanned by (udev-worker) (3132) Jan 23 23:57:04.603331 systemd[1]: Reloading finished in 998 ms. Jan 23 23:57:04.846369 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 23:57:04.871929 systemd[1]: kubelet.service: Deactivated successfully. Jan 23 23:57:04.874121 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 23:57:04.874213 systemd[1]: kubelet.service: Consumed 1.371s CPU time, 129.0M memory peak, 0B memory swap peak. Jan 23 23:57:04.887666 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 23:57:05.298367 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 23 23:57:05.318709 (kubelet)[3494]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 23 23:57:05.417019 kubelet[3494]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 23 23:57:05.417019 kubelet[3494]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 23 23:57:05.417019 kubelet[3494]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 23 23:57:05.418156 kubelet[3494]: I0123 23:57:05.417177 3494 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 23 23:57:05.440950 kubelet[3494]: I0123 23:57:05.439276 3494 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jan 23 23:57:05.440950 kubelet[3494]: I0123 23:57:05.439337 3494 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 23 23:57:05.440950 kubelet[3494]: I0123 23:57:05.439870 3494 server.go:954] "Client rotation is on, will bootstrap in background" Jan 23 23:57:05.442532 kubelet[3494]: I0123 23:57:05.442484 3494 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 23 23:57:05.447593 kubelet[3494]: I0123 23:57:05.447524 3494 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 23 23:57:05.458451 kubelet[3494]: E0123 23:57:05.458228 3494 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 23 23:57:05.458451 kubelet[3494]: I0123 23:57:05.458278 3494 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 23 23:57:05.468206 kubelet[3494]: I0123 23:57:05.466084 3494 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 23 23:57:05.468206 kubelet[3494]: I0123 23:57:05.466510 3494 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 23 23:57:05.468206 kubelet[3494]: I0123 23:57:05.466560 3494 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-30-184","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 23 23:57:05.468206 kubelet[3494]: I0123 23:57:05.466847 3494 topology_manager.go:138] "Creating topology manager with none policy" Jan 23 23:57:05.468946 kubelet[3494]: I0123 23:57:05.466866 3494 container_manager_linux.go:304] "Creating device plugin manager" Jan 23 23:57:05.468946 kubelet[3494]: I0123 23:57:05.466949 3494 state_mem.go:36] "Initialized new in-memory state store" Jan 23 23:57:05.468946 kubelet[3494]: I0123 23:57:05.467247 3494 kubelet.go:446] "Attempting to sync node with API server" Jan 23 23:57:05.468946 kubelet[3494]: I0123 23:57:05.467272 3494 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 23 23:57:05.468946 kubelet[3494]: I0123 23:57:05.467304 3494 kubelet.go:352] "Adding apiserver pod source" Jan 23 23:57:05.468946 kubelet[3494]: I0123 23:57:05.467324 3494 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 23 23:57:05.471256 kubelet[3494]: I0123 23:57:05.471196 3494 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 23 23:57:05.482516 sudo[3509]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jan 23 23:57:05.484936 kubelet[3494]: I0123 23:57:05.483628 3494 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 23 23:57:05.484448 sudo[3509]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Jan 23 23:57:05.490790 kubelet[3494]: I0123 23:57:05.485661 3494 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 23 23:57:05.490790 kubelet[3494]: I0123 23:57:05.485724 3494 
server.go:1287] "Started kubelet" Jan 23 23:57:05.497166 kubelet[3494]: I0123 23:57:05.494887 3494 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 23 23:57:05.504729 kubelet[3494]: I0123 23:57:05.502618 3494 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jan 23 23:57:05.504729 kubelet[3494]: I0123 23:57:05.504446 3494 server.go:479] "Adding debug handlers to kubelet server" Jan 23 23:57:05.508622 kubelet[3494]: I0123 23:57:05.508525 3494 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 23 23:57:05.509051 kubelet[3494]: I0123 23:57:05.508909 3494 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 23 23:57:05.513625 kubelet[3494]: I0123 23:57:05.513570 3494 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 23 23:57:05.519911 kubelet[3494]: I0123 23:57:05.519859 3494 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 23 23:57:05.520340 kubelet[3494]: E0123 23:57:05.520294 3494 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-30-184\" not found" Jan 23 23:57:05.546786 kubelet[3494]: I0123 23:57:05.540303 3494 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 23 23:57:05.546786 kubelet[3494]: I0123 23:57:05.540554 3494 reconciler.go:26] "Reconciler: start to sync state" Jan 23 23:57:05.556126 kubelet[3494]: I0123 23:57:05.553297 3494 factory.go:221] Registration of the systemd container factory successfully Jan 23 23:57:05.556126 kubelet[3494]: I0123 23:57:05.556051 3494 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 23 23:57:05.572050 kubelet[3494]: I0123 23:57:05.568919 3494 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 23 23:57:05.575578 kubelet[3494]: I0123 23:57:05.575517 3494 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 23 23:57:05.575578 kubelet[3494]: I0123 23:57:05.575568 3494 status_manager.go:227] "Starting to sync pod status with apiserver" Jan 23 23:57:05.575775 kubelet[3494]: I0123 23:57:05.575603 3494 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jan 23 23:57:05.575775 kubelet[3494]: I0123 23:57:05.575645 3494 kubelet.go:2382] "Starting kubelet main sync loop" Jan 23 23:57:05.575775 kubelet[3494]: E0123 23:57:05.575717 3494 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 23 23:57:05.610804 kubelet[3494]: I0123 23:57:05.608916 3494 factory.go:221] Registration of the containerd container factory successfully Jan 23 23:57:05.623742 kubelet[3494]: E0123 23:57:05.623688 3494 kubelet.go:1555] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 23 23:57:05.676069 kubelet[3494]: E0123 23:57:05.676024 3494 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 23 23:57:05.758372 kubelet[3494]: I0123 23:57:05.758285 3494 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 23 23:57:05.758372 kubelet[3494]: I0123 23:57:05.758324 3494 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 23 23:57:05.758372 kubelet[3494]: I0123 23:57:05.758362 3494 state_mem.go:36] "Initialized new in-memory state store" Jan 23 23:57:05.760297 kubelet[3494]: I0123 23:57:05.758635 3494 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 23 23:57:05.760297 kubelet[3494]: I0123 23:57:05.758669 3494 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 23 23:57:05.760297 kubelet[3494]: I0123 23:57:05.758707 3494 policy_none.go:49] "None policy: Start" Jan 23 23:57:05.760297 kubelet[3494]: I0123 23:57:05.758726 3494 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 23 23:57:05.760297 kubelet[3494]: I0123 23:57:05.758748 3494 state_mem.go:35] "Initializing new in-memory state store" Jan 23 23:57:05.760297 kubelet[3494]: I0123 23:57:05.758956 3494 state_mem.go:75] "Updated machine memory state" Jan 23 23:57:05.770521 kubelet[3494]: I0123 23:57:05.769812 3494 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 23 23:57:05.770521 kubelet[3494]: I0123 23:57:05.770113 3494 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 23 23:57:05.770521 kubelet[3494]: I0123 23:57:05.770132 3494 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 23 23:57:05.781834 kubelet[3494]: I0123 23:57:05.770992 3494 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 23 23:57:05.781834 kubelet[3494]: E0123 23:57:05.777370 3494 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 23 23:57:05.877584 kubelet[3494]: I0123 23:57:05.877421 3494 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-30-184" Jan 23 23:57:05.879238 kubelet[3494]: I0123 23:57:05.879195 3494 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-30-184" Jan 23 23:57:05.879822 kubelet[3494]: I0123 23:57:05.879769 3494 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-30-184" Jan 23 23:57:05.891964 kubelet[3494]: E0123 23:57:05.891769 3494 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-30-184\" already exists" pod="kube-system/kube-apiserver-ip-172-31-30-184" Jan 23 23:57:05.901030 kubelet[3494]: I0123 23:57:05.898631 3494 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-30-184" Jan 23 23:57:05.901879 kubelet[3494]: E0123 23:57:05.901793 3494 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-30-184\" already exists" pod="kube-system/kube-scheduler-ip-172-31-30-184" Jan 23 23:57:05.923581 kubelet[3494]: I0123 23:57:05.923502 3494 kubelet_node_status.go:124] "Node was previously registered" node="ip-172-31-30-184" Jan 23 23:57:05.923711 kubelet[3494]: I0123 23:57:05.923665 3494 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-30-184" Jan 23 23:57:05.944627 kubelet[3494]: I0123 23:57:05.944559 3494 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7acbdb4f8310fc79033a7af81f0b4769-kubeconfig\") pod \"kube-scheduler-ip-172-31-30-184\" (UID: \"7acbdb4f8310fc79033a7af81f0b4769\") " pod="kube-system/kube-scheduler-ip-172-31-30-184" Jan 23 23:57:05.944784 kubelet[3494]: I0123 23:57:05.944664 3494 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f8630126ce9b770b7d2ac9364c34d383-k8s-certs\") pod \"kube-controller-manager-ip-172-31-30-184\" (UID: \"f8630126ce9b770b7d2ac9364c34d383\") " pod="kube-system/kube-controller-manager-ip-172-31-30-184" Jan 23 23:57:05.944784 kubelet[3494]: I0123 23:57:05.944745 3494 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a3cc6defa2cf33786d1bba0d9787e0c4-k8s-certs\") pod \"kube-apiserver-ip-172-31-30-184\" (UID: \"a3cc6defa2cf33786d1bba0d9787e0c4\") " pod="kube-system/kube-apiserver-ip-172-31-30-184" Jan 23 23:57:05.945299 kubelet[3494]: I0123 23:57:05.944937 3494 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a3cc6defa2cf33786d1bba0d9787e0c4-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-30-184\" (UID: \"a3cc6defa2cf33786d1bba0d9787e0c4\") " pod="kube-system/kube-apiserver-ip-172-31-30-184" Jan 23 23:57:05.945299 kubelet[3494]: I0123 23:57:05.945024 3494 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f8630126ce9b770b7d2ac9364c34d383-ca-certs\") pod \"kube-controller-manager-ip-172-31-30-184\" (UID: \"f8630126ce9b770b7d2ac9364c34d383\") " pod="kube-system/kube-controller-manager-ip-172-31-30-184" Jan 23 23:57:05.945299 kubelet[3494]: I0123 
23:57:05.945081 3494 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/f8630126ce9b770b7d2ac9364c34d383-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-30-184\" (UID: \"f8630126ce9b770b7d2ac9364c34d383\") " pod="kube-system/kube-controller-manager-ip-172-31-30-184" Jan 23 23:57:05.945299 kubelet[3494]: I0123 23:57:05.945129 3494 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f8630126ce9b770b7d2ac9364c34d383-kubeconfig\") pod \"kube-controller-manager-ip-172-31-30-184\" (UID: \"f8630126ce9b770b7d2ac9364c34d383\") " pod="kube-system/kube-controller-manager-ip-172-31-30-184" Jan 23 23:57:05.945299 kubelet[3494]: I0123 23:57:05.945174 3494 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f8630126ce9b770b7d2ac9364c34d383-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-30-184\" (UID: \"f8630126ce9b770b7d2ac9364c34d383\") " pod="kube-system/kube-controller-manager-ip-172-31-30-184" Jan 23 23:57:05.945610 kubelet[3494]: I0123 23:57:05.945223 3494 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a3cc6defa2cf33786d1bba0d9787e0c4-ca-certs\") pod \"kube-apiserver-ip-172-31-30-184\" (UID: \"a3cc6defa2cf33786d1bba0d9787e0c4\") " pod="kube-system/kube-apiserver-ip-172-31-30-184" Jan 23 23:57:06.448580 sudo[3509]: pam_unix(sudo:session): session closed for user root Jan 23 23:57:06.469128 kubelet[3494]: I0123 23:57:06.469055 3494 apiserver.go:52] "Watching apiserver" Jan 23 23:57:06.541695 kubelet[3494]: I0123 23:57:06.541627 3494 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 23 23:57:06.602063 kubelet[3494]: I0123 23:57:06.601637 3494 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-30-184" podStartSLOduration=1.601617117 podStartE2EDuration="1.601617117s" podCreationTimestamp="2026-01-23 23:57:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 23:57:06.601247032 +0000 UTC m=+1.271303272" watchObservedRunningTime="2026-01-23 23:57:06.601617117 +0000 UTC m=+1.271673345" Jan 23 23:57:08.868755 sudo[2347]: pam_unix(sudo:session): session closed for user root Jan 23 23:57:08.952443 sshd[2344]: pam_unix(sshd:session): session closed for user core Jan 23 23:57:08.957985 systemd[1]: sshd@6-172.31.30.184:22-4.153.228.146:58506.service: Deactivated successfully. Jan 23 23:57:08.961955 systemd[1]: session-7.scope: Deactivated successfully. Jan 23 23:57:08.962656 systemd[1]: session-7.scope: Consumed 11.443s CPU time, 151.1M memory peak, 0B memory swap peak. Jan 23 23:57:08.966336 systemd-logind[2007]: Session 7 logged out. Waiting for processes to exit. Jan 23 23:57:08.968521 systemd-logind[2007]: Removed session 7. 
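Each VerifyControllerAttachedVolume entry above identifies its host-path volume as kubernetes.io/host-path/<podUID>-<volumeName>, for example kubernetes.io/host-path/f8630126ce9b770b7d2ac9364c34d383-kubeconfig for the controller-manager's kubeconfig mount. A tiny sketch composing the same identifier from the pieces visible in the log; the real kubelet builds it through its volume-plugin layer rather than plain string formatting.

    package main

    import "fmt"

    // hostPathUniqueName reproduces the UniqueName pattern seen in the
    // reconciler_common entries above.
    func hostPathUniqueName(podUID, volumeName string) string {
        return fmt.Sprintf("kubernetes.io/host-path/%s-%s", podUID, volumeName)
    }

    func main() {
        podUID := "f8630126ce9b770b7d2ac9364c34d383" // kube-controller-manager static pod
        for _, vol := range []string{"ca-certs", "k8s-certs", "flexvolume-dir", "kubeconfig", "usr-share-ca-certificates"} {
            fmt.Println(hostPathUniqueName(podUID, vol))
        }
    }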
Jan 23 23:57:09.637717 kubelet[3494]: I0123 23:57:09.637656 3494 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 23 23:57:09.638807 containerd[2034]: time="2026-01-23T23:57:09.638627188Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 23 23:57:09.640032 kubelet[3494]: I0123 23:57:09.639130 3494 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 23 23:57:10.389744 systemd[1]: Created slice kubepods-besteffort-pod7d3cee9d_2d47_4bfe_8c38_36a9dd95f3eb.slice - libcontainer container kubepods-besteffort-pod7d3cee9d_2d47_4bfe_8c38_36a9dd95f3eb.slice. Jan 23 23:57:10.399263 kubelet[3494]: W0123 23:57:10.398705 3494 reflector.go:569] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:ip-172-31-30-184" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-30-184' and this object Jan 23 23:57:10.399263 kubelet[3494]: E0123 23:57:10.398789 3494 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"kube-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-proxy\" is forbidden: User \"system:node:ip-172-31-30-184\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ip-172-31-30-184' and this object" logger="UnhandledError" Jan 23 23:57:10.399263 kubelet[3494]: I0123 23:57:10.398935 3494 status_manager.go:890] "Failed to get status for pod" podUID="7d3cee9d-2d47-4bfe-8c38-36a9dd95f3eb" pod="kube-system/kube-proxy-qphcq" err="pods \"kube-proxy-qphcq\" is forbidden: User \"system:node:ip-172-31-30-184\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ip-172-31-30-184' and this object" Jan 23 23:57:10.399263 kubelet[3494]: W0123 23:57:10.399184 3494 reflector.go:569] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ip-172-31-30-184" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-30-184' and this object Jan 23 23:57:10.399935 kubelet[3494]: E0123 23:57:10.399231 3494 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:ip-172-31-30-184\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ip-172-31-30-184' and this object" logger="UnhandledError" Jan 23 23:57:10.439734 systemd[1]: Created slice kubepods-burstable-pod22027dd0_325f_4f76_bb82_619d4ce38ab7.slice - libcontainer container kubepods-burstable-pod22027dd0_325f_4f76_bb82_619d4ce38ab7.slice. 
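The reflector and status-manager failures here ("no relationship found between node 'ip-172-31-30-184' and this object") are most likely the API server's node authorizer at work: a kubelet may only read pods, configmaps and secrets referenced by pods already bound to its node, and for the first moments after kube-proxy-qphcq and cilium-ttrwj are created that relationship graph has not caught up, so the initial list/watch attempts are rejected. They normally clear on their own, which is what the later records show. A small triage sketch, assuming Python and a hypothetical node.log export; the regex relies only on the message shape visible above:

    import re, sys

    # Collect the (user, verb, resource) triples the node was denied while the
    # node-authorizer graph caught up. Quotes may or may not be backslash-escaped
    # in the journal, hence the optional backslashes.
    DENIED = re.compile(
        r'User \\?"(?P<user>system:node:[^"\\]+)\\?" cannot (?P<verb>\w+) '
        r'resource \\?"(?P<resource>[^"\\]*)\\?"'
    )

    def denied_objects(path):
        seen = set()
        with open(path) as handle:
            for line in handle:
                for m in DENIED.finditer(line):
                    seen.add((m.group("user"), m.group("verb"), m.group("resource")))
        return sorted(seen)

    if __name__ == "__main__":
        for user, verb, resource in denied_objects(sys.argv[1]):
            print(user, "denied", verb, "on", resource)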
Jan 23 23:57:10.473850 kubelet[3494]: I0123 23:57:10.473680 3494 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/22027dd0-325f-4f76-bb82-619d4ce38ab7-bpf-maps\") pod \"cilium-ttrwj\" (UID: \"22027dd0-325f-4f76-bb82-619d4ce38ab7\") " pod="kube-system/cilium-ttrwj" Jan 23 23:57:10.473850 kubelet[3494]: I0123 23:57:10.473755 3494 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/7d3cee9d-2d47-4bfe-8c38-36a9dd95f3eb-kube-proxy\") pod \"kube-proxy-qphcq\" (UID: \"7d3cee9d-2d47-4bfe-8c38-36a9dd95f3eb\") " pod="kube-system/kube-proxy-qphcq" Jan 23 23:57:10.473850 kubelet[3494]: I0123 23:57:10.473794 3494 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m6q8l\" (UniqueName: \"kubernetes.io/projected/22027dd0-325f-4f76-bb82-619d4ce38ab7-kube-api-access-m6q8l\") pod \"cilium-ttrwj\" (UID: \"22027dd0-325f-4f76-bb82-619d4ce38ab7\") " pod="kube-system/cilium-ttrwj" Jan 23 23:57:10.473850 kubelet[3494]: I0123 23:57:10.473839 3494 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/22027dd0-325f-4f76-bb82-619d4ce38ab7-cilium-run\") pod \"cilium-ttrwj\" (UID: \"22027dd0-325f-4f76-bb82-619d4ce38ab7\") " pod="kube-system/cilium-ttrwj" Jan 23 23:57:10.474272 kubelet[3494]: I0123 23:57:10.473874 3494 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/22027dd0-325f-4f76-bb82-619d4ce38ab7-hostproc\") pod \"cilium-ttrwj\" (UID: \"22027dd0-325f-4f76-bb82-619d4ce38ab7\") " pod="kube-system/cilium-ttrwj" Jan 23 23:57:10.474272 kubelet[3494]: I0123 23:57:10.473913 3494 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/22027dd0-325f-4f76-bb82-619d4ce38ab7-cilium-cgroup\") pod \"cilium-ttrwj\" (UID: \"22027dd0-325f-4f76-bb82-619d4ce38ab7\") " pod="kube-system/cilium-ttrwj" Jan 23 23:57:10.474272 kubelet[3494]: I0123 23:57:10.473950 3494 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/22027dd0-325f-4f76-bb82-619d4ce38ab7-cni-path\") pod \"cilium-ttrwj\" (UID: \"22027dd0-325f-4f76-bb82-619d4ce38ab7\") " pod="kube-system/cilium-ttrwj" Jan 23 23:57:10.474272 kubelet[3494]: I0123 23:57:10.473988 3494 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/22027dd0-325f-4f76-bb82-619d4ce38ab7-xtables-lock\") pod \"cilium-ttrwj\" (UID: \"22027dd0-325f-4f76-bb82-619d4ce38ab7\") " pod="kube-system/cilium-ttrwj" Jan 23 23:57:10.474272 kubelet[3494]: I0123 23:57:10.474079 3494 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/22027dd0-325f-4f76-bb82-619d4ce38ab7-etc-cni-netd\") pod \"cilium-ttrwj\" (UID: \"22027dd0-325f-4f76-bb82-619d4ce38ab7\") " pod="kube-system/cilium-ttrwj" Jan 23 23:57:10.474272 kubelet[3494]: I0123 23:57:10.474115 3494 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/22027dd0-325f-4f76-bb82-619d4ce38ab7-lib-modules\") pod \"cilium-ttrwj\" (UID: \"22027dd0-325f-4f76-bb82-619d4ce38ab7\") " pod="kube-system/cilium-ttrwj" Jan 23 23:57:10.474662 kubelet[3494]: I0123 23:57:10.474152 3494 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/22027dd0-325f-4f76-bb82-619d4ce38ab7-clustermesh-secrets\") pod \"cilium-ttrwj\" (UID: \"22027dd0-325f-4f76-bb82-619d4ce38ab7\") " pod="kube-system/cilium-ttrwj" Jan 23 23:57:10.474662 kubelet[3494]: I0123 23:57:10.474188 3494 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/22027dd0-325f-4f76-bb82-619d4ce38ab7-cilium-config-path\") pod \"cilium-ttrwj\" (UID: \"22027dd0-325f-4f76-bb82-619d4ce38ab7\") " pod="kube-system/cilium-ttrwj" Jan 23 23:57:10.474662 kubelet[3494]: I0123 23:57:10.474231 3494 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/22027dd0-325f-4f76-bb82-619d4ce38ab7-host-proc-sys-net\") pod \"cilium-ttrwj\" (UID: \"22027dd0-325f-4f76-bb82-619d4ce38ab7\") " pod="kube-system/cilium-ttrwj" Jan 23 23:57:10.474662 kubelet[3494]: I0123 23:57:10.474271 3494 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7d3cee9d-2d47-4bfe-8c38-36a9dd95f3eb-xtables-lock\") pod \"kube-proxy-qphcq\" (UID: \"7d3cee9d-2d47-4bfe-8c38-36a9dd95f3eb\") " pod="kube-system/kube-proxy-qphcq" Jan 23 23:57:10.474662 kubelet[3494]: I0123 23:57:10.474309 3494 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/22027dd0-325f-4f76-bb82-619d4ce38ab7-hubble-tls\") pod \"cilium-ttrwj\" (UID: \"22027dd0-325f-4f76-bb82-619d4ce38ab7\") " pod="kube-system/cilium-ttrwj" Jan 23 23:57:10.475060 kubelet[3494]: I0123 23:57:10.474345 3494 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7d3cee9d-2d47-4bfe-8c38-36a9dd95f3eb-lib-modules\") pod \"kube-proxy-qphcq\" (UID: \"7d3cee9d-2d47-4bfe-8c38-36a9dd95f3eb\") " pod="kube-system/kube-proxy-qphcq" Jan 23 23:57:10.475060 kubelet[3494]: I0123 23:57:10.474383 3494 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5mcr5\" (UniqueName: \"kubernetes.io/projected/7d3cee9d-2d47-4bfe-8c38-36a9dd95f3eb-kube-api-access-5mcr5\") pod \"kube-proxy-qphcq\" (UID: \"7d3cee9d-2d47-4bfe-8c38-36a9dd95f3eb\") " pod="kube-system/kube-proxy-qphcq" Jan 23 23:57:10.475060 kubelet[3494]: I0123 23:57:10.474429 3494 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/22027dd0-325f-4f76-bb82-619d4ce38ab7-host-proc-sys-kernel\") pod \"cilium-ttrwj\" (UID: \"22027dd0-325f-4f76-bb82-619d4ce38ab7\") " pod="kube-system/cilium-ttrwj" Jan 23 23:57:10.681260 kubelet[3494]: I0123 23:57:10.681093 3494 status_manager.go:890] "Failed to get status for pod" podUID="eab42098-2c31-4390-ab72-1652dff74b60" pod="kube-system/cilium-operator-6c4d7847fc-2lsjr" err="pods \"cilium-operator-6c4d7847fc-2lsjr\" is forbidden: User \"system:node:ip-172-31-30-184\" cannot 
get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ip-172-31-30-184' and this object" Jan 23 23:57:10.692228 systemd[1]: Created slice kubepods-besteffort-podeab42098_2c31_4390_ab72_1652dff74b60.slice - libcontainer container kubepods-besteffort-podeab42098_2c31_4390_ab72_1652dff74b60.slice. Jan 23 23:57:10.778576 kubelet[3494]: I0123 23:57:10.778418 3494 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/eab42098-2c31-4390-ab72-1652dff74b60-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-2lsjr\" (UID: \"eab42098-2c31-4390-ab72-1652dff74b60\") " pod="kube-system/cilium-operator-6c4d7847fc-2lsjr" Jan 23 23:57:10.778576 kubelet[3494]: I0123 23:57:10.778501 3494 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m8trv\" (UniqueName: \"kubernetes.io/projected/eab42098-2c31-4390-ab72-1652dff74b60-kube-api-access-m8trv\") pod \"cilium-operator-6c4d7847fc-2lsjr\" (UID: \"eab42098-2c31-4390-ab72-1652dff74b60\") " pod="kube-system/cilium-operator-6c4d7847fc-2lsjr" Jan 23 23:57:11.625104 kubelet[3494]: E0123 23:57:11.624661 3494 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Jan 23 23:57:11.625104 kubelet[3494]: E0123 23:57:11.624713 3494 projected.go:194] Error preparing data for projected volume kube-api-access-m6q8l for pod kube-system/cilium-ttrwj: failed to sync configmap cache: timed out waiting for the condition Jan 23 23:57:11.625104 kubelet[3494]: E0123 23:57:11.624807 3494 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/22027dd0-325f-4f76-bb82-619d4ce38ab7-kube-api-access-m6q8l podName:22027dd0-325f-4f76-bb82-619d4ce38ab7 nodeName:}" failed. No retries permitted until 2026-01-23 23:57:12.124778305 +0000 UTC m=+6.794834521 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-m6q8l" (UniqueName: "kubernetes.io/projected/22027dd0-325f-4f76-bb82-619d4ce38ab7-kube-api-access-m6q8l") pod "cilium-ttrwj" (UID: "22027dd0-325f-4f76-bb82-619d4ce38ab7") : failed to sync configmap cache: timed out waiting for the condition Jan 23 23:57:11.632280 kubelet[3494]: E0123 23:57:11.631425 3494 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Jan 23 23:57:11.632280 kubelet[3494]: E0123 23:57:11.631473 3494 projected.go:194] Error preparing data for projected volume kube-api-access-5mcr5 for pod kube-system/kube-proxy-qphcq: failed to sync configmap cache: timed out waiting for the condition Jan 23 23:57:11.632280 kubelet[3494]: E0123 23:57:11.631548 3494 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7d3cee9d-2d47-4bfe-8c38-36a9dd95f3eb-kube-api-access-5mcr5 podName:7d3cee9d-2d47-4bfe-8c38-36a9dd95f3eb nodeName:}" failed. No retries permitted until 2026-01-23 23:57:12.131518606 +0000 UTC m=+6.801574834 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-5mcr5" (UniqueName: "kubernetes.io/projected/7d3cee9d-2d47-4bfe-8c38-36a9dd95f3eb-kube-api-access-5mcr5") pod "kube-proxy-qphcq" (UID: "7d3cee9d-2d47-4bfe-8c38-36a9dd95f3eb") : failed to sync configmap cache: timed out waiting for the condition Jan 23 23:57:11.901326 containerd[2034]: time="2026-01-23T23:57:11.900596408Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-2lsjr,Uid:eab42098-2c31-4390-ab72-1652dff74b60,Namespace:kube-system,Attempt:0,}" Jan 23 23:57:11.956173 containerd[2034]: time="2026-01-23T23:57:11.955939229Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 23 23:57:11.956395 containerd[2034]: time="2026-01-23T23:57:11.956101779Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 23 23:57:11.956395 containerd[2034]: time="2026-01-23T23:57:11.956257677Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 23 23:57:11.957498 containerd[2034]: time="2026-01-23T23:57:11.957388690Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 23 23:57:12.002338 systemd[1]: Started cri-containerd-d3e8dda1095c8dc99943833bb6ad4e842f0d826eaba27b8da9052f331c5f50d1.scope - libcontainer container d3e8dda1095c8dc99943833bb6ad4e842f0d826eaba27b8da9052f331c5f50d1. Jan 23 23:57:12.078779 containerd[2034]: time="2026-01-23T23:57:12.078705251Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-2lsjr,Uid:eab42098-2c31-4390-ab72-1652dff74b60,Namespace:kube-system,Attempt:0,} returns sandbox id \"d3e8dda1095c8dc99943833bb6ad4e842f0d826eaba27b8da9052f331c5f50d1\"" Jan 23 23:57:12.082333 containerd[2034]: time="2026-01-23T23:57:12.082271165Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jan 23 23:57:12.217203 containerd[2034]: time="2026-01-23T23:57:12.217044159Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-qphcq,Uid:7d3cee9d-2d47-4bfe-8c38-36a9dd95f3eb,Namespace:kube-system,Attempt:0,}" Jan 23 23:57:12.250595 containerd[2034]: time="2026-01-23T23:57:12.250336965Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-ttrwj,Uid:22027dd0-325f-4f76-bb82-619d4ce38ab7,Namespace:kube-system,Attempt:0,}" Jan 23 23:57:12.269659 containerd[2034]: time="2026-01-23T23:57:12.269297765Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 23 23:57:12.269659 containerd[2034]: time="2026-01-23T23:57:12.269391976Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 23 23:57:12.269659 containerd[2034]: time="2026-01-23T23:57:12.269418473Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 23 23:57:12.269659 containerd[2034]: time="2026-01-23T23:57:12.269581466Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 23 23:57:12.300584 systemd[1]: Started cri-containerd-4cba62b4bb70f2399d39c47f0a99d4599fd911f333dccafa144030975db46faa.scope - libcontainer container 4cba62b4bb70f2399d39c47f0a99d4599fd911f333dccafa144030975db46faa. Jan 23 23:57:12.325473 containerd[2034]: time="2026-01-23T23:57:12.325092396Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 23 23:57:12.325473 containerd[2034]: time="2026-01-23T23:57:12.325285297Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 23 23:57:12.325473 containerd[2034]: time="2026-01-23T23:57:12.325348220Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 23 23:57:12.325740 containerd[2034]: time="2026-01-23T23:57:12.325557137Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 23 23:57:12.386478 systemd[1]: Started cri-containerd-52e6157f14f965c40b08a5f2e45edbabaa5996348fd9eb864f0a64193cd918a5.scope - libcontainer container 52e6157f14f965c40b08a5f2e45edbabaa5996348fd9eb864f0a64193cd918a5. Jan 23 23:57:12.388393 containerd[2034]: time="2026-01-23T23:57:12.388314396Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-qphcq,Uid:7d3cee9d-2d47-4bfe-8c38-36a9dd95f3eb,Namespace:kube-system,Attempt:0,} returns sandbox id \"4cba62b4bb70f2399d39c47f0a99d4599fd911f333dccafa144030975db46faa\"" Jan 23 23:57:12.397145 containerd[2034]: time="2026-01-23T23:57:12.397072317Z" level=info msg="CreateContainer within sandbox \"4cba62b4bb70f2399d39c47f0a99d4599fd911f333dccafa144030975db46faa\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 23 23:57:12.442604 containerd[2034]: time="2026-01-23T23:57:12.442414873Z" level=info msg="CreateContainer within sandbox \"4cba62b4bb70f2399d39c47f0a99d4599fd911f333dccafa144030975db46faa\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"b4bb0d333a5744b60d75c9f6becbf0e4df1314ca4c12b1cd0103cd5f17dd85c4\"" Jan 23 23:57:12.443441 containerd[2034]: time="2026-01-23T23:57:12.443394515Z" level=info msg="StartContainer for \"b4bb0d333a5744b60d75c9f6becbf0e4df1314ca4c12b1cd0103cd5f17dd85c4\"" Jan 23 23:57:12.459847 containerd[2034]: time="2026-01-23T23:57:12.459694101Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-ttrwj,Uid:22027dd0-325f-4f76-bb82-619d4ce38ab7,Namespace:kube-system,Attempt:0,} returns sandbox id \"52e6157f14f965c40b08a5f2e45edbabaa5996348fd9eb864f0a64193cd918a5\"" Jan 23 23:57:12.507916 systemd[1]: Started cri-containerd-b4bb0d333a5744b60d75c9f6becbf0e4df1314ca4c12b1cd0103cd5f17dd85c4.scope - libcontainer container b4bb0d333a5744b60d75c9f6becbf0e4df1314ca4c12b1cd0103cd5f17dd85c4. Jan 23 23:57:12.567538 containerd[2034]: time="2026-01-23T23:57:12.567463943Z" level=info msg="StartContainer for \"b4bb0d333a5744b60d75c9f6becbf0e4df1314ca4c12b1cd0103cd5f17dd85c4\" returns successfully" Jan 23 23:57:13.195993 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount651069726.mount: Deactivated successfully. 
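The two MountVolume.SetUp failures above (kube-api-access-m6q8l and kube-api-access-5mcr5) are the downstream effect of the kube-root-ca.crt configmap not yet being readable; the volume manager backs off for exactly the advertised durationBeforeRetry, and once the cache syncs both pod sandboxes come up shortly after the retry deadline. A quick check of that arithmetic in Python, with the retry stamp copied from the log and truncated to microseconds (the snippet is only an illustration):

    from datetime import datetime, timedelta

    retry_at = datetime.fromisoformat("2026-01-23 23:57:12.124778")
    backoff = timedelta(milliseconds=500)   # durationBeforeRetry 500ms from the log
    print(retry_at - backoff)               # 2026-01-23 23:57:11.624778, a few tens of
                                            # microseconds before the E0123 23:57:11.624807 record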
Jan 23 23:57:13.929795 containerd[2034]: time="2026-01-23T23:57:13.929713673Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:57:13.932675 containerd[2034]: time="2026-01-23T23:57:13.932617769Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306" Jan 23 23:57:13.935489 containerd[2034]: time="2026-01-23T23:57:13.935427341Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:57:13.939182 containerd[2034]: time="2026-01-23T23:57:13.939127878Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 1.856793382s" Jan 23 23:57:13.939295 containerd[2034]: time="2026-01-23T23:57:13.939188568Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Jan 23 23:57:13.940882 containerd[2034]: time="2026-01-23T23:57:13.940825683Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jan 23 23:57:13.946795 containerd[2034]: time="2026-01-23T23:57:13.946525075Z" level=info msg="CreateContainer within sandbox \"d3e8dda1095c8dc99943833bb6ad4e842f0d826eaba27b8da9052f331c5f50d1\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jan 23 23:57:13.979337 containerd[2034]: time="2026-01-23T23:57:13.979260827Z" level=info msg="CreateContainer within sandbox \"d3e8dda1095c8dc99943833bb6ad4e842f0d826eaba27b8da9052f331c5f50d1\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"2c196dc57edfae8a5566ea476cd286320cb2683e2b2e51c5f1983ac731950734\"" Jan 23 23:57:13.980396 containerd[2034]: time="2026-01-23T23:57:13.980320212Z" level=info msg="StartContainer for \"2c196dc57edfae8a5566ea476cd286320cb2683e2b2e51c5f1983ac731950734\"" Jan 23 23:57:14.032586 systemd[1]: run-containerd-runc-k8s.io-2c196dc57edfae8a5566ea476cd286320cb2683e2b2e51c5f1983ac731950734-runc.C3OirH.mount: Deactivated successfully. Jan 23 23:57:14.052332 systemd[1]: Started cri-containerd-2c196dc57edfae8a5566ea476cd286320cb2683e2b2e51c5f1983ac731950734.scope - libcontainer container 2c196dc57edfae8a5566ea476cd286320cb2683e2b2e51c5f1983ac731950734. 
Jan 23 23:57:14.102850 containerd[2034]: time="2026-01-23T23:57:14.102777523Z" level=info msg="StartContainer for \"2c196dc57edfae8a5566ea476cd286320cb2683e2b2e51c5f1983ac731950734\" returns successfully" Jan 23 23:57:14.910602 kubelet[3494]: I0123 23:57:14.910497 3494 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-qphcq" podStartSLOduration=4.907747881 podStartE2EDuration="4.907747881s" podCreationTimestamp="2026-01-23 23:57:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 23:57:12.750710574 +0000 UTC m=+7.420766826" watchObservedRunningTime="2026-01-23 23:57:14.907747881 +0000 UTC m=+9.577804121" Jan 23 23:57:14.914264 kubelet[3494]: I0123 23:57:14.910878 3494 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-2lsjr" podStartSLOduration=3.051647953 podStartE2EDuration="4.910862753s" podCreationTimestamp="2026-01-23 23:57:10 +0000 UTC" firstStartedPulling="2026-01-23 23:57:12.081406661 +0000 UTC m=+6.751462889" lastFinishedPulling="2026-01-23 23:57:13.94062146 +0000 UTC m=+8.610677689" observedRunningTime="2026-01-23 23:57:14.910761411 +0000 UTC m=+9.580817663" watchObservedRunningTime="2026-01-23 23:57:14.910862753 +0000 UTC m=+9.580918981" Jan 23 23:57:19.708580 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3009606570.mount: Deactivated successfully. Jan 23 23:57:22.454205 containerd[2034]: time="2026-01-23T23:57:22.454110817Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:57:22.456285 containerd[2034]: time="2026-01-23T23:57:22.455960834Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710" Jan 23 23:57:22.458340 containerd[2034]: time="2026-01-23T23:57:22.458287536Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:57:22.464204 containerd[2034]: time="2026-01-23T23:57:22.464136644Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 8.522996692s" Jan 23 23:57:22.464754 containerd[2034]: time="2026-01-23T23:57:22.464199927Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Jan 23 23:57:22.469771 containerd[2034]: time="2026-01-23T23:57:22.469690187Z" level=info msg="CreateContainer within sandbox \"52e6157f14f965c40b08a5f2e45edbabaa5996348fd9eb864f0a64193cd918a5\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 23 23:57:22.496807 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1486914861.mount: Deactivated successfully. 
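The startup-latency lines above encode a simple relationship: podStartE2EDuration is roughly observedRunningTime minus podCreationTimestamp, and podStartSLOduration is that figure minus the time spent pulling the image, which is why kube-proxy (which pulled nothing) reports identical values while cilium-operator's two numbers are about 1.86s apart. A rough reconstruction of the cilium-operator numbers in Python, timestamps copied from the log and truncated to microseconds:

    from datetime import datetime

    created = datetime.fromisoformat("2026-01-23 23:57:10")
    pull_start = datetime.fromisoformat("2026-01-23 23:57:12.081406")
    pull_done = datetime.fromisoformat("2026-01-23 23:57:13.940621")
    running = datetime.fromisoformat("2026-01-23 23:57:14.910761")

    e2e = (running - created).total_seconds()        # ~4.911  (log: podStartE2EDuration=4.910862753s)
    pull = (pull_done - pull_start).total_seconds()  # ~1.859
    slo = e2e - pull                                 # ~3.052  (log: podStartSLOduration=3.051647953s)
    print(e2e, pull, slo)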
Jan 23 23:57:22.500726 containerd[2034]: time="2026-01-23T23:57:22.500642183Z" level=info msg="CreateContainer within sandbox \"52e6157f14f965c40b08a5f2e45edbabaa5996348fd9eb864f0a64193cd918a5\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"b5eae899eabd8c27b30cf799ab83bb8678d0440bba363ce3a4a5faf738162b25\"" Jan 23 23:57:22.501531 containerd[2034]: time="2026-01-23T23:57:22.501465663Z" level=info msg="StartContainer for \"b5eae899eabd8c27b30cf799ab83bb8678d0440bba363ce3a4a5faf738162b25\"" Jan 23 23:57:22.567496 systemd[1]: Started cri-containerd-b5eae899eabd8c27b30cf799ab83bb8678d0440bba363ce3a4a5faf738162b25.scope - libcontainer container b5eae899eabd8c27b30cf799ab83bb8678d0440bba363ce3a4a5faf738162b25. Jan 23 23:57:22.615820 containerd[2034]: time="2026-01-23T23:57:22.615729887Z" level=info msg="StartContainer for \"b5eae899eabd8c27b30cf799ab83bb8678d0440bba363ce3a4a5faf738162b25\" returns successfully" Jan 23 23:57:22.645735 systemd[1]: cri-containerd-b5eae899eabd8c27b30cf799ab83bb8678d0440bba363ce3a4a5faf738162b25.scope: Deactivated successfully. Jan 23 23:57:23.259284 containerd[2034]: time="2026-01-23T23:57:23.259137207Z" level=info msg="shim disconnected" id=b5eae899eabd8c27b30cf799ab83bb8678d0440bba363ce3a4a5faf738162b25 namespace=k8s.io Jan 23 23:57:23.259602 containerd[2034]: time="2026-01-23T23:57:23.259571476Z" level=warning msg="cleaning up after shim disconnected" id=b5eae899eabd8c27b30cf799ab83bb8678d0440bba363ce3a4a5faf738162b25 namespace=k8s.io Jan 23 23:57:23.259730 containerd[2034]: time="2026-01-23T23:57:23.259703842Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 23 23:57:23.491891 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b5eae899eabd8c27b30cf799ab83bb8678d0440bba363ce3a4a5faf738162b25-rootfs.mount: Deactivated successfully. Jan 23 23:57:23.777594 containerd[2034]: time="2026-01-23T23:57:23.777222632Z" level=info msg="CreateContainer within sandbox \"52e6157f14f965c40b08a5f2e45edbabaa5996348fd9eb864f0a64193cd918a5\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 23 23:57:23.819751 containerd[2034]: time="2026-01-23T23:57:23.819669449Z" level=info msg="CreateContainer within sandbox \"52e6157f14f965c40b08a5f2e45edbabaa5996348fd9eb864f0a64193cd918a5\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"208b584bbf71dd335a0aadc0ed09136f3e7bafb631f70e2b1699c5263d60c3fb\"" Jan 23 23:57:23.820649 containerd[2034]: time="2026-01-23T23:57:23.820492052Z" level=info msg="StartContainer for \"208b584bbf71dd335a0aadc0ed09136f3e7bafb631f70e2b1699c5263d60c3fb\"" Jan 23 23:57:23.878319 systemd[1]: Started cri-containerd-208b584bbf71dd335a0aadc0ed09136f3e7bafb631f70e2b1699c5263d60c3fb.scope - libcontainer container 208b584bbf71dd335a0aadc0ed09136f3e7bafb631f70e2b1699c5263d60c3fb. Jan 23 23:57:23.930453 containerd[2034]: time="2026-01-23T23:57:23.930241420Z" level=info msg="StartContainer for \"208b584bbf71dd335a0aadc0ed09136f3e7bafb631f70e2b1699c5263d60c3fb\" returns successfully" Jan 23 23:57:23.957488 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 23 23:57:23.957989 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 23 23:57:23.958728 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jan 23 23:57:23.967435 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... 
Jan 23 23:57:23.968993 systemd[1]: cri-containerd-208b584bbf71dd335a0aadc0ed09136f3e7bafb631f70e2b1699c5263d60c3fb.scope: Deactivated successfully. Jan 23 23:57:24.020189 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 23 23:57:24.031796 containerd[2034]: time="2026-01-23T23:57:24.031374838Z" level=info msg="shim disconnected" id=208b584bbf71dd335a0aadc0ed09136f3e7bafb631f70e2b1699c5263d60c3fb namespace=k8s.io Jan 23 23:57:24.031796 containerd[2034]: time="2026-01-23T23:57:24.031447967Z" level=warning msg="cleaning up after shim disconnected" id=208b584bbf71dd335a0aadc0ed09136f3e7bafb631f70e2b1699c5263d60c3fb namespace=k8s.io Jan 23 23:57:24.031796 containerd[2034]: time="2026-01-23T23:57:24.031468365Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 23 23:57:24.492189 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-208b584bbf71dd335a0aadc0ed09136f3e7bafb631f70e2b1699c5263d60c3fb-rootfs.mount: Deactivated successfully. Jan 23 23:57:24.783330 containerd[2034]: time="2026-01-23T23:57:24.783250437Z" level=info msg="CreateContainer within sandbox \"52e6157f14f965c40b08a5f2e45edbabaa5996348fd9eb864f0a64193cd918a5\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jan 23 23:57:24.836824 containerd[2034]: time="2026-01-23T23:57:24.836580465Z" level=info msg="CreateContainer within sandbox \"52e6157f14f965c40b08a5f2e45edbabaa5996348fd9eb864f0a64193cd918a5\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"512046acf35d36e96bd3ab6e2c7849c465b99153f452dcaa9ea49a9ae81b137f\"" Jan 23 23:57:24.838091 containerd[2034]: time="2026-01-23T23:57:24.837986968Z" level=info msg="StartContainer for \"512046acf35d36e96bd3ab6e2c7849c465b99153f452dcaa9ea49a9ae81b137f\"" Jan 23 23:57:24.899316 systemd[1]: Started cri-containerd-512046acf35d36e96bd3ab6e2c7849c465b99153f452dcaa9ea49a9ae81b137f.scope - libcontainer container 512046acf35d36e96bd3ab6e2c7849c465b99153f452dcaa9ea49a9ae81b137f. Jan 23 23:57:24.954201 containerd[2034]: time="2026-01-23T23:57:24.954130972Z" level=info msg="StartContainer for \"512046acf35d36e96bd3ab6e2c7849c465b99153f452dcaa9ea49a9ae81b137f\" returns successfully" Jan 23 23:57:24.967282 systemd[1]: cri-containerd-512046acf35d36e96bd3ab6e2c7849c465b99153f452dcaa9ea49a9ae81b137f.scope: Deactivated successfully. Jan 23 23:57:25.018243 containerd[2034]: time="2026-01-23T23:57:25.018147108Z" level=info msg="shim disconnected" id=512046acf35d36e96bd3ab6e2c7849c465b99153f452dcaa9ea49a9ae81b137f namespace=k8s.io Jan 23 23:57:25.018243 containerd[2034]: time="2026-01-23T23:57:25.018231343Z" level=warning msg="cleaning up after shim disconnected" id=512046acf35d36e96bd3ab6e2c7849c465b99153f452dcaa9ea49a9ae81b137f namespace=k8s.io Jan 23 23:57:25.018550 containerd[2034]: time="2026-01-23T23:57:25.018254310Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 23 23:57:25.492082 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-512046acf35d36e96bd3ab6e2c7849c465b99153f452dcaa9ea49a9ae81b137f-rootfs.mount: Deactivated successfully. Jan 23 23:57:25.790762 containerd[2034]: time="2026-01-23T23:57:25.790523757Z" level=info msg="CreateContainer within sandbox \"52e6157f14f965c40b08a5f2e45edbabaa5996348fd9eb864f0a64193cd918a5\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jan 23 23:57:25.821066 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3009332570.mount: Deactivated successfully. 
Jan 23 23:57:25.830322 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4200585563.mount: Deactivated successfully. Jan 23 23:57:25.845733 containerd[2034]: time="2026-01-23T23:57:25.845554687Z" level=info msg="CreateContainer within sandbox \"52e6157f14f965c40b08a5f2e45edbabaa5996348fd9eb864f0a64193cd918a5\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"5efa5f48328dcf7820105dbec591ab6c3ef90446d2883ba40df8d40a2d03154b\"" Jan 23 23:57:25.848037 containerd[2034]: time="2026-01-23T23:57:25.846659179Z" level=info msg="StartContainer for \"5efa5f48328dcf7820105dbec591ab6c3ef90446d2883ba40df8d40a2d03154b\"" Jan 23 23:57:25.891386 systemd[1]: Started cri-containerd-5efa5f48328dcf7820105dbec591ab6c3ef90446d2883ba40df8d40a2d03154b.scope - libcontainer container 5efa5f48328dcf7820105dbec591ab6c3ef90446d2883ba40df8d40a2d03154b. Jan 23 23:57:25.944241 systemd[1]: cri-containerd-5efa5f48328dcf7820105dbec591ab6c3ef90446d2883ba40df8d40a2d03154b.scope: Deactivated successfully. Jan 23 23:57:25.951559 containerd[2034]: time="2026-01-23T23:57:25.950663676Z" level=info msg="StartContainer for \"5efa5f48328dcf7820105dbec591ab6c3ef90446d2883ba40df8d40a2d03154b\" returns successfully" Jan 23 23:57:25.992487 containerd[2034]: time="2026-01-23T23:57:25.992402163Z" level=info msg="shim disconnected" id=5efa5f48328dcf7820105dbec591ab6c3ef90446d2883ba40df8d40a2d03154b namespace=k8s.io Jan 23 23:57:25.993181 containerd[2034]: time="2026-01-23T23:57:25.992683164Z" level=warning msg="cleaning up after shim disconnected" id=5efa5f48328dcf7820105dbec591ab6c3ef90446d2883ba40df8d40a2d03154b namespace=k8s.io Jan 23 23:57:25.993181 containerd[2034]: time="2026-01-23T23:57:25.992708604Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 23 23:57:26.798127 containerd[2034]: time="2026-01-23T23:57:26.798060273Z" level=info msg="CreateContainer within sandbox \"52e6157f14f965c40b08a5f2e45edbabaa5996348fd9eb864f0a64193cd918a5\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jan 23 23:57:26.851247 containerd[2034]: time="2026-01-23T23:57:26.850899363Z" level=info msg="CreateContainer within sandbox \"52e6157f14f965c40b08a5f2e45edbabaa5996348fd9eb864f0a64193cd918a5\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"737e8c7467333ab8031d07f1390a85ff239078067efc539b67b6f117142e8f59\"" Jan 23 23:57:26.852508 containerd[2034]: time="2026-01-23T23:57:26.852307595Z" level=info msg="StartContainer for \"737e8c7467333ab8031d07f1390a85ff239078067efc539b67b6f117142e8f59\"" Jan 23 23:57:26.916326 systemd[1]: Started cri-containerd-737e8c7467333ab8031d07f1390a85ff239078067efc539b67b6f117142e8f59.scope - libcontainer container 737e8c7467333ab8031d07f1390a85ff239078067efc539b67b6f117142e8f59. Jan 23 23:57:26.981270 containerd[2034]: time="2026-01-23T23:57:26.981150909Z" level=info msg="StartContainer for \"737e8c7467333ab8031d07f1390a85ff239078067efc539b67b6f117142e8f59\" returns successfully" Jan 23 23:57:27.183818 kubelet[3494]: I0123 23:57:27.183597 3494 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jan 23 23:57:27.253878 systemd[1]: Created slice kubepods-burstable-pod514c170a_7257_453f_894f_e1613f52e0a7.slice - libcontainer container kubepods-burstable-pod514c170a_7257_453f_894f_e1613f52e0a7.slice. Jan 23 23:57:27.268954 systemd[1]: Created slice kubepods-burstable-pod26579e5d_a97f_4f50_a3c4_f46c6417febf.slice - libcontainer container kubepods-burstable-pod26579e5d_a97f_4f50_a3c4_f46c6417febf.slice. 
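Between 23:57:22 and 23:57:27 the cilium-ttrwj sandbox runs its init containers one at a time: mount-cgroup, apply-sysctl-overwrites, mount-bpf-fs and clean-cilium-state are each created, started, and exit (their .scope deactivates and the shim is cleaned up), and only then does the long-running cilium-agent container start; the "shim disconnected" warnings belong to that teardown rather than to failures. One way to pull the run window of each short-lived container out of such a journal, sketched in Python under the assumption of a one-record-per-line export named node.log (helper names are invented):

    import re
    from datetime import datetime

    START = re.compile(r'StartContainer for \\?"(?P<cid>[0-9a-f]{64})\\?" returns successfully')
    STOP = re.compile(r'cri-containerd-(?P<cid>[0-9a-f]{64})\.scope: Deactivated successfully')
    STAMP = re.compile(r'^(?P<stamp>\w{3} +\d+ \d\d:\d\d:\d\d\.\d+)')

    def init_container_windows(path="node.log"):
        started, windows = {}, []
        with open(path) as handle:
            for line in handle:
                head = STAMP.match(line)
                if not head:
                    continue
                when = datetime.strptime(head.group("stamp"), "%b %d %H:%M:%S.%f")
                if (m := START.search(line)):
                    started[m.group("cid")] = when
                elif (m := STOP.search(line)) and m.group("cid") in started:
                    windows.append((m.group("cid")[:12], started.pop(m.group("cid")), when))
        return windows   # long-running containers never hit STOP, so only exited ones appear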
Jan 23 23:57:27.320695 kubelet[3494]: I0123 23:57:27.320526 3494 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lncz9\" (UniqueName: \"kubernetes.io/projected/26579e5d-a97f-4f50-a3c4-f46c6417febf-kube-api-access-lncz9\") pod \"coredns-668d6bf9bc-pzbh5\" (UID: \"26579e5d-a97f-4f50-a3c4-f46c6417febf\") " pod="kube-system/coredns-668d6bf9bc-pzbh5" Jan 23 23:57:27.320695 kubelet[3494]: I0123 23:57:27.320604 3494 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/514c170a-7257-453f-894f-e1613f52e0a7-config-volume\") pod \"coredns-668d6bf9bc-jrjjb\" (UID: \"514c170a-7257-453f-894f-e1613f52e0a7\") " pod="kube-system/coredns-668d6bf9bc-jrjjb" Jan 23 23:57:27.320695 kubelet[3494]: I0123 23:57:27.320645 3494 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8d9gv\" (UniqueName: \"kubernetes.io/projected/514c170a-7257-453f-894f-e1613f52e0a7-kube-api-access-8d9gv\") pod \"coredns-668d6bf9bc-jrjjb\" (UID: \"514c170a-7257-453f-894f-e1613f52e0a7\") " pod="kube-system/coredns-668d6bf9bc-jrjjb" Jan 23 23:57:27.321076 kubelet[3494]: I0123 23:57:27.320711 3494 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/26579e5d-a97f-4f50-a3c4-f46c6417febf-config-volume\") pod \"coredns-668d6bf9bc-pzbh5\" (UID: \"26579e5d-a97f-4f50-a3c4-f46c6417febf\") " pod="kube-system/coredns-668d6bf9bc-pzbh5" Jan 23 23:57:27.505574 systemd[1]: run-containerd-runc-k8s.io-737e8c7467333ab8031d07f1390a85ff239078067efc539b67b6f117142e8f59-runc.3bGCde.mount: Deactivated successfully. Jan 23 23:57:27.564438 containerd[2034]: time="2026-01-23T23:57:27.563893843Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-jrjjb,Uid:514c170a-7257-453f-894f-e1613f52e0a7,Namespace:kube-system,Attempt:0,}" Jan 23 23:57:27.580502 containerd[2034]: time="2026-01-23T23:57:27.580429887Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-pzbh5,Uid:26579e5d-a97f-4f50-a3c4-f46c6417febf,Namespace:kube-system,Attempt:0,}" Jan 23 23:57:30.042224 (udev-worker)[4296]: Network interface NamePolicy= disabled on kernel command line. Jan 23 23:57:30.049634 systemd-networkd[1917]: cilium_host: Link UP Jan 23 23:57:30.050452 systemd-networkd[1917]: cilium_net: Link UP Jan 23 23:57:30.050815 systemd-networkd[1917]: cilium_net: Gained carrier Jan 23 23:57:30.053541 systemd-networkd[1917]: cilium_host: Gained carrier Jan 23 23:57:30.054699 (udev-worker)[4329]: Network interface NamePolicy= disabled on kernel command line. 
Jan 23 23:57:30.236225 systemd-networkd[1917]: cilium_vxlan: Link UP Jan 23 23:57:30.236245 systemd-networkd[1917]: cilium_vxlan: Gained carrier Jan 23 23:57:30.838064 kernel: NET: Registered PF_ALG protocol family Jan 23 23:57:30.914345 systemd-networkd[1917]: cilium_net: Gained IPv6LL Jan 23 23:57:30.978355 systemd-networkd[1917]: cilium_host: Gained IPv6LL Jan 23 23:57:32.066310 systemd-networkd[1917]: cilium_vxlan: Gained IPv6LL Jan 23 23:57:32.185895 systemd-networkd[1917]: lxc_health: Link UP Jan 23 23:57:32.199531 systemd-networkd[1917]: lxc_health: Gained carrier Jan 23 23:57:32.289752 kubelet[3494]: I0123 23:57:32.289621 3494 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-ttrwj" podStartSLOduration=12.284677774 podStartE2EDuration="22.28959543s" podCreationTimestamp="2026-01-23 23:57:10 +0000 UTC" firstStartedPulling="2026-01-23 23:57:12.462421912 +0000 UTC m=+7.132478152" lastFinishedPulling="2026-01-23 23:57:22.46733958 +0000 UTC m=+17.137395808" observedRunningTime="2026-01-23 23:57:27.847796294 +0000 UTC m=+22.517852546" watchObservedRunningTime="2026-01-23 23:57:32.28959543 +0000 UTC m=+26.959651694" Jan 23 23:57:32.710574 systemd-networkd[1917]: lxcd99f28f5c79c: Link UP Jan 23 23:57:32.720072 kernel: eth0: renamed from tmp5535a Jan 23 23:57:32.731224 systemd-networkd[1917]: lxcd99f28f5c79c: Gained carrier Jan 23 23:57:32.763881 (udev-worker)[4662]: Network interface NamePolicy= disabled on kernel command line. Jan 23 23:57:32.765092 systemd-networkd[1917]: lxcb375961a795c: Link UP Jan 23 23:57:32.772735 kernel: eth0: renamed from tmp8ca28 Jan 23 23:57:32.781808 systemd-networkd[1917]: lxcb375961a795c: Gained carrier Jan 23 23:57:34.242329 systemd-networkd[1917]: lxc_health: Gained IPv6LL Jan 23 23:57:34.244253 systemd-networkd[1917]: lxcb375961a795c: Gained IPv6LL Jan 23 23:57:34.754285 systemd-networkd[1917]: lxcd99f28f5c79c: Gained IPv6LL Jan 23 23:57:37.581403 ntpd[2000]: Listen normally on 7 cilium_host 192.168.0.145:123 Jan 23 23:57:37.582289 ntpd[2000]: 23 Jan 23:57:37 ntpd[2000]: Listen normally on 7 cilium_host 192.168.0.145:123 Jan 23 23:57:37.582289 ntpd[2000]: 23 Jan 23:57:37 ntpd[2000]: Listen normally on 8 cilium_net [fe80::8c3a:21ff:fea5:76d7%4]:123 Jan 23 23:57:37.582289 ntpd[2000]: 23 Jan 23:57:37 ntpd[2000]: Listen normally on 9 cilium_host [fe80::c053:1fff:fe0f:a37f%5]:123 Jan 23 23:57:37.582289 ntpd[2000]: 23 Jan 23:57:37 ntpd[2000]: Listen normally on 10 cilium_vxlan [fe80::c4a4:3ff:fe6d:5fa5%6]:123 Jan 23 23:57:37.582289 ntpd[2000]: 23 Jan 23:57:37 ntpd[2000]: Listen normally on 11 lxc_health [fe80::64f9:74ff:fe5d:ee71%8]:123 Jan 23 23:57:37.582289 ntpd[2000]: 23 Jan 23:57:37 ntpd[2000]: Listen normally on 12 lxcd99f28f5c79c [fe80::3c1b:17ff:fe3e:7bb%10]:123 Jan 23 23:57:37.582289 ntpd[2000]: 23 Jan 23:57:37 ntpd[2000]: Listen normally on 13 lxcb375961a795c [fe80::e0e5:c2ff:fe71:12f2%12]:123 Jan 23 23:57:37.581542 ntpd[2000]: Listen normally on 8 cilium_net [fe80::8c3a:21ff:fea5:76d7%4]:123 Jan 23 23:57:37.581646 ntpd[2000]: Listen normally on 9 cilium_host [fe80::c053:1fff:fe0f:a37f%5]:123 Jan 23 23:57:37.581726 ntpd[2000]: Listen normally on 10 cilium_vxlan [fe80::c4a4:3ff:fe6d:5fa5%6]:123 Jan 23 23:57:37.581796 ntpd[2000]: Listen normally on 11 lxc_health [fe80::64f9:74ff:fe5d:ee71%8]:123 Jan 23 23:57:37.581863 ntpd[2000]: Listen normally on 12 lxcd99f28f5c79c [fe80::3c1b:17ff:fe3e:7bb%10]:123 Jan 23 23:57:37.581934 ntpd[2000]: Listen normally on 13 lxcb375961a795c [fe80::e0e5:c2ff:fe71:12f2%12]:123 Jan 23 
23:57:41.225831 containerd[2034]: time="2026-01-23T23:57:41.225577383Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 23 23:57:41.227418 containerd[2034]: time="2026-01-23T23:57:41.226527153Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 23 23:57:41.227418 containerd[2034]: time="2026-01-23T23:57:41.226610103Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 23 23:57:41.227418 containerd[2034]: time="2026-01-23T23:57:41.226791213Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 23 23:57:41.274397 containerd[2034]: time="2026-01-23T23:57:41.273735178Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 23 23:57:41.274397 containerd[2034]: time="2026-01-23T23:57:41.273975778Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 23 23:57:41.275524 containerd[2034]: time="2026-01-23T23:57:41.275236480Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 23 23:57:41.275524 containerd[2034]: time="2026-01-23T23:57:41.275424818Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 23 23:57:41.331358 systemd[1]: Started cri-containerd-5535a40b4ef07017196632f58dae9a62559c48e898a1867e9d48eb5aa282eef3.scope - libcontainer container 5535a40b4ef07017196632f58dae9a62559c48e898a1867e9d48eb5aa282eef3. Jan 23 23:57:41.341285 systemd[1]: Started cri-containerd-8ca28cc1a784b713d41483eafffb4d1e8f4ba91170479ba4fc97d627b26b5eb7.scope - libcontainer container 8ca28cc1a784b713d41483eafffb4d1e8f4ba91170479ba4fc97d627b26b5eb7. 
Jan 23 23:57:41.452370 containerd[2034]: time="2026-01-23T23:57:41.452302638Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-pzbh5,Uid:26579e5d-a97f-4f50-a3c4-f46c6417febf,Namespace:kube-system,Attempt:0,} returns sandbox id \"5535a40b4ef07017196632f58dae9a62559c48e898a1867e9d48eb5aa282eef3\"" Jan 23 23:57:41.463339 containerd[2034]: time="2026-01-23T23:57:41.462576256Z" level=info msg="CreateContainer within sandbox \"5535a40b4ef07017196632f58dae9a62559c48e898a1867e9d48eb5aa282eef3\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 23 23:57:41.503348 containerd[2034]: time="2026-01-23T23:57:41.502866364Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-jrjjb,Uid:514c170a-7257-453f-894f-e1613f52e0a7,Namespace:kube-system,Attempt:0,} returns sandbox id \"8ca28cc1a784b713d41483eafffb4d1e8f4ba91170479ba4fc97d627b26b5eb7\"" Jan 23 23:57:41.522919 containerd[2034]: time="2026-01-23T23:57:41.522755311Z" level=info msg="CreateContainer within sandbox \"5535a40b4ef07017196632f58dae9a62559c48e898a1867e9d48eb5aa282eef3\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"aceedc2a263e6a00e5412291a70b7f930977842a01c7b798caecaeadfff976b4\"" Jan 23 23:57:41.528167 containerd[2034]: time="2026-01-23T23:57:41.526886000Z" level=info msg="CreateContainer within sandbox \"8ca28cc1a784b713d41483eafffb4d1e8f4ba91170479ba4fc97d627b26b5eb7\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 23 23:57:41.530736 containerd[2034]: time="2026-01-23T23:57:41.529572534Z" level=info msg="StartContainer for \"aceedc2a263e6a00e5412291a70b7f930977842a01c7b798caecaeadfff976b4\"" Jan 23 23:57:41.572945 containerd[2034]: time="2026-01-23T23:57:41.572871933Z" level=info msg="CreateContainer within sandbox \"8ca28cc1a784b713d41483eafffb4d1e8f4ba91170479ba4fc97d627b26b5eb7\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"6314d7255fecd01d08abcc5da7fb3ed9a2fe2d70538f4d08049a1e6dce1f6c85\"" Jan 23 23:57:41.574254 containerd[2034]: time="2026-01-23T23:57:41.574188079Z" level=info msg="StartContainer for \"6314d7255fecd01d08abcc5da7fb3ed9a2fe2d70538f4d08049a1e6dce1f6c85\"" Jan 23 23:57:41.643966 systemd[1]: Started cri-containerd-aceedc2a263e6a00e5412291a70b7f930977842a01c7b798caecaeadfff976b4.scope - libcontainer container aceedc2a263e6a00e5412291a70b7f930977842a01c7b798caecaeadfff976b4. Jan 23 23:57:41.663361 systemd[1]: Started cri-containerd-6314d7255fecd01d08abcc5da7fb3ed9a2fe2d70538f4d08049a1e6dce1f6c85.scope - libcontainer container 6314d7255fecd01d08abcc5da7fb3ed9a2fe2d70538f4d08049a1e6dce1f6c85. 
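A small cross-reference worth noting: the kernel's "eth0: renamed from tmp5535a" and "eth0: renamed from tmp8ca28" lines back at 23:57:32 carry what look like the first five hex characters of the two coredns sandbox ids reported here (5535a40b... for coredns-668d6bf9bc-pzbh5 and 8ca28cc1... for coredns-668d6bf9bc-jrjjb), which makes it possible to tie each veth rename to its pod. A toy Python illustration using only values copied from this log:

    sandboxes = {
        "5535a40b4ef07017196632f58dae9a62559c48e898a1867e9d48eb5aa282eef3": "coredns-668d6bf9bc-pzbh5",
        "8ca28cc1a784b713d41483eafffb4d1e8f4ba91170479ba4fc97d627b26b5eb7": "coredns-668d6bf9bc-jrjjb",
    }
    renamed = ["tmp5535a", "tmp8ca28"]   # kernel: "eth0: renamed from tmpXXXXX"

    for tmp in renamed:
        owner = next((pod for sid, pod in sandboxes.items() if sid.startswith(tmp[3:])), None)
        print(tmp, "->", owner or "unmatched")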
Jan 23 23:57:41.750664 containerd[2034]: time="2026-01-23T23:57:41.750479178Z" level=info msg="StartContainer for \"aceedc2a263e6a00e5412291a70b7f930977842a01c7b798caecaeadfff976b4\" returns successfully" Jan 23 23:57:41.775884 containerd[2034]: time="2026-01-23T23:57:41.775707326Z" level=info msg="StartContainer for \"6314d7255fecd01d08abcc5da7fb3ed9a2fe2d70538f4d08049a1e6dce1f6c85\" returns successfully" Jan 23 23:57:41.961945 kubelet[3494]: I0123 23:57:41.961824 3494 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-jrjjb" podStartSLOduration=31.961799379 podStartE2EDuration="31.961799379s" podCreationTimestamp="2026-01-23 23:57:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 23:57:41.960898388 +0000 UTC m=+36.630954640" watchObservedRunningTime="2026-01-23 23:57:41.961799379 +0000 UTC m=+36.631855619" Jan 23 23:57:41.963407 kubelet[3494]: I0123 23:57:41.962168 3494 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-pzbh5" podStartSLOduration=31.962157013 podStartE2EDuration="31.962157013s" podCreationTimestamp="2026-01-23 23:57:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 23:57:41.92088586 +0000 UTC m=+36.590942124" watchObservedRunningTime="2026-01-23 23:57:41.962157013 +0000 UTC m=+36.632213265" Jan 23 23:57:42.241136 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3618265946.mount: Deactivated successfully. Jan 23 23:57:47.524716 systemd[1]: Started sshd@7-172.31.30.184:22-4.153.228.146:60776.service - OpenSSH per-connection server daemon (4.153.228.146:60776). Jan 23 23:57:48.039979 sshd[4873]: Accepted publickey for core from 4.153.228.146 port 60776 ssh2: RSA SHA256:5AacvNrSqCkKxn2Zg0NJQDqu4zeDYsnVAvlYY2DAYUI Jan 23 23:57:48.042824 sshd[4873]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:57:48.051802 systemd-logind[2007]: New session 8 of user core. Jan 23 23:57:48.059283 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 23 23:57:48.532219 sshd[4873]: pam_unix(sshd:session): session closed for user core Jan 23 23:57:48.538237 systemd[1]: sshd@7-172.31.30.184:22-4.153.228.146:60776.service: Deactivated successfully. Jan 23 23:57:48.545716 systemd[1]: session-8.scope: Deactivated successfully. Jan 23 23:57:48.549612 systemd-logind[2007]: Session 8 logged out. Waiting for processes to exit. Jan 23 23:57:48.553382 systemd-logind[2007]: Removed session 8. Jan 23 23:57:53.648527 systemd[1]: Started sshd@8-172.31.30.184:22-4.153.228.146:60790.service - OpenSSH per-connection server daemon (4.153.228.146:60790). Jan 23 23:57:54.176756 sshd[4887]: Accepted publickey for core from 4.153.228.146 port 60790 ssh2: RSA SHA256:5AacvNrSqCkKxn2Zg0NJQDqu4zeDYsnVAvlYY2DAYUI Jan 23 23:57:54.179449 sshd[4887]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:57:54.188406 systemd-logind[2007]: New session 9 of user core. Jan 23 23:57:54.194280 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 23 23:57:54.673614 sshd[4887]: pam_unix(sshd:session): session closed for user core Jan 23 23:57:54.680205 systemd-logind[2007]: Session 9 logged out. Waiting for processes to exit. 
Jan 23 23:57:54.681860 systemd[1]: sshd@8-172.31.30.184:22-4.153.228.146:60790.service: Deactivated successfully. Jan 23 23:57:54.686513 systemd[1]: session-9.scope: Deactivated successfully. Jan 23 23:57:54.692452 systemd-logind[2007]: Removed session 9. Jan 23 23:57:59.769572 systemd[1]: Started sshd@9-172.31.30.184:22-4.153.228.146:34798.service - OpenSSH per-connection server daemon (4.153.228.146:34798). Jan 23 23:58:00.282184 sshd[4901]: Accepted publickey for core from 4.153.228.146 port 34798 ssh2: RSA SHA256:5AacvNrSqCkKxn2Zg0NJQDqu4zeDYsnVAvlYY2DAYUI Jan 23 23:58:00.284960 sshd[4901]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:58:00.294543 systemd-logind[2007]: New session 10 of user core. Jan 23 23:58:00.302344 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 23 23:58:00.757582 sshd[4901]: pam_unix(sshd:session): session closed for user core Jan 23 23:58:00.764896 systemd[1]: sshd@9-172.31.30.184:22-4.153.228.146:34798.service: Deactivated successfully. Jan 23 23:58:00.769998 systemd[1]: session-10.scope: Deactivated successfully. Jan 23 23:58:00.772600 systemd-logind[2007]: Session 10 logged out. Waiting for processes to exit. Jan 23 23:58:00.774931 systemd-logind[2007]: Removed session 10. Jan 23 23:58:05.853515 systemd[1]: Started sshd@10-172.31.30.184:22-4.153.228.146:45950.service - OpenSSH per-connection server daemon (4.153.228.146:45950). Jan 23 23:58:06.374180 sshd[4917]: Accepted publickey for core from 4.153.228.146 port 45950 ssh2: RSA SHA256:5AacvNrSqCkKxn2Zg0NJQDqu4zeDYsnVAvlYY2DAYUI Jan 23 23:58:06.376802 sshd[4917]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:58:06.385454 systemd-logind[2007]: New session 11 of user core. Jan 23 23:58:06.393324 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 23 23:58:06.856395 sshd[4917]: pam_unix(sshd:session): session closed for user core Jan 23 23:58:06.861987 systemd-logind[2007]: Session 11 logged out. Waiting for processes to exit. Jan 23 23:58:06.863361 systemd[1]: sshd@10-172.31.30.184:22-4.153.228.146:45950.service: Deactivated successfully. Jan 23 23:58:06.868945 systemd[1]: session-11.scope: Deactivated successfully. Jan 23 23:58:06.873051 systemd-logind[2007]: Removed session 11. Jan 23 23:58:06.953521 systemd[1]: Started sshd@11-172.31.30.184:22-4.153.228.146:45954.service - OpenSSH per-connection server daemon (4.153.228.146:45954). Jan 23 23:58:07.448969 sshd[4931]: Accepted publickey for core from 4.153.228.146 port 45954 ssh2: RSA SHA256:5AacvNrSqCkKxn2Zg0NJQDqu4zeDYsnVAvlYY2DAYUI Jan 23 23:58:07.451853 sshd[4931]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:58:07.460693 systemd-logind[2007]: New session 12 of user core. Jan 23 23:58:07.469282 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 23 23:58:08.012373 sshd[4931]: pam_unix(sshd:session): session closed for user core Jan 23 23:58:08.018969 systemd[1]: sshd@11-172.31.30.184:22-4.153.228.146:45954.service: Deactivated successfully. Jan 23 23:58:08.022567 systemd[1]: session-12.scope: Deactivated successfully. Jan 23 23:58:08.024967 systemd-logind[2007]: Session 12 logged out. Waiting for processes to exit. Jan 23 23:58:08.026999 systemd-logind[2007]: Removed session 12. Jan 23 23:58:08.108698 systemd[1]: Started sshd@12-172.31.30.184:22-4.153.228.146:45966.service - OpenSSH per-connection server daemon (4.153.228.146:45966). 
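From here on the log settles into a steady pattern of short SSH sessions for user core: each incoming connection gets its own per-connection unit (the log calls it the OpenSSH per-connection server daemon) whose instance name encodes a connection counter plus the local and remote endpoints, logind allocates the next session-N.scope, and usually within a second or two both the scope and the per-connection unit are torn down again. Pulling the endpoints apart from one of the unit names above (string copied from the log; removeprefix/removesuffix need Python 3.9+):

    unit = "sshd@12-172.31.30.184:22-4.153.228.146:45966.service"
    instance = unit.removeprefix("sshd@").removesuffix(".service")
    counter, local, remote = instance.split("-")
    print(counter, local, remote)   # 12  172.31.30.184:22  4.153.228.146:45966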
Jan 23 23:58:08.614158 sshd[4942]: Accepted publickey for core from 4.153.228.146 port 45966 ssh2: RSA SHA256:5AacvNrSqCkKxn2Zg0NJQDqu4zeDYsnVAvlYY2DAYUI Jan 23 23:58:08.616815 sshd[4942]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:58:08.625562 systemd-logind[2007]: New session 13 of user core. Jan 23 23:58:08.631360 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 23 23:58:09.084174 sshd[4942]: pam_unix(sshd:session): session closed for user core Jan 23 23:58:09.090466 systemd[1]: sshd@12-172.31.30.184:22-4.153.228.146:45966.service: Deactivated successfully. Jan 23 23:58:09.096903 systemd[1]: session-13.scope: Deactivated successfully. Jan 23 23:58:09.099114 systemd-logind[2007]: Session 13 logged out. Waiting for processes to exit. Jan 23 23:58:09.101558 systemd-logind[2007]: Removed session 13. Jan 23 23:58:14.183550 systemd[1]: Started sshd@13-172.31.30.184:22-4.153.228.146:45980.service - OpenSSH per-connection server daemon (4.153.228.146:45980). Jan 23 23:58:14.691780 sshd[4957]: Accepted publickey for core from 4.153.228.146 port 45980 ssh2: RSA SHA256:5AacvNrSqCkKxn2Zg0NJQDqu4zeDYsnVAvlYY2DAYUI Jan 23 23:58:14.693967 sshd[4957]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:58:14.704743 systemd-logind[2007]: New session 14 of user core. Jan 23 23:58:14.711294 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 23 23:58:15.166557 sshd[4957]: pam_unix(sshd:session): session closed for user core Jan 23 23:58:15.173267 systemd[1]: sshd@13-172.31.30.184:22-4.153.228.146:45980.service: Deactivated successfully. Jan 23 23:58:15.177927 systemd[1]: session-14.scope: Deactivated successfully. Jan 23 23:58:15.179617 systemd-logind[2007]: Session 14 logged out. Waiting for processes to exit. Jan 23 23:58:15.181366 systemd-logind[2007]: Removed session 14. Jan 23 23:58:20.272751 systemd[1]: Started sshd@14-172.31.30.184:22-4.153.228.146:47892.service - OpenSSH per-connection server daemon (4.153.228.146:47892). Jan 23 23:58:20.769111 sshd[4969]: Accepted publickey for core from 4.153.228.146 port 47892 ssh2: RSA SHA256:5AacvNrSqCkKxn2Zg0NJQDqu4zeDYsnVAvlYY2DAYUI Jan 23 23:58:20.771874 sshd[4969]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:58:20.780998 systemd-logind[2007]: New session 15 of user core. Jan 23 23:58:20.785306 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 23 23:58:21.243161 sshd[4969]: pam_unix(sshd:session): session closed for user core Jan 23 23:58:21.251396 systemd-logind[2007]: Session 15 logged out. Waiting for processes to exit. Jan 23 23:58:21.252881 systemd[1]: sshd@14-172.31.30.184:22-4.153.228.146:47892.service: Deactivated successfully. Jan 23 23:58:21.257606 systemd[1]: session-15.scope: Deactivated successfully. Jan 23 23:58:21.261669 systemd-logind[2007]: Removed session 15. Jan 23 23:58:26.340581 systemd[1]: Started sshd@15-172.31.30.184:22-4.153.228.146:49282.service - OpenSSH per-connection server daemon (4.153.228.146:49282). Jan 23 23:58:26.841738 sshd[4983]: Accepted publickey for core from 4.153.228.146 port 49282 ssh2: RSA SHA256:5AacvNrSqCkKxn2Zg0NJQDqu4zeDYsnVAvlYY2DAYUI Jan 23 23:58:26.844436 sshd[4983]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:58:26.853315 systemd-logind[2007]: New session 16 of user core. Jan 23 23:58:26.860277 systemd[1]: Started session-16.scope - Session 16 of User core. 
Jan 23 23:58:27.312086 sshd[4983]: pam_unix(sshd:session): session closed for user core Jan 23 23:58:27.322453 systemd-logind[2007]: Session 16 logged out. Waiting for processes to exit. Jan 23 23:58:27.324086 systemd[1]: sshd@15-172.31.30.184:22-4.153.228.146:49282.service: Deactivated successfully. Jan 23 23:58:27.330252 systemd[1]: session-16.scope: Deactivated successfully. Jan 23 23:58:27.332155 systemd-logind[2007]: Removed session 16. Jan 23 23:58:27.401729 systemd[1]: Started sshd@16-172.31.30.184:22-4.153.228.146:49288.service - OpenSSH per-connection server daemon (4.153.228.146:49288). Jan 23 23:58:27.917325 sshd[4996]: Accepted publickey for core from 4.153.228.146 port 49288 ssh2: RSA SHA256:5AacvNrSqCkKxn2Zg0NJQDqu4zeDYsnVAvlYY2DAYUI Jan 23 23:58:27.920046 sshd[4996]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:58:27.931903 systemd-logind[2007]: New session 17 of user core. Jan 23 23:58:27.939270 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 23 23:58:28.471121 sshd[4996]: pam_unix(sshd:session): session closed for user core Jan 23 23:58:28.477838 systemd[1]: sshd@16-172.31.30.184:22-4.153.228.146:49288.service: Deactivated successfully. Jan 23 23:58:28.484390 systemd[1]: session-17.scope: Deactivated successfully. Jan 23 23:58:28.486971 systemd-logind[2007]: Session 17 logged out. Waiting for processes to exit. Jan 23 23:58:28.489269 systemd-logind[2007]: Removed session 17. Jan 23 23:58:28.576526 systemd[1]: Started sshd@17-172.31.30.184:22-4.153.228.146:49300.service - OpenSSH per-connection server daemon (4.153.228.146:49300). Jan 23 23:58:29.114228 sshd[5007]: Accepted publickey for core from 4.153.228.146 port 49300 ssh2: RSA SHA256:5AacvNrSqCkKxn2Zg0NJQDqu4zeDYsnVAvlYY2DAYUI Jan 23 23:58:29.116849 sshd[5007]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:58:29.125352 systemd-logind[2007]: New session 18 of user core. Jan 23 23:58:29.131311 systemd[1]: Started session-18.scope - Session 18 of User core. Jan 23 23:58:30.292519 sshd[5007]: pam_unix(sshd:session): session closed for user core Jan 23 23:58:30.299832 systemd-logind[2007]: Session 18 logged out. Waiting for processes to exit. Jan 23 23:58:30.301870 systemd[1]: sshd@17-172.31.30.184:22-4.153.228.146:49300.service: Deactivated successfully. Jan 23 23:58:30.305744 systemd[1]: session-18.scope: Deactivated successfully. Jan 23 23:58:30.308616 systemd-logind[2007]: Removed session 18. Jan 23 23:58:30.384477 systemd[1]: Started sshd@18-172.31.30.184:22-4.153.228.146:49308.service - OpenSSH per-connection server daemon (4.153.228.146:49308). Jan 23 23:58:30.874850 sshd[5025]: Accepted publickey for core from 4.153.228.146 port 49308 ssh2: RSA SHA256:5AacvNrSqCkKxn2Zg0NJQDqu4zeDYsnVAvlYY2DAYUI Jan 23 23:58:30.877542 sshd[5025]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:58:30.885949 systemd-logind[2007]: New session 19 of user core. Jan 23 23:58:30.897306 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 23 23:58:31.597355 sshd[5025]: pam_unix(sshd:session): session closed for user core Jan 23 23:58:31.604686 systemd[1]: sshd@18-172.31.30.184:22-4.153.228.146:49308.service: Deactivated successfully. Jan 23 23:58:31.609556 systemd[1]: session-19.scope: Deactivated successfully. Jan 23 23:58:31.612645 systemd-logind[2007]: Session 19 logged out. Waiting for processes to exit. Jan 23 23:58:31.615382 systemd-logind[2007]: Removed session 19. 
Jan 23 23:58:31.699659 systemd[1]: Started sshd@19-172.31.30.184:22-4.153.228.146:49316.service - OpenSSH per-connection server daemon (4.153.228.146:49316). Jan 23 23:58:32.204097 sshd[5036]: Accepted publickey for core from 4.153.228.146 port 49316 ssh2: RSA SHA256:5AacvNrSqCkKxn2Zg0NJQDqu4zeDYsnVAvlYY2DAYUI Jan 23 23:58:32.206896 sshd[5036]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:58:32.217471 systemd-logind[2007]: New session 20 of user core. Jan 23 23:58:32.228325 systemd[1]: Started session-20.scope - Session 20 of User core. Jan 23 23:58:32.671366 sshd[5036]: pam_unix(sshd:session): session closed for user core Jan 23 23:58:32.678100 systemd[1]: sshd@19-172.31.30.184:22-4.153.228.146:49316.service: Deactivated successfully. Jan 23 23:58:32.681753 systemd[1]: session-20.scope: Deactivated successfully. Jan 23 23:58:32.684705 systemd-logind[2007]: Session 20 logged out. Waiting for processes to exit. Jan 23 23:58:32.688698 systemd-logind[2007]: Removed session 20. Jan 23 23:58:37.787554 systemd[1]: Started sshd@20-172.31.30.184:22-4.153.228.146:37520.service - OpenSSH per-connection server daemon (4.153.228.146:37520). Jan 23 23:58:38.326330 sshd[5051]: Accepted publickey for core from 4.153.228.146 port 37520 ssh2: RSA SHA256:5AacvNrSqCkKxn2Zg0NJQDqu4zeDYsnVAvlYY2DAYUI Jan 23 23:58:38.328965 sshd[5051]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:58:38.337027 systemd-logind[2007]: New session 21 of user core. Jan 23 23:58:38.346288 systemd[1]: Started session-21.scope - Session 21 of User core. Jan 23 23:58:38.828415 sshd[5051]: pam_unix(sshd:session): session closed for user core Jan 23 23:58:38.834750 systemd[1]: sshd@20-172.31.30.184:22-4.153.228.146:37520.service: Deactivated successfully. Jan 23 23:58:38.835193 systemd-logind[2007]: Session 21 logged out. Waiting for processes to exit. Jan 23 23:58:38.840180 systemd[1]: session-21.scope: Deactivated successfully. Jan 23 23:58:38.843818 systemd-logind[2007]: Removed session 21. Jan 23 23:58:43.931525 systemd[1]: Started sshd@21-172.31.30.184:22-4.153.228.146:37534.service - OpenSSH per-connection server daemon (4.153.228.146:37534). Jan 23 23:58:44.460684 sshd[5066]: Accepted publickey for core from 4.153.228.146 port 37534 ssh2: RSA SHA256:5AacvNrSqCkKxn2Zg0NJQDqu4zeDYsnVAvlYY2DAYUI Jan 23 23:58:44.463337 sshd[5066]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:58:44.473162 systemd-logind[2007]: New session 22 of user core. Jan 23 23:58:44.481298 systemd[1]: Started session-22.scope - Session 22 of User core. Jan 23 23:58:44.951921 sshd[5066]: pam_unix(sshd:session): session closed for user core Jan 23 23:58:44.959124 systemd[1]: sshd@21-172.31.30.184:22-4.153.228.146:37534.service: Deactivated successfully. Jan 23 23:58:44.962934 systemd[1]: session-22.scope: Deactivated successfully. Jan 23 23:58:44.964912 systemd-logind[2007]: Session 22 logged out. Waiting for processes to exit. Jan 23 23:58:44.967890 systemd-logind[2007]: Removed session 22. Jan 23 23:58:50.043726 systemd[1]: Started sshd@22-172.31.30.184:22-4.153.228.146:54764.service - OpenSSH per-connection server daemon (4.153.228.146:54764). 
Jan 23 23:58:50.554170 sshd[5079]: Accepted publickey for core from 4.153.228.146 port 54764 ssh2: RSA SHA256:5AacvNrSqCkKxn2Zg0NJQDqu4zeDYsnVAvlYY2DAYUI Jan 23 23:58:50.556832 sshd[5079]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:58:50.565439 systemd-logind[2007]: New session 23 of user core. Jan 23 23:58:50.573292 systemd[1]: Started session-23.scope - Session 23 of User core. Jan 23 23:58:51.019203 sshd[5079]: pam_unix(sshd:session): session closed for user core Jan 23 23:58:51.025649 systemd[1]: sshd@22-172.31.30.184:22-4.153.228.146:54764.service: Deactivated successfully. Jan 23 23:58:51.030849 systemd[1]: session-23.scope: Deactivated successfully. Jan 23 23:58:51.032408 systemd-logind[2007]: Session 23 logged out. Waiting for processes to exit. Jan 23 23:58:51.034199 systemd-logind[2007]: Removed session 23. Jan 23 23:58:51.125546 systemd[1]: Started sshd@23-172.31.30.184:22-4.153.228.146:54778.service - OpenSSH per-connection server daemon (4.153.228.146:54778). Jan 23 23:58:51.655809 sshd[5092]: Accepted publickey for core from 4.153.228.146 port 54778 ssh2: RSA SHA256:5AacvNrSqCkKxn2Zg0NJQDqu4zeDYsnVAvlYY2DAYUI Jan 23 23:58:51.658526 sshd[5092]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:58:51.668100 systemd-logind[2007]: New session 24 of user core. Jan 23 23:58:51.670545 systemd[1]: Started session-24.scope - Session 24 of User core. Jan 23 23:58:54.570563 containerd[2034]: time="2026-01-23T23:58:54.570488750Z" level=info msg="StopContainer for \"2c196dc57edfae8a5566ea476cd286320cb2683e2b2e51c5f1983ac731950734\" with timeout 30 (s)" Jan 23 23:58:54.572829 containerd[2034]: time="2026-01-23T23:58:54.572617473Z" level=info msg="Stop container \"2c196dc57edfae8a5566ea476cd286320cb2683e2b2e51c5f1983ac731950734\" with signal terminated" Jan 23 23:58:54.601580 systemd[1]: cri-containerd-2c196dc57edfae8a5566ea476cd286320cb2683e2b2e51c5f1983ac731950734.scope: Deactivated successfully. Jan 23 23:58:54.606258 containerd[2034]: time="2026-01-23T23:58:54.606179154Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 23 23:58:54.633642 containerd[2034]: time="2026-01-23T23:58:54.633396864Z" level=info msg="StopContainer for \"737e8c7467333ab8031d07f1390a85ff239078067efc539b67b6f117142e8f59\" with timeout 2 (s)" Jan 23 23:58:54.634646 containerd[2034]: time="2026-01-23T23:58:54.634299956Z" level=info msg="Stop container \"737e8c7467333ab8031d07f1390a85ff239078067efc539b67b6f117142e8f59\" with signal terminated" Jan 23 23:58:54.662606 systemd-networkd[1917]: lxc_health: Link DOWN Jan 23 23:58:54.662628 systemd-networkd[1917]: lxc_health: Lost carrier Jan 23 23:58:54.677933 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2c196dc57edfae8a5566ea476cd286320cb2683e2b2e51c5f1983ac731950734-rootfs.mount: Deactivated successfully. Jan 23 23:58:54.692582 systemd[1]: cri-containerd-737e8c7467333ab8031d07f1390a85ff239078067efc539b67b6f117142e8f59.scope: Deactivated successfully. Jan 23 23:58:54.693471 systemd[1]: cri-containerd-737e8c7467333ab8031d07f1390a85ff239078067efc539b67b6f117142e8f59.scope: Consumed 14.655s CPU time. 
Jan 23 23:58:54.698358 containerd[2034]: time="2026-01-23T23:58:54.698106263Z" level=info msg="shim disconnected" id=2c196dc57edfae8a5566ea476cd286320cb2683e2b2e51c5f1983ac731950734 namespace=k8s.io Jan 23 23:58:54.698974 containerd[2034]: time="2026-01-23T23:58:54.698472662Z" level=warning msg="cleaning up after shim disconnected" id=2c196dc57edfae8a5566ea476cd286320cb2683e2b2e51c5f1983ac731950734 namespace=k8s.io Jan 23 23:58:54.698974 containerd[2034]: time="2026-01-23T23:58:54.698955172Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 23 23:58:54.734658 containerd[2034]: time="2026-01-23T23:58:54.734458583Z" level=info msg="StopContainer for \"2c196dc57edfae8a5566ea476cd286320cb2683e2b2e51c5f1983ac731950734\" returns successfully" Jan 23 23:58:54.735881 containerd[2034]: time="2026-01-23T23:58:54.735590712Z" level=info msg="StopPodSandbox for \"d3e8dda1095c8dc99943833bb6ad4e842f0d826eaba27b8da9052f331c5f50d1\"" Jan 23 23:58:54.736163 containerd[2034]: time="2026-01-23T23:58:54.735832705Z" level=info msg="Container to stop \"2c196dc57edfae8a5566ea476cd286320cb2683e2b2e51c5f1983ac731950734\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 23 23:58:54.740211 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d3e8dda1095c8dc99943833bb6ad4e842f0d826eaba27b8da9052f331c5f50d1-shm.mount: Deactivated successfully. Jan 23 23:58:54.755581 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-737e8c7467333ab8031d07f1390a85ff239078067efc539b67b6f117142e8f59-rootfs.mount: Deactivated successfully. Jan 23 23:58:54.759571 systemd[1]: cri-containerd-d3e8dda1095c8dc99943833bb6ad4e842f0d826eaba27b8da9052f331c5f50d1.scope: Deactivated successfully. Jan 23 23:58:54.772268 containerd[2034]: time="2026-01-23T23:58:54.772193201Z" level=info msg="shim disconnected" id=737e8c7467333ab8031d07f1390a85ff239078067efc539b67b6f117142e8f59 namespace=k8s.io Jan 23 23:58:54.772689 containerd[2034]: time="2026-01-23T23:58:54.772653631Z" level=warning msg="cleaning up after shim disconnected" id=737e8c7467333ab8031d07f1390a85ff239078067efc539b67b6f117142e8f59 namespace=k8s.io Jan 23 23:58:54.772855 containerd[2034]: time="2026-01-23T23:58:54.772826926Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 23 23:58:54.807074 containerd[2034]: time="2026-01-23T23:58:54.807022775Z" level=info msg="StopContainer for \"737e8c7467333ab8031d07f1390a85ff239078067efc539b67b6f117142e8f59\" returns successfully" Jan 23 23:58:54.808490 containerd[2034]: time="2026-01-23T23:58:54.808430046Z" level=info msg="StopPodSandbox for \"52e6157f14f965c40b08a5f2e45edbabaa5996348fd9eb864f0a64193cd918a5\"" Jan 23 23:58:54.808648 containerd[2034]: time="2026-01-23T23:58:54.808505972Z" level=info msg="Container to stop \"512046acf35d36e96bd3ab6e2c7849c465b99153f452dcaa9ea49a9ae81b137f\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 23 23:58:54.808648 containerd[2034]: time="2026-01-23T23:58:54.808533286Z" level=info msg="Container to stop \"737e8c7467333ab8031d07f1390a85ff239078067efc539b67b6f117142e8f59\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 23 23:58:54.808648 containerd[2034]: time="2026-01-23T23:58:54.808556458Z" level=info msg="Container to stop \"208b584bbf71dd335a0aadc0ed09136f3e7bafb631f70e2b1699c5263d60c3fb\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 23 23:58:54.808648 containerd[2034]: time="2026-01-23T23:58:54.808585716Z" level=info msg="Container to stop 
\"5efa5f48328dcf7820105dbec591ab6c3ef90446d2883ba40df8d40a2d03154b\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 23 23:58:54.808648 containerd[2034]: time="2026-01-23T23:58:54.808608300Z" level=info msg="Container to stop \"b5eae899eabd8c27b30cf799ab83bb8678d0440bba363ce3a4a5faf738162b25\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 23 23:58:54.816466 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-52e6157f14f965c40b08a5f2e45edbabaa5996348fd9eb864f0a64193cd918a5-shm.mount: Deactivated successfully. Jan 23 23:58:54.824476 systemd[1]: cri-containerd-52e6157f14f965c40b08a5f2e45edbabaa5996348fd9eb864f0a64193cd918a5.scope: Deactivated successfully. Jan 23 23:58:54.828991 containerd[2034]: time="2026-01-23T23:58:54.828530552Z" level=info msg="shim disconnected" id=d3e8dda1095c8dc99943833bb6ad4e842f0d826eaba27b8da9052f331c5f50d1 namespace=k8s.io Jan 23 23:58:54.828991 containerd[2034]: time="2026-01-23T23:58:54.828608195Z" level=warning msg="cleaning up after shim disconnected" id=d3e8dda1095c8dc99943833bb6ad4e842f0d826eaba27b8da9052f331c5f50d1 namespace=k8s.io Jan 23 23:58:54.828991 containerd[2034]: time="2026-01-23T23:58:54.828628749Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 23 23:58:54.862593 containerd[2034]: time="2026-01-23T23:58:54.862466411Z" level=info msg="TearDown network for sandbox \"d3e8dda1095c8dc99943833bb6ad4e842f0d826eaba27b8da9052f331c5f50d1\" successfully" Jan 23 23:58:54.862593 containerd[2034]: time="2026-01-23T23:58:54.862525121Z" level=info msg="StopPodSandbox for \"d3e8dda1095c8dc99943833bb6ad4e842f0d826eaba27b8da9052f331c5f50d1\" returns successfully" Jan 23 23:58:54.892171 containerd[2034]: time="2026-01-23T23:58:54.892097832Z" level=info msg="shim disconnected" id=52e6157f14f965c40b08a5f2e45edbabaa5996348fd9eb864f0a64193cd918a5 namespace=k8s.io Jan 23 23:58:54.896835 containerd[2034]: time="2026-01-23T23:58:54.895386528Z" level=warning msg="cleaning up after shim disconnected" id=52e6157f14f965c40b08a5f2e45edbabaa5996348fd9eb864f0a64193cd918a5 namespace=k8s.io Jan 23 23:58:54.896835 containerd[2034]: time="2026-01-23T23:58:54.895557601Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 23 23:58:54.920349 containerd[2034]: time="2026-01-23T23:58:54.920146586Z" level=info msg="TearDown network for sandbox \"52e6157f14f965c40b08a5f2e45edbabaa5996348fd9eb864f0a64193cd918a5\" successfully" Jan 23 23:58:54.920349 containerd[2034]: time="2026-01-23T23:58:54.920195030Z" level=info msg="StopPodSandbox for \"52e6157f14f965c40b08a5f2e45edbabaa5996348fd9eb864f0a64193cd918a5\" returns successfully" Jan 23 23:58:55.035291 kubelet[3494]: I0123 23:58:55.035213 3494 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/22027dd0-325f-4f76-bb82-619d4ce38ab7-cilium-run\") pod \"22027dd0-325f-4f76-bb82-619d4ce38ab7\" (UID: \"22027dd0-325f-4f76-bb82-619d4ce38ab7\") " Jan 23 23:58:55.035291 kubelet[3494]: I0123 23:58:55.035291 3494 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/22027dd0-325f-4f76-bb82-619d4ce38ab7-lib-modules\") pod \"22027dd0-325f-4f76-bb82-619d4ce38ab7\" (UID: \"22027dd0-325f-4f76-bb82-619d4ce38ab7\") " Jan 23 23:58:55.036304 kubelet[3494]: I0123 23:58:55.035334 3494 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: 
\"kubernetes.io/secret/22027dd0-325f-4f76-bb82-619d4ce38ab7-clustermesh-secrets\") pod \"22027dd0-325f-4f76-bb82-619d4ce38ab7\" (UID: \"22027dd0-325f-4f76-bb82-619d4ce38ab7\") " Jan 23 23:58:55.036304 kubelet[3494]: I0123 23:58:55.035378 3494 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m8trv\" (UniqueName: \"kubernetes.io/projected/eab42098-2c31-4390-ab72-1652dff74b60-kube-api-access-m8trv\") pod \"eab42098-2c31-4390-ab72-1652dff74b60\" (UID: \"eab42098-2c31-4390-ab72-1652dff74b60\") " Jan 23 23:58:55.036304 kubelet[3494]: I0123 23:58:55.035420 3494 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/eab42098-2c31-4390-ab72-1652dff74b60-cilium-config-path\") pod \"eab42098-2c31-4390-ab72-1652dff74b60\" (UID: \"eab42098-2c31-4390-ab72-1652dff74b60\") " Jan 23 23:58:55.036304 kubelet[3494]: I0123 23:58:55.035455 3494 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/22027dd0-325f-4f76-bb82-619d4ce38ab7-bpf-maps\") pod \"22027dd0-325f-4f76-bb82-619d4ce38ab7\" (UID: \"22027dd0-325f-4f76-bb82-619d4ce38ab7\") " Jan 23 23:58:55.036304 kubelet[3494]: I0123 23:58:55.035488 3494 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/22027dd0-325f-4f76-bb82-619d4ce38ab7-xtables-lock\") pod \"22027dd0-325f-4f76-bb82-619d4ce38ab7\" (UID: \"22027dd0-325f-4f76-bb82-619d4ce38ab7\") " Jan 23 23:58:55.036304 kubelet[3494]: I0123 23:58:55.035523 3494 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/22027dd0-325f-4f76-bb82-619d4ce38ab7-host-proc-sys-net\") pod \"22027dd0-325f-4f76-bb82-619d4ce38ab7\" (UID: \"22027dd0-325f-4f76-bb82-619d4ce38ab7\") " Jan 23 23:58:55.036869 kubelet[3494]: I0123 23:58:55.035559 3494 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/22027dd0-325f-4f76-bb82-619d4ce38ab7-cni-path\") pod \"22027dd0-325f-4f76-bb82-619d4ce38ab7\" (UID: \"22027dd0-325f-4f76-bb82-619d4ce38ab7\") " Jan 23 23:58:55.036869 kubelet[3494]: I0123 23:58:55.035594 3494 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/22027dd0-325f-4f76-bb82-619d4ce38ab7-etc-cni-netd\") pod \"22027dd0-325f-4f76-bb82-619d4ce38ab7\" (UID: \"22027dd0-325f-4f76-bb82-619d4ce38ab7\") " Jan 23 23:58:55.036869 kubelet[3494]: I0123 23:58:55.035630 3494 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/22027dd0-325f-4f76-bb82-619d4ce38ab7-cilium-config-path\") pod \"22027dd0-325f-4f76-bb82-619d4ce38ab7\" (UID: \"22027dd0-325f-4f76-bb82-619d4ce38ab7\") " Jan 23 23:58:55.039033 kubelet[3494]: I0123 23:58:55.037248 3494 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/22027dd0-325f-4f76-bb82-619d4ce38ab7-hubble-tls\") pod \"22027dd0-325f-4f76-bb82-619d4ce38ab7\" (UID: \"22027dd0-325f-4f76-bb82-619d4ce38ab7\") " Jan 23 23:58:55.039033 kubelet[3494]: I0123 23:58:55.037315 3494 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: 
\"kubernetes.io/host-path/22027dd0-325f-4f76-bb82-619d4ce38ab7-cilium-cgroup\") pod \"22027dd0-325f-4f76-bb82-619d4ce38ab7\" (UID: \"22027dd0-325f-4f76-bb82-619d4ce38ab7\") " Jan 23 23:58:55.039033 kubelet[3494]: I0123 23:58:55.037369 3494 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m6q8l\" (UniqueName: \"kubernetes.io/projected/22027dd0-325f-4f76-bb82-619d4ce38ab7-kube-api-access-m6q8l\") pod \"22027dd0-325f-4f76-bb82-619d4ce38ab7\" (UID: \"22027dd0-325f-4f76-bb82-619d4ce38ab7\") " Jan 23 23:58:55.039033 kubelet[3494]: I0123 23:58:55.037407 3494 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/22027dd0-325f-4f76-bb82-619d4ce38ab7-hostproc\") pod \"22027dd0-325f-4f76-bb82-619d4ce38ab7\" (UID: \"22027dd0-325f-4f76-bb82-619d4ce38ab7\") " Jan 23 23:58:55.039033 kubelet[3494]: I0123 23:58:55.037442 3494 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/22027dd0-325f-4f76-bb82-619d4ce38ab7-host-proc-sys-kernel\") pod \"22027dd0-325f-4f76-bb82-619d4ce38ab7\" (UID: \"22027dd0-325f-4f76-bb82-619d4ce38ab7\") " Jan 23 23:58:55.039033 kubelet[3494]: I0123 23:58:55.037565 3494 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/22027dd0-325f-4f76-bb82-619d4ce38ab7-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "22027dd0-325f-4f76-bb82-619d4ce38ab7" (UID: "22027dd0-325f-4f76-bb82-619d4ce38ab7"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 23 23:58:55.039463 kubelet[3494]: I0123 23:58:55.037627 3494 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/22027dd0-325f-4f76-bb82-619d4ce38ab7-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "22027dd0-325f-4f76-bb82-619d4ce38ab7" (UID: "22027dd0-325f-4f76-bb82-619d4ce38ab7"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 23 23:58:55.039463 kubelet[3494]: I0123 23:58:55.037666 3494 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/22027dd0-325f-4f76-bb82-619d4ce38ab7-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "22027dd0-325f-4f76-bb82-619d4ce38ab7" (UID: "22027dd0-325f-4f76-bb82-619d4ce38ab7"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 23 23:58:55.039463 kubelet[3494]: I0123 23:58:55.037701 3494 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/22027dd0-325f-4f76-bb82-619d4ce38ab7-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "22027dd0-325f-4f76-bb82-619d4ce38ab7" (UID: "22027dd0-325f-4f76-bb82-619d4ce38ab7"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 23 23:58:55.039463 kubelet[3494]: I0123 23:58:55.037736 3494 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/22027dd0-325f-4f76-bb82-619d4ce38ab7-cni-path" (OuterVolumeSpecName: "cni-path") pod "22027dd0-325f-4f76-bb82-619d4ce38ab7" (UID: "22027dd0-325f-4f76-bb82-619d4ce38ab7"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 23 23:58:55.039463 kubelet[3494]: I0123 23:58:55.037770 3494 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/22027dd0-325f-4f76-bb82-619d4ce38ab7-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "22027dd0-325f-4f76-bb82-619d4ce38ab7" (UID: "22027dd0-325f-4f76-bb82-619d4ce38ab7"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 23 23:58:55.046131 kubelet[3494]: I0123 23:58:55.046058 3494 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eab42098-2c31-4390-ab72-1652dff74b60-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "eab42098-2c31-4390-ab72-1652dff74b60" (UID: "eab42098-2c31-4390-ab72-1652dff74b60"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 23 23:58:55.046289 kubelet[3494]: I0123 23:58:55.046173 3494 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/22027dd0-325f-4f76-bb82-619d4ce38ab7-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "22027dd0-325f-4f76-bb82-619d4ce38ab7" (UID: "22027dd0-325f-4f76-bb82-619d4ce38ab7"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 23 23:58:55.046289 kubelet[3494]: I0123 23:58:55.046215 3494 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/22027dd0-325f-4f76-bb82-619d4ce38ab7-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "22027dd0-325f-4f76-bb82-619d4ce38ab7" (UID: "22027dd0-325f-4f76-bb82-619d4ce38ab7"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 23 23:58:55.047278 kubelet[3494]: I0123 23:58:55.047231 3494 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22027dd0-325f-4f76-bb82-619d4ce38ab7-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "22027dd0-325f-4f76-bb82-619d4ce38ab7" (UID: "22027dd0-325f-4f76-bb82-619d4ce38ab7"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 23 23:58:55.052368 kubelet[3494]: I0123 23:58:55.052317 3494 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/22027dd0-325f-4f76-bb82-619d4ce38ab7-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "22027dd0-325f-4f76-bb82-619d4ce38ab7" (UID: "22027dd0-325f-4f76-bb82-619d4ce38ab7"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 23 23:58:55.056359 kubelet[3494]: I0123 23:58:55.056264 3494 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22027dd0-325f-4f76-bb82-619d4ce38ab7-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "22027dd0-325f-4f76-bb82-619d4ce38ab7" (UID: "22027dd0-325f-4f76-bb82-619d4ce38ab7"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 23 23:58:55.059315 kubelet[3494]: I0123 23:58:55.059260 3494 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22027dd0-325f-4f76-bb82-619d4ce38ab7-kube-api-access-m6q8l" (OuterVolumeSpecName: "kube-api-access-m6q8l") pod "22027dd0-325f-4f76-bb82-619d4ce38ab7" (UID: "22027dd0-325f-4f76-bb82-619d4ce38ab7"). InnerVolumeSpecName "kube-api-access-m6q8l". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 23 23:58:55.059555 kubelet[3494]: I0123 23:58:55.059528 3494 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/22027dd0-325f-4f76-bb82-619d4ce38ab7-hostproc" (OuterVolumeSpecName: "hostproc") pod "22027dd0-325f-4f76-bb82-619d4ce38ab7" (UID: "22027dd0-325f-4f76-bb82-619d4ce38ab7"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 23 23:58:55.059770 kubelet[3494]: I0123 23:58:55.059743 3494 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22027dd0-325f-4f76-bb82-619d4ce38ab7-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "22027dd0-325f-4f76-bb82-619d4ce38ab7" (UID: "22027dd0-325f-4f76-bb82-619d4ce38ab7"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 23 23:58:55.065028 kubelet[3494]: I0123 23:58:55.064775 3494 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eab42098-2c31-4390-ab72-1652dff74b60-kube-api-access-m8trv" (OuterVolumeSpecName: "kube-api-access-m8trv") pod "eab42098-2c31-4390-ab72-1652dff74b60" (UID: "eab42098-2c31-4390-ab72-1652dff74b60"). InnerVolumeSpecName "kube-api-access-m8trv". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 23 23:58:55.067383 kubelet[3494]: I0123 23:58:55.067340 3494 scope.go:117] "RemoveContainer" containerID="2c196dc57edfae8a5566ea476cd286320cb2683e2b2e51c5f1983ac731950734" Jan 23 23:58:55.074221 containerd[2034]: time="2026-01-23T23:58:55.073995205Z" level=info msg="RemoveContainer for \"2c196dc57edfae8a5566ea476cd286320cb2683e2b2e51c5f1983ac731950734\"" Jan 23 23:58:55.089383 containerd[2034]: time="2026-01-23T23:58:55.088904580Z" level=info msg="RemoveContainer for \"2c196dc57edfae8a5566ea476cd286320cb2683e2b2e51c5f1983ac731950734\" returns successfully" Jan 23 23:58:55.090346 kubelet[3494]: I0123 23:58:55.089846 3494 scope.go:117] "RemoveContainer" containerID="2c196dc57edfae8a5566ea476cd286320cb2683e2b2e51c5f1983ac731950734" Jan 23 23:58:55.090496 containerd[2034]: time="2026-01-23T23:58:55.090224747Z" level=error msg="ContainerStatus for \"2c196dc57edfae8a5566ea476cd286320cb2683e2b2e51c5f1983ac731950734\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"2c196dc57edfae8a5566ea476cd286320cb2683e2b2e51c5f1983ac731950734\": not found" Jan 23 23:58:55.092909 kubelet[3494]: E0123 23:58:55.092204 3494 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"2c196dc57edfae8a5566ea476cd286320cb2683e2b2e51c5f1983ac731950734\": not found" containerID="2c196dc57edfae8a5566ea476cd286320cb2683e2b2e51c5f1983ac731950734" Jan 23 23:58:55.092909 kubelet[3494]: I0123 23:58:55.092262 3494 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"2c196dc57edfae8a5566ea476cd286320cb2683e2b2e51c5f1983ac731950734"} err="failed to get container status \"2c196dc57edfae8a5566ea476cd286320cb2683e2b2e51c5f1983ac731950734\": rpc error: code = NotFound desc = an error occurred when try to find container \"2c196dc57edfae8a5566ea476cd286320cb2683e2b2e51c5f1983ac731950734\": not found" Jan 23 23:58:55.092909 kubelet[3494]: I0123 23:58:55.092384 3494 scope.go:117] "RemoveContainer" containerID="737e8c7467333ab8031d07f1390a85ff239078067efc539b67b6f117142e8f59" Jan 23 23:58:55.093852 systemd[1]: Removed 
slice kubepods-besteffort-podeab42098_2c31_4390_ab72_1652dff74b60.slice - libcontainer container kubepods-besteffort-podeab42098_2c31_4390_ab72_1652dff74b60.slice. Jan 23 23:58:55.096766 containerd[2034]: time="2026-01-23T23:58:55.095502597Z" level=info msg="RemoveContainer for \"737e8c7467333ab8031d07f1390a85ff239078067efc539b67b6f117142e8f59\"" Jan 23 23:58:55.105374 containerd[2034]: time="2026-01-23T23:58:55.105197826Z" level=info msg="RemoveContainer for \"737e8c7467333ab8031d07f1390a85ff239078067efc539b67b6f117142e8f59\" returns successfully" Jan 23 23:58:55.108051 kubelet[3494]: I0123 23:58:55.107464 3494 scope.go:117] "RemoveContainer" containerID="5efa5f48328dcf7820105dbec591ab6c3ef90446d2883ba40df8d40a2d03154b" Jan 23 23:58:55.112089 systemd[1]: Removed slice kubepods-burstable-pod22027dd0_325f_4f76_bb82_619d4ce38ab7.slice - libcontainer container kubepods-burstable-pod22027dd0_325f_4f76_bb82_619d4ce38ab7.slice. Jan 23 23:58:55.112493 containerd[2034]: time="2026-01-23T23:58:55.112447637Z" level=info msg="RemoveContainer for \"5efa5f48328dcf7820105dbec591ab6c3ef90446d2883ba40df8d40a2d03154b\"" Jan 23 23:58:55.112738 systemd[1]: kubepods-burstable-pod22027dd0_325f_4f76_bb82_619d4ce38ab7.slice: Consumed 14.813s CPU time. Jan 23 23:58:55.122621 containerd[2034]: time="2026-01-23T23:58:55.122446030Z" level=info msg="RemoveContainer for \"5efa5f48328dcf7820105dbec591ab6c3ef90446d2883ba40df8d40a2d03154b\" returns successfully" Jan 23 23:58:55.123046 kubelet[3494]: I0123 23:58:55.122977 3494 scope.go:117] "RemoveContainer" containerID="512046acf35d36e96bd3ab6e2c7849c465b99153f452dcaa9ea49a9ae81b137f" Jan 23 23:58:55.126736 containerd[2034]: time="2026-01-23T23:58:55.126684412Z" level=info msg="RemoveContainer for \"512046acf35d36e96bd3ab6e2c7849c465b99153f452dcaa9ea49a9ae81b137f\"" Jan 23 23:58:55.137400 containerd[2034]: time="2026-01-23T23:58:55.137227157Z" level=info msg="RemoveContainer for \"512046acf35d36e96bd3ab6e2c7849c465b99153f452dcaa9ea49a9ae81b137f\" returns successfully" Jan 23 23:58:55.137729 kubelet[3494]: I0123 23:58:55.137656 3494 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/22027dd0-325f-4f76-bb82-619d4ce38ab7-cilium-cgroup\") on node \"ip-172-31-30-184\" DevicePath \"\"" Jan 23 23:58:55.138045 kubelet[3494]: I0123 23:58:55.137831 3494 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/22027dd0-325f-4f76-bb82-619d4ce38ab7-cni-path\") on node \"ip-172-31-30-184\" DevicePath \"\"" Jan 23 23:58:55.138045 kubelet[3494]: I0123 23:58:55.137859 3494 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/22027dd0-325f-4f76-bb82-619d4ce38ab7-etc-cni-netd\") on node \"ip-172-31-30-184\" DevicePath \"\"" Jan 23 23:58:55.138461 kubelet[3494]: I0123 23:58:55.137881 3494 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/22027dd0-325f-4f76-bb82-619d4ce38ab7-cilium-config-path\") on node \"ip-172-31-30-184\" DevicePath \"\"" Jan 23 23:58:55.138461 kubelet[3494]: I0123 23:58:55.138294 3494 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/22027dd0-325f-4f76-bb82-619d4ce38ab7-hubble-tls\") on node \"ip-172-31-30-184\" DevicePath \"\"" Jan 23 23:58:55.138461 kubelet[3494]: I0123 23:58:55.138322 3494 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-m6q8l\" (UniqueName: 
\"kubernetes.io/projected/22027dd0-325f-4f76-bb82-619d4ce38ab7-kube-api-access-m6q8l\") on node \"ip-172-31-30-184\" DevicePath \"\"" Jan 23 23:58:55.138989 kubelet[3494]: I0123 23:58:55.138581 3494 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/22027dd0-325f-4f76-bb82-619d4ce38ab7-hostproc\") on node \"ip-172-31-30-184\" DevicePath \"\"" Jan 23 23:58:55.138989 kubelet[3494]: I0123 23:58:55.138614 3494 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/22027dd0-325f-4f76-bb82-619d4ce38ab7-host-proc-sys-kernel\") on node \"ip-172-31-30-184\" DevicePath \"\"" Jan 23 23:58:55.139431 kubelet[3494]: I0123 23:58:55.139068 3494 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/22027dd0-325f-4f76-bb82-619d4ce38ab7-cilium-run\") on node \"ip-172-31-30-184\" DevicePath \"\"" Jan 23 23:58:55.139431 kubelet[3494]: I0123 23:58:55.139099 3494 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/22027dd0-325f-4f76-bb82-619d4ce38ab7-lib-modules\") on node \"ip-172-31-30-184\" DevicePath \"\"" Jan 23 23:58:55.139431 kubelet[3494]: I0123 23:58:55.139137 3494 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/22027dd0-325f-4f76-bb82-619d4ce38ab7-clustermesh-secrets\") on node \"ip-172-31-30-184\" DevicePath \"\"" Jan 23 23:58:55.139431 kubelet[3494]: I0123 23:58:55.139074 3494 scope.go:117] "RemoveContainer" containerID="208b584bbf71dd335a0aadc0ed09136f3e7bafb631f70e2b1699c5263d60c3fb" Jan 23 23:58:55.140593 kubelet[3494]: I0123 23:58:55.140362 3494 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-m8trv\" (UniqueName: \"kubernetes.io/projected/eab42098-2c31-4390-ab72-1652dff74b60-kube-api-access-m8trv\") on node \"ip-172-31-30-184\" DevicePath \"\"" Jan 23 23:58:55.140593 kubelet[3494]: I0123 23:58:55.140418 3494 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/eab42098-2c31-4390-ab72-1652dff74b60-cilium-config-path\") on node \"ip-172-31-30-184\" DevicePath \"\"" Jan 23 23:58:55.140593 kubelet[3494]: I0123 23:58:55.140446 3494 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/22027dd0-325f-4f76-bb82-619d4ce38ab7-bpf-maps\") on node \"ip-172-31-30-184\" DevicePath \"\"" Jan 23 23:58:55.140593 kubelet[3494]: I0123 23:58:55.140482 3494 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/22027dd0-325f-4f76-bb82-619d4ce38ab7-xtables-lock\") on node \"ip-172-31-30-184\" DevicePath \"\"" Jan 23 23:58:55.140593 kubelet[3494]: I0123 23:58:55.140507 3494 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/22027dd0-325f-4f76-bb82-619d4ce38ab7-host-proc-sys-net\") on node \"ip-172-31-30-184\" DevicePath \"\"" Jan 23 23:58:55.152686 containerd[2034]: time="2026-01-23T23:58:55.152636030Z" level=info msg="RemoveContainer for \"208b584bbf71dd335a0aadc0ed09136f3e7bafb631f70e2b1699c5263d60c3fb\"" Jan 23 23:58:55.162594 containerd[2034]: time="2026-01-23T23:58:55.162537078Z" level=info msg="RemoveContainer for \"208b584bbf71dd335a0aadc0ed09136f3e7bafb631f70e2b1699c5263d60c3fb\" returns successfully" Jan 23 23:58:55.163166 kubelet[3494]: I0123 23:58:55.163133 3494 
scope.go:117] "RemoveContainer" containerID="b5eae899eabd8c27b30cf799ab83bb8678d0440bba363ce3a4a5faf738162b25" Jan 23 23:58:55.165424 containerd[2034]: time="2026-01-23T23:58:55.165381852Z" level=info msg="RemoveContainer for \"b5eae899eabd8c27b30cf799ab83bb8678d0440bba363ce3a4a5faf738162b25\"" Jan 23 23:58:55.171606 containerd[2034]: time="2026-01-23T23:58:55.171557126Z" level=info msg="RemoveContainer for \"b5eae899eabd8c27b30cf799ab83bb8678d0440bba363ce3a4a5faf738162b25\" returns successfully" Jan 23 23:58:55.172138 kubelet[3494]: I0123 23:58:55.172104 3494 scope.go:117] "RemoveContainer" containerID="737e8c7467333ab8031d07f1390a85ff239078067efc539b67b6f117142e8f59" Jan 23 23:58:55.172789 containerd[2034]: time="2026-01-23T23:58:55.172735335Z" level=error msg="ContainerStatus for \"737e8c7467333ab8031d07f1390a85ff239078067efc539b67b6f117142e8f59\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"737e8c7467333ab8031d07f1390a85ff239078067efc539b67b6f117142e8f59\": not found" Jan 23 23:58:55.173460 kubelet[3494]: E0123 23:58:55.173411 3494 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"737e8c7467333ab8031d07f1390a85ff239078067efc539b67b6f117142e8f59\": not found" containerID="737e8c7467333ab8031d07f1390a85ff239078067efc539b67b6f117142e8f59" Jan 23 23:58:55.173651 kubelet[3494]: I0123 23:58:55.173597 3494 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"737e8c7467333ab8031d07f1390a85ff239078067efc539b67b6f117142e8f59"} err="failed to get container status \"737e8c7467333ab8031d07f1390a85ff239078067efc539b67b6f117142e8f59\": rpc error: code = NotFound desc = an error occurred when try to find container \"737e8c7467333ab8031d07f1390a85ff239078067efc539b67b6f117142e8f59\": not found" Jan 23 23:58:55.173767 kubelet[3494]: I0123 23:58:55.173747 3494 scope.go:117] "RemoveContainer" containerID="5efa5f48328dcf7820105dbec591ab6c3ef90446d2883ba40df8d40a2d03154b" Jan 23 23:58:55.174218 containerd[2034]: time="2026-01-23T23:58:55.174173990Z" level=error msg="ContainerStatus for \"5efa5f48328dcf7820105dbec591ab6c3ef90446d2883ba40df8d40a2d03154b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"5efa5f48328dcf7820105dbec591ab6c3ef90446d2883ba40df8d40a2d03154b\": not found" Jan 23 23:58:55.174811 kubelet[3494]: E0123 23:58:55.174765 3494 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"5efa5f48328dcf7820105dbec591ab6c3ef90446d2883ba40df8d40a2d03154b\": not found" containerID="5efa5f48328dcf7820105dbec591ab6c3ef90446d2883ba40df8d40a2d03154b" Jan 23 23:58:55.175048 kubelet[3494]: I0123 23:58:55.174971 3494 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"5efa5f48328dcf7820105dbec591ab6c3ef90446d2883ba40df8d40a2d03154b"} err="failed to get container status \"5efa5f48328dcf7820105dbec591ab6c3ef90446d2883ba40df8d40a2d03154b\": rpc error: code = NotFound desc = an error occurred when try to find container \"5efa5f48328dcf7820105dbec591ab6c3ef90446d2883ba40df8d40a2d03154b\": not found" Jan 23 23:58:55.175186 kubelet[3494]: I0123 23:58:55.175165 3494 scope.go:117] "RemoveContainer" containerID="512046acf35d36e96bd3ab6e2c7849c465b99153f452dcaa9ea49a9ae81b137f" Jan 23 23:58:55.176144 containerd[2034]: 
time="2026-01-23T23:58:55.176092873Z" level=error msg="ContainerStatus for \"512046acf35d36e96bd3ab6e2c7849c465b99153f452dcaa9ea49a9ae81b137f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"512046acf35d36e96bd3ab6e2c7849c465b99153f452dcaa9ea49a9ae81b137f\": not found" Jan 23 23:58:55.176526 kubelet[3494]: E0123 23:58:55.176488 3494 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"512046acf35d36e96bd3ab6e2c7849c465b99153f452dcaa9ea49a9ae81b137f\": not found" containerID="512046acf35d36e96bd3ab6e2c7849c465b99153f452dcaa9ea49a9ae81b137f" Jan 23 23:58:55.176781 kubelet[3494]: I0123 23:58:55.176747 3494 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"512046acf35d36e96bd3ab6e2c7849c465b99153f452dcaa9ea49a9ae81b137f"} err="failed to get container status \"512046acf35d36e96bd3ab6e2c7849c465b99153f452dcaa9ea49a9ae81b137f\": rpc error: code = NotFound desc = an error occurred when try to find container \"512046acf35d36e96bd3ab6e2c7849c465b99153f452dcaa9ea49a9ae81b137f\": not found" Jan 23 23:58:55.176924 kubelet[3494]: I0123 23:58:55.176902 3494 scope.go:117] "RemoveContainer" containerID="208b584bbf71dd335a0aadc0ed09136f3e7bafb631f70e2b1699c5263d60c3fb" Jan 23 23:58:55.177432 containerd[2034]: time="2026-01-23T23:58:55.177386747Z" level=error msg="ContainerStatus for \"208b584bbf71dd335a0aadc0ed09136f3e7bafb631f70e2b1699c5263d60c3fb\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"208b584bbf71dd335a0aadc0ed09136f3e7bafb631f70e2b1699c5263d60c3fb\": not found" Jan 23 23:58:55.177893 kubelet[3494]: E0123 23:58:55.177860 3494 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"208b584bbf71dd335a0aadc0ed09136f3e7bafb631f70e2b1699c5263d60c3fb\": not found" containerID="208b584bbf71dd335a0aadc0ed09136f3e7bafb631f70e2b1699c5263d60c3fb" Jan 23 23:58:55.178118 kubelet[3494]: I0123 23:58:55.178083 3494 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"208b584bbf71dd335a0aadc0ed09136f3e7bafb631f70e2b1699c5263d60c3fb"} err="failed to get container status \"208b584bbf71dd335a0aadc0ed09136f3e7bafb631f70e2b1699c5263d60c3fb\": rpc error: code = NotFound desc = an error occurred when try to find container \"208b584bbf71dd335a0aadc0ed09136f3e7bafb631f70e2b1699c5263d60c3fb\": not found" Jan 23 23:58:55.178241 kubelet[3494]: I0123 23:58:55.178221 3494 scope.go:117] "RemoveContainer" containerID="b5eae899eabd8c27b30cf799ab83bb8678d0440bba363ce3a4a5faf738162b25" Jan 23 23:58:55.178755 containerd[2034]: time="2026-01-23T23:58:55.178677116Z" level=error msg="ContainerStatus for \"b5eae899eabd8c27b30cf799ab83bb8678d0440bba363ce3a4a5faf738162b25\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b5eae899eabd8c27b30cf799ab83bb8678d0440bba363ce3a4a5faf738162b25\": not found" Jan 23 23:58:55.179039 kubelet[3494]: E0123 23:58:55.178913 3494 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b5eae899eabd8c27b30cf799ab83bb8678d0440bba363ce3a4a5faf738162b25\": not found" containerID="b5eae899eabd8c27b30cf799ab83bb8678d0440bba363ce3a4a5faf738162b25" Jan 23 23:58:55.179039 kubelet[3494]: I0123 23:58:55.178951 3494 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b5eae899eabd8c27b30cf799ab83bb8678d0440bba363ce3a4a5faf738162b25"} err="failed to get container status \"b5eae899eabd8c27b30cf799ab83bb8678d0440bba363ce3a4a5faf738162b25\": rpc error: code = NotFound desc = an error occurred when try to find container \"b5eae899eabd8c27b30cf799ab83bb8678d0440bba363ce3a4a5faf738162b25\": not found" Jan 23 23:58:55.559916 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-52e6157f14f965c40b08a5f2e45edbabaa5996348fd9eb864f0a64193cd918a5-rootfs.mount: Deactivated successfully. Jan 23 23:58:55.560113 systemd[1]: var-lib-kubelet-pods-22027dd0\x2d325f\x2d4f76\x2dbb82\x2d619d4ce38ab7-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dm6q8l.mount: Deactivated successfully. Jan 23 23:58:55.560266 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d3e8dda1095c8dc99943833bb6ad4e842f0d826eaba27b8da9052f331c5f50d1-rootfs.mount: Deactivated successfully. Jan 23 23:58:55.560394 systemd[1]: var-lib-kubelet-pods-eab42098\x2d2c31\x2d4390\x2dab72\x2d1652dff74b60-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dm8trv.mount: Deactivated successfully. Jan 23 23:58:55.560528 systemd[1]: var-lib-kubelet-pods-22027dd0\x2d325f\x2d4f76\x2dbb82\x2d619d4ce38ab7-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jan 23 23:58:55.560700 systemd[1]: var-lib-kubelet-pods-22027dd0\x2d325f\x2d4f76\x2dbb82\x2d619d4ce38ab7-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jan 23 23:58:55.581062 kubelet[3494]: I0123 23:58:55.580479 3494 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22027dd0-325f-4f76-bb82-619d4ce38ab7" path="/var/lib/kubelet/pods/22027dd0-325f-4f76-bb82-619d4ce38ab7/volumes" Jan 23 23:58:55.582200 kubelet[3494]: I0123 23:58:55.582166 3494 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eab42098-2c31-4390-ab72-1652dff74b60" path="/var/lib/kubelet/pods/eab42098-2c31-4390-ab72-1652dff74b60/volumes" Jan 23 23:58:55.811916 kubelet[3494]: E0123 23:58:55.811759 3494 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 23 23:58:56.527784 sshd[5092]: pam_unix(sshd:session): session closed for user core Jan 23 23:58:56.533949 systemd-logind[2007]: Session 24 logged out. Waiting for processes to exit. Jan 23 23:58:56.534432 systemd[1]: sshd@23-172.31.30.184:22-4.153.228.146:54778.service: Deactivated successfully. Jan 23 23:58:56.538930 systemd[1]: session-24.scope: Deactivated successfully. Jan 23 23:58:56.541133 systemd[1]: session-24.scope: Consumed 1.918s CPU time. Jan 23 23:58:56.546606 systemd-logind[2007]: Removed session 24. Jan 23 23:58:56.613524 systemd[1]: Started sshd@24-172.31.30.184:22-4.153.228.146:49566.service - OpenSSH per-connection server daemon (4.153.228.146:49566). Jan 23 23:58:57.114479 sshd[5250]: Accepted publickey for core from 4.153.228.146 port 49566 ssh2: RSA SHA256:5AacvNrSqCkKxn2Zg0NJQDqu4zeDYsnVAvlYY2DAYUI Jan 23 23:58:57.117158 sshd[5250]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:58:57.125604 systemd-logind[2007]: New session 25 of user core. Jan 23 23:58:57.128268 systemd[1]: Started session-25.scope - Session 25 of User core. 
Jan 23 23:58:57.378904 kubelet[3494]: I0123 23:58:57.378737 3494 setters.go:602] "Node became not ready" node="ip-172-31-30-184" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T23:58:57Z","lastTransitionTime":"2026-01-23T23:58:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Jan 23 23:58:57.581961 ntpd[2000]: Deleting interface #11 lxc_health, fe80::64f9:74ff:fe5d:ee71%8#123, interface stats: received=0, sent=0, dropped=0, active_time=80 secs Jan 23 23:58:57.583504 ntpd[2000]: 23 Jan 23:58:57 ntpd[2000]: Deleting interface #11 lxc_health, fe80::64f9:74ff:fe5d:ee71%8#123, interface stats: received=0, sent=0, dropped=0, active_time=80 secs Jan 23 23:59:00.619223 kubelet[3494]: I0123 23:59:00.619092 3494 memory_manager.go:355] "RemoveStaleState removing state" podUID="eab42098-2c31-4390-ab72-1652dff74b60" containerName="cilium-operator" Jan 23 23:59:00.619223 kubelet[3494]: I0123 23:59:00.619163 3494 memory_manager.go:355] "RemoveStaleState removing state" podUID="22027dd0-325f-4f76-bb82-619d4ce38ab7" containerName="cilium-agent" Jan 23 23:59:00.640174 systemd[1]: Created slice kubepods-burstable-podf189f2b0_076f_4974_8f52_72bd4c655ca4.slice - libcontainer container kubepods-burstable-podf189f2b0_076f_4974_8f52_72bd4c655ca4.slice. Jan 23 23:59:00.653878 sshd[5250]: pam_unix(sshd:session): session closed for user core Jan 23 23:59:00.666929 systemd[1]: sshd@24-172.31.30.184:22-4.153.228.146:49566.service: Deactivated successfully. Jan 23 23:59:00.667788 systemd-logind[2007]: Session 25 logged out. Waiting for processes to exit. Jan 23 23:59:00.678996 systemd[1]: session-25.scope: Deactivated successfully. Jan 23 23:59:00.679881 systemd[1]: session-25.scope: Consumed 3.068s CPU time. Jan 23 23:59:00.682482 systemd-logind[2007]: Removed session 25. Jan 23 23:59:00.750563 systemd[1]: Started sshd@25-172.31.30.184:22-4.153.228.146:49576.service - OpenSSH per-connection server daemon (4.153.228.146:49576). 
Jan 23 23:59:00.776458 kubelet[3494]: I0123 23:59:00.776398 3494 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f189f2b0-076f-4974-8f52-72bd4c655ca4-cni-path\") pod \"cilium-gfnjr\" (UID: \"f189f2b0-076f-4974-8f52-72bd4c655ca4\") " pod="kube-system/cilium-gfnjr"
Jan 23 23:59:00.776592 kubelet[3494]: I0123 23:59:00.776469 3494 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f189f2b0-076f-4974-8f52-72bd4c655ca4-etc-cni-netd\") pod \"cilium-gfnjr\" (UID: \"f189f2b0-076f-4974-8f52-72bd4c655ca4\") " pod="kube-system/cilium-gfnjr"
Jan 23 23:59:00.776592 kubelet[3494]: I0123 23:59:00.776511 3494 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f189f2b0-076f-4974-8f52-72bd4c655ca4-host-proc-sys-net\") pod \"cilium-gfnjr\" (UID: \"f189f2b0-076f-4974-8f52-72bd4c655ca4\") " pod="kube-system/cilium-gfnjr"
Jan 23 23:59:00.776592 kubelet[3494]: I0123 23:59:00.776552 3494 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nbknl\" (UniqueName: \"kubernetes.io/projected/f189f2b0-076f-4974-8f52-72bd4c655ca4-kube-api-access-nbknl\") pod \"cilium-gfnjr\" (UID: \"f189f2b0-076f-4974-8f52-72bd4c655ca4\") " pod="kube-system/cilium-gfnjr"
Jan 23 23:59:00.776770 kubelet[3494]: I0123 23:59:00.776598 3494 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f189f2b0-076f-4974-8f52-72bd4c655ca4-xtables-lock\") pod \"cilium-gfnjr\" (UID: \"f189f2b0-076f-4974-8f52-72bd4c655ca4\") " pod="kube-system/cilium-gfnjr"
Jan 23 23:59:00.776770 kubelet[3494]: I0123 23:59:00.776633 3494 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f189f2b0-076f-4974-8f52-72bd4c655ca4-host-proc-sys-kernel\") pod \"cilium-gfnjr\" (UID: \"f189f2b0-076f-4974-8f52-72bd4c655ca4\") " pod="kube-system/cilium-gfnjr"
Jan 23 23:59:00.776770 kubelet[3494]: I0123 23:59:00.776670 3494 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f189f2b0-076f-4974-8f52-72bd4c655ca4-clustermesh-secrets\") pod \"cilium-gfnjr\" (UID: \"f189f2b0-076f-4974-8f52-72bd4c655ca4\") " pod="kube-system/cilium-gfnjr"
Jan 23 23:59:00.776770 kubelet[3494]: I0123 23:59:00.776706 3494 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f189f2b0-076f-4974-8f52-72bd4c655ca4-cilium-config-path\") pod \"cilium-gfnjr\" (UID: \"f189f2b0-076f-4974-8f52-72bd4c655ca4\") " pod="kube-system/cilium-gfnjr"
Jan 23 23:59:00.776770 kubelet[3494]: I0123 23:59:00.776739 3494 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/f189f2b0-076f-4974-8f52-72bd4c655ca4-cilium-ipsec-secrets\") pod \"cilium-gfnjr\" (UID: \"f189f2b0-076f-4974-8f52-72bd4c655ca4\") " pod="kube-system/cilium-gfnjr"
Jan 23 23:59:00.778314 kubelet[3494]: I0123 23:59:00.776771 3494 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f189f2b0-076f-4974-8f52-72bd4c655ca4-hostproc\") pod \"cilium-gfnjr\" (UID: \"f189f2b0-076f-4974-8f52-72bd4c655ca4\") " pod="kube-system/cilium-gfnjr"
Jan 23 23:59:00.778314 kubelet[3494]: I0123 23:59:00.776806 3494 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f189f2b0-076f-4974-8f52-72bd4c655ca4-cilium-cgroup\") pod \"cilium-gfnjr\" (UID: \"f189f2b0-076f-4974-8f52-72bd4c655ca4\") " pod="kube-system/cilium-gfnjr"
Jan 23 23:59:00.778314 kubelet[3494]: I0123 23:59:00.776846 3494 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f189f2b0-076f-4974-8f52-72bd4c655ca4-cilium-run\") pod \"cilium-gfnjr\" (UID: \"f189f2b0-076f-4974-8f52-72bd4c655ca4\") " pod="kube-system/cilium-gfnjr"
Jan 23 23:59:00.778314 kubelet[3494]: I0123 23:59:00.776883 3494 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f189f2b0-076f-4974-8f52-72bd4c655ca4-hubble-tls\") pod \"cilium-gfnjr\" (UID: \"f189f2b0-076f-4974-8f52-72bd4c655ca4\") " pod="kube-system/cilium-gfnjr"
Jan 23 23:59:00.778314 kubelet[3494]: I0123 23:59:00.776920 3494 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f189f2b0-076f-4974-8f52-72bd4c655ca4-bpf-maps\") pod \"cilium-gfnjr\" (UID: \"f189f2b0-076f-4974-8f52-72bd4c655ca4\") " pod="kube-system/cilium-gfnjr"
Jan 23 23:59:00.778314 kubelet[3494]: I0123 23:59:00.776958 3494 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f189f2b0-076f-4974-8f52-72bd4c655ca4-lib-modules\") pod \"cilium-gfnjr\" (UID: \"f189f2b0-076f-4974-8f52-72bd4c655ca4\") " pod="kube-system/cilium-gfnjr"
Jan 23 23:59:00.813593 kubelet[3494]: E0123 23:59:00.813538 3494 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jan 23 23:59:00.949165 containerd[2034]: time="2026-01-23T23:59:00.948442441Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-gfnjr,Uid:f189f2b0-076f-4974-8f52-72bd4c655ca4,Namespace:kube-system,Attempt:0,}"
Jan 23 23:59:00.994657 containerd[2034]: time="2026-01-23T23:59:00.994502332Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 23 23:59:00.995743 containerd[2034]: time="2026-01-23T23:59:00.995405435Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 23 23:59:00.995743 containerd[2034]: time="2026-01-23T23:59:00.995540575Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 23 23:59:00.996095 containerd[2034]: time="2026-01-23T23:59:00.995998099Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 23 23:59:01.026350 systemd[1]: Started cri-containerd-1b233841e3961b0ecf5ed9517ffeb22db4fe0a62b4dda4df400f28ae0a0868c2.scope - libcontainer container 1b233841e3961b0ecf5ed9517ffeb22db4fe0a62b4dda4df400f28ae0a0868c2.
Jan 23 23:59:01.070241 containerd[2034]: time="2026-01-23T23:59:01.069972469Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-gfnjr,Uid:f189f2b0-076f-4974-8f52-72bd4c655ca4,Namespace:kube-system,Attempt:0,} returns sandbox id \"1b233841e3961b0ecf5ed9517ffeb22db4fe0a62b4dda4df400f28ae0a0868c2\""
Jan 23 23:59:01.075939 containerd[2034]: time="2026-01-23T23:59:01.075756383Z" level=info msg="CreateContainer within sandbox \"1b233841e3961b0ecf5ed9517ffeb22db4fe0a62b4dda4df400f28ae0a0868c2\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Jan 23 23:59:01.103251 containerd[2034]: time="2026-01-23T23:59:01.102157637Z" level=info msg="CreateContainer within sandbox \"1b233841e3961b0ecf5ed9517ffeb22db4fe0a62b4dda4df400f28ae0a0868c2\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"a92f58f66f7f7496454bc003b7061e4b6dddef4a387883044092e47d3a093464\""
Jan 23 23:59:01.105721 containerd[2034]: time="2026-01-23T23:59:01.105629809Z" level=info msg="StartContainer for \"a92f58f66f7f7496454bc003b7061e4b6dddef4a387883044092e47d3a093464\""
Jan 23 23:59:01.153326 systemd[1]: Started cri-containerd-a92f58f66f7f7496454bc003b7061e4b6dddef4a387883044092e47d3a093464.scope - libcontainer container a92f58f66f7f7496454bc003b7061e4b6dddef4a387883044092e47d3a093464.
Jan 23 23:59:01.209799 containerd[2034]: time="2026-01-23T23:59:01.209534044Z" level=info msg="StartContainer for \"a92f58f66f7f7496454bc003b7061e4b6dddef4a387883044092e47d3a093464\" returns successfully"
Jan 23 23:59:01.232192 systemd[1]: cri-containerd-a92f58f66f7f7496454bc003b7061e4b6dddef4a387883044092e47d3a093464.scope: Deactivated successfully.
Jan 23 23:59:01.267726 sshd[5262]: Accepted publickey for core from 4.153.228.146 port 49576 ssh2: RSA SHA256:5AacvNrSqCkKxn2Zg0NJQDqu4zeDYsnVAvlYY2DAYUI
Jan 23 23:59:01.270686 sshd[5262]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 23:59:01.282732 systemd-logind[2007]: New session 26 of user core.
Jan 23 23:59:01.292268 systemd[1]: Started session-26.scope - Session 26 of User core.
Jan 23 23:59:01.299672 containerd[2034]: time="2026-01-23T23:59:01.299579524Z" level=info msg="shim disconnected" id=a92f58f66f7f7496454bc003b7061e4b6dddef4a387883044092e47d3a093464 namespace=k8s.io
Jan 23 23:59:01.299870 containerd[2034]: time="2026-01-23T23:59:01.299671382Z" level=warning msg="cleaning up after shim disconnected" id=a92f58f66f7f7496454bc003b7061e4b6dddef4a387883044092e47d3a093464 namespace=k8s.io
Jan 23 23:59:01.299870 containerd[2034]: time="2026-01-23T23:59:01.299697940Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 23 23:59:01.323320 containerd[2034]: time="2026-01-23T23:59:01.322120201Z" level=warning msg="cleanup warnings time=\"2026-01-23T23:59:01Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Jan 23 23:59:01.619379 sshd[5262]: pam_unix(sshd:session): session closed for user core
Jan 23 23:59:01.626045 systemd-logind[2007]: Session 26 logged out. Waiting for processes to exit.
Jan 23 23:59:01.627084 systemd[1]: sshd@25-172.31.30.184:22-4.153.228.146:49576.service: Deactivated successfully.
Jan 23 23:59:01.630950 systemd[1]: session-26.scope: Deactivated successfully.
Jan 23 23:59:01.634575 systemd-logind[2007]: Removed session 26.
Jan 23 23:59:01.714550 systemd[1]: Started sshd@26-172.31.30.184:22-4.153.228.146:49580.service - OpenSSH per-connection server daemon (4.153.228.146:49580).
Jan 23 23:59:02.119973 containerd[2034]: time="2026-01-23T23:59:02.119918259Z" level=info msg="CreateContainer within sandbox \"1b233841e3961b0ecf5ed9517ffeb22db4fe0a62b4dda4df400f28ae0a0868c2\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Jan 23 23:59:02.152952 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3141534485.mount: Deactivated successfully.
Jan 23 23:59:02.157463 containerd[2034]: time="2026-01-23T23:59:02.157383810Z" level=info msg="CreateContainer within sandbox \"1b233841e3961b0ecf5ed9517ffeb22db4fe0a62b4dda4df400f28ae0a0868c2\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"2d3fe2b8079e16f3ab630c0e3ff312b25e43d62729a07bba7b70d765e991435a\""
Jan 23 23:59:02.159844 containerd[2034]: time="2026-01-23T23:59:02.159779007Z" level=info msg="StartContainer for \"2d3fe2b8079e16f3ab630c0e3ff312b25e43d62729a07bba7b70d765e991435a\""
Jan 23 23:59:02.215333 sshd[5381]: Accepted publickey for core from 4.153.228.146 port 49580 ssh2: RSA SHA256:5AacvNrSqCkKxn2Zg0NJQDqu4zeDYsnVAvlYY2DAYUI
Jan 23 23:59:02.221196 sshd[5381]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 23:59:02.223083 systemd[1]: Started cri-containerd-2d3fe2b8079e16f3ab630c0e3ff312b25e43d62729a07bba7b70d765e991435a.scope - libcontainer container 2d3fe2b8079e16f3ab630c0e3ff312b25e43d62729a07bba7b70d765e991435a.
Jan 23 23:59:02.239386 systemd-logind[2007]: New session 27 of user core.
Jan 23 23:59:02.249316 systemd[1]: Started session-27.scope - Session 27 of User core.
Jan 23 23:59:02.305061 containerd[2034]: time="2026-01-23T23:59:02.302495591Z" level=info msg="StartContainer for \"2d3fe2b8079e16f3ab630c0e3ff312b25e43d62729a07bba7b70d765e991435a\" returns successfully"
Jan 23 23:59:02.318441 systemd[1]: cri-containerd-2d3fe2b8079e16f3ab630c0e3ff312b25e43d62729a07bba7b70d765e991435a.scope: Deactivated successfully.
Jan 23 23:59:02.394809 containerd[2034]: time="2026-01-23T23:59:02.393718404Z" level=info msg="shim disconnected" id=2d3fe2b8079e16f3ab630c0e3ff312b25e43d62729a07bba7b70d765e991435a namespace=k8s.io
Jan 23 23:59:02.394809 containerd[2034]: time="2026-01-23T23:59:02.393865177Z" level=warning msg="cleaning up after shim disconnected" id=2d3fe2b8079e16f3ab630c0e3ff312b25e43d62729a07bba7b70d765e991435a namespace=k8s.io
Jan 23 23:59:02.394809 containerd[2034]: time="2026-01-23T23:59:02.393888517Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 23 23:59:02.891868 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2d3fe2b8079e16f3ab630c0e3ff312b25e43d62729a07bba7b70d765e991435a-rootfs.mount: Deactivated successfully.
Jan 23 23:59:03.126506 containerd[2034]: time="2026-01-23T23:59:03.126330232Z" level=info msg="CreateContainer within sandbox \"1b233841e3961b0ecf5ed9517ffeb22db4fe0a62b4dda4df400f28ae0a0868c2\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jan 23 23:59:03.161121 containerd[2034]: time="2026-01-23T23:59:03.160087959Z" level=info msg="CreateContainer within sandbox \"1b233841e3961b0ecf5ed9517ffeb22db4fe0a62b4dda4df400f28ae0a0868c2\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"5b6968f4b4be35e9fbecfc74402bf6c2dd2ead9ca71ca16f1d5f54bd1f67a46f\""
Jan 23 23:59:03.161121 containerd[2034]: time="2026-01-23T23:59:03.160860197Z" level=info msg="StartContainer for \"5b6968f4b4be35e9fbecfc74402bf6c2dd2ead9ca71ca16f1d5f54bd1f67a46f\""
Jan 23 23:59:03.263214 systemd[1]: Started cri-containerd-5b6968f4b4be35e9fbecfc74402bf6c2dd2ead9ca71ca16f1d5f54bd1f67a46f.scope - libcontainer container 5b6968f4b4be35e9fbecfc74402bf6c2dd2ead9ca71ca16f1d5f54bd1f67a46f.
Jan 23 23:59:03.346980 containerd[2034]: time="2026-01-23T23:59:03.346814216Z" level=info msg="StartContainer for \"5b6968f4b4be35e9fbecfc74402bf6c2dd2ead9ca71ca16f1d5f54bd1f67a46f\" returns successfully"
Jan 23 23:59:03.377169 systemd[1]: cri-containerd-5b6968f4b4be35e9fbecfc74402bf6c2dd2ead9ca71ca16f1d5f54bd1f67a46f.scope: Deactivated successfully.
Jan 23 23:59:03.449497 containerd[2034]: time="2026-01-23T23:59:03.447709796Z" level=info msg="shim disconnected" id=5b6968f4b4be35e9fbecfc74402bf6c2dd2ead9ca71ca16f1d5f54bd1f67a46f namespace=k8s.io
Jan 23 23:59:03.449497 containerd[2034]: time="2026-01-23T23:59:03.447791004Z" level=warning msg="cleaning up after shim disconnected" id=5b6968f4b4be35e9fbecfc74402bf6c2dd2ead9ca71ca16f1d5f54bd1f67a46f namespace=k8s.io
Jan 23 23:59:03.449497 containerd[2034]: time="2026-01-23T23:59:03.447813900Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 23 23:59:03.891929 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5b6968f4b4be35e9fbecfc74402bf6c2dd2ead9ca71ca16f1d5f54bd1f67a46f-rootfs.mount: Deactivated successfully.
Jan 23 23:59:04.130985 containerd[2034]: time="2026-01-23T23:59:04.130895779Z" level=info msg="CreateContainer within sandbox \"1b233841e3961b0ecf5ed9517ffeb22db4fe0a62b4dda4df400f28ae0a0868c2\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jan 23 23:59:04.163787 containerd[2034]: time="2026-01-23T23:59:04.163623871Z" level=info msg="CreateContainer within sandbox \"1b233841e3961b0ecf5ed9517ffeb22db4fe0a62b4dda4df400f28ae0a0868c2\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"6d01d9560d8aee1b675345dc975d2a414db9dd37044a9f4a107b2b0cdf9c3a7a\""
Jan 23 23:59:04.165153 containerd[2034]: time="2026-01-23T23:59:04.165088123Z" level=info msg="StartContainer for \"6d01d9560d8aee1b675345dc975d2a414db9dd37044a9f4a107b2b0cdf9c3a7a\""
Jan 23 23:59:04.234354 systemd[1]: Started cri-containerd-6d01d9560d8aee1b675345dc975d2a414db9dd37044a9f4a107b2b0cdf9c3a7a.scope - libcontainer container 6d01d9560d8aee1b675345dc975d2a414db9dd37044a9f4a107b2b0cdf9c3a7a.
Jan 23 23:59:04.282876 systemd[1]: cri-containerd-6d01d9560d8aee1b675345dc975d2a414db9dd37044a9f4a107b2b0cdf9c3a7a.scope: Deactivated successfully.
Jan 23 23:59:04.286811 containerd[2034]: time="2026-01-23T23:59:04.286571075Z" level=info msg="StartContainer for \"6d01d9560d8aee1b675345dc975d2a414db9dd37044a9f4a107b2b0cdf9c3a7a\" returns successfully"
Jan 23 23:59:04.347137 containerd[2034]: time="2026-01-23T23:59:04.346710162Z" level=info msg="shim disconnected" id=6d01d9560d8aee1b675345dc975d2a414db9dd37044a9f4a107b2b0cdf9c3a7a namespace=k8s.io
Jan 23 23:59:04.347137 containerd[2034]: time="2026-01-23T23:59:04.346782426Z" level=warning msg="cleaning up after shim disconnected" id=6d01d9560d8aee1b675345dc975d2a414db9dd37044a9f4a107b2b0cdf9c3a7a namespace=k8s.io
Jan 23 23:59:04.347137 containerd[2034]: time="2026-01-23T23:59:04.346802500Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 23 23:59:04.891983 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6d01d9560d8aee1b675345dc975d2a414db9dd37044a9f4a107b2b0cdf9c3a7a-rootfs.mount: Deactivated successfully.
Jan 23 23:59:05.139261 containerd[2034]: time="2026-01-23T23:59:05.138867128Z" level=info msg="CreateContainer within sandbox \"1b233841e3961b0ecf5ed9517ffeb22db4fe0a62b4dda4df400f28ae0a0868c2\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jan 23 23:59:05.174586 containerd[2034]: time="2026-01-23T23:59:05.172862897Z" level=info msg="CreateContainer within sandbox \"1b233841e3961b0ecf5ed9517ffeb22db4fe0a62b4dda4df400f28ae0a0868c2\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"c1c4a961b1809bac9343c5e6bb02fc267a5aeb7460dacddfe92258b70f354b52\""
Jan 23 23:59:05.177994 containerd[2034]: time="2026-01-23T23:59:05.177540327Z" level=info msg="StartContainer for \"c1c4a961b1809bac9343c5e6bb02fc267a5aeb7460dacddfe92258b70f354b52\""
Jan 23 23:59:05.245323 systemd[1]: Started cri-containerd-c1c4a961b1809bac9343c5e6bb02fc267a5aeb7460dacddfe92258b70f354b52.scope - libcontainer container c1c4a961b1809bac9343c5e6bb02fc267a5aeb7460dacddfe92258b70f354b52.
Jan 23 23:59:05.306040 containerd[2034]: time="2026-01-23T23:59:05.304705623Z" level=info msg="StartContainer for \"c1c4a961b1809bac9343c5e6bb02fc267a5aeb7460dacddfe92258b70f354b52\" returns successfully"
Jan 23 23:59:05.631946 containerd[2034]: time="2026-01-23T23:59:05.631830478Z" level=info msg="StopPodSandbox for \"d3e8dda1095c8dc99943833bb6ad4e842f0d826eaba27b8da9052f331c5f50d1\""
Jan 23 23:59:05.632245 containerd[2034]: time="2026-01-23T23:59:05.631996184Z" level=info msg="TearDown network for sandbox \"d3e8dda1095c8dc99943833bb6ad4e842f0d826eaba27b8da9052f331c5f50d1\" successfully"
Jan 23 23:59:05.632245 containerd[2034]: time="2026-01-23T23:59:05.632056094Z" level=info msg="StopPodSandbox for \"d3e8dda1095c8dc99943833bb6ad4e842f0d826eaba27b8da9052f331c5f50d1\" returns successfully"
Jan 23 23:59:05.634140 containerd[2034]: time="2026-01-23T23:59:05.633091744Z" level=info msg="RemovePodSandbox for \"d3e8dda1095c8dc99943833bb6ad4e842f0d826eaba27b8da9052f331c5f50d1\""
Jan 23 23:59:05.634140 containerd[2034]: time="2026-01-23T23:59:05.633148604Z" level=info msg="Forcibly stopping sandbox \"d3e8dda1095c8dc99943833bb6ad4e842f0d826eaba27b8da9052f331c5f50d1\""
Jan 23 23:59:05.634140 containerd[2034]: time="2026-01-23T23:59:05.633244712Z" level=info msg="TearDown network for sandbox \"d3e8dda1095c8dc99943833bb6ad4e842f0d826eaba27b8da9052f331c5f50d1\" successfully"
Jan 23 23:59:05.640416 containerd[2034]: time="2026-01-23T23:59:05.640308935Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d3e8dda1095c8dc99943833bb6ad4e842f0d826eaba27b8da9052f331c5f50d1\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 23 23:59:05.640719 containerd[2034]: time="2026-01-23T23:59:05.640685047Z" level=info msg="RemovePodSandbox \"d3e8dda1095c8dc99943833bb6ad4e842f0d826eaba27b8da9052f331c5f50d1\" returns successfully"
Jan 23 23:59:05.642047 containerd[2034]: time="2026-01-23T23:59:05.641608668Z" level=info msg="StopPodSandbox for \"52e6157f14f965c40b08a5f2e45edbabaa5996348fd9eb864f0a64193cd918a5\""
Jan 23 23:59:05.642047 containerd[2034]: time="2026-01-23T23:59:05.641736760Z" level=info msg="TearDown network for sandbox \"52e6157f14f965c40b08a5f2e45edbabaa5996348fd9eb864f0a64193cd918a5\" successfully"
Jan 23 23:59:05.642047 containerd[2034]: time="2026-01-23T23:59:05.641759680Z" level=info msg="StopPodSandbox for \"52e6157f14f965c40b08a5f2e45edbabaa5996348fd9eb864f0a64193cd918a5\" returns successfully"
Jan 23 23:59:05.643134 containerd[2034]: time="2026-01-23T23:59:05.642961648Z" level=info msg="RemovePodSandbox for \"52e6157f14f965c40b08a5f2e45edbabaa5996348fd9eb864f0a64193cd918a5\""
Jan 23 23:59:05.643134 containerd[2034]: time="2026-01-23T23:59:05.643061778Z" level=info msg="Forcibly stopping sandbox \"52e6157f14f965c40b08a5f2e45edbabaa5996348fd9eb864f0a64193cd918a5\""
Jan 23 23:59:05.643415 containerd[2034]: time="2026-01-23T23:59:05.643172498Z" level=info msg="TearDown network for sandbox \"52e6157f14f965c40b08a5f2e45edbabaa5996348fd9eb864f0a64193cd918a5\" successfully"
Jan 23 23:59:05.649449 containerd[2034]: time="2026-01-23T23:59:05.649192750Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"52e6157f14f965c40b08a5f2e45edbabaa5996348fd9eb864f0a64193cd918a5\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 23 23:59:05.649449 containerd[2034]: time="2026-01-23T23:59:05.649308500Z" level=info msg="RemovePodSandbox \"52e6157f14f965c40b08a5f2e45edbabaa5996348fd9eb864f0a64193cd918a5\" returns successfully"
Jan 23 23:59:06.173482 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce))
Jan 23 23:59:06.178198 kubelet[3494]: I0123 23:59:06.177855 3494 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-gfnjr" podStartSLOduration=6.177830633 podStartE2EDuration="6.177830633s" podCreationTimestamp="2026-01-23 23:59:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 23:59:06.175872791 +0000 UTC m=+120.845929043" watchObservedRunningTime="2026-01-23 23:59:06.177830633 +0000 UTC m=+120.847886885"
Jan 23 23:59:10.482654 systemd-networkd[1917]: lxc_health: Link UP
Jan 23 23:59:10.491633 systemd-networkd[1917]: lxc_health: Gained carrier
Jan 23 23:59:10.495943 (udev-worker)[6121]: Network interface NamePolicy= disabled on kernel command line.
Jan 23 23:59:11.971338 systemd-networkd[1917]: lxc_health: Gained IPv6LL
Jan 23 23:59:14.581501 ntpd[2000]: Listen normally on 14 lxc_health [fe80::10d3:f7ff:fe9e:74c%14]:123
Jan 23 23:59:14.582094 ntpd[2000]: 23 Jan 23:59:14 ntpd[2000]: Listen normally on 14 lxc_health [fe80::10d3:f7ff:fe9e:74c%14]:123
Jan 23 23:59:16.140375 sshd[5381]: pam_unix(sshd:session): session closed for user core
Jan 23 23:59:16.149480 systemd[1]: sshd@26-172.31.30.184:22-4.153.228.146:49580.service: Deactivated successfully.
Jan 23 23:59:16.155487 systemd[1]: session-27.scope: Deactivated successfully.
Jan 23 23:59:16.158546 systemd-logind[2007]: Session 27 logged out. Waiting for processes to exit.
Jan 23 23:59:16.162055 systemd-logind[2007]: Removed session 27.
Jan 23 23:59:31.451789 systemd[1]: cri-containerd-263147a498887f38e84adeec4e87ea97ff7c18268b388c396a35ff8a3f9de6aa.scope: Deactivated successfully.
Jan 23 23:59:31.453117 systemd[1]: cri-containerd-263147a498887f38e84adeec4e87ea97ff7c18268b388c396a35ff8a3f9de6aa.scope: Consumed 7.354s CPU time, 17.7M memory peak, 0B memory swap peak.
Jan 23 23:59:31.499312 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-263147a498887f38e84adeec4e87ea97ff7c18268b388c396a35ff8a3f9de6aa-rootfs.mount: Deactivated successfully.
Jan 23 23:59:31.505742 containerd[2034]: time="2026-01-23T23:59:31.505398668Z" level=info msg="shim disconnected" id=263147a498887f38e84adeec4e87ea97ff7c18268b388c396a35ff8a3f9de6aa namespace=k8s.io
Jan 23 23:59:31.505742 containerd[2034]: time="2026-01-23T23:59:31.505476215Z" level=warning msg="cleaning up after shim disconnected" id=263147a498887f38e84adeec4e87ea97ff7c18268b388c396a35ff8a3f9de6aa namespace=k8s.io
Jan 23 23:59:31.505742 containerd[2034]: time="2026-01-23T23:59:31.505497958Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 23 23:59:32.219539 kubelet[3494]: I0123 23:59:32.219494 3494 scope.go:117] "RemoveContainer" containerID="263147a498887f38e84adeec4e87ea97ff7c18268b388c396a35ff8a3f9de6aa"
Jan 23 23:59:32.223466 containerd[2034]: time="2026-01-23T23:59:32.223405533Z" level=info msg="CreateContainer within sandbox \"6114e646b06ef10949d53fc031965cdb5f386a4c407a2c8fbbf0ce6f07e3bb94\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Jan 23 23:59:32.246637 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount404950213.mount: Deactivated successfully.
Jan 23 23:59:32.251566 containerd[2034]: time="2026-01-23T23:59:32.251499610Z" level=info msg="CreateContainer within sandbox \"6114e646b06ef10949d53fc031965cdb5f386a4c407a2c8fbbf0ce6f07e3bb94\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"ab21f16179afb1db881223bfa762c3de612c73aac584d303ad10c8947f88b124\""
Jan 23 23:59:32.252259 containerd[2034]: time="2026-01-23T23:59:32.252217076Z" level=info msg="StartContainer for \"ab21f16179afb1db881223bfa762c3de612c73aac584d303ad10c8947f88b124\""
Jan 23 23:59:32.309335 systemd[1]: Started cri-containerd-ab21f16179afb1db881223bfa762c3de612c73aac584d303ad10c8947f88b124.scope - libcontainer container ab21f16179afb1db881223bfa762c3de612c73aac584d303ad10c8947f88b124.
Jan 23 23:59:32.376721 containerd[2034]: time="2026-01-23T23:59:32.376655677Z" level=info msg="StartContainer for \"ab21f16179afb1db881223bfa762c3de612c73aac584d303ad10c8947f88b124\" returns successfully"
Jan 23 23:59:36.500272 systemd[1]: cri-containerd-d65d3f550defcb81906162ed696cc4fdee7f90376cc573a6439161d3a260ffa6.scope: Deactivated successfully.
Jan 23 23:59:36.500781 systemd[1]: cri-containerd-d65d3f550defcb81906162ed696cc4fdee7f90376cc573a6439161d3a260ffa6.scope: Consumed 4.741s CPU time, 16.0M memory peak, 0B memory swap peak.
Jan 23 23:59:36.558579 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d65d3f550defcb81906162ed696cc4fdee7f90376cc573a6439161d3a260ffa6-rootfs.mount: Deactivated successfully.
Jan 23 23:59:36.573219 containerd[2034]: time="2026-01-23T23:59:36.573093845Z" level=info msg="shim disconnected" id=d65d3f550defcb81906162ed696cc4fdee7f90376cc573a6439161d3a260ffa6 namespace=k8s.io
Jan 23 23:59:36.573219 containerd[2034]: time="2026-01-23T23:59:36.573218515Z" level=warning msg="cleaning up after shim disconnected" id=d65d3f550defcb81906162ed696cc4fdee7f90376cc573a6439161d3a260ffa6 namespace=k8s.io
Jan 23 23:59:36.573965 containerd[2034]: time="2026-01-23T23:59:36.573240834Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 23 23:59:37.236229 kubelet[3494]: I0123 23:59:37.236184 3494 scope.go:117] "RemoveContainer" containerID="d65d3f550defcb81906162ed696cc4fdee7f90376cc573a6439161d3a260ffa6"
Jan 23 23:59:37.239109 containerd[2034]: time="2026-01-23T23:59:37.239042730Z" level=info msg="CreateContainer within sandbox \"5682a7b0fac1f0c560e91959fae60823b53813b25f286e22495a3a7b1cea6046\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Jan 23 23:59:37.263870 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1366181542.mount: Deactivated successfully.
Jan 23 23:59:37.272502 containerd[2034]: time="2026-01-23T23:59:37.272290574Z" level=info msg="CreateContainer within sandbox \"5682a7b0fac1f0c560e91959fae60823b53813b25f286e22495a3a7b1cea6046\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"99608a96c116b7934d243fa29205e969e6043130bc52927931a4cc7049501419\""
Jan 23 23:59:37.273637 containerd[2034]: time="2026-01-23T23:59:37.273331674Z" level=info msg="StartContainer for \"99608a96c116b7934d243fa29205e969e6043130bc52927931a4cc7049501419\""
Jan 23 23:59:37.322360 systemd[1]: Started cri-containerd-99608a96c116b7934d243fa29205e969e6043130bc52927931a4cc7049501419.scope - libcontainer container 99608a96c116b7934d243fa29205e969e6043130bc52927931a4cc7049501419.
Jan 23 23:59:37.388399 containerd[2034]: time="2026-01-23T23:59:37.388303328Z" level=info msg="StartContainer for \"99608a96c116b7934d243fa29205e969e6043130bc52927931a4cc7049501419\" returns successfully"
Jan 23 23:59:39.176520 kubelet[3494]: E0123 23:59:39.176445 3494 controller.go:195] "Failed to update lease" err="Put \"https://172.31.30.184:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-30-184?timeout=10s\": context deadline exceeded"
Jan 23 23:59:49.177250 kubelet[3494]: E0123 23:59:49.176905 3494 controller.go:195] "Failed to update lease" err="Put \"https://172.31.30.184:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-30-184?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"