Jan 23 23:53:57.323316 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083]
Jan 23 23:53:57.323363 kernel: Linux version 6.6.119-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Fri Jan 23 22:26:47 -00 2026
Jan 23 23:53:57.323389 kernel: KASLR disabled due to lack of seed
Jan 23 23:53:57.323406 kernel: efi: EFI v2.7 by EDK II
Jan 23 23:53:57.323422 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7b001a98 MEMRESERVE=0x7852ee18
Jan 23 23:53:57.323439 kernel: ACPI: Early table checksum verification disabled
Jan 23 23:53:57.323457 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON)
Jan 23 23:53:57.323473 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001 01000013)
Jan 23 23:53:57.323489 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001)
Jan 23 23:53:57.323505 kernel: ACPI: DSDT 0x0000000078640000 0013D2 (v02 AMAZON AMZNDSDT 00000001 AMZN 00000001)
Jan 23 23:53:57.323526 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Jan 23 23:53:57.323543 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001)
Jan 23 23:53:57.323558 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001)
Jan 23 23:53:57.323574 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001)
Jan 23 23:53:57.323593 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Jan 23 23:53:57.323614 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001)
Jan 23 23:53:57.323631 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001)
Jan 23 23:53:57.323648 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200
Jan 23 23:53:57.323686 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200')
Jan 23 23:53:57.323705 kernel: printk: bootconsole [uart0] enabled
Jan 23 23:53:57.323722 kernel: NUMA: Failed to initialise from firmware
Jan 23 23:53:57.323740 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff]
Jan 23 23:53:57.323757 kernel: NUMA: NODE_DATA [mem 0x4b583f800-0x4b5844fff]
Jan 23 23:53:57.323773 kernel: Zone ranges:
Jan 23 23:53:57.323790 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff]
Jan 23 23:53:57.323807 kernel: DMA32 empty
Jan 23 23:53:57.323829 kernel: Normal [mem 0x0000000100000000-0x00000004b5ffffff]
Jan 23 23:53:57.323846 kernel: Movable zone start for each node
Jan 23 23:53:57.323863 kernel: Early memory node ranges
Jan 23 23:53:57.323879 kernel: node 0: [mem 0x0000000040000000-0x000000007862ffff]
Jan 23 23:53:57.323896 kernel: node 0: [mem 0x0000000078630000-0x000000007863ffff]
Jan 23 23:53:57.323912 kernel: node 0: [mem 0x0000000078640000-0x00000000786effff]
Jan 23 23:53:57.323929 kernel: node 0: [mem 0x00000000786f0000-0x000000007872ffff]
Jan 23 23:53:57.323945 kernel: node 0: [mem 0x0000000078730000-0x000000007bbfffff]
Jan 23 23:53:57.323962 kernel: node 0: [mem 0x000000007bc00000-0x000000007bfdffff]
Jan 23 23:53:57.323978 kernel: node 0: [mem 0x000000007bfe0000-0x000000007fffffff]
Jan 23 23:53:57.323995 kernel: node 0: [mem 0x0000000400000000-0x00000004b5ffffff]
Jan 23 23:53:57.324011 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000004b5ffffff]
Jan 23 23:53:57.324033 kernel: On node 0, zone Normal: 8192 pages in unavailable ranges
Jan 23 23:53:57.324051 kernel: psci: probing for conduit method from ACPI.
Jan 23 23:53:57.324074 kernel: psci: PSCIv1.0 detected in firmware.
Jan 23 23:53:57.324092 kernel: psci: Using standard PSCI v0.2 function IDs
Jan 23 23:53:57.324110 kernel: psci: Trusted OS migration not required
Jan 23 23:53:57.324132 kernel: psci: SMC Calling Convention v1.1
Jan 23 23:53:57.324150 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000001)
Jan 23 23:53:57.324168 kernel: percpu: Embedded 30 pages/cpu s85672 r8192 d29016 u122880
Jan 23 23:53:57.324185 kernel: pcpu-alloc: s85672 r8192 d29016 u122880 alloc=30*4096
Jan 23 23:53:57.324203 kernel: pcpu-alloc: [0] 0 [0] 1
Jan 23 23:53:57.324957 kernel: Detected PIPT I-cache on CPU0
Jan 23 23:53:57.324985 kernel: CPU features: detected: GIC system register CPU interface
Jan 23 23:53:57.325004 kernel: CPU features: detected: Spectre-v2
Jan 23 23:53:57.325022 kernel: CPU features: detected: Spectre-v3a
Jan 23 23:53:57.325040 kernel: CPU features: detected: Spectre-BHB
Jan 23 23:53:57.325059 kernel: CPU features: detected: ARM erratum 1742098
Jan 23 23:53:57.325086 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923
Jan 23 23:53:57.325105 kernel: alternatives: applying boot alternatives
Jan 23 23:53:57.325126 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=a01f25d0714a86cf8b897276230b4ac71c04b1d69bd03a1f6d2ef96f59ef0f09
Jan 23 23:53:57.325145 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 23 23:53:57.325163 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 23 23:53:57.325181 kernel: Fallback order for Node 0: 0
Jan 23 23:53:57.325199 kernel: Built 1 zonelists, mobility grouping on. Total pages: 991872
Jan 23 23:53:57.325254 kernel: Policy zone: Normal
Jan 23 23:53:57.325278 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 23 23:53:57.325296 kernel: software IO TLB: area num 2.
Jan 23 23:53:57.325314 kernel: software IO TLB: mapped [mem 0x000000007c000000-0x0000000080000000] (64MB)
Jan 23 23:53:57.325340 kernel: Memory: 3820096K/4030464K available (10304K kernel code, 2180K rwdata, 8112K rodata, 39424K init, 897K bss, 210368K reserved, 0K cma-reserved)
Jan 23 23:53:57.325361 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jan 23 23:53:57.325379 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 23 23:53:57.325398 kernel: rcu: RCU event tracing is enabled.
Jan 23 23:53:57.325417 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jan 23 23:53:57.325435 kernel: Trampoline variant of Tasks RCU enabled.
Jan 23 23:53:57.325453 kernel: Tracing variant of Tasks RCU enabled.
Jan 23 23:53:57.325471 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 23 23:53:57.325489 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jan 23 23:53:57.325507 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Jan 23 23:53:57.325524 kernel: GICv3: 96 SPIs implemented
Jan 23 23:53:57.325547 kernel: GICv3: 0 Extended SPIs implemented
Jan 23 23:53:57.325565 kernel: Root IRQ handler: gic_handle_irq
Jan 23 23:53:57.325582 kernel: GICv3: GICv3 features: 16 PPIs
Jan 23 23:53:57.325600 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000
Jan 23 23:53:57.325617 kernel: ITS [mem 0x10080000-0x1009ffff]
Jan 23 23:53:57.325635 kernel: ITS@0x0000000010080000: allocated 8192 Devices @4000b0000 (indirect, esz 8, psz 64K, shr 1)
Jan 23 23:53:57.325654 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @4000c0000 (flat, esz 8, psz 64K, shr 1)
Jan 23 23:53:57.325673 kernel: GICv3: using LPI property table @0x00000004000d0000
Jan 23 23:53:57.325692 kernel: ITS: Using hypervisor restricted LPI range [128]
Jan 23 23:53:57.325709 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000004000e0000
Jan 23 23:53:57.325727 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 23 23:53:57.325745 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt).
Jan 23 23:53:57.325768 kernel: clocksource: arch_sys_counter: mask: 0x1ffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns
Jan 23 23:53:57.325786 kernel: sched_clock: 57 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns
Jan 23 23:53:57.325804 kernel: Console: colour dummy device 80x25
Jan 23 23:53:57.325822 kernel: printk: console [tty1] enabled
Jan 23 23:53:57.325840 kernel: ACPI: Core revision 20230628
Jan 23 23:53:57.325859 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 166.66 BogoMIPS (lpj=83333)
Jan 23 23:53:57.325878 kernel: pid_max: default: 32768 minimum: 301
Jan 23 23:53:57.325896 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 23 23:53:57.325914 kernel: landlock: Up and running.
Jan 23 23:53:57.325937 kernel: SELinux: Initializing.
Jan 23 23:53:57.325956 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 23 23:53:57.325974 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 23 23:53:57.325992 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 23 23:53:57.326010 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 23 23:53:57.326030 kernel: rcu: Hierarchical SRCU implementation.
Jan 23 23:53:57.326048 kernel: rcu: Max phase no-delay instances is 400.
Jan 23 23:53:57.326066 kernel: Platform MSI: ITS@0x10080000 domain created
Jan 23 23:53:57.326084 kernel: PCI/MSI: ITS@0x10080000 domain created
Jan 23 23:53:57.326106 kernel: Remapping and enabling EFI services.
Jan 23 23:53:57.326124 kernel: smp: Bringing up secondary CPUs ...
Jan 23 23:53:57.326142 kernel: Detected PIPT I-cache on CPU1
Jan 23 23:53:57.326160 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000
Jan 23 23:53:57.326178 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000004000f0000
Jan 23 23:53:57.326196 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083]
Jan 23 23:53:57.326239 kernel: smp: Brought up 1 node, 2 CPUs
Jan 23 23:53:57.326266 kernel: SMP: Total of 2 processors activated.
Jan 23 23:53:57.326284 kernel: CPU features: detected: 32-bit EL0 Support
Jan 23 23:53:57.326310 kernel: CPU features: detected: 32-bit EL1 Support
Jan 23 23:53:57.326329 kernel: CPU features: detected: CRC32 instructions
Jan 23 23:53:57.326348 kernel: CPU: All CPU(s) started at EL1
Jan 23 23:53:57.326378 kernel: alternatives: applying system-wide alternatives
Jan 23 23:53:57.326401 kernel: devtmpfs: initialized
Jan 23 23:53:57.326420 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 23 23:53:57.326439 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jan 23 23:53:57.326458 kernel: pinctrl core: initialized pinctrl subsystem
Jan 23 23:53:57.326477 kernel: SMBIOS 3.0.0 present.
Jan 23 23:53:57.326500 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018
Jan 23 23:53:57.326519 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 23 23:53:57.326538 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Jan 23 23:53:57.326557 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jan 23 23:53:57.326577 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jan 23 23:53:57.326597 kernel: audit: initializing netlink subsys (disabled)
Jan 23 23:53:57.326616 kernel: audit: type=2000 audit(0.300:1): state=initialized audit_enabled=0 res=1
Jan 23 23:53:57.326636 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 23 23:53:57.326660 kernel: cpuidle: using governor menu
Jan 23 23:53:57.326681 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Jan 23 23:53:57.326700 kernel: ASID allocator initialised with 65536 entries
Jan 23 23:53:57.326720 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 23 23:53:57.326739 kernel: Serial: AMBA PL011 UART driver
Jan 23 23:53:57.326758 kernel: Modules: 17488 pages in range for non-PLT usage
Jan 23 23:53:57.326777 kernel: Modules: 509008 pages in range for PLT usage
Jan 23 23:53:57.326797 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 23 23:53:57.326817 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Jan 23 23:53:57.326840 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Jan 23 23:53:57.326860 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Jan 23 23:53:57.326879 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 23 23:53:57.326897 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Jan 23 23:53:57.326916 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Jan 23 23:53:57.326935 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Jan 23 23:53:57.326954 kernel: ACPI: Added _OSI(Module Device)
Jan 23 23:53:57.326972 kernel: ACPI: Added _OSI(Processor Device)
Jan 23 23:53:57.326991 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 23 23:53:57.327015 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 23 23:53:57.327034 kernel: ACPI: Interpreter enabled
Jan 23 23:53:57.327052 kernel: ACPI: Using GIC for interrupt routing
Jan 23 23:53:57.327071 kernel: ACPI: MCFG table detected, 1 entries
Jan 23 23:53:57.327090 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00])
Jan 23 23:53:57.330123 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jan 23 23:53:57.330406 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Jan 23 23:53:57.330615 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Jan 23 23:53:57.330829 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x200fffff] reserved by PNP0C02:00
Jan 23 23:53:57.331038 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x200fffff] for [bus 00]
Jan 23 23:53:57.331065 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io 0x0000-0xffff window]
Jan 23 23:53:57.331084 kernel: acpiphp: Slot [1] registered
Jan 23 23:53:57.331104 kernel: acpiphp: Slot [2] registered
Jan 23 23:53:57.331123 kernel: acpiphp: Slot [3] registered
Jan 23 23:53:57.331142 kernel: acpiphp: Slot [4] registered
Jan 23 23:53:57.331161 kernel: acpiphp: Slot [5] registered
Jan 23 23:53:57.331186 kernel: acpiphp: Slot [6] registered
Jan 23 23:53:57.331205 kernel: acpiphp: Slot [7] registered
Jan 23 23:53:57.331247 kernel: acpiphp: Slot [8] registered
Jan 23 23:53:57.331269 kernel: acpiphp: Slot [9] registered
Jan 23 23:53:57.331288 kernel: acpiphp: Slot [10] registered
Jan 23 23:53:57.331307 kernel: acpiphp: Slot [11] registered
Jan 23 23:53:57.331326 kernel: acpiphp: Slot [12] registered
Jan 23 23:53:57.331345 kernel: acpiphp: Slot [13] registered
Jan 23 23:53:57.331364 kernel: acpiphp: Slot [14] registered
Jan 23 23:53:57.331382 kernel: acpiphp: Slot [15] registered
Jan 23 23:53:57.331409 kernel: acpiphp: Slot [16] registered
Jan 23 23:53:57.331428 kernel: acpiphp: Slot [17] registered
Jan 23 23:53:57.331446 kernel: acpiphp: Slot [18] registered
Jan 23 23:53:57.331465 kernel: acpiphp: Slot [19] registered
Jan 23 23:53:57.331484 kernel: acpiphp: Slot [20] registered
Jan 23 23:53:57.331504 kernel: acpiphp: Slot [21] registered
Jan 23 23:53:57.331943 kernel: acpiphp: Slot [22] registered
Jan 23 23:53:57.331966 kernel: acpiphp: Slot [23] registered
Jan 23 23:53:57.331985 kernel: acpiphp: Slot [24] registered
Jan 23 23:53:57.332010 kernel: acpiphp: Slot [25] registered
Jan 23 23:53:57.332030 kernel: acpiphp: Slot [26] registered
Jan 23 23:53:57.332049 kernel: acpiphp: Slot [27] registered
Jan 23 23:53:57.332068 kernel: acpiphp: Slot [28] registered
Jan 23 23:53:57.332087 kernel: acpiphp: Slot [29] registered
Jan 23 23:53:57.332106 kernel: acpiphp: Slot [30] registered
Jan 23 23:53:57.332126 kernel: acpiphp: Slot [31] registered
Jan 23 23:53:57.332146 kernel: PCI host bridge to bus 0000:00
Jan 23 23:53:57.332462 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window]
Jan 23 23:53:57.332697 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Jan 23 23:53:57.333027 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window]
Jan 23 23:53:57.333305 kernel: pci_bus 0000:00: root bus resource [bus 00]
Jan 23 23:53:57.333582 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000
Jan 23 23:53:57.333842 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003
Jan 23 23:53:57.334097 kernel: pci 0000:00:01.0: reg 0x10: [mem 0x80118000-0x80118fff]
Jan 23 23:53:57.334425 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
Jan 23 23:53:57.334668 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80114000-0x80117fff]
Jan 23 23:53:57.334905 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold
Jan 23 23:53:57.335153 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
Jan 23 23:53:57.335463 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80110000-0x80113fff]
Jan 23 23:53:57.335717 kernel: pci 0000:00:05.0: reg 0x18: [mem 0x80000000-0x800fffff pref]
Jan 23 23:53:57.335953 kernel: pci 0000:00:05.0: reg 0x20: [mem 0x80100000-0x8010ffff]
Jan 23 23:53:57.336189 kernel: pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold
Jan 23 23:53:57.336417 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window]
Jan 23 23:53:57.336614 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Jan 23 23:53:57.336806 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window]
Jan 23 23:53:57.336833 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Jan 23 23:53:57.336854 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Jan 23 23:53:57.336873 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Jan 23 23:53:57.336893 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Jan 23 23:53:57.336920 kernel: iommu: Default domain type: Translated
Jan 23 23:53:57.336940 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Jan 23 23:53:57.336960 kernel: efivars: Registered efivars operations
Jan 23 23:53:57.336979 kernel: vgaarb: loaded
Jan 23 23:53:57.336998 kernel: clocksource: Switched to clocksource arch_sys_counter
Jan 23 23:53:57.337019 kernel: VFS: Disk quotas dquot_6.6.0
Jan 23 23:53:57.337038 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 23 23:53:57.337058 kernel: pnp: PnP ACPI init
Jan 23 23:53:57.337318 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved
Jan 23 23:53:57.337355 kernel: pnp: PnP ACPI: found 1 devices
Jan 23 23:53:57.337374 kernel: NET: Registered PF_INET protocol family
Jan 23 23:53:57.337394 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 23 23:53:57.337416 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jan 23 23:53:57.337436 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 23 23:53:57.337456 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 23 23:53:57.337478 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jan 23 23:53:57.337497 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jan 23 23:53:57.337522 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 23 23:53:57.337542 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 23 23:53:57.337561 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 23 23:53:57.337579 kernel: PCI: CLS 0 bytes, default 64
Jan 23 23:53:57.337598 kernel: kvm [1]: HYP mode not available
Jan 23 23:53:57.337617 kernel: Initialise system trusted keyrings
Jan 23 23:53:57.337636 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jan 23 23:53:57.337655 kernel: Key type asymmetric registered
Jan 23 23:53:57.337673 kernel: Asymmetric key parser 'x509' registered
Jan 23 23:53:57.337697 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jan 23 23:53:57.337716 kernel: io scheduler mq-deadline registered
Jan 23 23:53:57.337735 kernel: io scheduler kyber registered
Jan 23 23:53:57.337754 kernel: io scheduler bfq registered
Jan 23 23:53:57.337974 kernel: pl061_gpio ARMH0061:00: PL061 GPIO chip registered
Jan 23 23:53:57.338002 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Jan 23 23:53:57.338022 kernel: ACPI: button: Power Button [PWRB]
Jan 23 23:53:57.338041 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input1
Jan 23 23:53:57.338060 kernel: ACPI: button: Sleep Button [SLPB]
Jan 23 23:53:57.338085 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 23 23:53:57.338105 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37
Jan 23 23:53:57.338365 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012)
Jan 23 23:53:57.338393 kernel: printk: console [ttyS0] disabled
Jan 23 23:53:57.338414 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A
Jan 23 23:53:57.338434 kernel: printk: console [ttyS0] enabled
Jan 23 23:53:57.338452 kernel: printk: bootconsole [uart0] disabled
Jan 23 23:53:57.338471 kernel: thunder_xcv, ver 1.0
Jan 23 23:53:57.338490 kernel: thunder_bgx, ver 1.0
Jan 23 23:53:57.338516 kernel: nicpf, ver 1.0
Jan 23 23:53:57.338534 kernel: nicvf, ver 1.0
Jan 23 23:53:57.338762 kernel: rtc-efi rtc-efi.0: registered as rtc0
Jan 23 23:53:57.338965 kernel: rtc-efi rtc-efi.0: setting system clock to 2026-01-23T23:53:56 UTC (1769212436)
Jan 23 23:53:57.338992 kernel: hid: raw HID events driver (C) Jiri Kosina
Jan 23 23:53:57.339011 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 counters available
Jan 23 23:53:57.339031 kernel: watchdog: Delayed init of the lockup detector failed: -19
Jan 23 23:53:57.339050 kernel: watchdog: Hard watchdog permanently disabled
Jan 23 23:53:57.339075 kernel: NET: Registered PF_INET6 protocol family
Jan 23 23:53:57.339094 kernel: Segment Routing with IPv6
Jan 23 23:53:57.339113 kernel: In-situ OAM (IOAM) with IPv6
Jan 23 23:53:57.339131 kernel: NET: Registered PF_PACKET protocol family
Jan 23 23:53:57.339150 kernel: Key type dns_resolver registered
Jan 23 23:53:57.339168 kernel: registered taskstats version 1
Jan 23 23:53:57.339188 kernel: Loading compiled-in X.509 certificates
Jan 23 23:53:57.339206 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.119-flatcar: e1080b1efd8e2d5332b6814128fba42796535445'
Jan 23 23:53:57.339247 kernel: Key type .fscrypt registered
Jan 23 23:53:57.339274 kernel: Key type fscrypt-provisioning registered
Jan 23 23:53:57.339293 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 23 23:53:57.339312 kernel: ima: Allocated hash algorithm: sha1
Jan 23 23:53:57.339331 kernel: ima: No architecture policies found
Jan 23 23:53:57.339350 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Jan 23 23:53:57.339369 kernel: clk: Disabling unused clocks
Jan 23 23:53:57.339387 kernel: Freeing unused kernel memory: 39424K
Jan 23 23:53:57.339406 kernel: Run /init as init process
Jan 23 23:53:57.339424 kernel: with arguments:
Jan 23 23:53:57.339448 kernel: /init
Jan 23 23:53:57.339466 kernel: with environment:
Jan 23 23:53:57.339484 kernel: HOME=/
Jan 23 23:53:57.339503 kernel: TERM=linux
Jan 23 23:53:57.339526 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 23 23:53:57.339550 systemd[1]: Detected virtualization amazon.
Jan 23 23:53:57.339571 systemd[1]: Detected architecture arm64.
Jan 23 23:53:57.339592 systemd[1]: Running in initrd.
Jan 23 23:53:57.339617 systemd[1]: No hostname configured, using default hostname.
Jan 23 23:53:57.339637 systemd[1]: Hostname set to .
Jan 23 23:53:57.339674 systemd[1]: Initializing machine ID from VM UUID.
Jan 23 23:53:57.339699 systemd[1]: Queued start job for default target initrd.target.
Jan 23 23:53:57.339720 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 23 23:53:57.339741 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 23 23:53:57.339763 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 23 23:53:57.339784 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 23 23:53:57.339811 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 23 23:53:57.339832 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 23 23:53:57.339856 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 23 23:53:57.339877 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 23 23:53:57.339898 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 23 23:53:57.339918 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 23 23:53:57.339944 systemd[1]: Reached target paths.target - Path Units.
Jan 23 23:53:57.339967 systemd[1]: Reached target slices.target - Slice Units.
Jan 23 23:53:57.339987 systemd[1]: Reached target swap.target - Swaps.
Jan 23 23:53:57.340009 systemd[1]: Reached target timers.target - Timer Units.
Jan 23 23:53:57.340030 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 23 23:53:57.340052 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 23 23:53:57.340073 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 23 23:53:57.340094 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 23 23:53:57.340114 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 23 23:53:57.340140 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 23 23:53:57.340161 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 23 23:53:57.340181 systemd[1]: Reached target sockets.target - Socket Units.
Jan 23 23:53:57.340201 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 23 23:53:57.340244 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 23 23:53:57.340268 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 23 23:53:57.340288 systemd[1]: Starting systemd-fsck-usr.service...
Jan 23 23:53:57.340309 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 23 23:53:57.340330 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 23 23:53:57.340358 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 23 23:53:57.340379 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 23 23:53:57.340400 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 23 23:53:57.340420 systemd[1]: Finished systemd-fsck-usr.service.
Jan 23 23:53:57.340477 systemd-journald[252]: Collecting audit messages is disabled.
Jan 23 23:53:57.340527 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 23 23:53:57.340548 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 23 23:53:57.340568 systemd-journald[252]: Journal started
Jan 23 23:53:57.340611 systemd-journald[252]: Runtime Journal (/run/log/journal/ec247601b0aa6288aee9e0fa4bac0152) is 8.0M, max 75.3M, 67.3M free.
Jan 23 23:53:57.291755 systemd-modules-load[253]: Inserted module 'overlay'
Jan 23 23:53:57.349552 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 23 23:53:57.353862 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 23 23:53:57.363474 kernel: Bridge firewalling registered
Jan 23 23:53:57.356622 systemd-modules-load[253]: Inserted module 'br_netfilter'
Jan 23 23:53:57.366020 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 23 23:53:57.375617 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 23 23:53:57.387680 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 23 23:53:57.397472 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 23 23:53:57.404695 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 23 23:53:57.418604 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 23 23:53:57.466568 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 23 23:53:57.472705 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 23 23:53:57.483394 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 23 23:53:57.490513 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 23 23:53:57.503549 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 23 23:53:57.519626 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 23 23:53:57.549872 dracut-cmdline[289]: dracut-dracut-053
Jan 23 23:53:57.557110 dracut-cmdline[289]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=a01f25d0714a86cf8b897276230b4ac71c04b1d69bd03a1f6d2ef96f59ef0f09
Jan 23 23:53:57.601898 systemd-resolved[290]: Positive Trust Anchors:
Jan 23 23:53:57.601933 systemd-resolved[290]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 23 23:53:57.601998 systemd-resolved[290]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 23 23:53:57.706261 kernel: SCSI subsystem initialized
Jan 23 23:53:57.714262 kernel: Loading iSCSI transport class v2.0-870.
Jan 23 23:53:57.727257 kernel: iscsi: registered transport (tcp)
Jan 23 23:53:57.750933 kernel: iscsi: registered transport (qla4xxx)
Jan 23 23:53:57.751010 kernel: QLogic iSCSI HBA Driver
Jan 23 23:53:57.839423 kernel: random: crng init done
Jan 23 23:53:57.839865 systemd-resolved[290]: Defaulting to hostname 'linux'.
Jan 23 23:53:57.845154 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 23 23:53:57.850473 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 23 23:53:57.881309 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 23 23:53:57.893651 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 23 23:53:57.940031 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 23 23:53:57.940111 kernel: device-mapper: uevent: version 1.0.3
Jan 23 23:53:57.942146 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jan 23 23:53:58.013278 kernel: raid6: neonx8 gen() 6665 MB/s
Jan 23 23:53:58.030274 kernel: raid6: neonx4 gen() 6500 MB/s
Jan 23 23:53:58.047269 kernel: raid6: neonx2 gen() 5421 MB/s
Jan 23 23:53:58.064271 kernel: raid6: neonx1 gen() 3931 MB/s
Jan 23 23:53:58.081268 kernel: raid6: int64x8 gen() 3795 MB/s
Jan 23 23:53:58.098281 kernel: raid6: int64x4 gen() 3689 MB/s
Jan 23 23:53:58.116274 kernel: raid6: int64x2 gen() 3530 MB/s
Jan 23 23:53:58.134681 kernel: raid6: int64x1 gen() 2695 MB/s
Jan 23 23:53:58.134766 kernel: raid6: using algorithm neonx8 gen() 6665 MB/s
Jan 23 23:53:58.153437 kernel: raid6: .... xor() 4718 MB/s, rmw enabled
Jan 23 23:53:58.153542 kernel: raid6: using neon recovery algorithm
Jan 23 23:53:58.163646 kernel: xor: measuring software checksum speed
Jan 23 23:53:58.163753 kernel: 8regs : 10585 MB/sec
Jan 23 23:53:58.164954 kernel: 32regs : 11189 MB/sec
Jan 23 23:53:58.167422 kernel: arm64_neon : 8820 MB/sec
Jan 23 23:53:58.167518 kernel: xor: using function: 32regs (11189 MB/sec)
Jan 23 23:53:58.256295 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 23 23:53:58.280349 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 23 23:53:58.292618 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 23 23:53:58.346206 systemd-udevd[472]: Using default interface naming scheme 'v255'.
Jan 23 23:53:58.355590 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 23 23:53:58.373579 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 23 23:53:58.413705 dracut-pre-trigger[477]: rd.md=0: removing MD RAID activation
Jan 23 23:53:58.479629 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 23 23:53:58.491582 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 23 23:53:58.635282 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 23 23:53:58.652004 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 23 23:53:58.695689 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 23 23:53:58.711559 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 23 23:53:58.715542 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 23 23:53:58.721264 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 23 23:53:58.738898 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 23 23:53:58.783578 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 23 23:53:58.885258 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Jan 23 23:53:58.885346 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012)
Jan 23 23:53:58.895916 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 23 23:53:58.899684 kernel: ena 0000:00:05.0: ENA device version: 0.10
Jan 23 23:53:58.900098 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
Jan 23 23:53:58.896659 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 23 23:53:58.906884 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 23 23:53:58.909527 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 23 23:53:58.909868 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 23 23:53:58.913413 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 23 23:53:58.933806 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80110000, mac addr 06:90:a1:3e:21:ab
Jan 23 23:53:58.934184 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35
Jan 23 23:53:58.934260 kernel: nvme nvme0: pci function 0000:00:04.0
Jan 23 23:53:58.935815 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 23 23:53:58.954292 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Jan 23 23:53:58.964318 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jan 23 23:53:58.964397 kernel: GPT:9289727 != 33554431
Jan 23 23:53:58.964426 kernel: GPT:Alternate GPT header not at the end of the disk.
Jan 23 23:53:58.965499 kernel: GPT:9289727 != 33554431
Jan 23 23:53:58.965587 kernel: GPT: Use GNU Parted to correct GPT errors.
Jan 23 23:53:58.965617 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jan 23 23:53:58.971829 (udev-worker)[515]: Network interface NamePolicy= disabled on kernel command line.
Jan 23 23:53:58.983812 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 23 23:53:59.008725 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 23 23:53:59.062993 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 23 23:53:59.106743 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/nvme0n1p6 scanned by (udev-worker) (516)
Jan 23 23:53:59.114304 kernel: BTRFS: device fsid 6d31cc5b-4da2-4320-9991-d4bd2fc0f7fe devid 1 transid 34 /dev/nvme0n1p3 scanned by (udev-worker) (521)
Jan 23 23:53:59.232315 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT.
Jan 23 23:53:59.268721 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM.
Jan 23 23:53:59.284293 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A.
Jan 23 23:53:59.286131 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A.
Jan 23 23:53:59.301146 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Jan 23 23:53:59.315533 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 23 23:53:59.331884 disk-uuid[661]: Primary Header is updated.
Jan 23 23:53:59.331884 disk-uuid[661]: Secondary Entries is updated.
Jan 23 23:53:59.331884 disk-uuid[661]: Secondary Header is updated.
Jan 23 23:53:59.343250 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jan 23 23:53:59.354367 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jan 23 23:53:59.363356 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jan 23 23:54:00.367319 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jan 23 23:54:00.369196 disk-uuid[662]: The operation has completed successfully.
Jan 23 23:54:00.573423 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 23 23:54:00.573652 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 23 23:54:00.631517 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 23 23:54:00.654001 sh[1006]: Success
Jan 23 23:54:00.681265 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Jan 23 23:54:00.807750 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 23 23:54:00.811408 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 23 23:54:00.823803 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 23 23:54:00.854683 kernel: BTRFS info (device dm-0): first mount of filesystem 6d31cc5b-4da2-4320-9991-d4bd2fc0f7fe
Jan 23 23:54:00.854762 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Jan 23 23:54:00.856874 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jan 23 23:54:00.858424 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 23 23:54:00.859685 kernel: BTRFS info (device dm-0): using free space tree
Jan 23 23:54:00.977256 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Jan 23 23:54:00.993320 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 23 23:54:01.002015 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 23 23:54:01.018657 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 23 23:54:01.025876 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 23 23:54:01.057449 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 821118c6-9a33-48e6-be55-657b51b768c7
Jan 23 23:54:01.057530 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Jan 23 23:54:01.058998 kernel: BTRFS info (device nvme0n1p6): using free space tree
Jan 23 23:54:01.074259 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Jan 23 23:54:01.097654 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jan 23 23:54:01.103955 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 821118c6-9a33-48e6-be55-657b51b768c7
Jan 23 23:54:01.111989 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 23 23:54:01.125944 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 23 23:54:01.261319 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 23 23:54:01.274613 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 23 23:54:01.340204 systemd-networkd[1206]: lo: Link UP
Jan 23 23:54:01.340250 systemd-networkd[1206]: lo: Gained carrier
Jan 23 23:54:01.344473 systemd-networkd[1206]: Enumeration completed
Jan 23 23:54:01.345542 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 23 23:54:01.347739 systemd-networkd[1206]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 23 23:54:01.347747 systemd-networkd[1206]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 23 23:54:01.353826 systemd[1]: Reached target network.target - Network.
Jan 23 23:54:01.359953 systemd-networkd[1206]: eth0: Link UP
Jan 23 23:54:01.359961 systemd-networkd[1206]: eth0: Gained carrier
Jan 23 23:54:01.359980 systemd-networkd[1206]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 23 23:54:01.391371 systemd-networkd[1206]: eth0: DHCPv4 address 172.31.20.17/20, gateway 172.31.16.1 acquired from 172.31.16.1
Jan 23 23:54:01.609244 ignition[1119]: Ignition 2.19.0
Jan 23 23:54:01.609269 ignition[1119]: Stage: fetch-offline
Jan 23 23:54:01.614989 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 23 23:54:01.610968 ignition[1119]: no configs at "/usr/lib/ignition/base.d"
Jan 23 23:54:01.610993 ignition[1119]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 23 23:54:01.611748 ignition[1119]: Ignition finished successfully
Jan 23 23:54:01.630737 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jan 23 23:54:01.663082 ignition[1215]: Ignition 2.19.0
Jan 23 23:54:01.663109 ignition[1215]: Stage: fetch
Jan 23 23:54:01.663882 ignition[1215]: no configs at "/usr/lib/ignition/base.d"
Jan 23 23:54:01.663908 ignition[1215]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 23 23:54:01.664074 ignition[1215]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 23 23:54:01.680944 ignition[1215]: PUT result: OK
Jan 23 23:54:01.686808 ignition[1215]: parsed url from cmdline: ""
Jan 23 23:54:01.686881 ignition[1215]: no config URL provided
Jan 23 23:54:01.686898 ignition[1215]: reading system config file "/usr/lib/ignition/user.ign"
Jan 23 23:54:01.686928 ignition[1215]: no config at "/usr/lib/ignition/user.ign"
Jan 23 23:54:01.686964 ignition[1215]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 23 23:54:01.691627 ignition[1215]: PUT result: OK
Jan 23 23:54:01.696206 ignition[1215]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1
Jan 23 23:54:01.700923 ignition[1215]: GET result: OK
Jan 23 23:54:01.701751 ignition[1215]: parsing config with SHA512: c8164155c1fdb20104fdcf74c036a172df5051d9ed07691d86ea44df8b00c8716ed8fa2590747fa1fc5dd53554a7994183ac47620046a93874c96b3f9790bae0
Jan 23 23:54:01.710630 unknown[1215]: fetched base config from "system"
Jan 23 23:54:01.711130 unknown[1215]: fetched base config from "system"
Jan 23 23:54:01.711963 ignition[1215]: fetch: fetch complete
Jan 23 23:54:01.711144 unknown[1215]: fetched user config from "aws"
Jan 23 23:54:01.711974 ignition[1215]: fetch: fetch passed
Jan 23 23:54:01.712095 ignition[1215]: Ignition finished successfully
Jan 23 23:54:01.725108 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jan 23 23:54:01.736711 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 23 23:54:01.769383 ignition[1221]: Ignition 2.19.0
Jan 23 23:54:01.769937 ignition[1221]: Stage: kargs
Jan 23 23:54:01.771406 ignition[1221]: no configs at "/usr/lib/ignition/base.d"
Jan 23 23:54:01.771441 ignition[1221]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 23 23:54:01.771617 ignition[1221]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 23 23:54:01.780410 ignition[1221]: PUT result: OK
Jan 23 23:54:01.785308 ignition[1221]: kargs: kargs passed
Jan 23 23:54:01.785449 ignition[1221]: Ignition finished successfully
Jan 23 23:54:01.793296 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 23 23:54:01.801748 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 23 23:54:01.831166 ignition[1228]: Ignition 2.19.0
Jan 23 23:54:01.831808 ignition[1228]: Stage: disks
Jan 23 23:54:01.833387 ignition[1228]: no configs at "/usr/lib/ignition/base.d"
Jan 23 23:54:01.833415 ignition[1228]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 23 23:54:01.833607 ignition[1228]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 23 23:54:01.842182 ignition[1228]: PUT result: OK
Jan 23 23:54:01.847930 ignition[1228]: disks: disks passed
Jan 23 23:54:01.848341 ignition[1228]: Ignition finished successfully
Jan 23 23:54:01.855323 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 23 23:54:01.858459 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 23 23:54:01.863319 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 23 23:54:01.871544 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 23 23:54:01.874025 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 23 23:54:01.877242 systemd[1]: Reached target basic.target - Basic System.
Jan 23 23:54:01.890722 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 23 23:54:01.945590 systemd-fsck[1237]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Jan 23 23:54:01.949729 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 23 23:54:01.962576 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 23 23:54:02.050270 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 4f5f6971-6639-4171-835a-63d34aadb0e5 r/w with ordered data mode. Quota mode: none.
Jan 23 23:54:02.050669 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 23 23:54:02.053773 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 23 23:54:02.075726 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 23 23:54:02.079527 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 23 23:54:02.084901 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jan 23 23:54:02.085011 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 23 23:54:02.112787 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/nvme0n1p6 scanned by mount (1256)
Jan 23 23:54:02.085072 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 23 23:54:02.105107 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 23 23:54:02.124244 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 821118c6-9a33-48e6-be55-657b51b768c7
Jan 23 23:54:02.124345 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Jan 23 23:54:02.124375 kernel: BTRFS info (device nvme0n1p6): using free space tree
Jan 23 23:54:02.123561 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 23 23:54:02.152633 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Jan 23 23:54:02.154699 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 23 23:54:02.454724 initrd-setup-root[1280]: cut: /sysroot/etc/passwd: No such file or directory
Jan 23 23:54:02.476471 initrd-setup-root[1287]: cut: /sysroot/etc/group: No such file or directory
Jan 23 23:54:02.485760 initrd-setup-root[1294]: cut: /sysroot/etc/shadow: No such file or directory
Jan 23 23:54:02.494712 initrd-setup-root[1301]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 23 23:54:02.726447 systemd-networkd[1206]: eth0: Gained IPv6LL
Jan 23 23:54:02.841556 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 23 23:54:02.855602 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 23 23:54:02.863565 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 23 23:54:02.882057 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 23 23:54:02.884958 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 821118c6-9a33-48e6-be55-657b51b768c7
Jan 23 23:54:02.930068 ignition[1369]: INFO : Ignition 2.19.0
Jan 23 23:54:02.930068 ignition[1369]: INFO : Stage: mount
Jan 23 23:54:02.936330 ignition[1369]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 23 23:54:02.936330 ignition[1369]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 23 23:54:02.936330 ignition[1369]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 23 23:54:02.936330 ignition[1369]: INFO : PUT result: OK
Jan 23 23:54:02.939367 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 23 23:54:02.955012 ignition[1369]: INFO : mount: mount passed
Jan 23 23:54:02.956972 ignition[1369]: INFO : Ignition finished successfully
Jan 23 23:54:02.961540 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 23 23:54:02.971445 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 23 23:54:03.057576 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 23 23:54:03.090273 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 scanned by mount (1380)
Jan 23 23:54:03.094664 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 821118c6-9a33-48e6-be55-657b51b768c7
Jan 23 23:54:03.094719 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Jan 23 23:54:03.094747 kernel: BTRFS info (device nvme0n1p6): using free space tree
Jan 23 23:54:03.102262 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Jan 23 23:54:03.105571 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 23 23:54:03.150133 ignition[1397]: INFO : Ignition 2.19.0
Jan 23 23:54:03.152643 ignition[1397]: INFO : Stage: files
Jan 23 23:54:03.152643 ignition[1397]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 23 23:54:03.152643 ignition[1397]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 23 23:54:03.152643 ignition[1397]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 23 23:54:03.162554 ignition[1397]: INFO : PUT result: OK
Jan 23 23:54:03.166578 ignition[1397]: DEBUG : files: compiled without relabeling support, skipping
Jan 23 23:54:03.171900 ignition[1397]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jan 23 23:54:03.171900 ignition[1397]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 23 23:54:03.220204 ignition[1397]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 23 23:54:03.223622 ignition[1397]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jan 23 23:54:03.227411 unknown[1397]: wrote ssh authorized keys file for user: core
Jan 23 23:54:03.231006 ignition[1397]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 23 23:54:03.234604 ignition[1397]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Jan 23 23:54:03.234604 ignition[1397]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1
Jan 23 23:54:03.330845 ignition[1397]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jan 23 23:54:03.493194 ignition[1397]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Jan 23 23:54:03.497756 ignition[1397]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jan 23 23:54:03.497756 ignition[1397]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1
Jan 23 23:54:03.712772 ignition[1397]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Jan 23 23:54:03.849267 ignition[1397]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jan 23 23:54:03.849267 ignition[1397]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Jan 23 23:54:03.849267 ignition[1397]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Jan 23 23:54:03.849267 ignition[1397]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Jan 23 23:54:03.869570 ignition[1397]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jan 23 23:54:03.869570 ignition[1397]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 23 23:54:03.869570 ignition[1397]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 23 23:54:03.869570 ignition[1397]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 23 23:54:03.869570 ignition[1397]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 23 23:54:03.869570 ignition[1397]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jan 23 23:54:03.869570 ignition[1397]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 23 23:54:03.869570 ignition[1397]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-arm64.raw"
Jan 23 23:54:03.869570 ignition[1397]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-arm64.raw"
Jan 23 23:54:03.869570 ignition[1397]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-arm64.raw"
Jan 23 23:54:03.869570 ignition[1397]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.1-arm64.raw: attempt #1
Jan 23 23:54:04.122948 ignition[1397]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Jan 23 23:54:04.490330 ignition[1397]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-arm64.raw"
Jan 23 23:54:04.490330 ignition[1397]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Jan 23 23:54:04.501447 ignition[1397]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 23 23:54:04.501447 ignition[1397]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 23 23:54:04.501447 ignition[1397]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Jan 23 23:54:04.501447 ignition[1397]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service"
Jan 23 23:54:04.501447 ignition[1397]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service"
Jan 23 23:54:04.501447 ignition[1397]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json"
Jan 23 23:54:04.501447 ignition[1397]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jan 23 23:54:04.501447 ignition[1397]: INFO : files: files passed
Jan 23 23:54:04.501447 ignition[1397]: INFO : Ignition finished successfully
Jan 23 23:54:04.499791 systemd[1]: Finished ignition-files.service - Ignition (files).
Jan 23 23:54:04.526842 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jan 23 23:54:04.529483 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jan 23 23:54:04.555547 systemd[1]: ignition-quench.service: Deactivated successfully.
Jan 23 23:54:04.555773 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jan 23 23:54:04.589060 initrd-setup-root-after-ignition[1425]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 23 23:54:04.589060 initrd-setup-root-after-ignition[1425]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 23 23:54:04.597852 initrd-setup-root-after-ignition[1429]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 23 23:54:04.606363 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 23 23:54:04.610327 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 23 23:54:04.627437 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 23 23:54:04.679186 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 23 23:54:04.679997 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 23 23:54:04.691902 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 23 23:54:04.694503 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 23 23:54:04.697616 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 23 23:54:04.710686 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 23 23:54:04.743391 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 23 23:54:04.757709 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 23 23:54:04.787504 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 23 23:54:04.793084 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 23 23:54:04.796252 systemd[1]: Stopped target timers.target - Timer Units. Jan 23 23:54:04.803844 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 23 23:54:04.804170 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 23 23:54:04.817366 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 23 23:54:04.826805 systemd[1]: Stopped target basic.target - Basic System. Jan 23 23:54:04.831132 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 23 23:54:04.836122 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 23 23:54:04.841440 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 23 23:54:04.846948 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 23 23:54:04.849729 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 23 23:54:04.852812 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 23 23:54:04.855946 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 23 23:54:04.865758 systemd[1]: Stopped target swap.target - Swaps. Jan 23 23:54:04.868085 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 23 23:54:04.868360 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 23 23:54:04.879059 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 23 23:54:04.881687 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 23 23:54:04.884858 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. 
Jan 23 23:54:04.891752 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 23 23:54:04.894679 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 23 23:54:04.894914 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 23 23:54:04.905390 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 23 23:54:04.905673 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 23 23:54:04.908793 systemd[1]: ignition-files.service: Deactivated successfully. Jan 23 23:54:04.909017 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 23 23:54:04.925684 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 23 23:54:04.927972 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 23 23:54:04.928300 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 23 23:54:04.937386 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 23 23:54:04.945384 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 23 23:54:04.946008 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 23 23:54:04.959545 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 23 23:54:04.959843 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 23 23:54:04.977904 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 23 23:54:04.982016 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 23 23:54:04.999610 ignition[1449]: INFO : Ignition 2.19.0 Jan 23 23:54:05.003202 ignition[1449]: INFO : Stage: umount Jan 23 23:54:05.003202 ignition[1449]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 23 23:54:05.003202 ignition[1449]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 23 23:54:05.003202 ignition[1449]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 23 23:54:05.018389 ignition[1449]: INFO : PUT result: OK Jan 23 23:54:05.021558 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 23 23:54:05.025372 ignition[1449]: INFO : umount: umount passed Jan 23 23:54:05.025372 ignition[1449]: INFO : Ignition finished successfully Jan 23 23:54:05.026659 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 23 23:54:05.026902 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 23 23:54:05.029696 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 23 23:54:05.029886 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 23 23:54:05.044889 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 23 23:54:05.045063 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 23 23:54:05.052876 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 23 23:54:05.054960 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 23 23:54:05.057463 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 23 23:54:05.057610 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 23 23:54:05.066866 systemd[1]: Stopped target network.target - Network. Jan 23 23:54:05.068929 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 23 23:54:05.069038 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 23 23:54:05.073626 systemd[1]: Stopped target paths.target - Path Units. 
Jan 23 23:54:05.077958 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 23 23:54:05.085859 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 23 23:54:05.088830 systemd[1]: Stopped target slices.target - Slice Units. Jan 23 23:54:05.090967 systemd[1]: Stopped target sockets.target - Socket Units. Jan 23 23:54:05.093282 systemd[1]: iscsid.socket: Deactivated successfully. Jan 23 23:54:05.093367 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 23 23:54:05.095711 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 23 23:54:05.095789 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 23 23:54:05.098264 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 23 23:54:05.098371 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 23 23:54:05.100772 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 23 23:54:05.100858 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 23 23:54:05.106401 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 23 23:54:05.106488 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 23 23:54:05.110024 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 23 23:54:05.120403 systemd-networkd[1206]: eth0: DHCPv6 lease lost Jan 23 23:54:05.129481 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 23 23:54:05.136484 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 23 23:54:05.136747 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 23 23:54:05.155010 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 23 23:54:05.155266 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 23 23:54:05.168503 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 23 23:54:05.168644 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 23 23:54:05.184368 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 23 23:54:05.189089 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 23 23:54:05.189421 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 23 23:54:05.198869 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 23 23:54:05.199948 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 23 23:54:05.206122 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 23 23:54:05.206289 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 23 23:54:05.210661 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 23 23:54:05.210777 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 23 23:54:05.223849 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 23 23:54:05.250685 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 23 23:54:05.253576 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 23 23:54:05.260897 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 23 23:54:05.264654 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 23 23:54:05.269810 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. 
Jan 23 23:54:05.269910 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 23 23:54:05.274545 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 23 23:54:05.274802 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 23 23:54:05.284058 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 23 23:54:05.284183 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 23 23:54:05.286785 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 23 23:54:05.286902 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 23 23:54:05.303509 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 23 23:54:05.310793 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 23 23:54:05.310963 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 23 23:54:05.314425 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 23 23:54:05.314554 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 23 23:54:05.328422 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 23 23:54:05.328854 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 23 23:54:05.356639 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 23 23:54:05.357044 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 23 23:54:05.366082 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 23 23:54:05.381684 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 23 23:54:05.397728 systemd[1]: Switching root. Jan 23 23:54:05.462275 systemd-journald[252]: Received SIGTERM from PID 1 (systemd). Jan 23 23:54:05.462359 systemd-journald[252]: Journal stopped Jan 23 23:54:08.166578 kernel: SELinux: policy capability network_peer_controls=1 Jan 23 23:54:08.166718 kernel: SELinux: policy capability open_perms=1 Jan 23 23:54:08.166755 kernel: SELinux: policy capability extended_socket_class=1 Jan 23 23:54:08.166788 kernel: SELinux: policy capability always_check_network=0 Jan 23 23:54:08.166819 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 23 23:54:08.166850 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 23 23:54:08.166885 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 23 23:54:08.166917 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 23 23:54:08.166945 kernel: audit: type=1403 audit(1769212446.034:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 23 23:54:08.166987 systemd[1]: Successfully loaded SELinux policy in 62.250ms. Jan 23 23:54:08.167041 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 24.874ms. Jan 23 23:54:08.167078 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 23 23:54:08.167118 systemd[1]: Detected virtualization amazon. Jan 23 23:54:08.167150 systemd[1]: Detected architecture arm64. Jan 23 23:54:08.167182 systemd[1]: Detected first boot. Jan 23 23:54:08.167259 systemd[1]: Initializing machine ID from VM UUID. Jan 23 23:54:08.167298 zram_generator::config[1491]: No configuration found. 
Jan 23 23:54:08.167332 systemd[1]: Populated /etc with preset unit settings. Jan 23 23:54:08.167365 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 23 23:54:08.167397 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 23 23:54:08.167433 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 23 23:54:08.167467 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 23 23:54:08.167500 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 23 23:54:08.167539 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 23 23:54:08.167573 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 23 23:54:08.167608 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 23 23:54:08.167663 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 23 23:54:08.167702 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 23 23:54:08.167738 systemd[1]: Created slice user.slice - User and Session Slice. Jan 23 23:54:08.167772 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 23 23:54:08.167806 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 23 23:54:08.167846 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 23 23:54:08.167885 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 23 23:54:08.167918 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 23 23:54:08.167952 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 23 23:54:08.167986 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 23 23:54:08.168020 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 23 23:54:08.168052 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 23 23:54:08.168083 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 23 23:54:08.168129 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 23 23:54:08.168168 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 23 23:54:08.168199 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 23 23:54:08.168280 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 23 23:54:08.168318 systemd[1]: Reached target slices.target - Slice Units. Jan 23 23:54:08.168367 systemd[1]: Reached target swap.target - Swaps. Jan 23 23:54:08.168399 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 23 23:54:08.168436 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 23 23:54:08.168469 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 23 23:54:08.168506 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 23 23:54:08.168542 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 23 23:54:08.168573 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 23 23:54:08.168616 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... 
Jan 23 23:54:08.168651 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 23 23:54:08.168682 systemd[1]: Mounting media.mount - External Media Directory... Jan 23 23:54:08.168715 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 23 23:54:08.168749 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 23 23:54:08.168781 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 23 23:54:08.168821 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 23 23:54:08.168853 systemd[1]: Reached target machines.target - Containers. Jan 23 23:54:08.168885 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 23 23:54:08.168921 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 23 23:54:08.168955 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 23 23:54:08.168986 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 23 23:54:08.169018 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 23 23:54:08.169049 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 23 23:54:08.169083 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 23 23:54:08.169116 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 23 23:54:08.169147 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 23 23:54:08.169177 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 23 23:54:08.169208 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 23 23:54:08.175352 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 23 23:54:08.175402 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 23 23:54:08.175439 systemd[1]: Stopped systemd-fsck-usr.service. Jan 23 23:54:08.175470 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 23 23:54:08.175508 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 23 23:54:08.175542 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 23 23:54:08.175575 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 23 23:54:08.175607 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 23 23:54:08.175664 systemd[1]: verity-setup.service: Deactivated successfully. Jan 23 23:54:08.175699 systemd[1]: Stopped verity-setup.service. Jan 23 23:54:08.175732 kernel: fuse: init (API version 7.39) Jan 23 23:54:08.175764 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 23 23:54:08.175795 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 23 23:54:08.175830 kernel: loop: module loaded Jan 23 23:54:08.175862 systemd[1]: Mounted media.mount - External Media Directory. Jan 23 23:54:08.175895 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 23 23:54:08.175927 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. 
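The modprobe@configfs, modprobe@dm_mod, modprobe@drm, modprobe@efi_pstore, modprobe@fuse, and modprobe@loop jobs above are all instances of systemd's modprobe@.service template, which turns the instance name into a one-shot modprobe call. From memory, the upstream template is approximately:

  [Unit]
  Description=Load Kernel Module %i
  DefaultDependencies=no
  Before=sysinit.target
  ConditionCapability=CAP_SYS_MODULE

  [Service]
  Type=oneshot
  ExecStart=/sbin/modprobe -abq %i

This is why each "Finished modprobe@X.service" is immediately preceded by a matching "Deactivated successfully": Type=oneshot units run to completion and exit rather than staying active.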
Jan 23 23:54:08.175959 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 23 23:54:08.175993 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 23 23:54:08.176029 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 23 23:54:08.176062 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 23 23:54:08.176093 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 23 23:54:08.176123 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 23 23:54:08.176153 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 23 23:54:08.176185 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 23 23:54:08.184116 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 23 23:54:08.184198 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 23 23:54:08.184270 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 23 23:54:08.184306 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 23 23:54:08.184340 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 23 23:54:08.184373 kernel: ACPI: bus type drm_connector registered Jan 23 23:54:08.184410 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 23 23:54:08.184445 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 23 23:54:08.184477 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 23 23:54:08.184507 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 23 23:54:08.184540 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 23 23:54:08.184618 systemd-journald[1580]: Collecting audit messages is disabled. Jan 23 23:54:08.184681 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 23 23:54:08.184720 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 23 23:54:08.184753 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 23 23:54:08.184783 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 23 23:54:08.184812 systemd-journald[1580]: Journal started Jan 23 23:54:08.184860 systemd-journald[1580]: Runtime Journal (/run/log/journal/ec247601b0aa6288aee9e0fa4bac0152) is 8.0M, max 75.3M, 67.3M free. Jan 23 23:54:07.399649 systemd[1]: Queued start job for default target multi-user.target. Jan 23 23:54:07.462648 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. Jan 23 23:54:07.463661 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 23 23:54:08.203901 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jan 23 23:54:08.216449 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 23 23:54:08.234438 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 23 23:54:08.239280 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 23 23:54:08.262773 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... 
Jan 23 23:54:08.269454 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 23 23:54:08.284291 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 23 23:54:08.293324 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 23 23:54:08.311190 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 23 23:54:08.311434 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 23 23:54:08.327374 systemd[1]: Started systemd-journald.service - Journal Service. Jan 23 23:54:08.330107 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 23 23:54:08.333392 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 23 23:54:08.340168 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 23 23:54:08.345937 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 23 23:54:08.386812 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 23 23:54:08.427004 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 23 23:54:08.436049 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 23 23:54:08.454267 kernel: loop0: detected capacity change from 0 to 114328 Jan 23 23:54:08.451495 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 23 23:54:08.464031 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jan 23 23:54:08.479855 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 23 23:54:08.487801 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jan 23 23:54:08.536384 systemd-journald[1580]: Time spent on flushing to /var/log/journal/ec247601b0aa6288aee9e0fa4bac0152 is 113.994ms for 909 entries. Jan 23 23:54:08.536384 systemd-journald[1580]: System Journal (/var/log/journal/ec247601b0aa6288aee9e0fa4bac0152) is 8.0M, max 195.6M, 187.6M free. Jan 23 23:54:08.673877 systemd-journald[1580]: Received client request to flush runtime journal. Jan 23 23:54:08.673970 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 23 23:54:08.674008 kernel: loop1: detected capacity change from 0 to 200800 Jan 23 23:54:08.540823 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 23 23:54:08.576062 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 23 23:54:08.592557 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jan 23 23:54:08.600850 udevadm[1630]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Jan 23 23:54:08.641359 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 23 23:54:08.660654 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 23 23:54:08.682451 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 23 23:54:08.721867 kernel: loop2: detected capacity change from 0 to 52536 Jan 23 23:54:08.737730 systemd-tmpfiles[1637]: ACLs are not supported, ignoring. 
Jan 23 23:54:08.737778 systemd-tmpfiles[1637]: ACLs are not supported, ignoring. Jan 23 23:54:08.761555 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 23 23:54:08.859945 kernel: loop3: detected capacity change from 0 to 114432 Jan 23 23:54:08.971302 kernel: loop4: detected capacity change from 0 to 114328 Jan 23 23:54:08.994281 kernel: loop5: detected capacity change from 0 to 200800 Jan 23 23:54:09.027764 kernel: loop6: detected capacity change from 0 to 52536 Jan 23 23:54:09.042282 kernel: loop7: detected capacity change from 0 to 114432 Jan 23 23:54:09.055492 (sd-merge)[1646]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'. Jan 23 23:54:09.057042 (sd-merge)[1646]: Merged extensions into '/usr'. Jan 23 23:54:09.065841 systemd[1]: Reloading requested from client PID 1602 ('systemd-sysext') (unit systemd-sysext.service)... Jan 23 23:54:09.066758 systemd[1]: Reloading... Jan 23 23:54:09.295302 zram_generator::config[1675]: No configuration found. Jan 23 23:54:09.660533 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 23 23:54:09.788161 systemd[1]: Reloading finished in 720 ms. Jan 23 23:54:09.837115 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 23 23:54:09.841177 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 23 23:54:09.861621 systemd[1]: Starting ensure-sysext.service... Jan 23 23:54:09.868601 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 23 23:54:09.886028 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 23 23:54:09.926333 systemd[1]: Reloading requested from client PID 1724 ('systemctl') (unit ensure-sysext.service)... Jan 23 23:54:09.926360 systemd[1]: Reloading... Jan 23 23:54:09.934473 systemd-tmpfiles[1725]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 23 23:54:09.935153 systemd-tmpfiles[1725]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 23 23:54:09.941193 systemd-tmpfiles[1725]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 23 23:54:09.944019 systemd-tmpfiles[1725]: ACLs are not supported, ignoring. Jan 23 23:54:09.944178 systemd-tmpfiles[1725]: ACLs are not supported, ignoring. Jan 23 23:54:09.956070 systemd-tmpfiles[1725]: Detected autofs mount point /boot during canonicalization of boot. Jan 23 23:54:09.956422 systemd-tmpfiles[1725]: Skipping /boot Jan 23 23:54:09.980171 systemd-tmpfiles[1725]: Detected autofs mount point /boot during canonicalization of boot. Jan 23 23:54:09.980305 systemd-tmpfiles[1725]: Skipping /boot Jan 23 23:54:10.025265 ldconfig[1598]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 23 23:54:10.040148 systemd-udevd[1726]: Using default interface naming scheme 'v255'. Jan 23 23:54:10.205148 zram_generator::config[1777]: No configuration found. Jan 23 23:54:10.329984 (udev-worker)[1776]: Network interface NamePolicy= disabled on kernel command line. 
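The (sd-merge) lines above show systemd-sysext overlaying the four extension images (picked up from /etc/extensions, including the kubernetes.raw link written by Ignition) onto /usr, followed by the daemon reload that the merge triggers. systemd-sysext only accepts an image whose embedded extension-release file matches the host's os-release; for a Flatcar-targeted image that file would look roughly like the following, where the exact field values are an assumption rather than something visible in this log:

  # inside kubernetes.raw: /usr/lib/extension-release.d/extension-release.kubernetes
  ID=flatcar
  SYSEXT_LEVEL=1.0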
Jan 23 23:54:10.629446 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 23 23:54:10.691869 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 34 scanned by (udev-worker) (1756) Jan 23 23:54:10.796125 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jan 23 23:54:10.799285 systemd[1]: Reloading finished in 872 ms. Jan 23 23:54:10.841465 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 23 23:54:10.846915 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 23 23:54:10.851790 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 23 23:54:10.968964 systemd[1]: Finished ensure-sysext.service. Jan 23 23:54:11.000662 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 23 23:54:11.010117 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 23 23:54:11.016885 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 23 23:54:11.021653 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 23 23:54:11.031554 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 23 23:54:11.037621 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 23 23:54:11.048676 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 23 23:54:11.051496 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 23 23:54:11.055905 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 23 23:54:11.069673 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 23 23:54:11.081602 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 23 23:54:11.084475 systemd[1]: Reached target time-set.target - System Time Set. Jan 23 23:54:11.092595 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 23 23:54:11.098261 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 23 23:54:11.133673 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Jan 23 23:54:11.142654 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 23 23:54:11.160267 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 23 23:54:11.164524 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 23 23:54:11.184630 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 23 23:54:11.196560 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 23 23:54:11.196978 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 23 23:54:11.208643 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 23 23:54:11.209052 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. 
Jan 23 23:54:11.215853 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 23 23:54:11.222452 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 23 23:54:11.227529 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 23 23:54:11.229290 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 23 23:54:11.257122 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 23 23:54:11.259390 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 23 23:54:11.268290 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 23 23:54:11.308631 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 23 23:54:11.326956 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 23 23:54:11.332372 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 23 23:54:11.362288 lvm[1943]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 23 23:54:11.397347 augenrules[1962]: No rules Jan 23 23:54:11.403321 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 23 23:54:11.428747 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 23 23:54:11.453939 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 23 23:54:11.461634 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 23 23:54:11.467594 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 23 23:54:11.483658 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 23 23:54:11.486875 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 23 23:54:11.487194 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 23 23:54:11.526433 lvm[1975]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 23 23:54:11.577365 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 23 23:54:11.588266 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 23 23:54:11.657585 systemd-networkd[1930]: lo: Link UP Jan 23 23:54:11.658132 systemd-networkd[1930]: lo: Gained carrier Jan 23 23:54:11.661705 systemd-networkd[1930]: Enumeration completed Jan 23 23:54:11.662156 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 23 23:54:11.668411 systemd-networkd[1930]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 23 23:54:11.669150 systemd-resolved[1931]: Positive Trust Anchors: Jan 23 23:54:11.670319 systemd-networkd[1930]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 23 23:54:11.671308 systemd-resolved[1931]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 23 23:54:11.671391 systemd-resolved[1931]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 23 23:54:11.673346 systemd-networkd[1930]: eth0: Link UP Jan 23 23:54:11.673962 systemd-networkd[1930]: eth0: Gained carrier Jan 23 23:54:11.674148 systemd-networkd[1930]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 23 23:54:11.675141 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 23 23:54:11.696404 systemd-networkd[1930]: eth0: DHCPv4 address 172.31.20.17/20, gateway 172.31.16.1 acquired from 172.31.16.1 Jan 23 23:54:11.716893 systemd-resolved[1931]: Defaulting to hostname 'linux'. Jan 23 23:54:11.720817 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 23 23:54:11.724106 systemd[1]: Reached target network.target - Network. Jan 23 23:54:11.726767 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 23 23:54:11.729868 systemd[1]: Reached target sysinit.target - System Initialization. Jan 23 23:54:11.732699 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 23 23:54:11.735859 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 23 23:54:11.739444 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 23 23:54:11.742792 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 23 23:54:11.745927 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 23 23:54:11.749495 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 23 23:54:11.749569 systemd[1]: Reached target paths.target - Path Units. Jan 23 23:54:11.752008 systemd[1]: Reached target timers.target - Timer Units. Jan 23 23:54:11.755765 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 23 23:54:11.761910 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 23 23:54:11.774372 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 23 23:54:11.778458 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 23 23:54:11.781452 systemd[1]: Reached target sockets.target - Socket Units. Jan 23 23:54:11.784056 systemd[1]: Reached target basic.target - Basic System. Jan 23 23:54:11.786613 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 23 23:54:11.786675 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 23 23:54:11.789330 systemd[1]: Starting containerd.service - containerd container runtime... 
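The networkd entries above show eth0 being matched by the catch-all /usr/lib/systemd/network/zz-default.network and acquiring 172.31.20.17/20 via DHCPv4 from 172.31.16.1. Flatcar's shipped unit is approximately:

  [Match]
  Name=*

  [Network]
  DHCP=yes

  [DHCP]
  UseMTU=true
  UseDomains=true

The "potentially unpredictable interface name" warning is a consequence of net.ifnames=0 on the kernel command line: the kernel-assigned eth0 name is not stable across hardware changes, which a more specific [Match] (for example on MAC address) would avoid.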
Jan 23 23:54:11.796682 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 23 23:54:11.810751 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 23 23:54:11.818562 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 23 23:54:11.834189 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 23 23:54:11.836868 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 23 23:54:11.842622 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 23 23:54:11.850610 systemd[1]: Started ntpd.service - Network Time Service. Jan 23 23:54:11.862040 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 23 23:54:11.870641 jq[1989]: false Jan 23 23:54:11.871052 systemd[1]: Starting setup-oem.service - Setup OEM... Jan 23 23:54:11.892571 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 23 23:54:11.899988 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 23 23:54:11.916639 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 23 23:54:11.926291 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 23 23:54:11.927404 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 23 23:54:11.931596 systemd[1]: Starting update-engine.service - Update Engine... Jan 23 23:54:11.970261 extend-filesystems[1990]: Found loop4 Jan 23 23:54:11.970261 extend-filesystems[1990]: Found loop5 Jan 23 23:54:11.970261 extend-filesystems[1990]: Found loop6 Jan 23 23:54:11.970261 extend-filesystems[1990]: Found loop7 Jan 23 23:54:11.970261 extend-filesystems[1990]: Found nvme0n1 Jan 23 23:54:11.970261 extend-filesystems[1990]: Found nvme0n1p1 Jan 23 23:54:11.970261 extend-filesystems[1990]: Found nvme0n1p2 Jan 23 23:54:11.970261 extend-filesystems[1990]: Found nvme0n1p3 Jan 23 23:54:11.970261 extend-filesystems[1990]: Found usr Jan 23 23:54:11.970261 extend-filesystems[1990]: Found nvme0n1p4 Jan 23 23:54:11.970261 extend-filesystems[1990]: Found nvme0n1p6 Jan 23 23:54:11.970261 extend-filesystems[1990]: Found nvme0n1p7 Jan 23 23:54:11.970261 extend-filesystems[1990]: Found nvme0n1p9 Jan 23 23:54:11.970261 extend-filesystems[1990]: Checking size of /dev/nvme0n1p9 Jan 23 23:54:12.000451 dbus-daemon[1988]: [system] SELinux support is enabled Jan 23 23:54:12.007173 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 23 23:54:12.019302 dbus-daemon[1988]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1930 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Jan 23 23:54:12.016762 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 23 23:54:12.048118 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 23 23:54:12.073025 jq[2007]: true Jan 23 23:54:12.051747 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. 
Jan 23 23:54:12.077027 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 23 23:54:12.077115 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 23 23:54:12.080452 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 23 23:54:12.080499 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 23 23:54:12.105114 dbus-daemon[1988]: [system] Successfully activated service 'org.freedesktop.systemd1' Jan 23 23:54:12.132651 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Jan 23 23:54:12.161327 extend-filesystems[1990]: Resized partition /dev/nvme0n1p9 Jan 23 23:54:12.164510 extend-filesystems[2024]: resize2fs 1.47.1 (20-May-2024) Jan 23 23:54:12.181682 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 23 23:54:12.190307 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 3587067 blocks Jan 23 23:54:12.184556 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 23 23:54:12.217317 tar[2012]: linux-arm64/LICENSE Jan 23 23:54:12.234762 tar[2012]: linux-arm64/helm Jan 23 23:54:12.273012 update_engine[1999]: I20260123 23:54:12.267764 1999 main.cc:92] Flatcar Update Engine starting Jan 23 23:54:12.279380 ntpd[1992]: ntpd 4.2.8p17@1.4004-o Fri Jan 23 21:53:23 UTC 2026 (1): Starting Jan 23 23:54:12.290563 ntpd[1992]: 23 Jan 23:54:12 ntpd[1992]: ntpd 4.2.8p17@1.4004-o Fri Jan 23 21:53:23 UTC 2026 (1): Starting Jan 23 23:54:12.290563 ntpd[1992]: 23 Jan 23:54:12 ntpd[1992]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jan 23 23:54:12.290563 ntpd[1992]: 23 Jan 23:54:12 ntpd[1992]: ---------------------------------------------------- Jan 23 23:54:12.290563 ntpd[1992]: 23 Jan 23:54:12 ntpd[1992]: ntp-4 is maintained by Network Time Foundation, Jan 23 23:54:12.290563 ntpd[1992]: 23 Jan 23:54:12 ntpd[1992]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jan 23 23:54:12.290563 ntpd[1992]: 23 Jan 23:54:12 ntpd[1992]: corporation. Support and training for ntp-4 are Jan 23 23:54:12.290563 ntpd[1992]: 23 Jan 23:54:12 ntpd[1992]: available at https://www.nwtime.org/support Jan 23 23:54:12.290563 ntpd[1992]: 23 Jan 23:54:12 ntpd[1992]: ---------------------------------------------------- Jan 23 23:54:12.308956 jq[2013]: true Jan 23 23:54:12.279458 ntpd[1992]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jan 23 23:54:12.324616 systemd[1]: Started update-engine.service - Update Engine. Jan 23 23:54:12.279481 ntpd[1992]: ---------------------------------------------------- Jan 23 23:54:12.279501 ntpd[1992]: ntp-4 is maintained by Network Time Foundation, Jan 23 23:54:12.279521 ntpd[1992]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jan 23 23:54:12.279540 ntpd[1992]: corporation. 
Support and training for ntp-4 are Jan 23 23:54:12.279560 ntpd[1992]: available at https://www.nwtime.org/support Jan 23 23:54:12.279580 ntpd[1992]: ---------------------------------------------------- Jan 23 23:54:12.335527 update_engine[1999]: I20260123 23:54:12.330507 1999 update_check_scheduler.cc:74] Next update check in 11m22s Jan 23 23:54:12.336041 ntpd[1992]: proto: precision = 0.096 usec (-23) Jan 23 23:54:12.338060 ntpd[1992]: 23 Jan 23:54:12 ntpd[1992]: proto: precision = 0.096 usec (-23) Jan 23 23:54:12.340376 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 23 23:54:12.348822 ntpd[1992]: 23 Jan 23:54:12 ntpd[1992]: basedate set to 2026-01-11 Jan 23 23:54:12.348822 ntpd[1992]: 23 Jan 23:54:12 ntpd[1992]: gps base set to 2026-01-11 (week 2401) Jan 23 23:54:12.345116 ntpd[1992]: basedate set to 2026-01-11 Jan 23 23:54:12.345173 ntpd[1992]: gps base set to 2026-01-11 (week 2401) Jan 23 23:54:12.372380 systemd[1]: motdgen.service: Deactivated successfully. Jan 23 23:54:12.373318 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 23 23:54:12.378943 ntpd[1992]: Listen and drop on 0 v6wildcard [::]:123 Jan 23 23:54:12.389426 ntpd[1992]: 23 Jan 23:54:12 ntpd[1992]: Listen and drop on 0 v6wildcard [::]:123 Jan 23 23:54:12.389426 ntpd[1992]: 23 Jan 23:54:12 ntpd[1992]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jan 23 23:54:12.389426 ntpd[1992]: 23 Jan 23:54:12 ntpd[1992]: Listen normally on 2 lo 127.0.0.1:123 Jan 23 23:54:12.389426 ntpd[1992]: 23 Jan 23:54:12 ntpd[1992]: Listen normally on 3 eth0 172.31.20.17:123 Jan 23 23:54:12.389426 ntpd[1992]: 23 Jan 23:54:12 ntpd[1992]: Listen normally on 4 lo [::1]:123 Jan 23 23:54:12.389426 ntpd[1992]: 23 Jan 23:54:12 ntpd[1992]: bind(21) AF_INET6 fe80::490:a1ff:fe3e:21ab%2#123 flags 0x11 failed: Cannot assign requested address Jan 23 23:54:12.382601 ntpd[1992]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jan 23 23:54:12.383159 ntpd[1992]: Listen normally on 2 lo 127.0.0.1:123 Jan 23 23:54:12.388839 ntpd[1992]: Listen normally on 3 eth0 172.31.20.17:123 Jan 23 23:54:12.388979 ntpd[1992]: Listen normally on 4 lo [::1]:123 Jan 23 23:54:12.389126 ntpd[1992]: bind(21) AF_INET6 fe80::490:a1ff:fe3e:21ab%2#123 flags 0x11 failed: Cannot assign requested address Jan 23 23:54:12.389191 ntpd[1992]: unable to create socket on eth0 (5) for fe80::490:a1ff:fe3e:21ab%2#123 Jan 23 23:54:12.394210 ntpd[1992]: 23 Jan 23:54:12 ntpd[1992]: unable to create socket on eth0 (5) for fe80::490:a1ff:fe3e:21ab%2#123 Jan 23 23:54:12.394210 ntpd[1992]: 23 Jan 23:54:12 ntpd[1992]: failed to init interface for address fe80::490:a1ff:fe3e:21ab%2 Jan 23 23:54:12.393787 ntpd[1992]: failed to init interface for address fe80::490:a1ff:fe3e:21ab%2 Jan 23 23:54:12.401162 ntpd[1992]: 23 Jan 23:54:12 ntpd[1992]: Listening on routing socket on fd #21 for interface updates Jan 23 23:54:12.395341 ntpd[1992]: Listening on routing socket on fd #21 for interface updates Jan 23 23:54:12.421848 (ntainerd)[2035]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 23 23:54:12.464154 ntpd[1992]: 23 Jan 23:54:12 ntpd[1992]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 23 23:54:12.464154 ntpd[1992]: 23 Jan 23:54:12 ntpd[1992]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 23 23:54:12.462406 ntpd[1992]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 23 23:54:12.462486 ntpd[1992]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 23 
23:54:12.471131 systemd[1]: Finished setup-oem.service - Setup OEM. Jan 23 23:54:12.516837 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 3587067 Jan 23 23:54:12.544327 extend-filesystems[2024]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Jan 23 23:54:12.544327 extend-filesystems[2024]: old_desc_blocks = 1, new_desc_blocks = 2 Jan 23 23:54:12.544327 extend-filesystems[2024]: The filesystem on /dev/nvme0n1p9 is now 3587067 (4k) blocks long. Jan 23 23:54:12.576537 extend-filesystems[1990]: Resized filesystem in /dev/nvme0n1p9 Jan 23 23:54:12.583804 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 23 23:54:12.584571 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 23 23:54:12.599558 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 23 23:54:12.637260 coreos-metadata[1987]: Jan 23 23:54:12.636 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Jan 23 23:54:12.648474 coreos-metadata[1987]: Jan 23 23:54:12.644 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Jan 23 23:54:12.648474 coreos-metadata[1987]: Jan 23 23:54:12.648 INFO Fetch successful Jan 23 23:54:12.648474 coreos-metadata[1987]: Jan 23 23:54:12.648 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Jan 23 23:54:12.651618 coreos-metadata[1987]: Jan 23 23:54:12.649 INFO Fetch successful Jan 23 23:54:12.651618 coreos-metadata[1987]: Jan 23 23:54:12.649 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Jan 23 23:54:12.654447 coreos-metadata[1987]: Jan 23 23:54:12.653 INFO Fetch successful Jan 23 23:54:12.654447 coreos-metadata[1987]: Jan 23 23:54:12.653 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Jan 23 23:54:12.660198 coreos-metadata[1987]: Jan 23 23:54:12.657 INFO Fetch successful Jan 23 23:54:12.660198 coreos-metadata[1987]: Jan 23 23:54:12.658 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Jan 23 23:54:12.660801 coreos-metadata[1987]: Jan 23 23:54:12.660 INFO Fetch failed with 404: resource not found Jan 23 23:54:12.664433 coreos-metadata[1987]: Jan 23 23:54:12.663 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Jan 23 23:54:12.664433 coreos-metadata[1987]: Jan 23 23:54:12.664 INFO Fetch successful Jan 23 23:54:12.665584 coreos-metadata[1987]: Jan 23 23:54:12.664 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Jan 23 23:54:12.669598 coreos-metadata[1987]: Jan 23 23:54:12.668 INFO Fetch successful Jan 23 23:54:12.669598 coreos-metadata[1987]: Jan 23 23:54:12.668 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Jan 23 23:54:12.671117 coreos-metadata[1987]: Jan 23 23:54:12.671 INFO Fetch successful Jan 23 23:54:12.673381 coreos-metadata[1987]: Jan 23 23:54:12.671 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Jan 23 23:54:12.686284 coreos-metadata[1987]: Jan 23 23:54:12.684 INFO Fetch successful Jan 23 23:54:12.686284 coreos-metadata[1987]: Jan 23 23:54:12.684 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Jan 23 23:54:12.703268 coreos-metadata[1987]: Jan 23 23:54:12.701 INFO Fetch successful Jan 23 23:54:12.717258 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 34 scanned by (udev-worker) (1756) Jan 23 23:54:12.775480 
bash[2069]: Updated "/home/core/.ssh/authorized_keys" Jan 23 23:54:12.789087 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 23 23:54:12.823528 systemd[1]: Starting sshkeys.service... Jan 23 23:54:12.931063 systemd-logind[1998]: Watching system buttons on /dev/input/event0 (Power Button) Jan 23 23:54:12.931114 systemd-logind[1998]: Watching system buttons on /dev/input/event1 (Sleep Button) Jan 23 23:54:12.938796 systemd-logind[1998]: New seat seat0. Jan 23 23:54:12.955853 systemd[1]: Started systemd-logind.service - User Login Management. Jan 23 23:54:12.962159 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 23 23:54:12.971325 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jan 23 23:54:12.993892 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jan 23 23:54:12.997021 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 23 23:54:13.185275 coreos-metadata[2117]: Jan 23 23:54:13.184 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Jan 23 23:54:13.190894 coreos-metadata[2117]: Jan 23 23:54:13.186 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Jan 23 23:54:13.190894 coreos-metadata[2117]: Jan 23 23:54:13.190 INFO Fetch successful Jan 23 23:54:13.190894 coreos-metadata[2117]: Jan 23 23:54:13.190 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Jan 23 23:54:13.195957 coreos-metadata[2117]: Jan 23 23:54:13.193 INFO Fetch successful Jan 23 23:54:13.202723 unknown[2117]: wrote ssh authorized keys file for user: core Jan 23 23:54:13.263407 dbus-daemon[1988]: [system] Successfully activated service 'org.freedesktop.hostname1' Jan 23 23:54:13.278996 dbus-daemon[1988]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.5' (uid=0 pid=2019 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Jan 23 23:54:13.268943 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Jan 23 23:54:13.298991 systemd[1]: Starting polkit.service - Authorization Manager... Jan 23 23:54:13.304150 ntpd[1992]: bind(24) AF_INET6 fe80::490:a1ff:fe3e:21ab%2#123 flags 0x11 failed: Cannot assign requested address Jan 23 23:54:13.307459 ntpd[1992]: 23 Jan 23:54:13 ntpd[1992]: bind(24) AF_INET6 fe80::490:a1ff:fe3e:21ab%2#123 flags 0x11 failed: Cannot assign requested address Jan 23 23:54:13.307459 ntpd[1992]: 23 Jan 23:54:13 ntpd[1992]: unable to create socket on eth0 (6) for fe80::490:a1ff:fe3e:21ab%2#123 Jan 23 23:54:13.307459 ntpd[1992]: 23 Jan 23:54:13 ntpd[1992]: failed to init interface for address fe80::490:a1ff:fe3e:21ab%2 Jan 23 23:54:13.304266 ntpd[1992]: unable to create socket on eth0 (6) for fe80::490:a1ff:fe3e:21ab%2#123 Jan 23 23:54:13.304302 ntpd[1992]: failed to init interface for address fe80::490:a1ff:fe3e:21ab%2 Jan 23 23:54:13.329323 update-ssh-keys[2156]: Updated "/home/core/.ssh/authorized_keys" Jan 23 23:54:13.339557 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jan 23 23:54:13.351543 systemd[1]: Finished sshkeys.service. 
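Ignition and both coreos-metadata instances above use the IMDSv2 flow: a PUT to http://169.254.169.254/latest/api/token to obtain a session token, then GETs against the versioned metadata paths with the token attached. A minimal standalone sketch of that flow, using only the documented AWS endpoint and headers and the same 2021-01-03 path version seen in the log (it only works from inside an EC2 instance):

  import urllib.request

  IMDS = "http://169.254.169.254"

  def imds_token(ttl=21600):
      # IMDSv2 step 1: PUT with a TTL header returns a session token
      req = urllib.request.Request(
          f"{IMDS}/latest/api/token",
          method="PUT",
          headers={"X-aws-ec2-metadata-token-ttl-seconds": str(ttl)},
      )
      with urllib.request.urlopen(req, timeout=2) as resp:
          return resp.read().decode()

  def imds_get(path, token):
      # IMDSv2 step 2: GET a metadata path, presenting the session token
      req = urllib.request.Request(
          f"{IMDS}{path}",
          headers={"X-aws-ec2-metadata-token": token},
      )
      with urllib.request.urlopen(req, timeout=2) as resp:
          return resp.read().decode()

  token = imds_token()
  print(imds_get("/2021-01-03/meta-data/instance-id", token))
  print(imds_get("/2021-01-03/meta-data/public-keys/0/openssh-key", token))

The coreos-metadata-sshkeys@core.service run above performs exactly the last fetch and then rewrites /home/core/.ssh/authorized_keys, which is why update-ssh-keys reports the file updated a second time.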
Jan 23 23:54:13.415355 systemd-networkd[1930]: eth0: Gained IPv6LL Jan 23 23:54:13.452971 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 23 23:54:13.458621 systemd[1]: Reached target network-online.target - Network is Online. Jan 23 23:54:13.469628 polkitd[2159]: Started polkitd version 121 Jan 23 23:54:13.478012 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. Jan 23 23:54:13.501641 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 23:54:13.514401 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 23 23:54:13.543807 polkitd[2159]: Loading rules from directory /etc/polkit-1/rules.d Jan 23 23:54:13.543950 polkitd[2159]: Loading rules from directory /usr/share/polkit-1/rules.d Jan 23 23:54:13.555788 polkitd[2159]: Finished loading, compiling and executing 2 rules Jan 23 23:54:13.562486 dbus-daemon[1988]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Jan 23 23:54:13.564587 systemd[1]: Started polkit.service - Authorization Manager. Jan 23 23:54:13.568192 polkitd[2159]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Jan 23 23:54:13.619260 containerd[2035]: time="2026-01-23T23:54:13.607727220Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jan 23 23:54:13.650554 systemd-hostnamed[2019]: Hostname set to (transient) Jan 23 23:54:13.650684 systemd-resolved[1931]: System hostname changed to 'ip-172-31-20-17'. Jan 23 23:54:13.672318 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 23 23:54:13.698862 amazon-ssm-agent[2172]: Initializing new seelog logger Jan 23 23:54:13.700931 amazon-ssm-agent[2172]: New Seelog Logger Creation Complete Jan 23 23:54:13.700931 amazon-ssm-agent[2172]: 2026/01/23 23:54:13 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 23 23:54:13.700931 amazon-ssm-agent[2172]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 23 23:54:13.705255 amazon-ssm-agent[2172]: 2026/01/23 23:54:13 processing appconfig overrides Jan 23 23:54:13.705255 amazon-ssm-agent[2172]: 2026/01/23 23:54:13 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 23 23:54:13.705255 amazon-ssm-agent[2172]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 23 23:54:13.705255 amazon-ssm-agent[2172]: 2026/01/23 23:54:13 processing appconfig overrides Jan 23 23:54:13.705255 amazon-ssm-agent[2172]: 2026/01/23 23:54:13 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 23 23:54:13.705255 amazon-ssm-agent[2172]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 23 23:54:13.705569 amazon-ssm-agent[2172]: 2026/01/23 23:54:13 processing appconfig overrides Jan 23 23:54:13.709813 amazon-ssm-agent[2172]: 2026-01-23 23:54:13 INFO Proxy environment variables: Jan 23 23:54:13.716266 amazon-ssm-agent[2172]: 2026/01/23 23:54:13 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 23 23:54:13.716266 amazon-ssm-agent[2172]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 23 23:54:13.716266 amazon-ssm-agent[2172]: 2026/01/23 23:54:13 processing appconfig overrides Jan 23 23:54:13.798037 containerd[2035]: time="2026-01-23T23:54:13.797894953Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
type=io.containerd.snapshotter.v1 Jan 23 23:54:13.805627 containerd[2035]: time="2026-01-23T23:54:13.805518037Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.119-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 23 23:54:13.805627 containerd[2035]: time="2026-01-23T23:54:13.805602817Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 23 23:54:13.805842 containerd[2035]: time="2026-01-23T23:54:13.805643125Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 23 23:54:13.807148 containerd[2035]: time="2026-01-23T23:54:13.805982245Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 23 23:54:13.807148 containerd[2035]: time="2026-01-23T23:54:13.806036713Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 23 23:54:13.807148 containerd[2035]: time="2026-01-23T23:54:13.806165845Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 23 23:54:13.807148 containerd[2035]: time="2026-01-23T23:54:13.806197021Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 23 23:54:13.807148 containerd[2035]: time="2026-01-23T23:54:13.806537509Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 23 23:54:13.807148 containerd[2035]: time="2026-01-23T23:54:13.806571613Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 23 23:54:13.807148 containerd[2035]: time="2026-01-23T23:54:13.806602489Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 23 23:54:13.807148 containerd[2035]: time="2026-01-23T23:54:13.806627149Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 23 23:54:13.807148 containerd[2035]: time="2026-01-23T23:54:13.806787937Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 23 23:54:13.807782 amazon-ssm-agent[2172]: 2026-01-23 23:54:13 INFO http_proxy: Jan 23 23:54:13.812710 containerd[2035]: time="2026-01-23T23:54:13.807211345Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 23 23:54:13.812839 containerd[2035]: time="2026-01-23T23:54:13.812766577Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 23 23:54:13.812839 containerd[2035]: time="2026-01-23T23:54:13.812805997Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." 
type=io.containerd.content.v1 Jan 23 23:54:13.813201 containerd[2035]: time="2026-01-23T23:54:13.813046429Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 23 23:54:13.813201 containerd[2035]: time="2026-01-23T23:54:13.813182977Z" level=info msg="metadata content store policy set" policy=shared Jan 23 23:54:13.824487 containerd[2035]: time="2026-01-23T23:54:13.824407778Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 23 23:54:13.824618 containerd[2035]: time="2026-01-23T23:54:13.824532410Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 23 23:54:13.824698 containerd[2035]: time="2026-01-23T23:54:13.824651726Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 23 23:54:13.824757 containerd[2035]: time="2026-01-23T23:54:13.824710346Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 23 23:54:13.824847 containerd[2035]: time="2026-01-23T23:54:13.824768774Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 23 23:54:13.827325 containerd[2035]: time="2026-01-23T23:54:13.825046370Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 23 23:54:13.827325 containerd[2035]: time="2026-01-23T23:54:13.826340030Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 23 23:54:13.827325 containerd[2035]: time="2026-01-23T23:54:13.826621130Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 23 23:54:13.827325 containerd[2035]: time="2026-01-23T23:54:13.826655774Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 23 23:54:13.827325 containerd[2035]: time="2026-01-23T23:54:13.826698074Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 23 23:54:13.827325 containerd[2035]: time="2026-01-23T23:54:13.826734050Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 23 23:54:13.827325 containerd[2035]: time="2026-01-23T23:54:13.826765046Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 23 23:54:13.827325 containerd[2035]: time="2026-01-23T23:54:13.826795622Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 23 23:54:13.827325 containerd[2035]: time="2026-01-23T23:54:13.826826858Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 23 23:54:13.827325 containerd[2035]: time="2026-01-23T23:54:13.826859030Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 23 23:54:13.827325 containerd[2035]: time="2026-01-23T23:54:13.826889534Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 23 23:54:13.827325 containerd[2035]: time="2026-01-23T23:54:13.826925402Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." 
type=io.containerd.service.v1 Jan 23 23:54:13.827325 containerd[2035]: time="2026-01-23T23:54:13.826953110Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 23 23:54:13.827325 containerd[2035]: time="2026-01-23T23:54:13.826993070Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 23 23:54:13.828035 containerd[2035]: time="2026-01-23T23:54:13.827039774Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 23 23:54:13.828035 containerd[2035]: time="2026-01-23T23:54:13.827070794Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 23 23:54:13.828035 containerd[2035]: time="2026-01-23T23:54:13.827102498Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 23 23:54:13.828035 containerd[2035]: time="2026-01-23T23:54:13.827136290Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 23 23:54:13.828035 containerd[2035]: time="2026-01-23T23:54:13.827168342Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 23 23:54:13.832279 containerd[2035]: time="2026-01-23T23:54:13.827197310Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 23 23:54:13.832279 containerd[2035]: time="2026-01-23T23:54:13.830748422Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 23 23:54:13.832279 containerd[2035]: time="2026-01-23T23:54:13.830795234Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 23 23:54:13.832279 containerd[2035]: time="2026-01-23T23:54:13.830833526Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 23 23:54:13.832279 containerd[2035]: time="2026-01-23T23:54:13.830868926Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 23 23:54:13.832279 containerd[2035]: time="2026-01-23T23:54:13.830905766Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 23 23:54:13.832279 containerd[2035]: time="2026-01-23T23:54:13.830938154Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 23 23:54:13.832279 containerd[2035]: time="2026-01-23T23:54:13.831041678Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 23 23:54:13.832279 containerd[2035]: time="2026-01-23T23:54:13.831098198Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 23 23:54:13.832279 containerd[2035]: time="2026-01-23T23:54:13.831128954Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 23 23:54:13.832279 containerd[2035]: time="2026-01-23T23:54:13.831157514Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 23 23:54:13.832279 containerd[2035]: time="2026-01-23T23:54:13.831448154Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
type=io.containerd.tracing.processor.v1 Jan 23 23:54:13.832928 containerd[2035]: time="2026-01-23T23:54:13.832850606Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 23 23:54:13.832928 containerd[2035]: time="2026-01-23T23:54:13.832908590Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 23 23:54:13.833057 containerd[2035]: time="2026-01-23T23:54:13.832949378Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 23 23:54:13.833057 containerd[2035]: time="2026-01-23T23:54:13.832986830Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 23 23:54:13.833057 containerd[2035]: time="2026-01-23T23:54:13.833033762Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 23 23:54:13.833186 containerd[2035]: time="2026-01-23T23:54:13.833061038Z" level=info msg="NRI interface is disabled by configuration." Jan 23 23:54:13.833186 containerd[2035]: time="2026-01-23T23:54:13.833090798Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Jan 23 23:54:13.852519 containerd[2035]: time="2026-01-23T23:54:13.851801582Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false 
EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 23 23:54:13.854522 containerd[2035]: time="2026-01-23T23:54:13.852723350Z" level=info msg="Connect containerd service" Jan 23 23:54:13.854522 containerd[2035]: time="2026-01-23T23:54:13.852836786Z" level=info msg="using legacy CRI server" Jan 23 23:54:13.854522 containerd[2035]: time="2026-01-23T23:54:13.852859154Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 23 23:54:13.854522 containerd[2035]: time="2026-01-23T23:54:13.853091390Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 23 23:54:13.856917 containerd[2035]: time="2026-01-23T23:54:13.856835294Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 23 23:54:13.859345 containerd[2035]: time="2026-01-23T23:54:13.859153166Z" level=info msg="Start subscribing containerd event" Jan 23 23:54:13.859345 containerd[2035]: time="2026-01-23T23:54:13.859287710Z" level=info msg="Start recovering state" Jan 23 23:54:13.859551 containerd[2035]: time="2026-01-23T23:54:13.859426946Z" level=info msg="Start event monitor" Jan 23 23:54:13.859551 containerd[2035]: time="2026-01-23T23:54:13.859453910Z" level=info msg="Start snapshots syncer" Jan 23 23:54:13.859551 containerd[2035]: time="2026-01-23T23:54:13.859477454Z" level=info msg="Start cni network conf syncer for default" Jan 23 23:54:13.859551 containerd[2035]: time="2026-01-23T23:54:13.859497698Z" level=info msg="Start streaming server" Jan 23 23:54:13.865624 containerd[2035]: time="2026-01-23T23:54:13.861425102Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 23 23:54:13.865624 containerd[2035]: time="2026-01-23T23:54:13.861553874Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 23 23:54:13.861791 systemd[1]: Started containerd.service - containerd container runtime. 
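Two details in the CRI startup above deserve a note. First, the config dump shows runc running with Options:map[SystemdCgroup:true], i.e. cgroup management delegated to systemd — consistent with the CgroupDriver:systemd the kubelet reports further down. A hypothetical /etc/containerd/config.toml fragment that produces that value (embedded as a Go string only to keep a single example language; the TOML table path is containerd 1.7's standard one):

    package main

    import "fmt"

    // Hypothetical config.toml fragment behind SystemdCgroup:true above.
    const runcOptions = `[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
      runtime_type = "io.containerd.runc.v2"
      [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
        SystemdCgroup = true
    `

    func main() { fmt.Print(runcOptions) }

Second, the "failed to load cni during init ... no network config found in /etc/cni/net.d" error is expected at this point in boot: the "cni network conf syncer" started a few entries later picks up a configuration as soon as a network add-on installs one, so the error clears without restarting containerd.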
Jan 23 23:54:13.870496 containerd[2035]: time="2026-01-23T23:54:13.870411218Z" level=info msg="containerd successfully booted in 0.288192s" Jan 23 23:54:13.886305 locksmithd[2038]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 23 23:54:13.908716 amazon-ssm-agent[2172]: 2026-01-23 23:54:13 INFO no_proxy: Jan 23 23:54:14.011129 amazon-ssm-agent[2172]: 2026-01-23 23:54:13 INFO https_proxy: Jan 23 23:54:14.108039 amazon-ssm-agent[2172]: 2026-01-23 23:54:13 INFO Checking if agent identity type OnPrem can be assumed Jan 23 23:54:14.207358 amazon-ssm-agent[2172]: 2026-01-23 23:54:13 INFO Checking if agent identity type EC2 can be assumed Jan 23 23:54:14.308035 amazon-ssm-agent[2172]: 2026-01-23 23:54:13 INFO Agent will take identity from EC2 Jan 23 23:54:14.407451 amazon-ssm-agent[2172]: 2026-01-23 23:54:13 INFO [amazon-ssm-agent] using named pipe channel for IPC Jan 23 23:54:14.506731 amazon-ssm-agent[2172]: 2026-01-23 23:54:13 INFO [amazon-ssm-agent] using named pipe channel for IPC Jan 23 23:54:14.545105 sshd_keygen[2004]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 23 23:54:14.608316 amazon-ssm-agent[2172]: 2026-01-23 23:54:13 INFO [amazon-ssm-agent] using named pipe channel for IPC Jan 23 23:54:14.662510 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 23 23:54:14.679969 tar[2012]: linux-arm64/README.md Jan 23 23:54:14.676804 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 23 23:54:14.685857 systemd[1]: Started sshd@0-172.31.20.17:22-4.153.228.146:57928.service - OpenSSH per-connection server daemon (4.153.228.146:57928). Jan 23 23:54:14.705575 amazon-ssm-agent[2172]: 2026-01-23 23:54:13 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0 Jan 23 23:54:14.733976 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 23 23:54:14.742253 systemd[1]: issuegen.service: Deactivated successfully. Jan 23 23:54:14.742689 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 23 23:54:14.759795 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 23 23:54:14.807357 amazon-ssm-agent[2172]: 2026-01-23 23:54:13 INFO [amazon-ssm-agent] OS: linux, Arch: arm64 Jan 23 23:54:14.820753 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 23 23:54:14.835768 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 23 23:54:14.845945 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 23 23:54:14.849091 systemd[1]: Reached target getty.target - Login Prompts. Jan 23 23:54:14.907767 amazon-ssm-agent[2172]: 2026-01-23 23:54:13 INFO [amazon-ssm-agent] Starting Core Agent Jan 23 23:54:15.008010 amazon-ssm-agent[2172]: 2026-01-23 23:54:13 INFO [amazon-ssm-agent] registrar detected. Attempting registration Jan 23 23:54:15.109014 amazon-ssm-agent[2172]: 2026-01-23 23:54:13 INFO [Registrar] Starting registrar module Jan 23 23:54:15.162648 amazon-ssm-agent[2172]: 2026-01-23 23:54:13 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration Jan 23 23:54:15.162648 amazon-ssm-agent[2172]: 2026-01-23 23:54:15 INFO [EC2Identity] EC2 registration was successful. 
Jan 23 23:54:15.162648 amazon-ssm-agent[2172]: 2026-01-23 23:54:15 INFO [CredentialRefresher] credentialRefresher has started Jan 23 23:54:15.162648 amazon-ssm-agent[2172]: 2026-01-23 23:54:15 INFO [CredentialRefresher] Starting credentials refresher loop Jan 23 23:54:15.162648 amazon-ssm-agent[2172]: 2026-01-23 23:54:15 INFO EC2RoleProvider Successfully connected with instance profile role credentials Jan 23 23:54:15.209648 amazon-ssm-agent[2172]: 2026-01-23 23:54:15 INFO [CredentialRefresher] Next credential rotation will be in 32.46663579536666 minutes Jan 23 23:54:15.296840 sshd[2220]: Accepted publickey for core from 4.153.228.146 port 57928 ssh2: RSA SHA256:5AacvNrSqCkKxn2Zg0NJQDqu4zeDYsnVAvlYY2DAYUI Jan 23 23:54:15.300868 sshd[2220]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:54:15.323135 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 23 23:54:15.334805 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 23 23:54:15.345723 systemd-logind[1998]: New session 1 of user core. Jan 23 23:54:15.373680 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 23 23:54:15.393934 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 23 23:54:15.410684 (systemd)[2234]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 23 23:54:15.537726 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 23:54:15.543545 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 23 23:54:15.560892 (kubelet)[2245]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 23 23:54:15.690176 systemd[2234]: Queued start job for default target default.target. Jan 23 23:54:15.697780 systemd[2234]: Created slice app.slice - User Application Slice. Jan 23 23:54:15.697857 systemd[2234]: Reached target paths.target - Paths. Jan 23 23:54:15.697892 systemd[2234]: Reached target timers.target - Timers. Jan 23 23:54:15.705723 systemd[2234]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 23 23:54:15.728057 systemd[2234]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 23 23:54:15.728195 systemd[2234]: Reached target sockets.target - Sockets. Jan 23 23:54:15.728310 systemd[2234]: Reached target basic.target - Basic System. Jan 23 23:54:15.728412 systemd[2234]: Reached target default.target - Main User Target. Jan 23 23:54:15.728475 systemd[2234]: Startup finished in 303ms. Jan 23 23:54:15.729168 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 23 23:54:15.741533 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 23 23:54:15.747175 systemd[1]: Startup finished in 1.242s (kernel) + 9.202s (initrd) + 9.775s (userspace) = 20.219s. Jan 23 23:54:16.145884 systemd[1]: Started sshd@1-172.31.20.17:22-4.153.228.146:54662.service - OpenSSH per-connection server daemon (4.153.228.146:54662). 
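The "Accepted publickey for core ... RSA SHA256:5AacvNrS..." entries above are sshd matching the connecting key against the authorized_keys file installed earlier; the logged value is the SHA256 fingerprint of that public key. A small sketch (assuming golang.org/x/crypto/ssh is available) that reproduces the fingerprint form sshd prints:

    package main

    import (
        "fmt"
        "os"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        data, err := os.ReadFile("/home/core/.ssh/authorized_keys")
        if err != nil {
            panic(err)
        }
        pub, _, _, _, err := ssh.ParseAuthorizedKey(data)
        if err != nil {
            panic(err)
        }
        // Prints "SHA256:..." — the same form sshd logs on "Accepted publickey".
        fmt.Println(ssh.FingerprintSHA256(pub))
    }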
Jan 23 23:54:16.194858 amazon-ssm-agent[2172]: 2026-01-23 23:54:16 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Jan 23 23:54:16.296977 amazon-ssm-agent[2172]: 2026-01-23 23:54:16 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2261) started Jan 23 23:54:16.304866 ntpd[1992]: Listen normally on 7 eth0 [fe80::490:a1ff:fe3e:21ab%2]:123 Jan 23 23:54:16.305339 ntpd[1992]: 23 Jan 23:54:16 ntpd[1992]: Listen normally on 7 eth0 [fe80::490:a1ff:fe3e:21ab%2]:123 Jan 23 23:54:16.397728 amazon-ssm-agent[2172]: 2026-01-23 23:54:16 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Jan 23 23:54:16.486359 kubelet[2245]: E0123 23:54:16.486291 2245 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 23 23:54:16.491682 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 23 23:54:16.492466 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 23 23:54:16.493113 systemd[1]: kubelet.service: Consumed 1.334s CPU time. Jan 23 23:54:16.714844 sshd[2259]: Accepted publickey for core from 4.153.228.146 port 54662 ssh2: RSA SHA256:5AacvNrSqCkKxn2Zg0NJQDqu4zeDYsnVAvlYY2DAYUI Jan 23 23:54:16.717264 sshd[2259]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:54:16.724909 systemd-logind[1998]: New session 2 of user core. Jan 23 23:54:16.736566 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 23 23:54:17.102693 sshd[2259]: pam_unix(sshd:session): session closed for user core Jan 23 23:54:17.109467 systemd[1]: sshd@1-172.31.20.17:22-4.153.228.146:54662.service: Deactivated successfully. Jan 23 23:54:17.112681 systemd[1]: session-2.scope: Deactivated successfully. Jan 23 23:54:17.114112 systemd-logind[1998]: Session 2 logged out. Waiting for processes to exit. Jan 23 23:54:17.116071 systemd-logind[1998]: Removed session 2. Jan 23 23:54:17.189729 systemd[1]: Started sshd@2-172.31.20.17:22-4.153.228.146:54676.service - OpenSSH per-connection server daemon (4.153.228.146:54676). Jan 23 23:54:17.688058 sshd[2279]: Accepted publickey for core from 4.153.228.146 port 54676 ssh2: RSA SHA256:5AacvNrSqCkKxn2Zg0NJQDqu4zeDYsnVAvlYY2DAYUI Jan 23 23:54:17.690890 sshd[2279]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:54:17.700581 systemd-logind[1998]: New session 3 of user core. Jan 23 23:54:17.707487 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 23 23:54:18.034717 sshd[2279]: pam_unix(sshd:session): session closed for user core Jan 23 23:54:18.042192 systemd[1]: sshd@2-172.31.20.17:22-4.153.228.146:54676.service: Deactivated successfully. Jan 23 23:54:18.046715 systemd[1]: session-3.scope: Deactivated successfully. Jan 23 23:54:18.048686 systemd-logind[1998]: Session 3 logged out. Waiting for processes to exit. Jan 23 23:54:18.051189 systemd-logind[1998]: Removed session 3. Jan 23 23:54:18.145776 systemd[1]: Started sshd@3-172.31.20.17:22-4.153.228.146:54686.service - OpenSSH per-connection server daemon (4.153.228.146:54686). 
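The kubelet failure above ("open /var/lib/kubelet/config.yaml: no such file or directory") is the normal pre-bootstrap state on a kubeadm-managed node: the unit starts, finds no config file, exits with status 1, and systemd keeps rescheduling it (the restart counters climb in later entries) until kubeadm init/join writes the file. For illustration only — the real file kubeadm generates is much larger, and these values are assumptions shown as a Go string to keep a single example language:

    package main

    import "fmt"

    // Hypothetical minimal /var/lib/kubelet/config.yaml; field names are real
    // KubeletConfiguration fields and match values visible later in this log
    // (CgroupDriver:systemd, static pod path /etc/kubernetes/manifests).
    const kubeletConfigYAML = `apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd
    staticPodPath: /etc/kubernetes/manifests
    `

    func main() { fmt.Print(kubeletConfigYAML) }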
Jan 23 23:54:18.673612 sshd[2286]: Accepted publickey for core from 4.153.228.146 port 54686 ssh2: RSA SHA256:5AacvNrSqCkKxn2Zg0NJQDqu4zeDYsnVAvlYY2DAYUI Jan 23 23:54:18.676921 sshd[2286]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:54:18.685172 systemd-logind[1998]: New session 4 of user core. Jan 23 23:54:18.692539 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 23 23:54:19.052911 sshd[2286]: pam_unix(sshd:session): session closed for user core Jan 23 23:54:19.059767 systemd[1]: sshd@3-172.31.20.17:22-4.153.228.146:54686.service: Deactivated successfully. Jan 23 23:54:19.063950 systemd[1]: session-4.scope: Deactivated successfully. Jan 23 23:54:19.065191 systemd-logind[1998]: Session 4 logged out. Waiting for processes to exit. Jan 23 23:54:19.067295 systemd-logind[1998]: Removed session 4. Jan 23 23:54:19.139802 systemd[1]: Started sshd@4-172.31.20.17:22-4.153.228.146:54696.service - OpenSSH per-connection server daemon (4.153.228.146:54696). Jan 23 23:54:18.868994 systemd-resolved[1931]: Clock change detected. Flushing caches. Jan 23 23:54:18.880077 systemd-journald[1580]: Time jumped backwards, rotating. Jan 23 23:54:19.204376 sshd[2293]: Accepted publickey for core from 4.153.228.146 port 54696 ssh2: RSA SHA256:5AacvNrSqCkKxn2Zg0NJQDqu4zeDYsnVAvlYY2DAYUI Jan 23 23:54:19.207125 sshd[2293]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:54:19.214414 systemd-logind[1998]: New session 5 of user core. Jan 23 23:54:19.223018 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 23 23:54:19.497567 sudo[2297]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 23 23:54:19.498303 sudo[2297]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 23 23:54:19.518374 sudo[2297]: pam_unix(sudo:session): session closed for user root Jan 23 23:54:19.595855 sshd[2293]: pam_unix(sshd:session): session closed for user core Jan 23 23:54:19.602869 systemd[1]: sshd@4-172.31.20.17:22-4.153.228.146:54696.service: Deactivated successfully. Jan 23 23:54:19.606112 systemd[1]: session-5.scope: Deactivated successfully. Jan 23 23:54:19.607711 systemd-logind[1998]: Session 5 logged out. Waiting for processes to exit. Jan 23 23:54:19.609956 systemd-logind[1998]: Removed session 5. Jan 23 23:54:19.708278 systemd[1]: Started sshd@5-172.31.20.17:22-4.153.228.146:54702.service - OpenSSH per-connection server daemon (4.153.228.146:54702). Jan 23 23:54:20.246419 sshd[2302]: Accepted publickey for core from 4.153.228.146 port 54702 ssh2: RSA SHA256:5AacvNrSqCkKxn2Zg0NJQDqu4zeDYsnVAvlYY2DAYUI Jan 23 23:54:20.249107 sshd[2302]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:54:20.258096 systemd-logind[1998]: New session 6 of user core. Jan 23 23:54:20.268010 systemd[1]: Started session-6.scope - Session 6 of User core. 
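A word on the timestamps here: they run backwards (23:54:19.13, then 23:54:18.86) because the system clock was stepped back — most likely by ntpd once it synchronized, having obtained its sockets moments earlier. The adjacent "Clock change detected. Flushing caches." and "Time jumped backwards, rotating" entries are systemd-resolved and systemd-journald reacting to that step; nothing in the journal itself is out of order.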
Jan 23 23:54:20.548101 sudo[2306]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 23 23:54:20.548743 sudo[2306]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 23 23:54:20.554875 sudo[2306]: pam_unix(sudo:session): session closed for user root Jan 23 23:54:20.564739 sudo[2305]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jan 23 23:54:20.565419 sudo[2305]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 23 23:54:20.590268 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jan 23 23:54:20.594522 auditctl[2309]: No rules Jan 23 23:54:20.595259 systemd[1]: audit-rules.service: Deactivated successfully. Jan 23 23:54:20.595626 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jan 23 23:54:20.607425 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 23 23:54:20.653726 augenrules[2327]: No rules Jan 23 23:54:20.657871 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 23 23:54:20.660821 sudo[2305]: pam_unix(sudo:session): session closed for user root Jan 23 23:54:20.745835 sshd[2302]: pam_unix(sshd:session): session closed for user core Jan 23 23:54:20.751242 systemd[1]: sshd@5-172.31.20.17:22-4.153.228.146:54702.service: Deactivated successfully. Jan 23 23:54:20.754005 systemd[1]: session-6.scope: Deactivated successfully. Jan 23 23:54:20.758670 systemd-logind[1998]: Session 6 logged out. Waiting for processes to exit. Jan 23 23:54:20.760720 systemd-logind[1998]: Removed session 6. Jan 23 23:54:20.831285 systemd[1]: Started sshd@6-172.31.20.17:22-4.153.228.146:54718.service - OpenSSH per-connection server daemon (4.153.228.146:54718). Jan 23 23:54:21.328561 sshd[2335]: Accepted publickey for core from 4.153.228.146 port 54718 ssh2: RSA SHA256:5AacvNrSqCkKxn2Zg0NJQDqu4zeDYsnVAvlYY2DAYUI Jan 23 23:54:21.331250 sshd[2335]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:54:21.339984 systemd-logind[1998]: New session 7 of user core. Jan 23 23:54:21.347039 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 23 23:54:21.605209 sudo[2338]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 23 23:54:21.606743 sudo[2338]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 23 23:54:22.438216 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 23 23:54:22.438360 (dockerd)[2353]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 23 23:54:22.962987 dockerd[2353]: time="2026-01-23T23:54:22.962887984Z" level=info msg="Starting up" Jan 23 23:54:23.174869 dockerd[2353]: time="2026-01-23T23:54:23.174808285Z" level=info msg="Loading containers: start." Jan 23 23:54:23.364796 kernel: Initializing XFRM netlink socket Jan 23 23:54:23.425188 (udev-worker)[2377]: Network interface NamePolicy= disabled on kernel command line. Jan 23 23:54:23.518361 systemd-networkd[1930]: docker0: Link UP Jan 23 23:54:23.541280 dockerd[2353]: time="2026-01-23T23:54:23.541218351Z" level=info msg="Loading containers: done." 
Jan 23 23:54:23.566118 dockerd[2353]: time="2026-01-23T23:54:23.565994823Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 23 23:54:23.567854 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck355853868-merged.mount: Deactivated successfully. Jan 23 23:54:23.571778 dockerd[2353]: time="2026-01-23T23:54:23.571686063Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Jan 23 23:54:23.572097 dockerd[2353]: time="2026-01-23T23:54:23.572002863Z" level=info msg="Daemon has completed initialization" Jan 23 23:54:23.640777 dockerd[2353]: time="2026-01-23T23:54:23.640522239Z" level=info msg="API listen on /run/docker.sock" Jan 23 23:54:23.640832 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 23 23:54:24.779785 containerd[2035]: time="2026-01-23T23:54:24.779705825Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.3\"" Jan 23 23:54:25.460123 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1556632446.mount: Deactivated successfully. Jan 23 23:54:26.116822 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 23 23:54:26.123105 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 23:54:26.521311 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 23:54:26.530312 (kubelet)[2558]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 23 23:54:26.616709 kubelet[2558]: E0123 23:54:26.616593 2558 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 23 23:54:26.623254 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 23 23:54:26.623600 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
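The dockerd warning above ("Not using native diff for overlay2") fires because the kernel was built with CONFIG_OVERLAY_FS_REDIRECT_DIR: with directory redirects possible, overlayfs layer contents can no longer be diffed reliably by reading the upper directory alone, so Docker falls back to the slower naive differ when committing or building images. An illustrative check of the module parameter involved (assumes the overlay module is loaded and exposes it under sysfs):

    package main

    import (
        "fmt"
        "os"
    )

    func main() {
        // "Y" / "on" here corresponds to the kernel option named in the warning.
        v, err := os.ReadFile("/sys/module/overlay/parameters/redirect_dir")
        if err != nil {
            panic(err)
        }
        fmt.Printf("overlay redirect_dir = %s", v)
    }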
Jan 23 23:54:27.134874 containerd[2035]: time="2026-01-23T23:54:27.134790857Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:54:27.137020 containerd[2035]: time="2026-01-23T23:54:27.136963241Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.34.3: active requests=0, bytes read=24571040" Jan 23 23:54:27.140376 containerd[2035]: time="2026-01-23T23:54:27.140309297Z" level=info msg="ImageCreate event name:\"sha256:cf65ae6c8f700cc27f57b7305c6e2b71276a7eed943c559a0091e1e667169896\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:54:27.155269 containerd[2035]: time="2026-01-23T23:54:27.155171693Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:5af1030676ceca025742ef5e73a504d11b59be0e5551cdb8c9cf0d3c1231b460\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:54:27.157888 containerd[2035]: time="2026-01-23T23:54:27.157321493Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.34.3\" with image id \"sha256:cf65ae6c8f700cc27f57b7305c6e2b71276a7eed943c559a0091e1e667169896\", repo tag \"registry.k8s.io/kube-apiserver:v1.34.3\", repo digest \"registry.k8s.io/kube-apiserver@sha256:5af1030676ceca025742ef5e73a504d11b59be0e5551cdb8c9cf0d3c1231b460\", size \"24567639\" in 2.377536192s" Jan 23 23:54:27.157888 containerd[2035]: time="2026-01-23T23:54:27.157397225Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.3\" returns image reference \"sha256:cf65ae6c8f700cc27f57b7305c6e2b71276a7eed943c559a0091e1e667169896\"" Jan 23 23:54:27.158363 containerd[2035]: time="2026-01-23T23:54:27.158324993Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.3\"" Jan 23 23:54:28.560919 containerd[2035]: time="2026-01-23T23:54:28.560851076Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:54:28.563480 containerd[2035]: time="2026-01-23T23:54:28.563398136Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.34.3: active requests=0, bytes read=19135477" Jan 23 23:54:28.565462 containerd[2035]: time="2026-01-23T23:54:28.565397768Z" level=info msg="ImageCreate event name:\"sha256:7ada8ff13e54bf42ca66f146b54cd7b1757797d93b3b9ba06df034cdddb5ab22\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:54:28.571558 containerd[2035]: time="2026-01-23T23:54:28.571480508Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:716a210d31ee5e27053ea0e1a3a3deb4910791a85ba4b1120410b5a4cbcf1954\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:54:28.574104 containerd[2035]: time="2026-01-23T23:54:28.574039856Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.34.3\" with image id \"sha256:7ada8ff13e54bf42ca66f146b54cd7b1757797d93b3b9ba06df034cdddb5ab22\", repo tag \"registry.k8s.io/kube-controller-manager:v1.34.3\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:716a210d31ee5e27053ea0e1a3a3deb4910791a85ba4b1120410b5a4cbcf1954\", size \"20719958\" in 1.415191567s" Jan 23 23:54:28.574213 containerd[2035]: time="2026-01-23T23:54:28.574100852Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.3\" returns image reference \"sha256:7ada8ff13e54bf42ca66f146b54cd7b1757797d93b3b9ba06df034cdddb5ab22\"" Jan 23 23:54:28.575826 
containerd[2035]: time="2026-01-23T23:54:28.574625204Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.3\"" Jan 23 23:54:29.727018 containerd[2035]: time="2026-01-23T23:54:29.726946197Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:54:29.729094 containerd[2035]: time="2026-01-23T23:54:29.729041253Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.34.3: active requests=0, bytes read=14191716" Jan 23 23:54:29.729801 containerd[2035]: time="2026-01-23T23:54:29.729530565Z" level=info msg="ImageCreate event name:\"sha256:2f2aa21d34d2db37a290752f34faf1d41087c02e18aa9d046a8b4ba1e29421a6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:54:29.735515 containerd[2035]: time="2026-01-23T23:54:29.735428889Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:f9a9bc7948fd804ef02255fe82ac2e85d2a66534bae2fe1348c14849260a1fe2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:54:29.737981 containerd[2035]: time="2026-01-23T23:54:29.737929653Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.34.3\" with image id \"sha256:2f2aa21d34d2db37a290752f34faf1d41087c02e18aa9d046a8b4ba1e29421a6\", repo tag \"registry.k8s.io/kube-scheduler:v1.34.3\", repo digest \"registry.k8s.io/kube-scheduler@sha256:f9a9bc7948fd804ef02255fe82ac2e85d2a66534bae2fe1348c14849260a1fe2\", size \"15776215\" in 1.163235473s" Jan 23 23:54:29.738233 containerd[2035]: time="2026-01-23T23:54:29.738092805Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.3\" returns image reference \"sha256:2f2aa21d34d2db37a290752f34faf1d41087c02e18aa9d046a8b4ba1e29421a6\"" Jan 23 23:54:29.739590 containerd[2035]: time="2026-01-23T23:54:29.739066365Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.3\"" Jan 23 23:54:31.050159 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount566976564.mount: Deactivated successfully. 
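Each image pull in this stretch follows the same shape: a PullImage request, ImageCreate events for the repo tag, image ID, and repo digest, then a "Pulled ... in N s" summary. The equivalent pull can be driven directly over the same /run/containerd/containerd.sock with containerd's Go client — a sketch, using an image name from the log and the "k8s.io" namespace the CRI plugin stores its images in:

    package main

    import (
        "context"
        "fmt"

        "github.com/containerd/containerd"
        "github.com/containerd/containerd/namespaces"
    )

    func main() {
        client, err := containerd.New("/run/containerd/containerd.sock")
        if err != nil {
            panic(err)
        }
        defer client.Close()

        // CRI-managed images live in the "k8s.io" namespace.
        ctx := namespaces.WithNamespace(context.Background(), "k8s.io")
        img, err := client.Pull(ctx, "registry.k8s.io/kube-scheduler:v1.34.3",
            containerd.WithPullUnpack) // unpack into the overlayfs snapshotter
        if err != nil {
            panic(err)
        }
        fmt.Println(img.Name(), img.Target().Digest)
    }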
Jan 23 23:54:31.442599 containerd[2035]: time="2026-01-23T23:54:31.442511122Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:54:31.445654 containerd[2035]: time="2026-01-23T23:54:31.445588126Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.34.3: active requests=0, bytes read=22805253" Jan 23 23:54:31.447276 containerd[2035]: time="2026-01-23T23:54:31.447209374Z" level=info msg="ImageCreate event name:\"sha256:4461daf6b6af87cf200fc22cecc9a2120959aabaf5712ba54ef5b4a6361d1162\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:54:31.459463 containerd[2035]: time="2026-01-23T23:54:31.459344026Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:7298ab89a103523d02ff4f49bedf9359710af61df92efdc07bac873064f03ed6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:54:31.461686 containerd[2035]: time="2026-01-23T23:54:31.461589214Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.34.3\" with image id \"sha256:4461daf6b6af87cf200fc22cecc9a2120959aabaf5712ba54ef5b4a6361d1162\", repo tag \"registry.k8s.io/kube-proxy:v1.34.3\", repo digest \"registry.k8s.io/kube-proxy@sha256:7298ab89a103523d02ff4f49bedf9359710af61df92efdc07bac873064f03ed6\", size \"22804272\" in 1.722447237s" Jan 23 23:54:31.461686 containerd[2035]: time="2026-01-23T23:54:31.461673634Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.3\" returns image reference \"sha256:4461daf6b6af87cf200fc22cecc9a2120959aabaf5712ba54ef5b4a6361d1162\"" Jan 23 23:54:31.464807 containerd[2035]: time="2026-01-23T23:54:31.463827454Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\"" Jan 23 23:54:31.978945 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2470196640.mount: Deactivated successfully. 
Jan 23 23:54:33.107985 containerd[2035]: time="2026-01-23T23:54:33.107901442Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:54:33.110480 containerd[2035]: time="2026-01-23T23:54:33.110418262Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.1: active requests=0, bytes read=20395406" Jan 23 23:54:33.114401 containerd[2035]: time="2026-01-23T23:54:33.113454574Z" level=info msg="ImageCreate event name:\"sha256:138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:54:33.122122 containerd[2035]: time="2026-01-23T23:54:33.122049022Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:54:33.124909 containerd[2035]: time="2026-01-23T23:54:33.124850482Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.1\" with image id \"sha256:138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\", size \"20392204\" in 1.660947752s" Jan 23 23:54:33.125073 containerd[2035]: time="2026-01-23T23:54:33.125042974Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\" returns image reference \"sha256:138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc\"" Jan 23 23:54:33.126526 containerd[2035]: time="2026-01-23T23:54:33.126452326Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\"" Jan 23 23:54:33.612702 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2830887604.mount: Deactivated successfully. 
Jan 23 23:54:33.626823 containerd[2035]: time="2026-01-23T23:54:33.626578405Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:54:33.629776 containerd[2035]: time="2026-01-23T23:54:33.629693341Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=268709" Jan 23 23:54:33.632491 containerd[2035]: time="2026-01-23T23:54:33.632428045Z" level=info msg="ImageCreate event name:\"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:54:33.637191 containerd[2035]: time="2026-01-23T23:54:33.637071061Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:54:33.638917 containerd[2035]: time="2026-01-23T23:54:33.638700613Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"267939\" in 512.184507ms" Jan 23 23:54:33.638917 containerd[2035]: time="2026-01-23T23:54:33.638783113Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\"" Jan 23 23:54:33.639800 containerd[2035]: time="2026-01-23T23:54:33.639564973Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\"" Jan 23 23:54:34.180214 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount676837448.mount: Deactivated successfully. Jan 23 23:54:36.867970 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 23 23:54:36.879039 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 23:54:37.292437 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 23:54:37.306943 (kubelet)[2697]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 23 23:54:37.405404 kubelet[2697]: E0123 23:54:37.405330 2697 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 23 23:54:37.411370 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 23 23:54:37.412322 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Jan 23 23:54:37.967424 containerd[2035]: time="2026-01-23T23:54:37.965243550Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.4-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:54:37.968467 containerd[2035]: time="2026-01-23T23:54:37.968405502Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.4-0: active requests=0, bytes read=98062987" Jan 23 23:54:37.973820 containerd[2035]: time="2026-01-23T23:54:37.971821650Z" level=info msg="ImageCreate event name:\"sha256:a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:54:37.978358 containerd[2035]: time="2026-01-23T23:54:37.978301434Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:54:37.982435 containerd[2035]: time="2026-01-23T23:54:37.982291038Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.4-0\" with image id \"sha256:a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e\", repo tag \"registry.k8s.io/etcd:3.6.4-0\", repo digest \"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\", size \"98207481\" in 4.342664229s" Jan 23 23:54:37.982637 containerd[2035]: time="2026-01-23T23:54:37.982593558Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\" returns image reference \"sha256:a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e\"" Jan 23 23:54:43.255467 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Jan 23 23:54:46.790656 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 23:54:46.802309 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 23:54:46.863707 systemd[1]: Reloading requested from client PID 2738 ('systemctl') (unit session-7.scope)... Jan 23 23:54:46.864044 systemd[1]: Reloading... Jan 23 23:54:47.138810 zram_generator::config[2781]: No configuration found. Jan 23 23:54:47.368132 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 23 23:54:47.541776 systemd[1]: Reloading finished in 676 ms. Jan 23 23:54:47.638686 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 23 23:54:47.638925 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 23 23:54:47.639432 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 23:54:47.645596 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 23:54:47.957120 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 23:54:47.971300 (kubelet)[2841]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 23 23:54:48.048890 kubelet[2841]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 23 23:54:48.048890 kubelet[2841]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
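The "Reloading requested from client PID 2738 ('systemctl') (unit session-7.scope)" above is the daemon-reload issued from the sudo install.sh session opened earlier, after which kubelet is restarted and, this time, stays up. The "Referenced but unset environment variable ... KUBELET_EXTRA_ARGS" note and the deprecated-flag warnings both point at the unit's drop-in environment. A hypothetical drop-in of the kind that would populate that variable (file name and value are assumptions; --node-ip is a real kubelet flag, and the address is this host's from the log):

    package main

    import "fmt"

    // Hypothetical /etc/systemd/system/kubelet.service.d/20-node-ip.conf
    const kubeletDropIn = `[Service]
    Environment="KUBELET_EXTRA_ARGS=--node-ip=172.31.20.17"
    `

    func main() { fmt.Print(kubeletDropIn) }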
Jan 23 23:54:48.049424 kubelet[2841]: I0123 23:54:48.048995 2841 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jan 23 23:54:49.687610 kubelet[2841]: I0123 23:54:49.687530 2841 server.go:529] "Kubelet version" kubeletVersion="v1.34.1"
Jan 23 23:54:49.687610 kubelet[2841]: I0123 23:54:49.687584 2841 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jan 23 23:54:49.688532 kubelet[2841]: I0123 23:54:49.687637 2841 watchdog_linux.go:95] "Systemd watchdog is not enabled"
Jan 23 23:54:49.688532 kubelet[2841]: I0123 23:54:49.687652 2841 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Jan 23 23:54:49.688532 kubelet[2841]: I0123 23:54:49.688131 2841 server.go:956] "Client rotation is on, will bootstrap in background"
Jan 23 23:54:49.699324 kubelet[2841]: E0123 23:54:49.699273 2841 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://172.31.20.17:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.20.17:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Jan 23 23:54:49.701231 kubelet[2841]: I0123 23:54:49.700992 2841 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jan 23 23:54:49.707521 kubelet[2841]: E0123 23:54:49.707435 2841 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Jan 23 23:54:49.707673 kubelet[2841]: I0123 23:54:49.707567 2841 server.go:1400] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config."
Jan 23 23:54:49.717436 kubelet[2841]: I0123 23:54:49.716932 2841 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /"
Jan 23 23:54:49.717436 kubelet[2841]: I0123 23:54:49.717386 2841 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jan 23 23:54:49.717693 kubelet[2841]: I0123 23:54:49.717431 2841 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-20-17","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jan 23 23:54:49.717908 kubelet[2841]: I0123 23:54:49.717693 2841 topology_manager.go:138] "Creating topology manager with none policy"
Jan 23 23:54:49.717908 kubelet[2841]: I0123 23:54:49.717713 2841 container_manager_linux.go:306] "Creating device plugin manager"
Jan 23 23:54:49.718010 kubelet[2841]: I0123 23:54:49.717918 2841 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager"
Jan 23 23:54:49.723782 kubelet[2841]: I0123 23:54:49.723718 2841 state_mem.go:36] "Initialized new in-memory state store"
Jan 23 23:54:49.726317 kubelet[2841]: I0123 23:54:49.726249 2841 kubelet.go:475] "Attempting to sync node with API server"
Jan 23 23:54:49.726317 kubelet[2841]: I0123 23:54:49.726294 2841 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests"
Jan 23 23:54:49.726478 kubelet[2841]: I0123 23:54:49.726337 2841 kubelet.go:387] "Adding apiserver pod source"
Jan 23 23:54:49.726478 kubelet[2841]: I0123 23:54:49.726359 2841 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jan 23 23:54:49.730805 kubelet[2841]: E0123 23:54:49.729170 2841 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.31.20.17:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.20.17:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Jan 23 23:54:49.730805 kubelet[2841]: E0123 23:54:49.729381 2841 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.31.20.17:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-20-17&limit=500&resourceVersion=0\": dial tcp 172.31.20.17:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Jan 23 23:54:49.730805 kubelet[2841]: I0123 23:54:49.730106 2841 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Jan 23 23:54:49.731678 kubelet[2841]: I0123 23:54:49.731499 2841 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Jan 23 23:54:49.731678 kubelet[2841]: I0123 23:54:49.731568 2841 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled"
Jan 23 23:54:49.731678 kubelet[2841]: W0123 23:54:49.731647 2841 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Jan 23 23:54:49.736681 kubelet[2841]: I0123 23:54:49.736636 2841 server.go:1262] "Started kubelet"
Jan 23 23:54:49.741411 kubelet[2841]: I0123 23:54:49.741356 2841 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Jan 23 23:54:49.743166 kubelet[2841]: I0123 23:54:49.743072 2841 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jan 23 23:54:49.743277 kubelet[2841]: I0123 23:54:49.743177 2841 server_v1.go:49] "podresources" method="list" useActivePods=true
Jan 23 23:54:49.743386 kubelet[2841]: I0123 23:54:49.743366 2841 server.go:310] "Adding debug handlers to kubelet server"
Jan 23 23:54:49.743727 kubelet[2841]: I0123 23:54:49.743682 2841 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jan 23 23:54:49.746551 kubelet[2841]: I0123 23:54:49.746508 2841 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jan 23 23:54:49.752159 kubelet[2841]: E0123 23:54:49.749854 2841 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.20.17:6443/api/v1/namespaces/default/events\": dial tcp 172.31.20.17:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-20-17.188d8162beeef8c9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-20-17,UID:ip-172-31-20-17,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-20-17,},FirstTimestamp:2026-01-23 23:54:49.736583369 +0000 UTC m=+1.759080370,LastTimestamp:2026-01-23 23:54:49.736583369 +0000 UTC m=+1.759080370,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-20-17,}"
Jan 23 23:54:49.755816 kubelet[2841]: I0123 23:54:49.754899 2841 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Jan 23 23:54:49.760253 kubelet[2841]: I0123 23:54:49.760219 2841 volume_manager.go:313] "Starting Kubelet Volume Manager"
Jan 23 23:54:49.761219 kubelet[2841]: I0123 23:54:49.760501 2841 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Jan 23 23:54:49.761219 kubelet[2841]: E0123 23:54:49.760712 2841 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ip-172-31-20-17\" not found"
Jan 23 23:54:49.761411 kubelet[2841]: I0123 23:54:49.761339 2841 reconciler.go:29] "Reconciler: start to sync state"
Jan 23 23:54:49.762552 kubelet[2841]: I0123 23:54:49.762491 2841 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jan 23 23:54:49.763990 kubelet[2841]: E0123 23:54:49.763932 2841 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jan 23 23:54:49.764847 kubelet[2841]: E0123 23:54:49.764807 2841 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.31.20.17:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.20.17:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Jan 23 23:54:49.765329 kubelet[2841]: E0123 23:54:49.765182 2841 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.20.17:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-20-17?timeout=10s\": dial tcp 172.31.20.17:6443: connect: connection refused" interval="200ms"
Jan 23 23:54:49.765859 kubelet[2841]: I0123 23:54:49.765829 2841 factory.go:223] Registration of the containerd container factory successfully
Jan 23 23:54:49.766007 kubelet[2841]: I0123 23:54:49.765989 2841 factory.go:223] Registration of the systemd container factory successfully
Jan 23 23:54:49.790051 kubelet[2841]: I0123 23:54:49.789370 2841 cpu_manager.go:221] "Starting CPU manager" policy="none"
Jan 23 23:54:49.790051 kubelet[2841]: I0123 23:54:49.789405 2841 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Jan 23 23:54:49.790051 kubelet[2841]: I0123 23:54:49.789467 2841 state_mem.go:36] "Initialized new in-memory state store"
Jan 23 23:54:49.795319 kubelet[2841]: I0123 23:54:49.795045 2841 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4"
Jan 23 23:54:49.798136 kubelet[2841]: I0123 23:54:49.798079 2841 policy_none.go:49] "None policy: Start"
Jan 23 23:54:49.798136 kubelet[2841]: I0123 23:54:49.798127 2841 memory_manager.go:187] "Starting memorymanager" policy="None"
Jan 23 23:54:49.798308 kubelet[2841]: I0123 23:54:49.798154 2841 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint"
Jan 23 23:54:49.799579 kubelet[2841]: I0123 23:54:49.799241 2841 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6"
Jan 23 23:54:49.799579 kubelet[2841]: I0123 23:54:49.799280 2841 status_manager.go:244] "Starting to sync pod status with apiserver"
Jan 23 23:54:49.799579 kubelet[2841]: I0123 23:54:49.799324 2841 kubelet.go:2427] "Starting kubelet main sync loop"
Jan 23 23:54:49.801136 kubelet[2841]: E0123 23:54:49.799959 2841 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jan 23 23:54:49.802971 kubelet[2841]: I0123 23:54:49.802925 2841 policy_none.go:47] "Start"
Jan 23 23:54:49.815082 kubelet[2841]: E0123 23:54:49.813159 2841 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.31.20.17:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.20.17:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Jan 23 23:54:49.818597 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Jan 23 23:54:49.838320 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Jan 23 23:54:49.845419 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Jan 23 23:54:49.860732 kubelet[2841]: E0123 23:54:49.860402 2841 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Jan 23 23:54:49.860732 kubelet[2841]: I0123 23:54:49.860727 2841 eviction_manager.go:189] "Eviction manager: starting control loop"
Jan 23 23:54:49.860732 kubelet[2841]: I0123 23:54:49.860747 2841 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jan 23 23:54:49.860732 kubelet[2841]: I0123 23:54:49.861154 2841 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jan 23 23:54:49.864811 kubelet[2841]: E0123 23:54:49.864418 2841 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Jan 23 23:54:49.864811 kubelet[2841]: E0123 23:54:49.864561 2841 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-20-17\" not found"
Jan 23 23:54:49.922501 systemd[1]: Created slice kubepods-burstable-pod5af65be8808eee169d6489a3ec6f3365.slice - libcontainer container kubepods-burstable-pod5af65be8808eee169d6489a3ec6f3365.slice.
Jan 23 23:54:49.940878 kubelet[2841]: E0123 23:54:49.938549 2841 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-20-17\" not found" node="ip-172-31-20-17"
Jan 23 23:54:49.947547 systemd[1]: Created slice kubepods-burstable-pod240bdf0b20a4cd2c9b28ba752e1b762a.slice - libcontainer container kubepods-burstable-pod240bdf0b20a4cd2c9b28ba752e1b762a.slice.
Jan 23 23:54:49.959809 kubelet[2841]: E0123 23:54:49.959416 2841 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-20-17\" not found" node="ip-172-31-20-17"
Jan 23 23:54:49.963056 kubelet[2841]: I0123 23:54:49.963016 2841 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5af65be8808eee169d6489a3ec6f3365-kubeconfig\") pod \"kube-scheduler-ip-172-31-20-17\" (UID: \"5af65be8808eee169d6489a3ec6f3365\") " pod="kube-system/kube-scheduler-ip-172-31-20-17"
Jan 23 23:54:49.963391 kubelet[2841]: I0123 23:54:49.963268 2841 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/240bdf0b20a4cd2c9b28ba752e1b762a-ca-certs\") pod \"kube-apiserver-ip-172-31-20-17\" (UID: \"240bdf0b20a4cd2c9b28ba752e1b762a\") " pod="kube-system/kube-apiserver-ip-172-31-20-17"
Jan 23 23:54:49.963391 kubelet[2841]: I0123 23:54:49.963348 2841 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e4a9ba786d5e430251bda774029c35f0-ca-certs\") pod \"kube-controller-manager-ip-172-31-20-17\" (UID: \"e4a9ba786d5e430251bda774029c35f0\") " pod="kube-system/kube-controller-manager-ip-172-31-20-17"
Jan 23 23:54:49.964359 kubelet[2841]: I0123 23:54:49.963843 2841 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e4a9ba786d5e430251bda774029c35f0-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-20-17\" (UID: \"e4a9ba786d5e430251bda774029c35f0\") " pod="kube-system/kube-controller-manager-ip-172-31-20-17"
Jan 23 23:54:49.964677 kubelet[2841]: I0123 23:54:49.964518 2841 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/240bdf0b20a4cd2c9b28ba752e1b762a-k8s-certs\") pod \"kube-apiserver-ip-172-31-20-17\" (UID: \"240bdf0b20a4cd2c9b28ba752e1b762a\") " pod="kube-system/kube-apiserver-ip-172-31-20-17"
Jan 23 23:54:49.964677 kubelet[2841]: I0123 23:54:49.964620 2841 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/240bdf0b20a4cd2c9b28ba752e1b762a-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-20-17\" (UID: \"240bdf0b20a4cd2c9b28ba752e1b762a\") " pod="kube-system/kube-apiserver-ip-172-31-20-17"
Jan 23 23:54:49.964677 kubelet[2841]: I0123 23:54:49.964669 2841 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/e4a9ba786d5e430251bda774029c35f0-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-20-17\" (UID: \"e4a9ba786d5e430251bda774029c35f0\") " pod="kube-system/kube-controller-manager-ip-172-31-20-17"
Jan 23 23:54:49.964923 kubelet[2841]: I0123 23:54:49.964729 2841 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e4a9ba786d5e430251bda774029c35f0-k8s-certs\") pod \"kube-controller-manager-ip-172-31-20-17\" (UID: \"e4a9ba786d5e430251bda774029c35f0\") " pod="kube-system/kube-controller-manager-ip-172-31-20-17"
Jan 23 23:54:49.964923 kubelet[2841]: I0123 23:54:49.964807 2841 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e4a9ba786d5e430251bda774029c35f0-kubeconfig\") pod \"kube-controller-manager-ip-172-31-20-17\" (UID: \"e4a9ba786d5e430251bda774029c35f0\") " pod="kube-system/kube-controller-manager-ip-172-31-20-17"
Jan 23 23:54:49.965733 kubelet[2841]: I0123 23:54:49.965424 2841 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-20-17"
Jan 23 23:54:49.967161 kubelet[2841]: E0123 23:54:49.967042 2841 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.20.17:6443/api/v1/nodes\": dial tcp 172.31.20.17:6443: connect: connection refused" node="ip-172-31-20-17"
Jan 23 23:54:49.968859 kubelet[2841]: E0123 23:54:49.967471 2841 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.20.17:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-20-17?timeout=10s\": dial tcp 172.31.20.17:6443: connect: connection refused" interval="400ms"
Jan 23 23:54:49.968544 systemd[1]: Created slice kubepods-burstable-pode4a9ba786d5e430251bda774029c35f0.slice - libcontainer container kubepods-burstable-pode4a9ba786d5e430251bda774029c35f0.slice.
Jan 23 23:54:49.972195 kubelet[2841]: E0123 23:54:49.972144 2841 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-20-17\" not found" node="ip-172-31-20-17"
Jan 23 23:54:50.169106 kubelet[2841]: I0123 23:54:50.169068 2841 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-20-17"
Jan 23 23:54:50.170199 kubelet[2841]: E0123 23:54:50.170154 2841 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.20.17:6443/api/v1/nodes\": dial tcp 172.31.20.17:6443: connect: connection refused" node="ip-172-31-20-17"
Jan 23 23:54:50.245447 containerd[2035]: time="2026-01-23T23:54:50.245315163Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-20-17,Uid:5af65be8808eee169d6489a3ec6f3365,Namespace:kube-system,Attempt:0,}"
Jan 23 23:54:50.265456 containerd[2035]: time="2026-01-23T23:54:50.265371591Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-20-17,Uid:240bdf0b20a4cd2c9b28ba752e1b762a,Namespace:kube-system,Attempt:0,}"
Jan 23 23:54:50.279619 containerd[2035]: time="2026-01-23T23:54:50.279229804Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-20-17,Uid:e4a9ba786d5e430251bda774029c35f0,Namespace:kube-system,Attempt:0,}"
Jan 23 23:54:50.368112 kubelet[2841]: E0123 23:54:50.368051 2841 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.20.17:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-20-17?timeout=10s\": dial tcp 172.31.20.17:6443: connect: connection refused" interval="800ms"
Jan 23 23:54:50.574411 kubelet[2841]: I0123 23:54:50.573507 2841 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-20-17"
Jan 23 23:54:50.574818 kubelet[2841]: E0123 23:54:50.574710 2841 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.20.17:6443/api/v1/nodes\": dial tcp 172.31.20.17:6443: connect: connection refused" node="ip-172-31-20-17"
Jan 23 23:54:50.720735 kubelet[2841]: E0123 23:54:50.720655 2841 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.31.20.17:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-20-17&limit=500&resourceVersion=0\": dial tcp 172.31.20.17:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Jan 23 23:54:50.756730 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount210561148.mount: Deactivated successfully.
Jan 23 23:54:50.773412 containerd[2035]: time="2026-01-23T23:54:50.773327946Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 23 23:54:50.775526 containerd[2035]: time="2026-01-23T23:54:50.775449018Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 23 23:54:50.777800 containerd[2035]: time="2026-01-23T23:54:50.777505782Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173"
Jan 23 23:54:50.779542 containerd[2035]: time="2026-01-23T23:54:50.779488578Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Jan 23 23:54:50.781794 containerd[2035]: time="2026-01-23T23:54:50.781691766Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 23 23:54:50.784799 containerd[2035]: time="2026-01-23T23:54:50.784664046Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 23 23:54:50.786453 containerd[2035]: time="2026-01-23T23:54:50.786337686Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Jan 23 23:54:50.791035 containerd[2035]: time="2026-01-23T23:54:50.790941018Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 23 23:54:50.795227 containerd[2035]: time="2026-01-23T23:54:50.794924718Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 529.435515ms"
Jan 23 23:54:50.799215 containerd[2035]: time="2026-01-23T23:54:50.799133358Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 552.959739ms"
Jan 23 23:54:50.814806 containerd[2035]: time="2026-01-23T23:54:50.814492374Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 535.137974ms"
Jan 23 23:54:50.988165 containerd[2035]: time="2026-01-23T23:54:50.987991639Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 23 23:54:50.988433 containerd[2035]: time="2026-01-23T23:54:50.988286623Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 23 23:54:50.988505 containerd[2035]: time="2026-01-23T23:54:50.988384255Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 23 23:54:50.989904 containerd[2035]: time="2026-01-23T23:54:50.989602747Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 23 23:54:51.000245 containerd[2035]: time="2026-01-23T23:54:50.998604283Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 23 23:54:51.000245 containerd[2035]: time="2026-01-23T23:54:50.998719339Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 23 23:54:51.000245 containerd[2035]: time="2026-01-23T23:54:50.998793307Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 23 23:54:51.000245 containerd[2035]: time="2026-01-23T23:54:50.999804547Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 23 23:54:51.004320 containerd[2035]: time="2026-01-23T23:54:51.003386991Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 23 23:54:51.004320 containerd[2035]: time="2026-01-23T23:54:51.003810375Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 23 23:54:51.004320 containerd[2035]: time="2026-01-23T23:54:51.003899391Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 23 23:54:51.005272 containerd[2035]: time="2026-01-23T23:54:51.004949091Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 23 23:54:51.036100 kubelet[2841]: E0123 23:54:51.035384 2841 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.31.20.17:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.20.17:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Jan 23 23:54:51.046790 kubelet[2841]: E0123 23:54:51.045690 2841 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.31.20.17:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.20.17:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Jan 23 23:54:51.051123 systemd[1]: Started cri-containerd-068d607168c6f71e429158530f45515c7b6ce95ddd649b609399cecd0954183a.scope - libcontainer container 068d607168c6f71e429158530f45515c7b6ce95ddd649b609399cecd0954183a.
Jan 23 23:54:51.055367 systemd[1]: Started cri-containerd-a139ffdc5c1cdda0ddb38e7b3c06777da5de1366b327a7dd2c005bcbf7141862.scope - libcontainer container a139ffdc5c1cdda0ddb38e7b3c06777da5de1366b327a7dd2c005bcbf7141862.
Jan 23 23:54:51.071203 systemd[1]: Started cri-containerd-a69287d3df9b887f5d2f62278f9b6ae5e3c0e2361fece950f76121b8a5cd1d95.scope - libcontainer container a69287d3df9b887f5d2f62278f9b6ae5e3c0e2361fece950f76121b8a5cd1d95.
Jan 23 23:54:51.169984 kubelet[2841]: E0123 23:54:51.169894 2841 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.20.17:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-20-17?timeout=10s\": dial tcp 172.31.20.17:6443: connect: connection refused" interval="1.6s"
Jan 23 23:54:51.194155 containerd[2035]: time="2026-01-23T23:54:51.194083816Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-20-17,Uid:240bdf0b20a4cd2c9b28ba752e1b762a,Namespace:kube-system,Attempt:0,} returns sandbox id \"a139ffdc5c1cdda0ddb38e7b3c06777da5de1366b327a7dd2c005bcbf7141862\""
Jan 23 23:54:51.211499 containerd[2035]: time="2026-01-23T23:54:51.211167348Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-20-17,Uid:5af65be8808eee169d6489a3ec6f3365,Namespace:kube-system,Attempt:0,} returns sandbox id \"a69287d3df9b887f5d2f62278f9b6ae5e3c0e2361fece950f76121b8a5cd1d95\""
Jan 23 23:54:51.214683 containerd[2035]: time="2026-01-23T23:54:51.214426348Z" level=info msg="CreateContainer within sandbox \"a139ffdc5c1cdda0ddb38e7b3c06777da5de1366b327a7dd2c005bcbf7141862\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Jan 23 23:54:51.221684 containerd[2035]: time="2026-01-23T23:54:51.221620672Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-20-17,Uid:e4a9ba786d5e430251bda774029c35f0,Namespace:kube-system,Attempt:0,} returns sandbox id \"068d607168c6f71e429158530f45515c7b6ce95ddd649b609399cecd0954183a\""
Jan 23 23:54:51.224784 containerd[2035]: time="2026-01-23T23:54:51.224595328Z" level=info msg="CreateContainer within sandbox \"a69287d3df9b887f5d2f62278f9b6ae5e3c0e2361fece950f76121b8a5cd1d95\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Jan 23 23:54:51.234196 containerd[2035]: time="2026-01-23T23:54:51.233895928Z" level=info msg="CreateContainer within sandbox \"068d607168c6f71e429158530f45515c7b6ce95ddd649b609399cecd0954183a\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Jan 23 23:54:51.240516 kubelet[2841]: E0123 23:54:51.239385 2841 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.31.20.17:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.20.17:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Jan 23 23:54:51.258042 containerd[2035]: time="2026-01-23T23:54:51.257968456Z" level=info msg="CreateContainer within sandbox \"a139ffdc5c1cdda0ddb38e7b3c06777da5de1366b327a7dd2c005bcbf7141862\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"77ba72f57ce40fe3e2eeeffa4eaf5210ebf1b8704ad85f1aaf9c3d6e3484be5c\""
Jan 23 23:54:51.260907 containerd[2035]: time="2026-01-23T23:54:51.260044288Z" level=info msg="StartContainer for \"77ba72f57ce40fe3e2eeeffa4eaf5210ebf1b8704ad85f1aaf9c3d6e3484be5c\""
Jan 23 23:54:51.270967 containerd[2035]: time="2026-01-23T23:54:51.270904180Z" level=info msg="CreateContainer within sandbox \"a69287d3df9b887f5d2f62278f9b6ae5e3c0e2361fece950f76121b8a5cd1d95\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"7bb70e405829d3afd90421867b7870b3c90afe89751ee2dcb58abef092959961\""
Jan 23 23:54:51.272144 containerd[2035]: time="2026-01-23T23:54:51.272101672Z" level=info msg="StartContainer for \"7bb70e405829d3afd90421867b7870b3c90afe89751ee2dcb58abef092959961\""
Jan 23 23:54:51.279109 containerd[2035]: time="2026-01-23T23:54:51.279029380Z" level=info msg="CreateContainer within sandbox \"068d607168c6f71e429158530f45515c7b6ce95ddd649b609399cecd0954183a\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"ac721f7eee77158770bf2a43acfef2d8d9e758d8b3b6de1ebb7c29de78c4613c\""
Jan 23 23:54:51.280845 containerd[2035]: time="2026-01-23T23:54:51.280750192Z" level=info msg="StartContainer for \"ac721f7eee77158770bf2a43acfef2d8d9e758d8b3b6de1ebb7c29de78c4613c\""
Jan 23 23:54:51.340150 systemd[1]: Started cri-containerd-77ba72f57ce40fe3e2eeeffa4eaf5210ebf1b8704ad85f1aaf9c3d6e3484be5c.scope - libcontainer container 77ba72f57ce40fe3e2eeeffa4eaf5210ebf1b8704ad85f1aaf9c3d6e3484be5c.
Jan 23 23:54:51.355393 systemd[1]: Started cri-containerd-7bb70e405829d3afd90421867b7870b3c90afe89751ee2dcb58abef092959961.scope - libcontainer container 7bb70e405829d3afd90421867b7870b3c90afe89751ee2dcb58abef092959961.
Jan 23 23:54:51.369139 systemd[1]: Started cri-containerd-ac721f7eee77158770bf2a43acfef2d8d9e758d8b3b6de1ebb7c29de78c4613c.scope - libcontainer container ac721f7eee77158770bf2a43acfef2d8d9e758d8b3b6de1ebb7c29de78c4613c.
Jan 23 23:54:51.379240 kubelet[2841]: I0123 23:54:51.379160 2841 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-20-17"
Jan 23 23:54:51.381228 kubelet[2841]: E0123 23:54:51.381155 2841 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.20.17:6443/api/v1/nodes\": dial tcp 172.31.20.17:6443: connect: connection refused" node="ip-172-31-20-17"
Jan 23 23:54:51.460586 containerd[2035]: time="2026-01-23T23:54:51.460133921Z" level=info msg="StartContainer for \"77ba72f57ce40fe3e2eeeffa4eaf5210ebf1b8704ad85f1aaf9c3d6e3484be5c\" returns successfully"
Jan 23 23:54:51.534505 containerd[2035]: time="2026-01-23T23:54:51.534054642Z" level=info msg="StartContainer for \"ac721f7eee77158770bf2a43acfef2d8d9e758d8b3b6de1ebb7c29de78c4613c\" returns successfully"
Jan 23 23:54:51.546507 containerd[2035]: time="2026-01-23T23:54:51.546409494Z" level=info msg="StartContainer for \"7bb70e405829d3afd90421867b7870b3c90afe89751ee2dcb58abef092959961\" returns successfully"
Jan 23 23:54:51.840617 kubelet[2841]: E0123 23:54:51.840161 2841 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-20-17\" not found" node="ip-172-31-20-17"
Jan 23 23:54:51.847959 kubelet[2841]: E0123 23:54:51.846986 2841 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-20-17\" not found" node="ip-172-31-20-17"
Jan 23 23:54:51.850786 kubelet[2841]: E0123 23:54:51.850304 2841 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-20-17\" not found" node="ip-172-31-20-17"
Jan 23 23:54:52.853673 kubelet[2841]: E0123 23:54:52.853600 2841 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-20-17\" not found" node="ip-172-31-20-17"
Jan 23 23:54:52.857822 kubelet[2841]: E0123 23:54:52.855810 2841 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-20-17\" not found" node="ip-172-31-20-17"
Jan 23 23:54:52.984796 kubelet[2841]: I0123 23:54:52.984290 2841 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-20-17"
Jan 23 23:54:53.857619 kubelet[2841]: E0123 23:54:53.857545 2841 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-20-17\" not found" node="ip-172-31-20-17"
Jan 23 23:54:54.557960 kubelet[2841]: E0123 23:54:54.557892 2841 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-20-17\" not found" node="ip-172-31-20-17"
Jan 23 23:54:54.731009 kubelet[2841]: I0123 23:54:54.730940 2841 apiserver.go:52] "Watching apiserver"
Jan 23 23:54:54.735607 kubelet[2841]: I0123 23:54:54.735522 2841 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-20-17"
Jan 23 23:54:54.735797 kubelet[2841]: E0123 23:54:54.735609 2841 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"ip-172-31-20-17\": node \"ip-172-31-20-17\" not found"
Jan 23 23:54:54.761702 kubelet[2841]: I0123 23:54:54.761634 2841 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Jan 23 23:54:54.764785 kubelet[2841]: I0123 23:54:54.762738 2841 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-20-17"
Jan 23 23:54:54.833707 kubelet[2841]: E0123 23:54:54.832975 2841 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-20-17\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ip-172-31-20-17"
Jan 23 23:54:54.833707 kubelet[2841]: I0123 23:54:54.833071 2841 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-20-17"
Jan 23 23:54:54.840283 kubelet[2841]: E0123 23:54:54.839857 2841 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-20-17\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ip-172-31-20-17"
Jan 23 23:54:54.840283 kubelet[2841]: I0123 23:54:54.839915 2841 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-20-17"
Jan 23 23:54:54.859163 kubelet[2841]: E0123 23:54:54.859090 2841 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ip-172-31-20-17\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ip-172-31-20-17"
Jan 23 23:54:56.742861 kubelet[2841]: I0123 23:54:56.742806 2841 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-20-17"
Jan 23 23:54:57.097952 update_engine[1999]: I20260123 23:54:57.096741 1999 update_attempter.cc:509] Updating boot flags...
Jan 23 23:54:57.219980 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 34 scanned by (udev-worker) (3135)
Jan 23 23:54:57.692178 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 34 scanned by (udev-worker) (3137)
Jan 23 23:54:57.778227 systemd[1]: Reloading requested from client PID 3276 ('systemctl') (unit session-7.scope)...
Jan 23 23:54:57.778705 systemd[1]: Reloading...
Jan 23 23:54:58.052808 zram_generator::config[3361]: No configuration found.
Jan 23 23:54:58.257461 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 23 23:54:58.496248 systemd[1]: Reloading finished in 716 ms.
Jan 23 23:54:58.670558 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 23 23:54:58.700872 systemd[1]: kubelet.service: Deactivated successfully.
Jan 23 23:54:58.702080 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 23 23:54:58.702183 systemd[1]: kubelet.service: Consumed 2.597s CPU time, 122.0M memory peak, 0B memory swap peak.
Jan 23 23:54:58.714322 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 23 23:54:59.138185 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 23 23:54:59.157565 (kubelet)[3406]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jan 23 23:54:59.310068 kubelet[3406]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Jan 23 23:54:59.310068 kubelet[3406]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 23 23:54:59.310617 kubelet[3406]: I0123 23:54:59.310065 3406 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jan 23 23:54:59.349840 kubelet[3406]: I0123 23:54:59.348569 3406 server.go:529] "Kubelet version" kubeletVersion="v1.34.1"
Jan 23 23:54:59.349840 kubelet[3406]: I0123 23:54:59.348625 3406 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jan 23 23:54:59.349840 kubelet[3406]: I0123 23:54:59.348701 3406 watchdog_linux.go:95] "Systemd watchdog is not enabled"
Jan 23 23:54:59.349840 kubelet[3406]: I0123 23:54:59.348718 3406 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Jan 23 23:54:59.351171 kubelet[3406]: I0123 23:54:59.351120 3406 server.go:956] "Client rotation is on, will bootstrap in background"
Jan 23 23:54:59.357259 kubelet[3406]: I0123 23:54:59.357188 3406 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem"
Jan 23 23:54:59.366480 kubelet[3406]: I0123 23:54:59.366410 3406 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jan 23 23:54:59.381160 kubelet[3406]: E0123 23:54:59.379606 3406 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Jan 23 23:54:59.381160 kubelet[3406]: I0123 23:54:59.379717 3406 server.go:1400] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config."
Jan 23 23:54:59.383432 sudo[3421]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin
Jan 23 23:54:59.384363 sudo[3421]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0)
Jan 23 23:54:59.397721 kubelet[3406]: I0123 23:54:59.396299 3406 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /"
Jan 23 23:54:59.402386 kubelet[3406]: I0123 23:54:59.398999 3406 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jan 23 23:54:59.402386 kubelet[3406]: I0123 23:54:59.399086 3406 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-20-17","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jan 23 23:54:59.402386 kubelet[3406]: I0123 23:54:59.399366 3406 topology_manager.go:138] "Creating topology manager with none policy"
Jan 23 23:54:59.402386 kubelet[3406]: I0123 23:54:59.399386 3406 container_manager_linux.go:306] "Creating device plugin manager"
Jan 23 23:54:59.402849 kubelet[3406]: I0123 23:54:59.399442 3406 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager"
Jan 23 23:54:59.404792 kubelet[3406]: I0123 23:54:59.404631 3406 state_mem.go:36] "Initialized new in-memory state store"
Jan 23 23:54:59.409840 kubelet[3406]: I0123 23:54:59.405851 3406 kubelet.go:475] "Attempting to sync node with API server"
Jan 23 23:54:59.410135 kubelet[3406]: I0123 23:54:59.410082 3406 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests"
Jan 23 23:54:59.410376 kubelet[3406]: I0123 23:54:59.410330 3406 kubelet.go:387] "Adding apiserver pod source"
Jan 23 23:54:59.411875 kubelet[3406]: I0123 23:54:59.411836 3406 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jan 23 23:54:59.422803 kubelet[3406]: I0123 23:54:59.421699 3406 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Jan 23 23:54:59.427694 kubelet[3406]: I0123 23:54:59.427628 3406 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Jan 23 23:54:59.429354 kubelet[3406]: I0123 23:54:59.428204 3406 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled"
Jan 23 23:54:59.454084 kubelet[3406]: I0123 23:54:59.453400 3406 server.go:1262] "Started kubelet"
Jan 23 23:54:59.458413 kubelet[3406]: I0123 23:54:59.458339 3406 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Jan 23 23:54:59.460467 kubelet[3406]: I0123 23:54:59.454460 3406 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jan 23 23:54:59.460467 kubelet[3406]: I0123 23:54:59.460388 3406 server_v1.go:49] "podresources" method="list" useActivePods=true
Jan 23 23:54:59.461644 kubelet[3406]: I0123 23:54:59.461376 3406 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jan 23 23:54:59.498788 kubelet[3406]: I0123 23:54:59.496430 3406 server.go:310] "Adding debug handlers to kubelet server"
Jan 23 23:54:59.517617 kubelet[3406]: I0123 23:54:59.517576 3406 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jan 23 23:54:59.528228 kubelet[3406]: I0123 23:54:59.528168 3406 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Jan 23 23:54:59.553118 kubelet[3406]: I0123 23:54:59.531899 3406 volume_manager.go:313] "Starting Kubelet Volume Manager"
Jan 23 23:54:59.554629 kubelet[3406]: I0123 23:54:59.531926 3406 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Jan 23 23:54:59.555256 kubelet[3406]: E0123 23:54:59.532151 3406 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ip-172-31-20-17\" not found"
Jan 23 23:54:59.556592 kubelet[3406]: I0123 23:54:59.556270 3406 reconciler.go:29] "Reconciler: start to sync state"
Jan 23 23:54:59.576623 kubelet[3406]: I0123 23:54:59.575745 3406 factory.go:223] Registration of the systemd container factory successfully
Jan 23 23:54:59.576623 kubelet[3406]: I0123 23:54:59.575941 3406 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jan 23 23:54:59.582888 kubelet[3406]: E0123 23:54:59.582660 3406 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jan 23 23:54:59.588706 kubelet[3406]: I0123 23:54:59.588567 3406 factory.go:223] Registration of the containerd container factory successfully
Jan 23 23:54:59.664384 kubelet[3406]: I0123 23:54:59.663651 3406 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6"
Jan 23 23:54:59.714563 kubelet[3406]: I0123 23:54:59.714431 3406 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4"
Jan 23 23:54:59.715393 kubelet[3406]: I0123 23:54:59.714503 3406 status_manager.go:244] "Starting to sync pod status with apiserver"
Jan 23 23:54:59.717054 kubelet[3406]: I0123 23:54:59.715688 3406 kubelet.go:2427] "Starting kubelet main sync loop"
Jan 23 23:54:59.717054 kubelet[3406]: E0123 23:54:59.715811 3406 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jan 23 23:54:59.813589 kubelet[3406]: I0123 23:54:59.813552 3406 cpu_manager.go:221] "Starting CPU manager" policy="none"
Jan 23 23:54:59.814076 kubelet[3406]: I0123 23:54:59.813727 3406 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Jan 23 23:54:59.814076 kubelet[3406]: I0123 23:54:59.814002 3406 state_mem.go:36] "Initialized new in-memory state store"
Jan 23 23:54:59.816804 kubelet[3406]: I0123 23:54:59.816109 3406 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Jan 23 23:54:59.816804 kubelet[3406]: I0123 23:54:59.816150 3406 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Jan 23 23:54:59.816804 kubelet[3406]: I0123 23:54:59.816183 3406 policy_none.go:49] "None policy: Start"
Jan 23 23:54:59.816804 kubelet[3406]: I0123 23:54:59.816204 3406 memory_manager.go:187] "Starting memorymanager" policy="None"
Jan 23 23:54:59.816804 kubelet[3406]: I0123 23:54:59.816227 3406 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint"
Jan 23 23:54:59.816804 kubelet[3406]: I0123 23:54:59.816467 3406 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint"
Jan 23 23:54:59.816804 kubelet[3406]: I0123 23:54:59.816485 3406 policy_none.go:47] "Start"
Jan 23 23:54:59.817871 kubelet[3406]: E0123 23:54:59.817689 3406 kubelet.go:2451] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Jan 23 23:54:59.832332 kubelet[3406]: E0123 23:54:59.831820 3406 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Jan 23 23:54:59.837218 kubelet[3406]: I0123 23:54:59.836000 3406 eviction_manager.go:189] "Eviction manager: starting control loop"
Jan 23 23:54:59.837218 kubelet[3406]: I0123 23:54:59.836279 3406 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jan 23 23:54:59.839913 kubelet[3406]: I0123 23:54:59.839843 3406 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jan 23 23:54:59.849670 kubelet[3406]: E0123 23:54:59.849214 3406 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Jan 23 23:54:59.963068 kubelet[3406]: I0123 23:54:59.962924 3406 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-20-17"
Jan 23 23:54:59.981485 kubelet[3406]: I0123 23:54:59.980549 3406 kubelet_node_status.go:124] "Node was previously registered" node="ip-172-31-20-17"
Jan 23 23:54:59.981485 kubelet[3406]: I0123 23:54:59.980851 3406 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-20-17"
Jan 23 23:55:00.025249 kubelet[3406]: I0123 23:55:00.021811 3406 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-20-17"
Jan 23 23:55:00.025249 kubelet[3406]: I0123 23:55:00.024712 3406 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-20-17"
Jan 23 23:55:00.025249 kubelet[3406]: I0123 23:55:00.024939 3406 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-20-17"
Jan 23 23:55:00.057215 kubelet[3406]: E0123 23:55:00.057170 3406 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-20-17\" already exists" pod="kube-system/kube-scheduler-ip-172-31-20-17"
Jan 23 23:55:00.065067 kubelet[3406]: I0123 23:55:00.064488 3406 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5af65be8808eee169d6489a3ec6f3365-kubeconfig\") pod \"kube-scheduler-ip-172-31-20-17\" (UID: \"5af65be8808eee169d6489a3ec6f3365\") " pod="kube-system/kube-scheduler-ip-172-31-20-17"
Jan 23 23:55:00.065067 kubelet[3406]: I0123 23:55:00.064561 3406 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/240bdf0b20a4cd2c9b28ba752e1b762a-ca-certs\") pod \"kube-apiserver-ip-172-31-20-17\" (UID: \"240bdf0b20a4cd2c9b28ba752e1b762a\") " pod="kube-system/kube-apiserver-ip-172-31-20-17"
Jan 23 23:55:00.065067 kubelet[3406]: I0123 23:55:00.064603 3406 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/240bdf0b20a4cd2c9b28ba752e1b762a-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-20-17\" (UID: \"240bdf0b20a4cd2c9b28ba752e1b762a\") " pod="kube-system/kube-apiserver-ip-172-31-20-17"
Jan 23 23:55:00.065067 kubelet[3406]: I0123 23:55:00.064642 3406 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/e4a9ba786d5e430251bda774029c35f0-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-20-17\" (UID: \"e4a9ba786d5e430251bda774029c35f0\") " pod="kube-system/kube-controller-manager-ip-172-31-20-17"
Jan 23 23:55:00.065067 kubelet[3406]: I0123 23:55:00.064679 3406 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/240bdf0b20a4cd2c9b28ba752e1b762a-k8s-certs\") pod \"kube-apiserver-ip-172-31-20-17\" (UID: \"240bdf0b20a4cd2c9b28ba752e1b762a\") " pod="kube-system/kube-apiserver-ip-172-31-20-17"
Jan 23 23:55:00.065418 kubelet[3406]: I0123 23:55:00.064712 3406 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e4a9ba786d5e430251bda774029c35f0-ca-certs\") pod \"kube-controller-manager-ip-172-31-20-17\" (UID: \"e4a9ba786d5e430251bda774029c35f0\") " pod="kube-system/kube-controller-manager-ip-172-31-20-17"
Jan 23 23:55:00.065418 kubelet[3406]: I0123 23:55:00.064746 3406 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e4a9ba786d5e430251bda774029c35f0-k8s-certs\") pod \"kube-controller-manager-ip-172-31-20-17\" (UID: \"e4a9ba786d5e430251bda774029c35f0\") " pod="kube-system/kube-controller-manager-ip-172-31-20-17"
Jan 23 23:55:00.065418 kubelet[3406]: I0123 23:55:00.064813 3406 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e4a9ba786d5e430251bda774029c35f0-kubeconfig\") pod \"kube-controller-manager-ip-172-31-20-17\" (UID: \"e4a9ba786d5e430251bda774029c35f0\") " pod="kube-system/kube-controller-manager-ip-172-31-20-17"
Jan 23 23:55:00.065418 kubelet[3406]: I0123 23:55:00.064850 3406 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e4a9ba786d5e430251bda774029c35f0-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-20-17\" (UID: \"e4a9ba786d5e430251bda774029c35f0\") " pod="kube-system/kube-controller-manager-ip-172-31-20-17"
Jan 23 23:55:00.415896 kubelet[3406]: I0123 23:55:00.415846 3406 apiserver.go:52] "Watching apiserver"
Jan 23 23:55:00.430314 sudo[3421]: pam_unix(sudo:session): session closed for user root
Jan 23 23:55:00.456097 kubelet[3406]: I0123 23:55:00.456016 3406 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Jan 23 23:55:00.779207 kubelet[3406]: I0123 23:55:00.778579 3406 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-20-17"
Jan 23 23:55:00.793187 kubelet[3406]: I0123 23:55:00.792903 3406 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-20-17" podStartSLOduration=0.792831772 podStartE2EDuration="792.831772ms" podCreationTimestamp="2026-01-23 23:55:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 23:55:00.79054762 +0000 UTC m=+1.623419241" watchObservedRunningTime="2026-01-23 23:55:00.792831772 +0000 UTC m=+1.625703393"
Jan 23 23:55:00.796524 kubelet[3406]: E0123 23:55:00.796484 3406 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ip-172-31-20-17\" already exists" pod="kube-system/kube-controller-manager-ip-172-31-20-17"
Jan 23 23:55:00.816103 kubelet[3406]: I0123 23:55:00.815897 3406 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-20-17" podStartSLOduration=0.815875468 podStartE2EDuration="815.875468ms" podCreationTimestamp="2026-01-23 23:55:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 23:55:00.815661952 +0000 UTC m=+1.648533585" watchObservedRunningTime="2026-01-23 23:55:00.815875468 +0000 UTC m=+1.648747089"
Jan 23 23:55:00.904826 kubelet[3406]: I0123 23:55:00.903076 3406 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-20-17" podStartSLOduration=4.903053476 podStartE2EDuration="4.903053476s" podCreationTimestamp="2026-01-23 23:54:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 23:55:00.852280432 +0000 UTC m=+1.685152077" watchObservedRunningTime="2026-01-23 23:55:00.903053476 +0000 UTC m=+1.735925085"
Jan 23 23:55:03.074569 sudo[2338]: pam_unix(sudo:session): session closed for user root
Jan 23 23:55:03.153097 sshd[2335]: pam_unix(sshd:session): session closed for user core
Jan 23 23:55:03.161293 systemd[1]: sshd@6-172.31.20.17:22-4.153.228.146:54718.service: Deactivated successfully.
Jan 23 23:55:03.166830 systemd[1]: session-7.scope: Deactivated successfully.
Jan 23 23:55:03.168105 systemd[1]: session-7.scope: Consumed 12.427s CPU time, 153.0M memory peak, 0B memory swap peak.
Jan 23 23:55:03.169576 systemd-logind[1998]: Session 7 logged out. Waiting for processes to exit.
Jan 23 23:55:03.175785 systemd-logind[1998]: Removed session 7.
Jan 23 23:55:04.189345 kubelet[3406]: I0123 23:55:04.188744 3406 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Jan 23 23:55:04.191799 containerd[2035]: time="2026-01-23T23:55:04.191618513Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Jan 23 23:55:04.194325 kubelet[3406]: I0123 23:55:04.194243 3406 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Jan 23 23:55:04.955304 systemd[1]: Created slice kubepods-besteffort-podfd3ebb0c_8b0a_4824_91ba_414ec40cb63b.slice - libcontainer container kubepods-besteffort-podfd3ebb0c_8b0a_4824_91ba_414ec40cb63b.slice.
Jan 23 23:55:04.997800 kubelet[3406]: I0123 23:55:04.997093 3406 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/41554546-9c9d-4a11-9254-80ecf5595a66-cni-path\") pod \"cilium-q97j2\" (UID: \"41554546-9c9d-4a11-9254-80ecf5595a66\") " pod="kube-system/cilium-q97j2"
Jan 23 23:55:04.997800 kubelet[3406]: I0123 23:55:04.997175 3406 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/41554546-9c9d-4a11-9254-80ecf5595a66-host-proc-sys-kernel\") pod \"cilium-q97j2\" (UID: \"41554546-9c9d-4a11-9254-80ecf5595a66\") " pod="kube-system/cilium-q97j2"
Jan 23 23:55:04.997800 kubelet[3406]: I0123 23:55:04.997221 3406 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/fd3ebb0c-8b0a-4824-91ba-414ec40cb63b-kube-proxy\") pod \"kube-proxy-zt29p\" (UID: \"fd3ebb0c-8b0a-4824-91ba-414ec40cb63b\") " pod="kube-system/kube-proxy-zt29p"
Jan 23 23:55:04.997800 kubelet[3406]: I0123 23:55:04.997258 3406 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/41554546-9c9d-4a11-9254-80ecf5595a66-bpf-maps\") pod \"cilium-q97j2\" (UID: \"41554546-9c9d-4a11-9254-80ecf5595a66\") " pod="kube-system/cilium-q97j2"
Jan 23 23:55:04.997800 kubelet[3406]: I0123 23:55:04.997299 3406 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/41554546-9c9d-4a11-9254-80ecf5595a66-cilium-cgroup\") pod \"cilium-q97j2\" (UID: \"41554546-9c9d-4a11-9254-80ecf5595a66\") " pod="kube-system/cilium-q97j2"
Jan 23 23:55:04.997800 kubelet[3406]: I0123 23:55:04.997337 3406 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/41554546-9c9d-4a11-9254-80ecf5595a66-lib-modules\") pod \"cilium-q97j2\" (UID: \"41554546-9c9d-4a11-9254-80ecf5595a66\") " pod="kube-system/cilium-q97j2"
Jan 23 23:55:04.998350 kubelet[3406]: I0123 23:55:04.997375 3406 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/41554546-9c9d-4a11-9254-80ecf5595a66-clustermesh-secrets\") pod \"cilium-q97j2\" (UID: \"41554546-9c9d-4a11-9254-80ecf5595a66\") " pod="kube-system/cilium-q97j2"
Jan 23 23:55:04.998350 kubelet[3406]: I0123 23:55:04.997439 3406 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/41554546-9c9d-4a11-9254-80ecf5595a66-cilium-config-path\") pod \"cilium-q97j2\" (UID: \"41554546-9c9d-4a11-9254-80ecf5595a66\") " pod="kube-system/cilium-q97j2"
Jan 23 23:55:04.998350 kubelet[3406]: I0123 23:55:04.997474 3406 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2rwtp\" (UniqueName: \"kubernetes.io/projected/fd3ebb0c-8b0a-4824-91ba-414ec40cb63b-kube-api-access-2rwtp\") pod \"kube-proxy-zt29p\" (UID: \"fd3ebb0c-8b0a-4824-91ba-414ec40cb63b\") " pod="kube-system/kube-proxy-zt29p"
Jan 23 23:55:04.998350 kubelet[3406]: I0123 23:55:04.997509 3406 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/41554546-9c9d-4a11-9254-80ecf5595a66-cilium-run\") pod \"cilium-q97j2\" (UID: \"41554546-9c9d-4a11-9254-80ecf5595a66\") " pod="kube-system/cilium-q97j2"
Jan 23 23:55:04.998350 kubelet[3406]: I0123 23:55:04.997550 3406 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/41554546-9c9d-4a11-9254-80ecf5595a66-hubble-tls\") pod \"cilium-q97j2\" (UID: \"41554546-9c9d-4a11-9254-80ecf5595a66\") " pod="kube-system/cilium-q97j2"
Jan 23 23:55:04.998632 kubelet[3406]: I0123 23:55:04.997583 3406 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fd3ebb0c-8b0a-4824-91ba-414ec40cb63b-xtables-lock\") pod \"kube-proxy-zt29p\" (UID: \"fd3ebb0c-8b0a-4824-91ba-414ec40cb63b\") " pod="kube-system/kube-proxy-zt29p"
Jan 23 23:55:04.998632 kubelet[3406]: I0123 23:55:04.997616 3406 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/41554546-9c9d-4a11-9254-80ecf5595a66-hostproc\") pod \"cilium-q97j2\" (UID: \"41554546-9c9d-4a11-9254-80ecf5595a66\") " pod="kube-system/cilium-q97j2"
Jan 23 23:55:04.998632 kubelet[3406]: I0123 23:55:04.997650 3406 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/41554546-9c9d-4a11-9254-80ecf5595a66-etc-cni-netd\") pod \"cilium-q97j2\" (UID: \"41554546-9c9d-4a11-9254-80ecf5595a66\") " pod="kube-system/cilium-q97j2"
Jan 23 23:55:04.998632 kubelet[3406]: I0123 23:55:04.997685 3406 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/41554546-9c9d-4a11-9254-80ecf5595a66-xtables-lock\") pod \"cilium-q97j2\" (UID: \"41554546-9c9d-4a11-9254-80ecf5595a66\") " pod="kube-system/cilium-q97j2"
Jan 23 23:55:04.998632 kubelet[3406]: I0123 23:55:04.997720 3406 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/41554546-9c9d-4a11-9254-80ecf5595a66-host-proc-sys-net\") pod \"cilium-q97j2\" (UID: \"41554546-9c9d-4a11-9254-80ecf5595a66\") " pod="kube-system/cilium-q97j2"
Jan 23 23:55:04.998632 kubelet[3406]: I0123 23:55:04.997814 3406 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pl4xd\" (UniqueName: \"kubernetes.io/projected/41554546-9c9d-4a11-9254-80ecf5595a66-kube-api-access-pl4xd\") pod \"cilium-q97j2\" (UID: \"41554546-9c9d-4a11-9254-80ecf5595a66\") " pod="kube-system/cilium-q97j2"
Jan 23 23:55:04.999012 kubelet[3406]: I0123 23:55:04.997869 3406 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fd3ebb0c-8b0a-4824-91ba-414ec40cb63b-lib-modules\") pod \"kube-proxy-zt29p\" (UID: \"fd3ebb0c-8b0a-4824-91ba-414ec40cb63b\") " pod="kube-system/kube-proxy-zt29p"
Jan 23 23:55:05.001660 systemd[1]: Created slice kubepods-burstable-pod41554546_9c9d_4a11_9254_80ecf5595a66.slice - libcontainer container kubepods-burstable-pod41554546_9c9d_4a11_9254_80ecf5595a66.slice.
Jan 23 23:55:05.277423 containerd[2035]: time="2026-01-23T23:55:05.277245390Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-zt29p,Uid:fd3ebb0c-8b0a-4824-91ba-414ec40cb63b,Namespace:kube-system,Attempt:0,}"
Jan 23 23:55:05.341259 containerd[2035]: time="2026-01-23T23:55:05.339823722Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-q97j2,Uid:41554546-9c9d-4a11-9254-80ecf5595a66,Namespace:kube-system,Attempt:0,}"
Jan 23 23:55:05.351867 containerd[2035]: time="2026-01-23T23:55:05.351225306Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 23 23:55:05.351867 containerd[2035]: time="2026-01-23T23:55:05.351365982Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 23 23:55:05.351867 containerd[2035]: time="2026-01-23T23:55:05.351409206Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 23 23:55:05.351867 containerd[2035]: time="2026-01-23T23:55:05.351602202Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 23 23:55:05.437877 systemd[1]: Started cri-containerd-cbf674b74489f381aeda33f830a8922bf60e72b67de92bfa25b55e50a6de4e1f.scope - libcontainer container cbf674b74489f381aeda33f830a8922bf60e72b67de92bfa25b55e50a6de4e1f.
Jan 23 23:55:05.450586 kubelet[3406]: E0123 23:55:05.449393 3406 status_manager.go:1018] "Failed to get status for pod" err="pods \"cilium-operator-6f9c7c5859-xpk5l\" is forbidden: User \"system:node:ip-172-31-20-17\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ip-172-31-20-17' and this object" podUID="b81905ac-8b8b-4edb-ba7f-110cb96aec81" pod="kube-system/cilium-operator-6f9c7c5859-xpk5l" Jan 23 23:55:05.463594 systemd[1]: Created slice kubepods-besteffort-podb81905ac_8b8b_4edb_ba7f_110cb96aec81.slice - libcontainer container kubepods-besteffort-podb81905ac_8b8b_4edb_ba7f_110cb96aec81.slice. Jan 23 23:55:05.481350 containerd[2035]: time="2026-01-23T23:55:05.479966191Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 23 23:55:05.482080 containerd[2035]: time="2026-01-23T23:55:05.481901623Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 23 23:55:05.482080 containerd[2035]: time="2026-01-23T23:55:05.481974079Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 23 23:55:05.483101 containerd[2035]: time="2026-01-23T23:55:05.482623135Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 23 23:55:05.506174 kubelet[3406]: I0123 23:55:05.505990 3406 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g6c8p\" (UniqueName: \"kubernetes.io/projected/b81905ac-8b8b-4edb-ba7f-110cb96aec81-kube-api-access-g6c8p\") pod \"cilium-operator-6f9c7c5859-xpk5l\" (UID: \"b81905ac-8b8b-4edb-ba7f-110cb96aec81\") " pod="kube-system/cilium-operator-6f9c7c5859-xpk5l" Jan 23 23:55:05.507259 kubelet[3406]: I0123 23:55:05.507083 3406 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b81905ac-8b8b-4edb-ba7f-110cb96aec81-cilium-config-path\") pod \"cilium-operator-6f9c7c5859-xpk5l\" (UID: \"b81905ac-8b8b-4edb-ba7f-110cb96aec81\") " pod="kube-system/cilium-operator-6f9c7c5859-xpk5l" Jan 23 23:55:05.533136 systemd[1]: Started cri-containerd-5fc9690ef5f15fec46150eb490f9ceb7e17af0f5ebbf1c541e76b4bb45aa3482.scope - libcontainer container 5fc9690ef5f15fec46150eb490f9ceb7e17af0f5ebbf1c541e76b4bb45aa3482. 
Jan 23 23:55:05.559544 containerd[2035]: time="2026-01-23T23:55:05.559472455Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-zt29p,Uid:fd3ebb0c-8b0a-4824-91ba-414ec40cb63b,Namespace:kube-system,Attempt:0,} returns sandbox id \"cbf674b74489f381aeda33f830a8922bf60e72b67de92bfa25b55e50a6de4e1f\"" Jan 23 23:55:05.577235 containerd[2035]: time="2026-01-23T23:55:05.577053524Z" level=info msg="CreateContainer within sandbox \"cbf674b74489f381aeda33f830a8922bf60e72b67de92bfa25b55e50a6de4e1f\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 23 23:55:05.600046 containerd[2035]: time="2026-01-23T23:55:05.599814248Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-q97j2,Uid:41554546-9c9d-4a11-9254-80ecf5595a66,Namespace:kube-system,Attempt:0,} returns sandbox id \"5fc9690ef5f15fec46150eb490f9ceb7e17af0f5ebbf1c541e76b4bb45aa3482\"" Jan 23 23:55:05.605465 containerd[2035]: time="2026-01-23T23:55:05.605111156Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jan 23 23:55:05.632944 containerd[2035]: time="2026-01-23T23:55:05.632859872Z" level=info msg="CreateContainer within sandbox \"cbf674b74489f381aeda33f830a8922bf60e72b67de92bfa25b55e50a6de4e1f\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"34eeb0221cd3a5684b85f7b04ebb750988c0200f30773e9d95bc9517b3a58f4d\"" Jan 23 23:55:05.633810 containerd[2035]: time="2026-01-23T23:55:05.633671264Z" level=info msg="StartContainer for \"34eeb0221cd3a5684b85f7b04ebb750988c0200f30773e9d95bc9517b3a58f4d\"" Jan 23 23:55:05.690892 systemd[1]: Started cri-containerd-34eeb0221cd3a5684b85f7b04ebb750988c0200f30773e9d95bc9517b3a58f4d.scope - libcontainer container 34eeb0221cd3a5684b85f7b04ebb750988c0200f30773e9d95bc9517b3a58f4d. Jan 23 23:55:05.762470 containerd[2035]: time="2026-01-23T23:55:05.762161996Z" level=info msg="StartContainer for \"34eeb0221cd3a5684b85f7b04ebb750988c0200f30773e9d95bc9517b3a58f4d\" returns successfully" Jan 23 23:55:05.775867 containerd[2035]: time="2026-01-23T23:55:05.775266512Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6f9c7c5859-xpk5l,Uid:b81905ac-8b8b-4edb-ba7f-110cb96aec81,Namespace:kube-system,Attempt:0,}" Jan 23 23:55:05.838215 containerd[2035]: time="2026-01-23T23:55:05.837539901Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 23 23:55:05.838215 containerd[2035]: time="2026-01-23T23:55:05.837650589Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 23 23:55:05.838215 containerd[2035]: time="2026-01-23T23:55:05.837688473Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 23 23:55:05.840048 containerd[2035]: time="2026-01-23T23:55:05.839685957Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 23 23:55:05.895127 systemd[1]: Started cri-containerd-3bd13dd45a5118a50ac77e92e6cdd341e90c151dba3b5e76045cf8cac0a3e557.scope - libcontainer container 3bd13dd45a5118a50ac77e92e6cdd341e90c151dba3b5e76045cf8cac0a3e557. 
Jan 23 23:55:05.949247 kubelet[3406]: I0123 23:55:05.949032 3406 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-zt29p" podStartSLOduration=1.948993369 podStartE2EDuration="1.948993369s" podCreationTimestamp="2026-01-23 23:55:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 23:55:05.947601141 +0000 UTC m=+6.780472786" watchObservedRunningTime="2026-01-23 23:55:05.948993369 +0000 UTC m=+6.781865014" Jan 23 23:55:06.040577 containerd[2035]: time="2026-01-23T23:55:06.040223874Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6f9c7c5859-xpk5l,Uid:b81905ac-8b8b-4edb-ba7f-110cb96aec81,Namespace:kube-system,Attempt:0,} returns sandbox id \"3bd13dd45a5118a50ac77e92e6cdd341e90c151dba3b5e76045cf8cac0a3e557\"" Jan 23 23:55:10.951586 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount987305120.mount: Deactivated successfully. Jan 23 23:55:13.929209 containerd[2035]: time="2026-01-23T23:55:13.929120789Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:55:13.931977 containerd[2035]: time="2026-01-23T23:55:13.931482545Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710" Jan 23 23:55:13.931977 containerd[2035]: time="2026-01-23T23:55:13.931894025Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:55:13.936060 containerd[2035]: time="2026-01-23T23:55:13.935844281Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 8.330625785s" Jan 23 23:55:13.936060 containerd[2035]: time="2026-01-23T23:55:13.935916881Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Jan 23 23:55:13.938741 containerd[2035]: time="2026-01-23T23:55:13.938679197Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jan 23 23:55:13.945866 containerd[2035]: time="2026-01-23T23:55:13.945649193Z" level=info msg="CreateContainer within sandbox \"5fc9690ef5f15fec46150eb490f9ceb7e17af0f5ebbf1c541e76b4bb45aa3482\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 23 23:55:13.979797 containerd[2035]: time="2026-01-23T23:55:13.979583273Z" level=info msg="CreateContainer within sandbox \"5fc9690ef5f15fec46150eb490f9ceb7e17af0f5ebbf1c541e76b4bb45aa3482\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"0216e1e7bc44d5c6af75b3755741877c67989c44994fd96cfa02d1e469be7fa1\"" Jan 23 23:55:13.981451 containerd[2035]: time="2026-01-23T23:55:13.980515253Z" level=info msg="StartContainer for 
\"0216e1e7bc44d5c6af75b3755741877c67989c44994fd96cfa02d1e469be7fa1\"" Jan 23 23:55:14.035090 systemd[1]: Started cri-containerd-0216e1e7bc44d5c6af75b3755741877c67989c44994fd96cfa02d1e469be7fa1.scope - libcontainer container 0216e1e7bc44d5c6af75b3755741877c67989c44994fd96cfa02d1e469be7fa1. Jan 23 23:55:14.089620 containerd[2035]: time="2026-01-23T23:55:14.089553014Z" level=info msg="StartContainer for \"0216e1e7bc44d5c6af75b3755741877c67989c44994fd96cfa02d1e469be7fa1\" returns successfully" Jan 23 23:55:14.118229 systemd[1]: cri-containerd-0216e1e7bc44d5c6af75b3755741877c67989c44994fd96cfa02d1e469be7fa1.scope: Deactivated successfully. Jan 23 23:55:14.961220 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0216e1e7bc44d5c6af75b3755741877c67989c44994fd96cfa02d1e469be7fa1-rootfs.mount: Deactivated successfully. Jan 23 23:55:15.102632 containerd[2035]: time="2026-01-23T23:55:15.102272979Z" level=info msg="shim disconnected" id=0216e1e7bc44d5c6af75b3755741877c67989c44994fd96cfa02d1e469be7fa1 namespace=k8s.io Jan 23 23:55:15.102632 containerd[2035]: time="2026-01-23T23:55:15.102354579Z" level=warning msg="cleaning up after shim disconnected" id=0216e1e7bc44d5c6af75b3755741877c67989c44994fd96cfa02d1e469be7fa1 namespace=k8s.io Jan 23 23:55:15.102632 containerd[2035]: time="2026-01-23T23:55:15.102374607Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 23 23:55:15.841358 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3930825846.mount: Deactivated successfully. Jan 23 23:55:15.878508 containerd[2035]: time="2026-01-23T23:55:15.877357663Z" level=info msg="CreateContainer within sandbox \"5fc9690ef5f15fec46150eb490f9ceb7e17af0f5ebbf1c541e76b4bb45aa3482\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 23 23:55:15.934845 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1268057918.mount: Deactivated successfully. Jan 23 23:55:15.944661 containerd[2035]: time="2026-01-23T23:55:15.944555815Z" level=info msg="CreateContainer within sandbox \"5fc9690ef5f15fec46150eb490f9ceb7e17af0f5ebbf1c541e76b4bb45aa3482\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"6852dfc948fab6b818e4b115ebcbffbc447ef58c75a95f271fe604f560e20035\"" Jan 23 23:55:15.947354 containerd[2035]: time="2026-01-23T23:55:15.947213215Z" level=info msg="StartContainer for \"6852dfc948fab6b818e4b115ebcbffbc447ef58c75a95f271fe604f560e20035\"" Jan 23 23:55:16.050298 systemd[1]: Started cri-containerd-6852dfc948fab6b818e4b115ebcbffbc447ef58c75a95f271fe604f560e20035.scope - libcontainer container 6852dfc948fab6b818e4b115ebcbffbc447ef58c75a95f271fe604f560e20035. Jan 23 23:55:16.158598 containerd[2035]: time="2026-01-23T23:55:16.158459296Z" level=info msg="StartContainer for \"6852dfc948fab6b818e4b115ebcbffbc447ef58c75a95f271fe604f560e20035\" returns successfully" Jan 23 23:55:16.198160 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 23 23:55:16.198664 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 23 23:55:16.199737 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jan 23 23:55:16.209413 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 23 23:55:16.213515 systemd[1]: cri-containerd-6852dfc948fab6b818e4b115ebcbffbc447ef58c75a95f271fe604f560e20035.scope: Deactivated successfully. Jan 23 23:55:16.262065 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Jan 23 23:55:16.291245 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6852dfc948fab6b818e4b115ebcbffbc447ef58c75a95f271fe604f560e20035-rootfs.mount: Deactivated successfully. Jan 23 23:55:16.319562 containerd[2035]: time="2026-01-23T23:55:16.319233773Z" level=info msg="shim disconnected" id=6852dfc948fab6b818e4b115ebcbffbc447ef58c75a95f271fe604f560e20035 namespace=k8s.io Jan 23 23:55:16.319562 containerd[2035]: time="2026-01-23T23:55:16.319313153Z" level=warning msg="cleaning up after shim disconnected" id=6852dfc948fab6b818e4b115ebcbffbc447ef58c75a95f271fe604f560e20035 namespace=k8s.io Jan 23 23:55:16.319562 containerd[2035]: time="2026-01-23T23:55:16.319333013Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 23 23:55:16.875876 containerd[2035]: time="2026-01-23T23:55:16.874978736Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:55:16.881658 containerd[2035]: time="2026-01-23T23:55:16.881568800Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306" Jan 23 23:55:16.885828 containerd[2035]: time="2026-01-23T23:55:16.884813024Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:55:16.887736 containerd[2035]: time="2026-01-23T23:55:16.887666216Z" level=info msg="CreateContainer within sandbox \"5fc9690ef5f15fec46150eb490f9ceb7e17af0f5ebbf1c541e76b4bb45aa3482\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jan 23 23:55:16.897821 containerd[2035]: time="2026-01-23T23:55:16.897699692Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 2.958693759s" Jan 23 23:55:16.897821 containerd[2035]: time="2026-01-23T23:55:16.897814496Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Jan 23 23:55:16.915529 containerd[2035]: time="2026-01-23T23:55:16.915032024Z" level=info msg="CreateContainer within sandbox \"3bd13dd45a5118a50ac77e92e6cdd341e90c151dba3b5e76045cf8cac0a3e557\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jan 23 23:55:16.935722 containerd[2035]: time="2026-01-23T23:55:16.934742684Z" level=info msg="CreateContainer within sandbox \"5fc9690ef5f15fec46150eb490f9ceb7e17af0f5ebbf1c541e76b4bb45aa3482\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"54215b4892aaf16414390f43b3e868cc27e8bfe6542a7f3b7cb0d80bb8540771\"" Jan 23 23:55:16.936969 containerd[2035]: time="2026-01-23T23:55:16.935779220Z" level=info msg="StartContainer for \"54215b4892aaf16414390f43b3e868cc27e8bfe6542a7f3b7cb0d80bb8540771\"" Jan 23 23:55:16.999831 containerd[2035]: time="2026-01-23T23:55:16.998855672Z" level=info msg="CreateContainer within sandbox 
\"3bd13dd45a5118a50ac77e92e6cdd341e90c151dba3b5e76045cf8cac0a3e557\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"798a8179debbdbb35a7306ed86697ad9679729644b4fc110ce210f00d4d310db\"" Jan 23 23:55:17.007635 containerd[2035]: time="2026-01-23T23:55:17.006573220Z" level=info msg="StartContainer for \"798a8179debbdbb35a7306ed86697ad9679729644b4fc110ce210f00d4d310db\"" Jan 23 23:55:17.035154 systemd[1]: Started cri-containerd-54215b4892aaf16414390f43b3e868cc27e8bfe6542a7f3b7cb0d80bb8540771.scope - libcontainer container 54215b4892aaf16414390f43b3e868cc27e8bfe6542a7f3b7cb0d80bb8540771. Jan 23 23:55:17.083263 systemd[1]: Started cri-containerd-798a8179debbdbb35a7306ed86697ad9679729644b4fc110ce210f00d4d310db.scope - libcontainer container 798a8179debbdbb35a7306ed86697ad9679729644b4fc110ce210f00d4d310db. Jan 23 23:55:17.154342 systemd[1]: cri-containerd-54215b4892aaf16414390f43b3e868cc27e8bfe6542a7f3b7cb0d80bb8540771.scope: Deactivated successfully. Jan 23 23:55:17.163795 containerd[2035]: time="2026-01-23T23:55:17.162993461Z" level=info msg="StartContainer for \"54215b4892aaf16414390f43b3e868cc27e8bfe6542a7f3b7cb0d80bb8540771\" returns successfully" Jan 23 23:55:17.178432 containerd[2035]: time="2026-01-23T23:55:17.178238645Z" level=info msg="StartContainer for \"798a8179debbdbb35a7306ed86697ad9679729644b4fc110ce210f00d4d310db\" returns successfully" Jan 23 23:55:17.304333 containerd[2035]: time="2026-01-23T23:55:17.303936030Z" level=info msg="shim disconnected" id=54215b4892aaf16414390f43b3e868cc27e8bfe6542a7f3b7cb0d80bb8540771 namespace=k8s.io Jan 23 23:55:17.304333 containerd[2035]: time="2026-01-23T23:55:17.304020726Z" level=warning msg="cleaning up after shim disconnected" id=54215b4892aaf16414390f43b3e868cc27e8bfe6542a7f3b7cb0d80bb8540771 namespace=k8s.io Jan 23 23:55:17.304333 containerd[2035]: time="2026-01-23T23:55:17.304041654Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 23 23:55:17.334832 containerd[2035]: time="2026-01-23T23:55:17.332856738Z" level=warning msg="cleanup warnings time=\"2026-01-23T23:55:17Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jan 23 23:55:17.901864 containerd[2035]: time="2026-01-23T23:55:17.899596545Z" level=info msg="CreateContainer within sandbox \"5fc9690ef5f15fec46150eb490f9ceb7e17af0f5ebbf1c541e76b4bb45aa3482\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jan 23 23:55:17.939797 containerd[2035]: time="2026-01-23T23:55:17.938273589Z" level=info msg="CreateContainer within sandbox \"5fc9690ef5f15fec46150eb490f9ceb7e17af0f5ebbf1c541e76b4bb45aa3482\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"d71aacca5c72910141cee7de621b912e170ea6676742eb5e9b543dd061d62913\"" Jan 23 23:55:17.941061 containerd[2035]: time="2026-01-23T23:55:17.940969533Z" level=info msg="StartContainer for \"d71aacca5c72910141cee7de621b912e170ea6676742eb5e9b543dd061d62913\"" Jan 23 23:55:17.970549 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-54215b4892aaf16414390f43b3e868cc27e8bfe6542a7f3b7cb0d80bb8540771-rootfs.mount: Deactivated successfully. Jan 23 23:55:18.069126 systemd[1]: Started cri-containerd-d71aacca5c72910141cee7de621b912e170ea6676742eb5e9b543dd061d62913.scope - libcontainer container d71aacca5c72910141cee7de621b912e170ea6676742eb5e9b543dd061d62913. 
Jan 23 23:55:18.196546 containerd[2035]: time="2026-01-23T23:55:18.195678522Z" level=info msg="StartContainer for \"d71aacca5c72910141cee7de621b912e170ea6676742eb5e9b543dd061d62913\" returns successfully" Jan 23 23:55:18.199168 systemd[1]: cri-containerd-d71aacca5c72910141cee7de621b912e170ea6676742eb5e9b543dd061d62913.scope: Deactivated successfully. Jan 23 23:55:18.279521 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d71aacca5c72910141cee7de621b912e170ea6676742eb5e9b543dd061d62913-rootfs.mount: Deactivated successfully. Jan 23 23:55:18.290448 containerd[2035]: time="2026-01-23T23:55:18.290349667Z" level=info msg="shim disconnected" id=d71aacca5c72910141cee7de621b912e170ea6676742eb5e9b543dd061d62913 namespace=k8s.io Jan 23 23:55:18.290448 containerd[2035]: time="2026-01-23T23:55:18.290442835Z" level=warning msg="cleaning up after shim disconnected" id=d71aacca5c72910141cee7de621b912e170ea6676742eb5e9b543dd061d62913 namespace=k8s.io Jan 23 23:55:18.290872 containerd[2035]: time="2026-01-23T23:55:18.290467435Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 23 23:55:18.298388 kubelet[3406]: I0123 23:55:18.298149 3406 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6f9c7c5859-xpk5l" podStartSLOduration=2.444031377 podStartE2EDuration="13.298119931s" podCreationTimestamp="2026-01-23 23:55:05 +0000 UTC" firstStartedPulling="2026-01-23 23:55:06.047907486 +0000 UTC m=+6.880779107" lastFinishedPulling="2026-01-23 23:55:16.901996052 +0000 UTC m=+17.734867661" observedRunningTime="2026-01-23 23:55:18.026313305 +0000 UTC m=+18.859185034" watchObservedRunningTime="2026-01-23 23:55:18.298119931 +0000 UTC m=+19.130991552" Jan 23 23:55:18.921581 containerd[2035]: time="2026-01-23T23:55:18.920961874Z" level=info msg="CreateContainer within sandbox \"5fc9690ef5f15fec46150eb490f9ceb7e17af0f5ebbf1c541e76b4bb45aa3482\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jan 23 23:55:18.968373 containerd[2035]: time="2026-01-23T23:55:18.968268178Z" level=info msg="CreateContainer within sandbox \"5fc9690ef5f15fec46150eb490f9ceb7e17af0f5ebbf1c541e76b4bb45aa3482\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"ba78b3b86eabbeb01941ce7946018479b43a612c3d486812bf62d45215f818ee\"" Jan 23 23:55:18.970867 containerd[2035]: time="2026-01-23T23:55:18.970599838Z" level=info msg="StartContainer for \"ba78b3b86eabbeb01941ce7946018479b43a612c3d486812bf62d45215f818ee\"" Jan 23 23:55:19.076106 systemd[1]: Started cri-containerd-ba78b3b86eabbeb01941ce7946018479b43a612c3d486812bf62d45215f818ee.scope - libcontainer container ba78b3b86eabbeb01941ce7946018479b43a612c3d486812bf62d45215f818ee. Jan 23 23:55:19.152851 containerd[2035]: time="2026-01-23T23:55:19.152649139Z" level=info msg="StartContainer for \"ba78b3b86eabbeb01941ce7946018479b43a612c3d486812bf62d45215f818ee\" returns successfully" Jan 23 23:55:19.430446 kubelet[3406]: I0123 23:55:19.430366 3406 kubelet_node_status.go:439] "Fast updating node status as it just became ready" Jan 23 23:55:19.520176 systemd[1]: Created slice kubepods-burstable-pod04bc0e47_612b_4d68_8d55_c3c77f372f96.slice - libcontainer container kubepods-burstable-pod04bc0e47_612b_4d68_8d55_c3c77f372f96.slice. 
Jan 23 23:55:19.530320 kubelet[3406]: I0123 23:55:19.530245 3406 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/04bc0e47-612b-4d68-8d55-c3c77f372f96-config-volume\") pod \"coredns-66bc5c9577-s7qvt\" (UID: \"04bc0e47-612b-4d68-8d55-c3c77f372f96\") " pod="kube-system/coredns-66bc5c9577-s7qvt" Jan 23 23:55:19.530320 kubelet[3406]: I0123 23:55:19.530327 3406 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lkxmz\" (UniqueName: \"kubernetes.io/projected/04bc0e47-612b-4d68-8d55-c3c77f372f96-kube-api-access-lkxmz\") pod \"coredns-66bc5c9577-s7qvt\" (UID: \"04bc0e47-612b-4d68-8d55-c3c77f372f96\") " pod="kube-system/coredns-66bc5c9577-s7qvt" Jan 23 23:55:19.539657 systemd[1]: Created slice kubepods-burstable-pod1d144e68_771d_4a3e_948d_ac32d44c9451.slice - libcontainer container kubepods-burstable-pod1d144e68_771d_4a3e_948d_ac32d44c9451.slice. Jan 23 23:55:19.630952 kubelet[3406]: I0123 23:55:19.630890 3406 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1d144e68-771d-4a3e-948d-ac32d44c9451-config-volume\") pod \"coredns-66bc5c9577-bwnkx\" (UID: \"1d144e68-771d-4a3e-948d-ac32d44c9451\") " pod="kube-system/coredns-66bc5c9577-bwnkx" Jan 23 23:55:19.631154 kubelet[3406]: I0123 23:55:19.631026 3406 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7h9m7\" (UniqueName: \"kubernetes.io/projected/1d144e68-771d-4a3e-948d-ac32d44c9451-kube-api-access-7h9m7\") pod \"coredns-66bc5c9577-bwnkx\" (UID: \"1d144e68-771d-4a3e-948d-ac32d44c9451\") " pod="kube-system/coredns-66bc5c9577-bwnkx" Jan 23 23:55:19.838094 containerd[2035]: time="2026-01-23T23:55:19.837325750Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-s7qvt,Uid:04bc0e47-612b-4d68-8d55-c3c77f372f96,Namespace:kube-system,Attempt:0,}" Jan 23 23:55:19.858334 containerd[2035]: time="2026-01-23T23:55:19.857791258Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-bwnkx,Uid:1d144e68-771d-4a3e-948d-ac32d44c9451,Namespace:kube-system,Attempt:0,}" Jan 23 23:55:22.595703 (udev-worker)[4219]: Network interface NamePolicy= disabled on kernel command line. Jan 23 23:55:22.596609 (udev-worker)[4217]: Network interface NamePolicy= disabled on kernel command line. Jan 23 23:55:22.602240 systemd-networkd[1930]: cilium_host: Link UP Jan 23 23:55:22.602665 systemd-networkd[1930]: cilium_net: Link UP Jan 23 23:55:22.603638 systemd-networkd[1930]: cilium_net: Gained carrier Jan 23 23:55:22.604082 systemd-networkd[1930]: cilium_host: Gained carrier Jan 23 23:55:22.732173 systemd-networkd[1930]: cilium_net: Gained IPv6LL Jan 23 23:55:22.788353 (udev-worker)[4259]: Network interface NamePolicy= disabled on kernel command line. 
Jan 23 23:55:22.817336 systemd-networkd[1930]: cilium_vxlan: Link UP Jan 23 23:55:22.817357 systemd-networkd[1930]: cilium_vxlan: Gained carrier Jan 23 23:55:23.084183 systemd-networkd[1930]: cilium_host: Gained IPv6LL Jan 23 23:55:23.414075 kernel: NET: Registered PF_ALG protocol family Jan 23 23:55:24.340159 systemd-networkd[1930]: cilium_vxlan: Gained IPv6LL Jan 23 23:55:24.836125 systemd-networkd[1930]: lxc_health: Link UP Jan 23 23:55:24.848937 systemd-networkd[1930]: lxc_health: Gained carrier Jan 23 23:55:25.373458 kubelet[3406]: I0123 23:55:25.372656 3406 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-q97j2" podStartSLOduration=13.038172665 podStartE2EDuration="21.372636338s" podCreationTimestamp="2026-01-23 23:55:04 +0000 UTC" firstStartedPulling="2026-01-23 23:55:05.603982568 +0000 UTC m=+6.436854189" lastFinishedPulling="2026-01-23 23:55:13.938446253 +0000 UTC m=+14.771317862" observedRunningTime="2026-01-23 23:55:20.040033123 +0000 UTC m=+20.872904768" watchObservedRunningTime="2026-01-23 23:55:25.372636338 +0000 UTC m=+26.205507959" Jan 23 23:55:25.474054 systemd-networkd[1930]: lxcc27b35ab165d: Link UP Jan 23 23:55:25.484198 kernel: eth0: renamed from tmp47f90 Jan 23 23:55:25.491615 systemd-networkd[1930]: lxcc27b35ab165d: Gained carrier Jan 23 23:55:25.522984 systemd-networkd[1930]: lxce0812047d473: Link UP Jan 23 23:55:25.531811 kernel: eth0: renamed from tmpb5e3b Jan 23 23:55:25.538782 systemd-networkd[1930]: lxce0812047d473: Gained carrier Jan 23 23:55:26.452972 systemd-networkd[1930]: lxc_health: Gained IPv6LL Jan 23 23:55:27.348012 systemd-networkd[1930]: lxcc27b35ab165d: Gained IPv6LL Jan 23 23:55:27.348496 systemd-networkd[1930]: lxce0812047d473: Gained IPv6LL Jan 23 23:55:29.869089 ntpd[1992]: Listen normally on 8 cilium_host 192.168.0.55:123 Jan 23 23:55:29.869242 ntpd[1992]: Listen normally on 9 cilium_net [fe80::457:7aff:fe78:d4e7%4]:123 Jan 23 23:55:29.869335 ntpd[1992]: Listen normally on 10 cilium_host [fe80::acc5:4bff:fe11:d4cf%5]:123 Jan 23 23:55:29.869404 ntpd[1992]: Listen normally on 11 cilium_vxlan [fe80::20d9:ccff:fef5:3af6%6]:123 Jan 23 23:55:29.869480 ntpd[1992]: Listen normally on 12 lxc_health [fe80::582c:afff:fe84:3dc8%8]:123 Jan 23 23:55:29.869550 ntpd[1992]: Listen normally on 13 lxcc27b35ab165d [fe80::2062:31ff:fe82:65d9%10]:123 Jan 23 23:55:29.869617 ntpd[1992]: Listen normally on 14 lxce0812047d473 [fe80::c862:f5ff:fe69:4e80%12]:123 Jan 23 23:55:33.967894 containerd[2035]: time="2026-01-23T23:55:33.966171925Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..."
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 23 23:55:33.967894 containerd[2035]: time="2026-01-23T23:55:33.966290353Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 23 23:55:33.967894 containerd[2035]: time="2026-01-23T23:55:33.966329137Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 23 23:55:33.967894 containerd[2035]: time="2026-01-23T23:55:33.966488053Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 23 23:55:34.030114 systemd[1]: Started cri-containerd-b5e3b11ce0c6a3f8cf9361b31e2ca3107eb6126d37c39a947ce89a014c5aac66.scope - libcontainer container b5e3b11ce0c6a3f8cf9361b31e2ca3107eb6126d37c39a947ce89a014c5aac66. Jan 23 23:55:34.087629 containerd[2035]: time="2026-01-23T23:55:34.087412629Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 23 23:55:34.087629 containerd[2035]: time="2026-01-23T23:55:34.087545817Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 23 23:55:34.087629 containerd[2035]: time="2026-01-23T23:55:34.087584613Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 23 23:55:34.090440 containerd[2035]: time="2026-01-23T23:55:34.087947721Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 23 23:55:34.175540 systemd[1]: run-containerd-runc-k8s.io-47f90ca93920b4547ecad08673117b09425a482bf1a9c65c8b642af10525967c-runc.7JpH57.mount: Deactivated successfully. Jan 23 23:55:34.181891 containerd[2035]: time="2026-01-23T23:55:34.179623630Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-bwnkx,Uid:1d144e68-771d-4a3e-948d-ac32d44c9451,Namespace:kube-system,Attempt:0,} returns sandbox id \"b5e3b11ce0c6a3f8cf9361b31e2ca3107eb6126d37c39a947ce89a014c5aac66\"" Jan 23 23:55:34.196174 systemd[1]: Started cri-containerd-47f90ca93920b4547ecad08673117b09425a482bf1a9c65c8b642af10525967c.scope - libcontainer container 47f90ca93920b4547ecad08673117b09425a482bf1a9c65c8b642af10525967c. Jan 23 23:55:34.199279 containerd[2035]: time="2026-01-23T23:55:34.199010590Z" level=info msg="CreateContainer within sandbox \"b5e3b11ce0c6a3f8cf9361b31e2ca3107eb6126d37c39a947ce89a014c5aac66\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 23 23:55:34.232321 containerd[2035]: time="2026-01-23T23:55:34.232053862Z" level=info msg="CreateContainer within sandbox \"b5e3b11ce0c6a3f8cf9361b31e2ca3107eb6126d37c39a947ce89a014c5aac66\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"77c3d7fd622278c4d82c85ca05938ef16a852d7f7f2d0e7c3d16f0abdfd5dee9\"" Jan 23 23:55:34.235842 containerd[2035]: time="2026-01-23T23:55:34.234202414Z" level=info msg="StartContainer for \"77c3d7fd622278c4d82c85ca05938ef16a852d7f7f2d0e7c3d16f0abdfd5dee9\"" Jan 23 23:55:34.302126 systemd[1]: Started cri-containerd-77c3d7fd622278c4d82c85ca05938ef16a852d7f7f2d0e7c3d16f0abdfd5dee9.scope - libcontainer container 77c3d7fd622278c4d82c85ca05938ef16a852d7f7f2d0e7c3d16f0abdfd5dee9. 
Jan 23 23:55:34.374873 containerd[2035]: time="2026-01-23T23:55:34.374414795Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-s7qvt,Uid:04bc0e47-612b-4d68-8d55-c3c77f372f96,Namespace:kube-system,Attempt:0,} returns sandbox id \"47f90ca93920b4547ecad08673117b09425a482bf1a9c65c8b642af10525967c\"" Jan 23 23:55:34.393641 containerd[2035]: time="2026-01-23T23:55:34.393585455Z" level=info msg="CreateContainer within sandbox \"47f90ca93920b4547ecad08673117b09425a482bf1a9c65c8b642af10525967c\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 23 23:55:34.409995 containerd[2035]: time="2026-01-23T23:55:34.409933655Z" level=info msg="StartContainer for \"77c3d7fd622278c4d82c85ca05938ef16a852d7f7f2d0e7c3d16f0abdfd5dee9\" returns successfully" Jan 23 23:55:34.434056 containerd[2035]: time="2026-01-23T23:55:34.433978175Z" level=info msg="CreateContainer within sandbox \"47f90ca93920b4547ecad08673117b09425a482bf1a9c65c8b642af10525967c\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"6586a60eaeb8db788978494c2e443e871902cbb6dab0ef12acaeef93b74eb9e8\"" Jan 23 23:55:34.436839 containerd[2035]: time="2026-01-23T23:55:34.436065251Z" level=info msg="StartContainer for \"6586a60eaeb8db788978494c2e443e871902cbb6dab0ef12acaeef93b74eb9e8\"" Jan 23 23:55:34.509061 systemd[1]: Started cri-containerd-6586a60eaeb8db788978494c2e443e871902cbb6dab0ef12acaeef93b74eb9e8.scope - libcontainer container 6586a60eaeb8db788978494c2e443e871902cbb6dab0ef12acaeef93b74eb9e8. Jan 23 23:55:34.610006 containerd[2035]: time="2026-01-23T23:55:34.609927576Z" level=info msg="StartContainer for \"6586a60eaeb8db788978494c2e443e871902cbb6dab0ef12acaeef93b74eb9e8\" returns successfully" Jan 23 23:55:35.031672 kubelet[3406]: I0123 23:55:35.030926 3406 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-bwnkx" podStartSLOduration=30.030906922 podStartE2EDuration="30.030906922s" podCreationTimestamp="2026-01-23 23:55:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 23:55:35.030695254 +0000 UTC m=+35.863566911" watchObservedRunningTime="2026-01-23 23:55:35.030906922 +0000 UTC m=+35.863778567" Jan 23 23:55:35.111490 kubelet[3406]: I0123 23:55:35.110700 3406 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-s7qvt" podStartSLOduration=30.110161258 podStartE2EDuration="30.110161258s" podCreationTimestamp="2026-01-23 23:55:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 23:55:35.064411426 +0000 UTC m=+35.897283131" watchObservedRunningTime="2026-01-23 23:55:35.110161258 +0000 UTC m=+35.943032879" Jan 23 23:55:42.959333 systemd[1]: Started sshd@7-172.31.20.17:22-4.153.228.146:54820.service - OpenSSH per-connection server daemon (4.153.228.146:54820). Jan 23 23:55:43.460009 sshd[4796]: Accepted publickey for core from 4.153.228.146 port 54820 ssh2: RSA SHA256:5AacvNrSqCkKxn2Zg0NJQDqu4zeDYsnVAvlYY2DAYUI Jan 23 23:55:43.462836 sshd[4796]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:55:43.470295 systemd-logind[1998]: New session 8 of user core. Jan 23 23:55:43.478053 systemd[1]: Started session-8.scope - Session 8 of User core. 
Jan 23 23:55:43.971406 sshd[4796]: pam_unix(sshd:session): session closed for user core Jan 23 23:55:43.976661 systemd[1]: sshd@7-172.31.20.17:22-4.153.228.146:54820.service: Deactivated successfully. Jan 23 23:55:43.981454 systemd[1]: session-8.scope: Deactivated successfully. Jan 23 23:55:43.987094 systemd-logind[1998]: Session 8 logged out. Waiting for processes to exit. Jan 23 23:55:43.989590 systemd-logind[1998]: Removed session 8. Jan 23 23:55:49.081363 systemd[1]: Started sshd@8-172.31.20.17:22-4.153.228.146:48612.service - OpenSSH per-connection server daemon (4.153.228.146:48612). Jan 23 23:55:49.613553 sshd[4810]: Accepted publickey for core from 4.153.228.146 port 48612 ssh2: RSA SHA256:5AacvNrSqCkKxn2Zg0NJQDqu4zeDYsnVAvlYY2DAYUI Jan 23 23:55:49.616337 sshd[4810]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:55:49.625119 systemd-logind[1998]: New session 9 of user core. Jan 23 23:55:49.632283 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 23 23:55:50.108487 sshd[4810]: pam_unix(sshd:session): session closed for user core Jan 23 23:55:50.115909 systemd[1]: sshd@8-172.31.20.17:22-4.153.228.146:48612.service: Deactivated successfully. Jan 23 23:55:50.120353 systemd[1]: session-9.scope: Deactivated successfully. Jan 23 23:55:50.122576 systemd-logind[1998]: Session 9 logged out. Waiting for processes to exit. Jan 23 23:55:50.125589 systemd-logind[1998]: Removed session 9. Jan 23 23:55:55.206275 systemd[1]: Started sshd@9-172.31.20.17:22-4.153.228.146:35372.service - OpenSSH per-connection server daemon (4.153.228.146:35372). Jan 23 23:55:55.697881 sshd[4824]: Accepted publickey for core from 4.153.228.146 port 35372 ssh2: RSA SHA256:5AacvNrSqCkKxn2Zg0NJQDqu4zeDYsnVAvlYY2DAYUI Jan 23 23:55:55.701386 sshd[4824]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:55:55.712080 systemd-logind[1998]: New session 10 of user core. Jan 23 23:55:55.720199 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 23 23:55:56.165487 sshd[4824]: pam_unix(sshd:session): session closed for user core Jan 23 23:55:56.171230 systemd[1]: sshd@9-172.31.20.17:22-4.153.228.146:35372.service: Deactivated successfully. Jan 23 23:55:56.176629 systemd[1]: session-10.scope: Deactivated successfully. Jan 23 23:55:56.180143 systemd-logind[1998]: Session 10 logged out. Waiting for processes to exit. Jan 23 23:55:56.182586 systemd-logind[1998]: Removed session 10. Jan 23 23:56:01.270289 systemd[1]: Started sshd@10-172.31.20.17:22-4.153.228.146:35378.service - OpenSSH per-connection server daemon (4.153.228.146:35378). Jan 23 23:56:01.810944 sshd[4840]: Accepted publickey for core from 4.153.228.146 port 35378 ssh2: RSA SHA256:5AacvNrSqCkKxn2Zg0NJQDqu4zeDYsnVAvlYY2DAYUI Jan 23 23:56:01.813994 sshd[4840]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:56:01.824320 systemd-logind[1998]: New session 11 of user core. Jan 23 23:56:01.831100 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 23 23:56:02.305112 sshd[4840]: pam_unix(sshd:session): session closed for user core Jan 23 23:56:02.311487 systemd-logind[1998]: Session 11 logged out. Waiting for processes to exit. Jan 23 23:56:02.313140 systemd[1]: sshd@10-172.31.20.17:22-4.153.228.146:35378.service: Deactivated successfully. Jan 23 23:56:02.318681 systemd[1]: session-11.scope: Deactivated successfully. Jan 23 23:56:02.322285 systemd-logind[1998]: Removed session 11. 
Jan 23 23:56:02.395306 systemd[1]: Started sshd@11-172.31.20.17:22-4.153.228.146:35388.service - OpenSSH per-connection server daemon (4.153.228.146:35388). Jan 23 23:56:02.901708 sshd[4854]: Accepted publickey for core from 4.153.228.146 port 35388 ssh2: RSA SHA256:5AacvNrSqCkKxn2Zg0NJQDqu4zeDYsnVAvlYY2DAYUI Jan 23 23:56:02.904507 sshd[4854]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:56:02.914043 systemd-logind[1998]: New session 12 of user core. Jan 23 23:56:02.919103 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 23 23:56:03.457272 sshd[4854]: pam_unix(sshd:session): session closed for user core Jan 23 23:56:03.464350 systemd[1]: sshd@11-172.31.20.17:22-4.153.228.146:35388.service: Deactivated successfully. Jan 23 23:56:03.469634 systemd[1]: session-12.scope: Deactivated successfully. Jan 23 23:56:03.471677 systemd-logind[1998]: Session 12 logged out. Waiting for processes to exit. Jan 23 23:56:03.474135 systemd-logind[1998]: Removed session 12. Jan 23 23:56:03.562393 systemd[1]: Started sshd@12-172.31.20.17:22-4.153.228.146:35404.service - OpenSSH per-connection server daemon (4.153.228.146:35404). Jan 23 23:56:04.109296 sshd[4865]: Accepted publickey for core from 4.153.228.146 port 35404 ssh2: RSA SHA256:5AacvNrSqCkKxn2Zg0NJQDqu4zeDYsnVAvlYY2DAYUI Jan 23 23:56:04.112270 sshd[4865]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:56:04.121138 systemd-logind[1998]: New session 13 of user core. Jan 23 23:56:04.129038 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 23 23:56:04.601103 sshd[4865]: pam_unix(sshd:session): session closed for user core Jan 23 23:56:04.606893 systemd[1]: sshd@12-172.31.20.17:22-4.153.228.146:35404.service: Deactivated successfully. Jan 23 23:56:04.610625 systemd[1]: session-13.scope: Deactivated successfully. Jan 23 23:56:04.614321 systemd-logind[1998]: Session 13 logged out. Waiting for processes to exit. Jan 23 23:56:04.616697 systemd-logind[1998]: Removed session 13. Jan 23 23:56:09.698992 systemd[1]: Started sshd@13-172.31.20.17:22-4.153.228.146:37954.service - OpenSSH per-connection server daemon (4.153.228.146:37954). Jan 23 23:56:10.191213 sshd[4881]: Accepted publickey for core from 4.153.228.146 port 37954 ssh2: RSA SHA256:5AacvNrSqCkKxn2Zg0NJQDqu4zeDYsnVAvlYY2DAYUI Jan 23 23:56:10.194949 sshd[4881]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:56:10.204344 systemd-logind[1998]: New session 14 of user core. Jan 23 23:56:10.212119 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 23 23:56:10.668151 sshd[4881]: pam_unix(sshd:session): session closed for user core Jan 23 23:56:10.673434 systemd-logind[1998]: Session 14 logged out. Waiting for processes to exit. Jan 23 23:56:10.674868 systemd[1]: sshd@13-172.31.20.17:22-4.153.228.146:37954.service: Deactivated successfully. Jan 23 23:56:10.679093 systemd[1]: session-14.scope: Deactivated successfully. Jan 23 23:56:10.684076 systemd-logind[1998]: Removed session 14. Jan 23 23:56:15.760331 systemd[1]: Started sshd@14-172.31.20.17:22-4.153.228.146:57454.service - OpenSSH per-connection server daemon (4.153.228.146:57454). 
Jan 23 23:56:16.262378 sshd[4894]: Accepted publickey for core from 4.153.228.146 port 57454 ssh2: RSA SHA256:5AacvNrSqCkKxn2Zg0NJQDqu4zeDYsnVAvlYY2DAYUI Jan 23 23:56:16.265176 sshd[4894]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:56:16.273097 systemd-logind[1998]: New session 15 of user core. Jan 23 23:56:16.284157 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 23 23:56:16.737331 sshd[4894]: pam_unix(sshd:session): session closed for user core Jan 23 23:56:16.748486 systemd[1]: sshd@14-172.31.20.17:22-4.153.228.146:57454.service: Deactivated successfully. Jan 23 23:56:16.753511 systemd[1]: session-15.scope: Deactivated successfully. Jan 23 23:56:16.755033 systemd-logind[1998]: Session 15 logged out. Waiting for processes to exit. Jan 23 23:56:16.757632 systemd-logind[1998]: Removed session 15. Jan 23 23:56:21.835297 systemd[1]: Started sshd@15-172.31.20.17:22-4.153.228.146:57456.service - OpenSSH per-connection server daemon (4.153.228.146:57456). Jan 23 23:56:22.337265 sshd[4908]: Accepted publickey for core from 4.153.228.146 port 57456 ssh2: RSA SHA256:5AacvNrSqCkKxn2Zg0NJQDqu4zeDYsnVAvlYY2DAYUI Jan 23 23:56:22.340050 sshd[4908]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:56:22.350711 systemd-logind[1998]: New session 16 of user core. Jan 23 23:56:22.361060 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 23 23:56:22.807971 sshd[4908]: pam_unix(sshd:session): session closed for user core Jan 23 23:56:22.818314 systemd[1]: sshd@15-172.31.20.17:22-4.153.228.146:57456.service: Deactivated successfully. Jan 23 23:56:22.823375 systemd[1]: session-16.scope: Deactivated successfully. Jan 23 23:56:22.827558 systemd-logind[1998]: Session 16 logged out. Waiting for processes to exit. Jan 23 23:56:22.830547 systemd-logind[1998]: Removed session 16. Jan 23 23:56:22.917332 systemd[1]: Started sshd@16-172.31.20.17:22-4.153.228.146:57460.service - OpenSSH per-connection server daemon (4.153.228.146:57460). Jan 23 23:56:23.455394 sshd[4920]: Accepted publickey for core from 4.153.228.146 port 57460 ssh2: RSA SHA256:5AacvNrSqCkKxn2Zg0NJQDqu4zeDYsnVAvlYY2DAYUI Jan 23 23:56:23.458169 sshd[4920]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:56:23.466274 systemd-logind[1998]: New session 17 of user core. Jan 23 23:56:23.477052 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 23 23:56:24.052205 sshd[4920]: pam_unix(sshd:session): session closed for user core Jan 23 23:56:24.059459 systemd-logind[1998]: Session 17 logged out. Waiting for processes to exit. Jan 23 23:56:24.063175 systemd[1]: sshd@16-172.31.20.17:22-4.153.228.146:57460.service: Deactivated successfully. Jan 23 23:56:24.067134 systemd[1]: session-17.scope: Deactivated successfully. Jan 23 23:56:24.070240 systemd-logind[1998]: Removed session 17. Jan 23 23:56:24.143337 systemd[1]: Started sshd@17-172.31.20.17:22-4.153.228.146:57472.service - OpenSSH per-connection server daemon (4.153.228.146:57472). Jan 23 23:56:24.644636 sshd[4930]: Accepted publickey for core from 4.153.228.146 port 57472 ssh2: RSA SHA256:5AacvNrSqCkKxn2Zg0NJQDqu4zeDYsnVAvlYY2DAYUI Jan 23 23:56:24.647480 sshd[4930]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:56:24.655863 systemd-logind[1998]: New session 18 of user core. Jan 23 23:56:24.668115 systemd[1]: Started session-18.scope - Session 18 of User core. 
Jan 23 23:56:25.837218 sshd[4930]: pam_unix(sshd:session): session closed for user core Jan 23 23:56:25.845407 systemd[1]: sshd@17-172.31.20.17:22-4.153.228.146:57472.service: Deactivated successfully. Jan 23 23:56:25.851090 systemd[1]: session-18.scope: Deactivated successfully. Jan 23 23:56:25.854137 systemd-logind[1998]: Session 18 logged out. Waiting for processes to exit. Jan 23 23:56:25.856978 systemd-logind[1998]: Removed session 18. Jan 23 23:56:25.933460 systemd[1]: Started sshd@18-172.31.20.17:22-4.153.228.146:54116.service - OpenSSH per-connection server daemon (4.153.228.146:54116). Jan 23 23:56:26.443177 sshd[4946]: Accepted publickey for core from 4.153.228.146 port 54116 ssh2: RSA SHA256:5AacvNrSqCkKxn2Zg0NJQDqu4zeDYsnVAvlYY2DAYUI Jan 23 23:56:26.446067 sshd[4946]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:56:26.455386 systemd-logind[1998]: New session 19 of user core. Jan 23 23:56:26.470114 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 23 23:56:27.177331 sshd[4946]: pam_unix(sshd:session): session closed for user core Jan 23 23:56:27.184392 systemd-logind[1998]: Session 19 logged out. Waiting for processes to exit. Jan 23 23:56:27.186081 systemd[1]: sshd@18-172.31.20.17:22-4.153.228.146:54116.service: Deactivated successfully. Jan 23 23:56:27.190692 systemd[1]: session-19.scope: Deactivated successfully. Jan 23 23:56:27.194166 systemd-logind[1998]: Removed session 19. Jan 23 23:56:27.271374 systemd[1]: Started sshd@19-172.31.20.17:22-4.153.228.146:54132.service - OpenSSH per-connection server daemon (4.153.228.146:54132). Jan 23 23:56:27.764972 sshd[4959]: Accepted publickey for core from 4.153.228.146 port 54132 ssh2: RSA SHA256:5AacvNrSqCkKxn2Zg0NJQDqu4zeDYsnVAvlYY2DAYUI Jan 23 23:56:27.767630 sshd[4959]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:56:27.776847 systemd-logind[1998]: New session 20 of user core. Jan 23 23:56:27.782107 systemd[1]: Started session-20.scope - Session 20 of User core. Jan 23 23:56:28.228371 sshd[4959]: pam_unix(sshd:session): session closed for user core Jan 23 23:56:28.235747 systemd[1]: sshd@19-172.31.20.17:22-4.153.228.146:54132.service: Deactivated successfully. Jan 23 23:56:28.242617 systemd[1]: session-20.scope: Deactivated successfully. Jan 23 23:56:28.246193 systemd-logind[1998]: Session 20 logged out. Waiting for processes to exit. Jan 23 23:56:28.249400 systemd-logind[1998]: Removed session 20. Jan 23 23:56:33.331278 systemd[1]: Started sshd@20-172.31.20.17:22-4.153.228.146:54146.service - OpenSSH per-connection server daemon (4.153.228.146:54146). Jan 23 23:56:33.823951 sshd[4974]: Accepted publickey for core from 4.153.228.146 port 54146 ssh2: RSA SHA256:5AacvNrSqCkKxn2Zg0NJQDqu4zeDYsnVAvlYY2DAYUI Jan 23 23:56:33.826710 sshd[4974]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:56:33.836287 systemd-logind[1998]: New session 21 of user core. Jan 23 23:56:33.844106 systemd[1]: Started session-21.scope - Session 21 of User core. Jan 23 23:56:34.298805 sshd[4974]: pam_unix(sshd:session): session closed for user core Jan 23 23:56:34.307380 systemd[1]: sshd@20-172.31.20.17:22-4.153.228.146:54146.service: Deactivated successfully. Jan 23 23:56:34.311959 systemd[1]: session-21.scope: Deactivated successfully. Jan 23 23:56:34.314263 systemd-logind[1998]: Session 21 logged out. Waiting for processes to exit. Jan 23 23:56:34.316741 systemd-logind[1998]: Removed session 21. 
Jan 23 23:56:39.392371 systemd[1]: Started sshd@21-172.31.20.17:22-4.153.228.146:35108.service - OpenSSH per-connection server daemon (4.153.228.146:35108).
Jan 23 23:56:39.888226 sshd[4990]: Accepted publickey for core from 4.153.228.146 port 35108 ssh2: RSA SHA256:5AacvNrSqCkKxn2Zg0NJQDqu4zeDYsnVAvlYY2DAYUI
Jan 23 23:56:39.892032 sshd[4990]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 23:56:39.900278 systemd-logind[1998]: New session 22 of user core.
Jan 23 23:56:39.907046 systemd[1]: Started session-22.scope - Session 22 of User core.
Jan 23 23:56:40.357107 sshd[4990]: pam_unix(sshd:session): session closed for user core
Jan 23 23:56:40.369030 systemd[1]: sshd@21-172.31.20.17:22-4.153.228.146:35108.service: Deactivated successfully.
Jan 23 23:56:40.377694 systemd[1]: session-22.scope: Deactivated successfully.
Jan 23 23:56:40.382590 systemd-logind[1998]: Session 22 logged out. Waiting for processes to exit.
Jan 23 23:56:40.388244 systemd-logind[1998]: Removed session 22.
Jan 23 23:56:45.466314 systemd[1]: Started sshd@22-172.31.20.17:22-4.153.228.146:50418.service - OpenSSH per-connection server daemon (4.153.228.146:50418).
Jan 23 23:56:46.015521 sshd[5003]: Accepted publickey for core from 4.153.228.146 port 50418 ssh2: RSA SHA256:5AacvNrSqCkKxn2Zg0NJQDqu4zeDYsnVAvlYY2DAYUI
Jan 23 23:56:46.018287 sshd[5003]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 23:56:46.027355 systemd-logind[1998]: New session 23 of user core.
Jan 23 23:56:46.035110 systemd[1]: Started session-23.scope - Session 23 of User core.
Jan 23 23:56:46.518148 sshd[5003]: pam_unix(sshd:session): session closed for user core
Jan 23 23:56:46.523782 systemd-logind[1998]: Session 23 logged out. Waiting for processes to exit.
Jan 23 23:56:46.525196 systemd[1]: sshd@22-172.31.20.17:22-4.153.228.146:50418.service: Deactivated successfully.
Jan 23 23:56:46.528560 systemd[1]: session-23.scope: Deactivated successfully.
Jan 23 23:56:46.533521 systemd-logind[1998]: Removed session 23.
Jan 23 23:56:46.608304 systemd[1]: Started sshd@23-172.31.20.17:22-4.153.228.146:50420.service - OpenSSH per-connection server daemon (4.153.228.146:50420).
Jan 23 23:56:47.109748 sshd[5016]: Accepted publickey for core from 4.153.228.146 port 50420 ssh2: RSA SHA256:5AacvNrSqCkKxn2Zg0NJQDqu4zeDYsnVAvlYY2DAYUI
Jan 23 23:56:47.112605 sshd[5016]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 23:56:47.121862 systemd-logind[1998]: New session 24 of user core.
Jan 23 23:56:47.136081 systemd[1]: Started session-24.scope - Session 24 of User core.
Jan 23 23:56:50.120255 systemd[1]: run-containerd-runc-k8s.io-ba78b3b86eabbeb01941ce7946018479b43a612c3d486812bf62d45215f818ee-runc.lLxmyf.mount: Deactivated successfully.
Jan 23 23:56:50.139986 containerd[2035]: time="2026-01-23T23:56:50.138887723Z" level=info msg="StopContainer for \"798a8179debbdbb35a7306ed86697ad9679729644b4fc110ce210f00d4d310db\" with timeout 30 (s)"
Jan 23 23:56:50.142358 containerd[2035]: time="2026-01-23T23:56:50.140789183Z" level=info msg="Stop container \"798a8179debbdbb35a7306ed86697ad9679729644b4fc110ce210f00d4d310db\" with signal terminated"
Jan 23 23:56:50.211344 containerd[2035]: time="2026-01-23T23:56:50.211267031Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jan 23 23:56:50.224503 containerd[2035]: time="2026-01-23T23:56:50.224323595Z" level=info msg="StopContainer for \"ba78b3b86eabbeb01941ce7946018479b43a612c3d486812bf62d45215f818ee\" with timeout 2 (s)"
Jan 23 23:56:50.225363 containerd[2035]: time="2026-01-23T23:56:50.225218327Z" level=info msg="Stop container \"ba78b3b86eabbeb01941ce7946018479b43a612c3d486812bf62d45215f818ee\" with signal terminated"
Jan 23 23:56:50.229513 systemd[1]: cri-containerd-798a8179debbdbb35a7306ed86697ad9679729644b4fc110ce210f00d4d310db.scope: Deactivated successfully.
Jan 23 23:56:50.250282 systemd-networkd[1930]: lxc_health: Link DOWN
Jan 23 23:56:50.250296 systemd-networkd[1930]: lxc_health: Lost carrier
Jan 23 23:56:50.281065 systemd[1]: cri-containerd-ba78b3b86eabbeb01941ce7946018479b43a612c3d486812bf62d45215f818ee.scope: Deactivated successfully.
Jan 23 23:56:50.281809 systemd[1]: cri-containerd-ba78b3b86eabbeb01941ce7946018479b43a612c3d486812bf62d45215f818ee.scope: Consumed 15.126s CPU time.
Jan 23 23:56:50.305726 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-798a8179debbdbb35a7306ed86697ad9679729644b4fc110ce210f00d4d310db-rootfs.mount: Deactivated successfully.
Jan 23 23:56:50.319166 containerd[2035]: time="2026-01-23T23:56:50.319083000Z" level=info msg="shim disconnected" id=798a8179debbdbb35a7306ed86697ad9679729644b4fc110ce210f00d4d310db namespace=k8s.io
Jan 23 23:56:50.319737 containerd[2035]: time="2026-01-23T23:56:50.319462452Z" level=warning msg="cleaning up after shim disconnected" id=798a8179debbdbb35a7306ed86697ad9679729644b4fc110ce210f00d4d310db namespace=k8s.io
Jan 23 23:56:50.319737 containerd[2035]: time="2026-01-23T23:56:50.319562376Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 23 23:56:50.340196 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ba78b3b86eabbeb01941ce7946018479b43a612c3d486812bf62d45215f818ee-rootfs.mount: Deactivated successfully.
Jan 23 23:56:50.353976 containerd[2035]: time="2026-01-23T23:56:50.353701884Z" level=info msg="shim disconnected" id=ba78b3b86eabbeb01941ce7946018479b43a612c3d486812bf62d45215f818ee namespace=k8s.io
Jan 23 23:56:50.353976 containerd[2035]: time="2026-01-23T23:56:50.353897736Z" level=warning msg="cleaning up after shim disconnected" id=ba78b3b86eabbeb01941ce7946018479b43a612c3d486812bf62d45215f818ee namespace=k8s.io
Jan 23 23:56:50.353976 containerd[2035]: time="2026-01-23T23:56:50.353921196Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 23 23:56:50.360821 containerd[2035]: time="2026-01-23T23:56:50.360497988Z" level=info msg="StopContainer for \"798a8179debbdbb35a7306ed86697ad9679729644b4fc110ce210f00d4d310db\" returns successfully"
Jan 23 23:56:50.361928 containerd[2035]: time="2026-01-23T23:56:50.361726224Z" level=info msg="StopPodSandbox for \"3bd13dd45a5118a50ac77e92e6cdd341e90c151dba3b5e76045cf8cac0a3e557\""
Jan 23 23:56:50.362207 containerd[2035]: time="2026-01-23T23:56:50.362016744Z" level=info msg="Container to stop \"798a8179debbdbb35a7306ed86697ad9679729644b4fc110ce210f00d4d310db\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 23 23:56:50.367944 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-3bd13dd45a5118a50ac77e92e6cdd341e90c151dba3b5e76045cf8cac0a3e557-shm.mount: Deactivated successfully.
Jan 23 23:56:50.389412 containerd[2035]: time="2026-01-23T23:56:50.387618480Z" level=warning msg="cleanup warnings time=\"2026-01-23T23:56:50Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Jan 23 23:56:50.391261 systemd[1]: cri-containerd-3bd13dd45a5118a50ac77e92e6cdd341e90c151dba3b5e76045cf8cac0a3e557.scope: Deactivated successfully.
Jan 23 23:56:50.397788 containerd[2035]: time="2026-01-23T23:56:50.397229544Z" level=info msg="StopContainer for \"ba78b3b86eabbeb01941ce7946018479b43a612c3d486812bf62d45215f818ee\" returns successfully"
Jan 23 23:56:50.398523 containerd[2035]: time="2026-01-23T23:56:50.398473908Z" level=info msg="StopPodSandbox for \"5fc9690ef5f15fec46150eb490f9ceb7e17af0f5ebbf1c541e76b4bb45aa3482\""
Jan 23 23:56:50.399915 containerd[2035]: time="2026-01-23T23:56:50.399856020Z" level=info msg="Container to stop \"54215b4892aaf16414390f43b3e868cc27e8bfe6542a7f3b7cb0d80bb8540771\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 23 23:56:50.400393 containerd[2035]: time="2026-01-23T23:56:50.400094568Z" level=info msg="Container to stop \"ba78b3b86eabbeb01941ce7946018479b43a612c3d486812bf62d45215f818ee\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 23 23:56:50.400393 containerd[2035]: time="2026-01-23T23:56:50.400127748Z" level=info msg="Container to stop \"0216e1e7bc44d5c6af75b3755741877c67989c44994fd96cfa02d1e469be7fa1\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 23 23:56:50.400393 containerd[2035]: time="2026-01-23T23:56:50.400155720Z" level=info msg="Container to stop \"6852dfc948fab6b818e4b115ebcbffbc447ef58c75a95f271fe604f560e20035\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 23 23:56:50.400393 containerd[2035]: time="2026-01-23T23:56:50.400181040Z" level=info msg="Container to stop \"d71aacca5c72910141cee7de621b912e170ea6676742eb5e9b543dd061d62913\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 23 23:56:50.423445 systemd[1]: cri-containerd-5fc9690ef5f15fec46150eb490f9ceb7e17af0f5ebbf1c541e76b4bb45aa3482.scope: Deactivated successfully.
Jan 23 23:56:50.474837 containerd[2035]: time="2026-01-23T23:56:50.474535729Z" level=info msg="shim disconnected" id=3bd13dd45a5118a50ac77e92e6cdd341e90c151dba3b5e76045cf8cac0a3e557 namespace=k8s.io
Jan 23 23:56:50.474837 containerd[2035]: time="2026-01-23T23:56:50.474624049Z" level=warning msg="cleaning up after shim disconnected" id=3bd13dd45a5118a50ac77e92e6cdd341e90c151dba3b5e76045cf8cac0a3e557 namespace=k8s.io
Jan 23 23:56:50.474837 containerd[2035]: time="2026-01-23T23:56:50.474645673Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 23 23:56:50.502498 containerd[2035]: time="2026-01-23T23:56:50.502404613Z" level=info msg="shim disconnected" id=5fc9690ef5f15fec46150eb490f9ceb7e17af0f5ebbf1c541e76b4bb45aa3482 namespace=k8s.io
Jan 23 23:56:50.503031 containerd[2035]: time="2026-01-23T23:56:50.502929769Z" level=warning msg="cleaning up after shim disconnected" id=5fc9690ef5f15fec46150eb490f9ceb7e17af0f5ebbf1c541e76b4bb45aa3482 namespace=k8s.io
Jan 23 23:56:50.503312 containerd[2035]: time="2026-01-23T23:56:50.502968925Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 23 23:56:50.515884 containerd[2035]: time="2026-01-23T23:56:50.515007529Z" level=warning msg="cleanup warnings time=\"2026-01-23T23:56:50Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Jan 23 23:56:50.517887 containerd[2035]: time="2026-01-23T23:56:50.517817017Z" level=info msg="TearDown network for sandbox \"3bd13dd45a5118a50ac77e92e6cdd341e90c151dba3b5e76045cf8cac0a3e557\" successfully"
Jan 23 23:56:50.517887 containerd[2035]: time="2026-01-23T23:56:50.517875805Z" level=info msg="StopPodSandbox for \"3bd13dd45a5118a50ac77e92e6cdd341e90c151dba3b5e76045cf8cac0a3e557\" returns successfully"
Jan 23 23:56:50.544865 containerd[2035]: time="2026-01-23T23:56:50.544805053Z" level=info msg="TearDown network for sandbox \"5fc9690ef5f15fec46150eb490f9ceb7e17af0f5ebbf1c541e76b4bb45aa3482\" successfully"
Jan 23 23:56:50.545085 containerd[2035]: time="2026-01-23T23:56:50.545051569Z" level=info msg="StopPodSandbox for \"5fc9690ef5f15fec46150eb490f9ceb7e17af0f5ebbf1c541e76b4bb45aa3482\" returns successfully"
Jan 23 23:56:50.676682 kubelet[3406]: I0123 23:56:50.676529 3406 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/41554546-9c9d-4a11-9254-80ecf5595a66-cni-path\") pod \"41554546-9c9d-4a11-9254-80ecf5595a66\" (UID: \"41554546-9c9d-4a11-9254-80ecf5595a66\") "
Jan 23 23:56:50.677922 kubelet[3406]: I0123 23:56:50.677397 3406 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/41554546-9c9d-4a11-9254-80ecf5595a66-cni-path" (OuterVolumeSpecName: "cni-path") pod "41554546-9c9d-4a11-9254-80ecf5595a66" (UID: "41554546-9c9d-4a11-9254-80ecf5595a66"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 23 23:56:50.677922 kubelet[3406]: I0123 23:56:50.677597 3406 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/41554546-9c9d-4a11-9254-80ecf5595a66-bpf-maps\") pod \"41554546-9c9d-4a11-9254-80ecf5595a66\" (UID: \"41554546-9c9d-4a11-9254-80ecf5595a66\") "
Jan 23 23:56:50.677922 kubelet[3406]: I0123 23:56:50.677680 3406 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/41554546-9c9d-4a11-9254-80ecf5595a66-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "41554546-9c9d-4a11-9254-80ecf5595a66" (UID: "41554546-9c9d-4a11-9254-80ecf5595a66"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 23 23:56:50.677922 kubelet[3406]: I0123 23:56:50.677716 3406 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/41554546-9c9d-4a11-9254-80ecf5595a66-lib-modules\") pod \"41554546-9c9d-4a11-9254-80ecf5595a66\" (UID: \"41554546-9c9d-4a11-9254-80ecf5595a66\") "
Jan 23 23:56:50.677922 kubelet[3406]: I0123 23:56:50.677853 3406 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/41554546-9c9d-4a11-9254-80ecf5595a66-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "41554546-9c9d-4a11-9254-80ecf5595a66" (UID: "41554546-9c9d-4a11-9254-80ecf5595a66"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 23 23:56:50.678243 kubelet[3406]: I0123 23:56:50.677884 3406 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/41554546-9c9d-4a11-9254-80ecf5595a66-hubble-tls\") pod \"41554546-9c9d-4a11-9254-80ecf5595a66\" (UID: \"41554546-9c9d-4a11-9254-80ecf5595a66\") "
Jan 23 23:56:50.679801 kubelet[3406]: I0123 23:56:50.678445 3406 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b81905ac-8b8b-4edb-ba7f-110cb96aec81-cilium-config-path\") pod \"b81905ac-8b8b-4edb-ba7f-110cb96aec81\" (UID: \"b81905ac-8b8b-4edb-ba7f-110cb96aec81\") "
Jan 23 23:56:50.679801 kubelet[3406]: I0123 23:56:50.678627 3406 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g6c8p\" (UniqueName: \"kubernetes.io/projected/b81905ac-8b8b-4edb-ba7f-110cb96aec81-kube-api-access-g6c8p\") pod \"b81905ac-8b8b-4edb-ba7f-110cb96aec81\" (UID: \"b81905ac-8b8b-4edb-ba7f-110cb96aec81\") "
Jan 23 23:56:50.679801 kubelet[3406]: I0123 23:56:50.678697 3406 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/41554546-9c9d-4a11-9254-80ecf5595a66-clustermesh-secrets\") pod \"41554546-9c9d-4a11-9254-80ecf5595a66\" (UID: \"41554546-9c9d-4a11-9254-80ecf5595a66\") "
Jan 23 23:56:50.679801 kubelet[3406]: I0123 23:56:50.678746 3406 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/41554546-9c9d-4a11-9254-80ecf5595a66-cilium-config-path\") pod \"41554546-9c9d-4a11-9254-80ecf5595a66\" (UID: \"41554546-9c9d-4a11-9254-80ecf5595a66\") "
Jan 23 23:56:50.679801 kubelet[3406]: I0123 23:56:50.678806 3406 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/41554546-9c9d-4a11-9254-80ecf5595a66-host-proc-sys-net\") pod \"41554546-9c9d-4a11-9254-80ecf5595a66\" (UID: \"41554546-9c9d-4a11-9254-80ecf5595a66\") "
Jan 23 23:56:50.679801 kubelet[3406]: I0123 23:56:50.678841 3406 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/41554546-9c9d-4a11-9254-80ecf5595a66-etc-cni-netd\") pod \"41554546-9c9d-4a11-9254-80ecf5595a66\" (UID: \"41554546-9c9d-4a11-9254-80ecf5595a66\") "
Jan 23 23:56:50.680188 kubelet[3406]: I0123 23:56:50.678873 3406 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/41554546-9c9d-4a11-9254-80ecf5595a66-host-proc-sys-kernel\") pod \"41554546-9c9d-4a11-9254-80ecf5595a66\" (UID: \"41554546-9c9d-4a11-9254-80ecf5595a66\") "
Jan 23 23:56:50.680188 kubelet[3406]: I0123 23:56:50.678906 3406 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/41554546-9c9d-4a11-9254-80ecf5595a66-xtables-lock\") pod \"41554546-9c9d-4a11-9254-80ecf5595a66\" (UID: \"41554546-9c9d-4a11-9254-80ecf5595a66\") "
Jan 23 23:56:50.680188 kubelet[3406]: I0123 23:56:50.678947 3406 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pl4xd\" (UniqueName: \"kubernetes.io/projected/41554546-9c9d-4a11-9254-80ecf5595a66-kube-api-access-pl4xd\") pod \"41554546-9c9d-4a11-9254-80ecf5595a66\" (UID: \"41554546-9c9d-4a11-9254-80ecf5595a66\") "
Jan 23 23:56:50.680188 kubelet[3406]: I0123 23:56:50.678983 3406 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/41554546-9c9d-4a11-9254-80ecf5595a66-cilium-cgroup\") pod \"41554546-9c9d-4a11-9254-80ecf5595a66\" (UID: \"41554546-9c9d-4a11-9254-80ecf5595a66\") "
Jan 23 23:56:50.680188 kubelet[3406]: I0123 23:56:50.679015 3406 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/41554546-9c9d-4a11-9254-80ecf5595a66-cilium-run\") pod \"41554546-9c9d-4a11-9254-80ecf5595a66\" (UID: \"41554546-9c9d-4a11-9254-80ecf5595a66\") "
Jan 23 23:56:50.680188 kubelet[3406]: I0123 23:56:50.679048 3406 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/41554546-9c9d-4a11-9254-80ecf5595a66-hostproc\") pod \"41554546-9c9d-4a11-9254-80ecf5595a66\" (UID: \"41554546-9c9d-4a11-9254-80ecf5595a66\") "
Jan 23 23:56:50.680490 kubelet[3406]: I0123 23:56:50.679121 3406 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/41554546-9c9d-4a11-9254-80ecf5595a66-cni-path\") on node \"ip-172-31-20-17\" DevicePath \"\""
Jan 23 23:56:50.680490 kubelet[3406]: I0123 23:56:50.679145 3406 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/41554546-9c9d-4a11-9254-80ecf5595a66-bpf-maps\") on node \"ip-172-31-20-17\" DevicePath \"\""
Jan 23 23:56:50.680490 kubelet[3406]: I0123 23:56:50.679167 3406 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/41554546-9c9d-4a11-9254-80ecf5595a66-lib-modules\") on node \"ip-172-31-20-17\" DevicePath \"\""
Jan 23 23:56:50.680490 kubelet[3406]: I0123 23:56:50.679212 3406 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/41554546-9c9d-4a11-9254-80ecf5595a66-hostproc" (OuterVolumeSpecName: "hostproc") pod "41554546-9c9d-4a11-9254-80ecf5595a66" (UID: "41554546-9c9d-4a11-9254-80ecf5595a66"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 23 23:56:50.683137 kubelet[3406]: I0123 23:56:50.683069 3406 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/41554546-9c9d-4a11-9254-80ecf5595a66-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "41554546-9c9d-4a11-9254-80ecf5595a66" (UID: "41554546-9c9d-4a11-9254-80ecf5595a66"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 23 23:56:50.683137 kubelet[3406]: I0123 23:56:50.683177 3406 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/41554546-9c9d-4a11-9254-80ecf5595a66-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "41554546-9c9d-4a11-9254-80ecf5595a66" (UID: "41554546-9c9d-4a11-9254-80ecf5595a66"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 23 23:56:50.683655 kubelet[3406]: I0123 23:56:50.683230 3406 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/41554546-9c9d-4a11-9254-80ecf5595a66-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "41554546-9c9d-4a11-9254-80ecf5595a66" (UID: "41554546-9c9d-4a11-9254-80ecf5595a66"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 23 23:56:50.685421 kubelet[3406]: I0123 23:56:50.685312 3406 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/41554546-9c9d-4a11-9254-80ecf5595a66-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "41554546-9c9d-4a11-9254-80ecf5595a66" (UID: "41554546-9c9d-4a11-9254-80ecf5595a66"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 23 23:56:50.685421 kubelet[3406]: I0123 23:56:50.685399 3406 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/41554546-9c9d-4a11-9254-80ecf5595a66-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "41554546-9c9d-4a11-9254-80ecf5595a66" (UID: "41554546-9c9d-4a11-9254-80ecf5595a66"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 23 23:56:50.688068 kubelet[3406]: I0123 23:56:50.687826 3406 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/41554546-9c9d-4a11-9254-80ecf5595a66-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "41554546-9c9d-4a11-9254-80ecf5595a66" (UID: "41554546-9c9d-4a11-9254-80ecf5595a66"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 23 23:56:50.694897 kubelet[3406]: I0123 23:56:50.694045 3406 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/41554546-9c9d-4a11-9254-80ecf5595a66-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "41554546-9c9d-4a11-9254-80ecf5595a66" (UID: "41554546-9c9d-4a11-9254-80ecf5595a66"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 23 23:56:50.697395 kubelet[3406]: I0123 23:56:50.697333 3406 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b81905ac-8b8b-4edb-ba7f-110cb96aec81-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "b81905ac-8b8b-4edb-ba7f-110cb96aec81" (UID: "b81905ac-8b8b-4edb-ba7f-110cb96aec81"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 23 23:56:50.698930 kubelet[3406]: I0123 23:56:50.698040 3406 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b81905ac-8b8b-4edb-ba7f-110cb96aec81-kube-api-access-g6c8p" (OuterVolumeSpecName: "kube-api-access-g6c8p") pod "b81905ac-8b8b-4edb-ba7f-110cb96aec81" (UID: "b81905ac-8b8b-4edb-ba7f-110cb96aec81"). InnerVolumeSpecName "kube-api-access-g6c8p". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 23 23:56:50.700589 kubelet[3406]: I0123 23:56:50.700498 3406 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/41554546-9c9d-4a11-9254-80ecf5595a66-kube-api-access-pl4xd" (OuterVolumeSpecName: "kube-api-access-pl4xd") pod "41554546-9c9d-4a11-9254-80ecf5595a66" (UID: "41554546-9c9d-4a11-9254-80ecf5595a66"). InnerVolumeSpecName "kube-api-access-pl4xd". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 23 23:56:50.701157 kubelet[3406]: I0123 23:56:50.701108 3406 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/41554546-9c9d-4a11-9254-80ecf5595a66-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "41554546-9c9d-4a11-9254-80ecf5595a66" (UID: "41554546-9c9d-4a11-9254-80ecf5595a66"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 23 23:56:50.703439 kubelet[3406]: I0123 23:56:50.703373 3406 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/41554546-9c9d-4a11-9254-80ecf5595a66-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "41554546-9c9d-4a11-9254-80ecf5595a66" (UID: "41554546-9c9d-4a11-9254-80ecf5595a66"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 23 23:56:50.779478 kubelet[3406]: I0123 23:56:50.779399 3406 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-g6c8p\" (UniqueName: \"kubernetes.io/projected/b81905ac-8b8b-4edb-ba7f-110cb96aec81-kube-api-access-g6c8p\") on node \"ip-172-31-20-17\" DevicePath \"\""
Jan 23 23:56:50.779478 kubelet[3406]: I0123 23:56:50.779465 3406 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/41554546-9c9d-4a11-9254-80ecf5595a66-clustermesh-secrets\") on node \"ip-172-31-20-17\" DevicePath \"\""
Jan 23 23:56:50.779693 kubelet[3406]: I0123 23:56:50.779494 3406 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/41554546-9c9d-4a11-9254-80ecf5595a66-cilium-config-path\") on node \"ip-172-31-20-17\" DevicePath \"\""
Jan 23 23:56:50.779693 kubelet[3406]: I0123 23:56:50.779515 3406 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/41554546-9c9d-4a11-9254-80ecf5595a66-host-proc-sys-net\") on node \"ip-172-31-20-17\" DevicePath \"\""
Jan 23 23:56:50.779693 kubelet[3406]: I0123 23:56:50.779536 3406 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/41554546-9c9d-4a11-9254-80ecf5595a66-etc-cni-netd\") on node \"ip-172-31-20-17\" DevicePath \"\""
Jan 23 23:56:50.779693 kubelet[3406]: I0123 23:56:50.779557 3406 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/41554546-9c9d-4a11-9254-80ecf5595a66-host-proc-sys-kernel\") on node \"ip-172-31-20-17\" DevicePath \"\""
Jan 23 23:56:50.779693 kubelet[3406]: I0123 23:56:50.779576 3406 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/41554546-9c9d-4a11-9254-80ecf5595a66-xtables-lock\") on node \"ip-172-31-20-17\" DevicePath \"\""
Jan 23 23:56:50.779693 kubelet[3406]: I0123 23:56:50.779597 3406 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-pl4xd\" (UniqueName: \"kubernetes.io/projected/41554546-9c9d-4a11-9254-80ecf5595a66-kube-api-access-pl4xd\") on node \"ip-172-31-20-17\" DevicePath \"\""
Jan 23 23:56:50.779693 kubelet[3406]: I0123 23:56:50.779617 3406 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/41554546-9c9d-4a11-9254-80ecf5595a66-cilium-cgroup\") on node \"ip-172-31-20-17\" DevicePath \"\""
Jan 23 23:56:50.779693 kubelet[3406]: I0123 23:56:50.779638 3406 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/41554546-9c9d-4a11-9254-80ecf5595a66-cilium-run\") on node \"ip-172-31-20-17\" DevicePath \"\""
Jan 23 23:56:50.780151 kubelet[3406]: I0123 23:56:50.779657 3406 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/41554546-9c9d-4a11-9254-80ecf5595a66-hostproc\") on node \"ip-172-31-20-17\" DevicePath \"\""
Jan 23 23:56:50.780151 kubelet[3406]: I0123 23:56:50.779679 3406 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/41554546-9c9d-4a11-9254-80ecf5595a66-hubble-tls\") on node \"ip-172-31-20-17\" DevicePath \"\""
Jan 23 23:56:50.780151 kubelet[3406]: I0123 23:56:50.779700 3406 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b81905ac-8b8b-4edb-ba7f-110cb96aec81-cilium-config-path\") on node \"ip-172-31-20-17\" DevicePath \"\""
Jan 23 23:56:51.097875 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3bd13dd45a5118a50ac77e92e6cdd341e90c151dba3b5e76045cf8cac0a3e557-rootfs.mount: Deactivated successfully.
Jan 23 23:56:51.098047 systemd[1]: var-lib-kubelet-pods-b81905ac\x2d8b8b\x2d4edb\x2dba7f\x2d110cb96aec81-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dg6c8p.mount: Deactivated successfully.
Jan 23 23:56:51.098185 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5fc9690ef5f15fec46150eb490f9ceb7e17af0f5ebbf1c541e76b4bb45aa3482-rootfs.mount: Deactivated successfully.
Jan 23 23:56:51.098323 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-5fc9690ef5f15fec46150eb490f9ceb7e17af0f5ebbf1c541e76b4bb45aa3482-shm.mount: Deactivated successfully.
Jan 23 23:56:51.098457 systemd[1]: var-lib-kubelet-pods-41554546\x2d9c9d\x2d4a11\x2d9254\x2d80ecf5595a66-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dpl4xd.mount: Deactivated successfully.
Jan 23 23:56:51.098599 systemd[1]: var-lib-kubelet-pods-41554546\x2d9c9d\x2d4a11\x2d9254\x2d80ecf5595a66-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Jan 23 23:56:51.098729 systemd[1]: var-lib-kubelet-pods-41554546\x2d9c9d\x2d4a11\x2d9254\x2d80ecf5595a66-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Jan 23 23:56:51.236135 kubelet[3406]: I0123 23:56:51.235915 3406 scope.go:117] "RemoveContainer" containerID="ba78b3b86eabbeb01941ce7946018479b43a612c3d486812bf62d45215f818ee"
Jan 23 23:56:51.241134 containerd[2035]: time="2026-01-23T23:56:51.240817776Z" level=info msg="RemoveContainer for \"ba78b3b86eabbeb01941ce7946018479b43a612c3d486812bf62d45215f818ee\""
Jan 23 23:56:51.264086 systemd[1]: Removed slice kubepods-burstable-pod41554546_9c9d_4a11_9254_80ecf5595a66.slice - libcontainer container kubepods-burstable-pod41554546_9c9d_4a11_9254_80ecf5595a66.slice.
Jan 23 23:56:51.267669 containerd[2035]: time="2026-01-23T23:56:51.264516144Z" level=info msg="RemoveContainer for \"ba78b3b86eabbeb01941ce7946018479b43a612c3d486812bf62d45215f818ee\" returns successfully"
Jan 23 23:56:51.268520 kubelet[3406]: I0123 23:56:51.265687 3406 scope.go:117] "RemoveContainer" containerID="d71aacca5c72910141cee7de621b912e170ea6676742eb5e9b543dd061d62913"
Jan 23 23:56:51.266366 systemd[1]: kubepods-burstable-pod41554546_9c9d_4a11_9254_80ecf5595a66.slice: Consumed 15.314s CPU time.
Jan 23 23:56:51.274249 containerd[2035]: time="2026-01-23T23:56:51.273711265Z" level=info msg="RemoveContainer for \"d71aacca5c72910141cee7de621b912e170ea6676742eb5e9b543dd061d62913\""
Jan 23 23:56:51.282122 containerd[2035]: time="2026-01-23T23:56:51.282063445Z" level=info msg="RemoveContainer for \"d71aacca5c72910141cee7de621b912e170ea6676742eb5e9b543dd061d62913\" returns successfully"
Jan 23 23:56:51.287178 kubelet[3406]: I0123 23:56:51.284643 3406 scope.go:117] "RemoveContainer" containerID="54215b4892aaf16414390f43b3e868cc27e8bfe6542a7f3b7cb0d80bb8540771"
Jan 23 23:56:51.285833 systemd[1]: Removed slice kubepods-besteffort-podb81905ac_8b8b_4edb_ba7f_110cb96aec81.slice - libcontainer container kubepods-besteffort-podb81905ac_8b8b_4edb_ba7f_110cb96aec81.slice.
Jan 23 23:56:51.292673 containerd[2035]: time="2026-01-23T23:56:51.292601533Z" level=info msg="RemoveContainer for \"54215b4892aaf16414390f43b3e868cc27e8bfe6542a7f3b7cb0d80bb8540771\""
Jan 23 23:56:51.305015 containerd[2035]: time="2026-01-23T23:56:51.304885681Z" level=info msg="RemoveContainer for \"54215b4892aaf16414390f43b3e868cc27e8bfe6542a7f3b7cb0d80bb8540771\" returns successfully"
Jan 23 23:56:51.306000 kubelet[3406]: I0123 23:56:51.305931 3406 scope.go:117] "RemoveContainer" containerID="6852dfc948fab6b818e4b115ebcbffbc447ef58c75a95f271fe604f560e20035"
Jan 23 23:56:51.312440 containerd[2035]: time="2026-01-23T23:56:51.312365353Z" level=info msg="RemoveContainer for \"6852dfc948fab6b818e4b115ebcbffbc447ef58c75a95f271fe604f560e20035\""
Jan 23 23:56:51.324809 containerd[2035]: time="2026-01-23T23:56:51.323552881Z" level=info msg="RemoveContainer for \"6852dfc948fab6b818e4b115ebcbffbc447ef58c75a95f271fe604f560e20035\" returns successfully"
Jan 23 23:56:51.326034 kubelet[3406]: I0123 23:56:51.325852 3406 scope.go:117] "RemoveContainer" containerID="0216e1e7bc44d5c6af75b3755741877c67989c44994fd96cfa02d1e469be7fa1"
Jan 23 23:56:51.334697 containerd[2035]: time="2026-01-23T23:56:51.334646377Z" level=info msg="RemoveContainer for \"0216e1e7bc44d5c6af75b3755741877c67989c44994fd96cfa02d1e469be7fa1\""
Jan 23 23:56:51.340930 containerd[2035]: time="2026-01-23T23:56:51.340875973Z" level=info msg="RemoveContainer for \"0216e1e7bc44d5c6af75b3755741877c67989c44994fd96cfa02d1e469be7fa1\" returns successfully"
Jan 23 23:56:51.341451 kubelet[3406]: I0123 23:56:51.341411 3406 scope.go:117] "RemoveContainer" containerID="ba78b3b86eabbeb01941ce7946018479b43a612c3d486812bf62d45215f818ee"
Jan 23 23:56:51.342011 containerd[2035]: time="2026-01-23T23:56:51.341866933Z" level=error msg="ContainerStatus for \"ba78b3b86eabbeb01941ce7946018479b43a612c3d486812bf62d45215f818ee\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ba78b3b86eabbeb01941ce7946018479b43a612c3d486812bf62d45215f818ee\": not found"
Jan 23 23:56:51.342287 kubelet[3406]: E0123 23:56:51.342241 3406 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ba78b3b86eabbeb01941ce7946018479b43a612c3d486812bf62d45215f818ee\": not found" containerID="ba78b3b86eabbeb01941ce7946018479b43a612c3d486812bf62d45215f818ee"
Jan 23 23:56:51.342383 kubelet[3406]: I0123 23:56:51.342318 3406 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ba78b3b86eabbeb01941ce7946018479b43a612c3d486812bf62d45215f818ee"} err="failed to get container status \"ba78b3b86eabbeb01941ce7946018479b43a612c3d486812bf62d45215f818ee\": rpc error: code = NotFound desc = an error occurred when try to find container \"ba78b3b86eabbeb01941ce7946018479b43a612c3d486812bf62d45215f818ee\": not found"
Jan 23 23:56:51.342440 kubelet[3406]: I0123 23:56:51.342381 3406 scope.go:117] "RemoveContainer" containerID="d71aacca5c72910141cee7de621b912e170ea6676742eb5e9b543dd061d62913"
Jan 23 23:56:51.343016 containerd[2035]: time="2026-01-23T23:56:51.342886297Z" level=error msg="ContainerStatus for \"d71aacca5c72910141cee7de621b912e170ea6676742eb5e9b543dd061d62913\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d71aacca5c72910141cee7de621b912e170ea6676742eb5e9b543dd061d62913\": not found"
Jan 23 23:56:51.343150 kubelet[3406]: E0123 23:56:51.343121 3406 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d71aacca5c72910141cee7de621b912e170ea6676742eb5e9b543dd061d62913\": not found" containerID="d71aacca5c72910141cee7de621b912e170ea6676742eb5e9b543dd061d62913"
Jan 23 23:56:51.343211 kubelet[3406]: I0123 23:56:51.343167 3406 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d71aacca5c72910141cee7de621b912e170ea6676742eb5e9b543dd061d62913"} err="failed to get container status \"d71aacca5c72910141cee7de621b912e170ea6676742eb5e9b543dd061d62913\": rpc error: code = NotFound desc = an error occurred when try to find container \"d71aacca5c72910141cee7de621b912e170ea6676742eb5e9b543dd061d62913\": not found"
Jan 23 23:56:51.343211 kubelet[3406]: I0123 23:56:51.343198 3406 scope.go:117] "RemoveContainer" containerID="54215b4892aaf16414390f43b3e868cc27e8bfe6542a7f3b7cb0d80bb8540771"
Jan 23 23:56:51.343645 containerd[2035]: time="2026-01-23T23:56:51.343484845Z" level=error msg="ContainerStatus for \"54215b4892aaf16414390f43b3e868cc27e8bfe6542a7f3b7cb0d80bb8540771\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"54215b4892aaf16414390f43b3e868cc27e8bfe6542a7f3b7cb0d80bb8540771\": not found"
Jan 23 23:56:51.343932 kubelet[3406]: E0123 23:56:51.343887 3406 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"54215b4892aaf16414390f43b3e868cc27e8bfe6542a7f3b7cb0d80bb8540771\": not found" containerID="54215b4892aaf16414390f43b3e868cc27e8bfe6542a7f3b7cb0d80bb8540771"
Jan 23 23:56:51.344028 kubelet[3406]: I0123 23:56:51.343944 3406 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"54215b4892aaf16414390f43b3e868cc27e8bfe6542a7f3b7cb0d80bb8540771"} err="failed to get container status \"54215b4892aaf16414390f43b3e868cc27e8bfe6542a7f3b7cb0d80bb8540771\": rpc error: code = NotFound desc = an error occurred when try to find container \"54215b4892aaf16414390f43b3e868cc27e8bfe6542a7f3b7cb0d80bb8540771\": not found"
Jan 23 23:56:51.344028 kubelet[3406]: I0123 23:56:51.344020 3406 scope.go:117] "RemoveContainer" containerID="6852dfc948fab6b818e4b115ebcbffbc447ef58c75a95f271fe604f560e20035"
Jan 23 23:56:51.344682 containerd[2035]: time="2026-01-23T23:56:51.344538853Z" level=error msg="ContainerStatus for \"6852dfc948fab6b818e4b115ebcbffbc447ef58c75a95f271fe604f560e20035\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"6852dfc948fab6b818e4b115ebcbffbc447ef58c75a95f271fe604f560e20035\": not found"
Jan 23 23:56:51.344882 kubelet[3406]: E0123 23:56:51.344828 3406 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"6852dfc948fab6b818e4b115ebcbffbc447ef58c75a95f271fe604f560e20035\": not found" containerID="6852dfc948fab6b818e4b115ebcbffbc447ef58c75a95f271fe604f560e20035"
Jan 23 23:56:51.345051 kubelet[3406]: I0123 23:56:51.345000 3406 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"6852dfc948fab6b818e4b115ebcbffbc447ef58c75a95f271fe604f560e20035"} err="failed to get container status \"6852dfc948fab6b818e4b115ebcbffbc447ef58c75a95f271fe604f560e20035\": rpc error: code = NotFound desc = an error occurred when try to find container \"6852dfc948fab6b818e4b115ebcbffbc447ef58c75a95f271fe604f560e20035\": not found"
Jan 23 23:56:51.345131 kubelet[3406]: I0123 23:56:51.345054 3406 scope.go:117] "RemoveContainer" containerID="0216e1e7bc44d5c6af75b3755741877c67989c44994fd96cfa02d1e469be7fa1"
Jan 23 23:56:51.345720 containerd[2035]: time="2026-01-23T23:56:51.345657397Z" level=error msg="ContainerStatus for \"0216e1e7bc44d5c6af75b3755741877c67989c44994fd96cfa02d1e469be7fa1\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"0216e1e7bc44d5c6af75b3755741877c67989c44994fd96cfa02d1e469be7fa1\": not found"
Jan 23 23:56:51.346046 kubelet[3406]: E0123 23:56:51.345999 3406 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"0216e1e7bc44d5c6af75b3755741877c67989c44994fd96cfa02d1e469be7fa1\": not found" containerID="0216e1e7bc44d5c6af75b3755741877c67989c44994fd96cfa02d1e469be7fa1"
Jan 23 23:56:51.346124 kubelet[3406]: I0123 23:56:51.346057 3406 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"0216e1e7bc44d5c6af75b3755741877c67989c44994fd96cfa02d1e469be7fa1"} err="failed to get container status \"0216e1e7bc44d5c6af75b3755741877c67989c44994fd96cfa02d1e469be7fa1\": rpc error: code = NotFound desc = an error occurred when try to find container \"0216e1e7bc44d5c6af75b3755741877c67989c44994fd96cfa02d1e469be7fa1\": not found"
Jan 23 23:56:51.346124 kubelet[3406]: I0123 23:56:51.346092 3406 scope.go:117] "RemoveContainer" containerID="798a8179debbdbb35a7306ed86697ad9679729644b4fc110ce210f00d4d310db"
Jan 23 23:56:51.348201 containerd[2035]: time="2026-01-23T23:56:51.348145741Z" level=info msg="RemoveContainer for \"798a8179debbdbb35a7306ed86697ad9679729644b4fc110ce210f00d4d310db\""
Jan 23 23:56:51.354199 containerd[2035]: time="2026-01-23T23:56:51.354037261Z" level=info msg="RemoveContainer for \"798a8179debbdbb35a7306ed86697ad9679729644b4fc110ce210f00d4d310db\" returns successfully"
Jan 23 23:56:51.355464 kubelet[3406]: I0123 23:56:51.355422 3406 scope.go:117] "RemoveContainer" containerID="798a8179debbdbb35a7306ed86697ad9679729644b4fc110ce210f00d4d310db"
Jan 23 23:56:51.356801 containerd[2035]: time="2026-01-23T23:56:51.356664001Z" level=error msg="ContainerStatus for \"798a8179debbdbb35a7306ed86697ad9679729644b4fc110ce210f00d4d310db\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"798a8179debbdbb35a7306ed86697ad9679729644b4fc110ce210f00d4d310db\": not found"
Jan 23 23:56:51.358120 kubelet[3406]: E0123 23:56:51.358055 3406 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"798a8179debbdbb35a7306ed86697ad9679729644b4fc110ce210f00d4d310db\": not found" containerID="798a8179debbdbb35a7306ed86697ad9679729644b4fc110ce210f00d4d310db"
Jan 23 23:56:51.358249 kubelet[3406]: I0123 23:56:51.358118 3406 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"798a8179debbdbb35a7306ed86697ad9679729644b4fc110ce210f00d4d310db"} err="failed to get container status \"798a8179debbdbb35a7306ed86697ad9679729644b4fc110ce210f00d4d310db\": rpc error: code = NotFound desc = an error occurred when try to find container \"798a8179debbdbb35a7306ed86697ad9679729644b4fc110ce210f00d4d310db\": not found"
Jan 23 23:56:51.721395 kubelet[3406]: I0123 23:56:51.721321 3406 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="41554546-9c9d-4a11-9254-80ecf5595a66" path="/var/lib/kubelet/pods/41554546-9c9d-4a11-9254-80ecf5595a66/volumes"
Jan 23 23:56:51.722903 kubelet[3406]: I0123 23:56:51.722843 3406 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b81905ac-8b8b-4edb-ba7f-110cb96aec81" path="/var/lib/kubelet/pods/b81905ac-8b8b-4edb-ba7f-110cb96aec81/volumes"
Jan 23 23:56:52.060171 sshd[5016]: pam_unix(sshd:session): session closed for user core
Jan 23 23:56:52.066669 systemd-logind[1998]: Session 24 logged out. Waiting for processes to exit.
Jan 23 23:56:52.067110 systemd[1]: sshd@23-172.31.20.17:22-4.153.228.146:50420.service: Deactivated successfully.
Jan 23 23:56:52.072120 systemd[1]: session-24.scope: Deactivated successfully.
Jan 23 23:56:52.072693 systemd[1]: session-24.scope: Consumed 1.980s CPU time.
Jan 23 23:56:52.076635 systemd-logind[1998]: Removed session 24.
Jan 23 23:56:52.167371 systemd[1]: Started sshd@24-172.31.20.17:22-4.153.228.146:50428.service - OpenSSH per-connection server daemon (4.153.228.146:50428).
Jan 23 23:56:52.708319 sshd[5177]: Accepted publickey for core from 4.153.228.146 port 50428 ssh2: RSA SHA256:5AacvNrSqCkKxn2Zg0NJQDqu4zeDYsnVAvlYY2DAYUI
Jan 23 23:56:52.711829 sshd[5177]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 23:56:52.721896 systemd-logind[1998]: New session 25 of user core.
Jan 23 23:56:52.730139 systemd[1]: Started session-25.scope - Session 25 of User core.
Jan 23 23:56:52.869046 ntpd[1992]: Deleting interface #12 lxc_health, fe80::582c:afff:fe84:3dc8%8#123, interface stats: received=0, sent=0, dropped=0, active_time=83 secs
Jan 23 23:56:52.869559 ntpd[1992]: 23 Jan 23:56:52 ntpd[1992]: Deleting interface #12 lxc_health, fe80::582c:afff:fe84:3dc8%8#123, interface stats: received=0, sent=0, dropped=0, active_time=83 secs
Jan 23 23:56:54.687871 systemd[1]: Created slice kubepods-burstable-podafb6fdf7_409e_4bc2_9d54_24e85e675690.slice - libcontainer container kubepods-burstable-podafb6fdf7_409e_4bc2_9d54_24e85e675690.slice.
Jan 23 23:56:54.721096 sshd[5177]: pam_unix(sshd:session): session closed for user core
Jan 23 23:56:54.729669 systemd[1]: sshd@24-172.31.20.17:22-4.153.228.146:50428.service: Deactivated successfully.
Jan 23 23:56:54.735554 systemd[1]: session-25.scope: Deactivated successfully.
Jan 23 23:56:54.738018 systemd[1]: session-25.scope: Consumed 1.515s CPU time.
Jan 23 23:56:54.746659 systemd-logind[1998]: Session 25 logged out. Waiting for processes to exit.
Jan 23 23:56:54.750118 systemd-logind[1998]: Removed session 25.
Jan 23 23:56:54.807374 kubelet[3406]: I0123 23:56:54.807166 3406 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/afb6fdf7-409e-4bc2-9d54-24e85e675690-cni-path\") pod \"cilium-rs2xb\" (UID: \"afb6fdf7-409e-4bc2-9d54-24e85e675690\") " pod="kube-system/cilium-rs2xb"
Jan 23 23:56:54.807374 kubelet[3406]: I0123 23:56:54.807242 3406 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/afb6fdf7-409e-4bc2-9d54-24e85e675690-cilium-config-path\") pod \"cilium-rs2xb\" (UID: \"afb6fdf7-409e-4bc2-9d54-24e85e675690\") " pod="kube-system/cilium-rs2xb"
Jan 23 23:56:54.807374 kubelet[3406]: I0123 23:56:54.807370 3406 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/afb6fdf7-409e-4bc2-9d54-24e85e675690-host-proc-sys-kernel\") pod \"cilium-rs2xb\" (UID: \"afb6fdf7-409e-4bc2-9d54-24e85e675690\") " pod="kube-system/cilium-rs2xb"
Jan 23 23:56:54.810485 kubelet[3406]: I0123 23:56:54.807423 3406 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/afb6fdf7-409e-4bc2-9d54-24e85e675690-hostproc\") pod \"cilium-rs2xb\" (UID: \"afb6fdf7-409e-4bc2-9d54-24e85e675690\") " pod="kube-system/cilium-rs2xb"
Jan 23 23:56:54.810485 kubelet[3406]: I0123 23:56:54.807464 3406 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/afb6fdf7-409e-4bc2-9d54-24e85e675690-hubble-tls\") pod \"cilium-rs2xb\" (UID: \"afb6fdf7-409e-4bc2-9d54-24e85e675690\") " pod="kube-system/cilium-rs2xb"
Jan 23 23:56:54.810485 kubelet[3406]: I0123 23:56:54.807502 3406 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/afb6fdf7-409e-4bc2-9d54-24e85e675690-cilium-run\") pod \"cilium-rs2xb\" (UID: \"afb6fdf7-409e-4bc2-9d54-24e85e675690\") " pod="kube-system/cilium-rs2xb"
Jan 23 23:56:54.810485 kubelet[3406]: I0123 23:56:54.807540 3406 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/afb6fdf7-409e-4bc2-9d54-24e85e675690-bpf-maps\") pod \"cilium-rs2xb\" (UID: \"afb6fdf7-409e-4bc2-9d54-24e85e675690\") " pod="kube-system/cilium-rs2xb"
Jan 23 23:56:54.810485 kubelet[3406]: I0123 23:56:54.807576 3406 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/afb6fdf7-409e-4bc2-9d54-24e85e675690-xtables-lock\") pod \"cilium-rs2xb\" (UID: \"afb6fdf7-409e-4bc2-9d54-24e85e675690\") " pod="kube-system/cilium-rs2xb"
Jan 23 23:56:54.810485 kubelet[3406]: I0123 23:56:54.807612 3406 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/afb6fdf7-409e-4bc2-9d54-24e85e675690-clustermesh-secrets\") pod \"cilium-rs2xb\" (UID: \"afb6fdf7-409e-4bc2-9d54-24e85e675690\") " pod="kube-system/cilium-rs2xb"
Jan 23 23:56:54.809054 systemd[1]: Started sshd@25-172.31.20.17:22-4.153.228.146:53572.service - OpenSSH per-connection server daemon (4.153.228.146:53572).
Jan 23 23:56:54.810964 kubelet[3406]: I0123 23:56:54.807645 3406 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/afb6fdf7-409e-4bc2-9d54-24e85e675690-cilium-ipsec-secrets\") pod \"cilium-rs2xb\" (UID: \"afb6fdf7-409e-4bc2-9d54-24e85e675690\") " pod="kube-system/cilium-rs2xb"
Jan 23 23:56:54.810964 kubelet[3406]: I0123 23:56:54.807682 3406 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/afb6fdf7-409e-4bc2-9d54-24e85e675690-host-proc-sys-net\") pod \"cilium-rs2xb\" (UID: \"afb6fdf7-409e-4bc2-9d54-24e85e675690\") " pod="kube-system/cilium-rs2xb"
Jan 23 23:56:54.810964 kubelet[3406]: I0123 23:56:54.807718 3406 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ww98r\" (UniqueName: \"kubernetes.io/projected/afb6fdf7-409e-4bc2-9d54-24e85e675690-kube-api-access-ww98r\") pod \"cilium-rs2xb\" (UID: \"afb6fdf7-409e-4bc2-9d54-24e85e675690\") " pod="kube-system/cilium-rs2xb"
Jan 23 23:56:54.810964 kubelet[3406]: I0123 23:56:54.807789 3406 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/afb6fdf7-409e-4bc2-9d54-24e85e675690-cilium-cgroup\") pod \"cilium-rs2xb\" (UID: \"afb6fdf7-409e-4bc2-9d54-24e85e675690\") " pod="kube-system/cilium-rs2xb"
Jan 23 23:56:54.810964 kubelet[3406]: I0123 23:56:54.807835 3406 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/afb6fdf7-409e-4bc2-9d54-24e85e675690-etc-cni-netd\") pod \"cilium-rs2xb\" (UID: \"afb6fdf7-409e-4bc2-9d54-24e85e675690\") " pod="kube-system/cilium-rs2xb"
Jan 23 23:56:54.811229 kubelet[3406]: I0123 23:56:54.807875 3406 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/afb6fdf7-409e-4bc2-9d54-24e85e675690-lib-modules\") pod \"cilium-rs2xb\" (UID: \"afb6fdf7-409e-4bc2-9d54-24e85e675690\") " pod="kube-system/cilium-rs2xb"
Jan 23 23:56:54.882124 kubelet[3406]: E0123 23:56:54.882043 3406 kubelet.go:3011] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jan 23 23:56:55.002942 containerd[2035]: time="2026-01-23T23:56:55.002543067Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-rs2xb,Uid:afb6fdf7-409e-4bc2-9d54-24e85e675690,Namespace:kube-system,Attempt:0,}"
Jan 23 23:56:55.051924 containerd[2035]: time="2026-01-23T23:56:55.051627675Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 23 23:56:55.051924 containerd[2035]: time="2026-01-23T23:56:55.051746643Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 23 23:56:55.051924 containerd[2035]: time="2026-01-23T23:56:55.051830223Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 23 23:56:55.053976 containerd[2035]: time="2026-01-23T23:56:55.052017663Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 23 23:56:55.082134 systemd[1]: Started cri-containerd-6859d70936a82cf3f59703a46f50cf65d612d3256e725a7dd2a32dbf8242b499.scope - libcontainer container 6859d70936a82cf3f59703a46f50cf65d612d3256e725a7dd2a32dbf8242b499.
Jan 23 23:56:55.124955 containerd[2035]: time="2026-01-23T23:56:55.124727212Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-rs2xb,Uid:afb6fdf7-409e-4bc2-9d54-24e85e675690,Namespace:kube-system,Attempt:0,} returns sandbox id \"6859d70936a82cf3f59703a46f50cf65d612d3256e725a7dd2a32dbf8242b499\""
Jan 23 23:56:55.136169 containerd[2035]: time="2026-01-23T23:56:55.136096672Z" level=info msg="CreateContainer within sandbox \"6859d70936a82cf3f59703a46f50cf65d612d3256e725a7dd2a32dbf8242b499\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Jan 23 23:56:55.157677 containerd[2035]: time="2026-01-23T23:56:55.157615636Z" level=info msg="CreateContainer within sandbox \"6859d70936a82cf3f59703a46f50cf65d612d3256e725a7dd2a32dbf8242b499\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"e059ddd0b8bbdf9d4da88d2f2d2194463632c78262b4074773a5e69ef1c3fdc6\""
Jan 23 23:56:55.158977 containerd[2035]: time="2026-01-23T23:56:55.158901100Z" level=info msg="StartContainer for \"e059ddd0b8bbdf9d4da88d2f2d2194463632c78262b4074773a5e69ef1c3fdc6\""
Jan 23 23:56:55.212406 systemd[1]: Started cri-containerd-e059ddd0b8bbdf9d4da88d2f2d2194463632c78262b4074773a5e69ef1c3fdc6.scope - libcontainer container e059ddd0b8bbdf9d4da88d2f2d2194463632c78262b4074773a5e69ef1c3fdc6.
Jan 23 23:56:55.267098 containerd[2035]: time="2026-01-23T23:56:55.265310248Z" level=info msg="StartContainer for \"e059ddd0b8bbdf9d4da88d2f2d2194463632c78262b4074773a5e69ef1c3fdc6\" returns successfully"
Jan 23 23:56:55.299673 systemd[1]: cri-containerd-e059ddd0b8bbdf9d4da88d2f2d2194463632c78262b4074773a5e69ef1c3fdc6.scope: Deactivated successfully.
Jan 23 23:56:55.325328 sshd[5188]: Accepted publickey for core from 4.153.228.146 port 53572 ssh2: RSA SHA256:5AacvNrSqCkKxn2Zg0NJQDqu4zeDYsnVAvlYY2DAYUI
Jan 23 23:56:55.329378 sshd[5188]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 23:56:55.347490 systemd-logind[1998]: New session 26 of user core.
Jan 23 23:56:55.352418 systemd[1]: Started session-26.scope - Session 26 of User core.
Jan 23 23:56:55.388037 containerd[2035]: time="2026-01-23T23:56:55.387906521Z" level=info msg="shim disconnected" id=e059ddd0b8bbdf9d4da88d2f2d2194463632c78262b4074773a5e69ef1c3fdc6 namespace=k8s.io
Jan 23 23:56:55.388281 containerd[2035]: time="2026-01-23T23:56:55.388018085Z" level=warning msg="cleaning up after shim disconnected" id=e059ddd0b8bbdf9d4da88d2f2d2194463632c78262b4074773a5e69ef1c3fdc6 namespace=k8s.io
Jan 23 23:56:55.388281 containerd[2035]: time="2026-01-23T23:56:55.388091861Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 23 23:56:55.674437 sshd[5188]: pam_unix(sshd:session): session closed for user core
Jan 23 23:56:55.679375 systemd-logind[1998]: Session 26 logged out. Waiting for processes to exit.
Jan 23 23:56:55.680592 systemd[1]: sshd@25-172.31.20.17:22-4.153.228.146:53572.service: Deactivated successfully.
Jan 23 23:56:55.685289 systemd[1]: session-26.scope: Deactivated successfully.
Jan 23 23:56:55.689730 systemd-logind[1998]: Removed session 26.
Jan 23 23:56:55.784260 systemd[1]: Started sshd@26-172.31.20.17:22-4.153.228.146:53578.service - OpenSSH per-connection server daemon (4.153.228.146:53578).
Jan 23 23:56:56.289866 containerd[2035]: time="2026-01-23T23:56:56.289705997Z" level=info msg="CreateContainer within sandbox \"6859d70936a82cf3f59703a46f50cf65d612d3256e725a7dd2a32dbf8242b499\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Jan 23 23:56:56.319803 sshd[5305]: Accepted publickey for core from 4.153.228.146 port 53578 ssh2: RSA SHA256:5AacvNrSqCkKxn2Zg0NJQDqu4zeDYsnVAvlYY2DAYUI
Jan 23 23:56:56.317989 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1835455210.mount: Deactivated successfully.
Jan 23 23:56:56.323462 sshd[5305]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 23:56:56.327426 containerd[2035]: time="2026-01-23T23:56:56.327337506Z" level=info msg="CreateContainer within sandbox \"6859d70936a82cf3f59703a46f50cf65d612d3256e725a7dd2a32dbf8242b499\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"eabe0afb370b3c223f8774030e7a05513f5f03043115c2573ceccb351b804781\""
Jan 23 23:56:56.329988 containerd[2035]: time="2026-01-23T23:56:56.329512926Z" level=info msg="StartContainer for \"eabe0afb370b3c223f8774030e7a05513f5f03043115c2573ceccb351b804781\""
Jan 23 23:56:56.343196 systemd-logind[1998]: New session 27 of user core.
Jan 23 23:56:56.349373 systemd[1]: Started session-27.scope - Session 27 of User core.
Jan 23 23:56:56.409133 systemd[1]: Started cri-containerd-eabe0afb370b3c223f8774030e7a05513f5f03043115c2573ceccb351b804781.scope - libcontainer container eabe0afb370b3c223f8774030e7a05513f5f03043115c2573ceccb351b804781.
Jan 23 23:56:56.462393 containerd[2035]: time="2026-01-23T23:56:56.462313386Z" level=info msg="StartContainer for \"eabe0afb370b3c223f8774030e7a05513f5f03043115c2573ceccb351b804781\" returns successfully"
Jan 23 23:56:56.479252 systemd[1]: cri-containerd-eabe0afb370b3c223f8774030e7a05513f5f03043115c2573ceccb351b804781.scope: Deactivated successfully.
Jan 23 23:56:56.523478 containerd[2035]: time="2026-01-23T23:56:56.523230487Z" level=info msg="shim disconnected" id=eabe0afb370b3c223f8774030e7a05513f5f03043115c2573ceccb351b804781 namespace=k8s.io
Jan 23 23:56:56.523478 containerd[2035]: time="2026-01-23T23:56:56.523370539Z" level=warning msg="cleaning up after shim disconnected" id=eabe0afb370b3c223f8774030e7a05513f5f03043115c2573ceccb351b804781 namespace=k8s.io
Jan 23 23:56:56.523478 containerd[2035]: time="2026-01-23T23:56:56.523393195Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 23 23:56:56.919047 systemd[1]: run-containerd-runc-k8s.io-eabe0afb370b3c223f8774030e7a05513f5f03043115c2573ceccb351b804781-runc.x3bfQU.mount: Deactivated successfully.
Jan 23 23:56:56.919222 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-eabe0afb370b3c223f8774030e7a05513f5f03043115c2573ceccb351b804781-rootfs.mount: Deactivated successfully.
Jan 23 23:56:57.300103 containerd[2035]: time="2026-01-23T23:56:57.299526654Z" level=info msg="CreateContainer within sandbox \"6859d70936a82cf3f59703a46f50cf65d612d3256e725a7dd2a32dbf8242b499\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jan 23 23:56:57.333997 containerd[2035]: time="2026-01-23T23:56:57.333543883Z" level=info msg="CreateContainer within sandbox \"6859d70936a82cf3f59703a46f50cf65d612d3256e725a7dd2a32dbf8242b499\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"ec47f80b972e6da7f0fe33cdc14145cb480ea59a6c472c188b661d9aba498ec3\""
Jan 23 23:56:57.337277 containerd[2035]: time="2026-01-23T23:56:57.337212319Z" level=info msg="StartContainer for \"ec47f80b972e6da7f0fe33cdc14145cb480ea59a6c472c188b661d9aba498ec3\""
Jan 23 23:56:57.433076 systemd[1]: Started cri-containerd-ec47f80b972e6da7f0fe33cdc14145cb480ea59a6c472c188b661d9aba498ec3.scope - libcontainer container ec47f80b972e6da7f0fe33cdc14145cb480ea59a6c472c188b661d9aba498ec3.
Jan 23 23:56:57.528586 containerd[2035]: time="2026-01-23T23:56:57.528488060Z" level=info msg="StartContainer for \"ec47f80b972e6da7f0fe33cdc14145cb480ea59a6c472c188b661d9aba498ec3\" returns successfully"
Jan 23 23:56:57.557121 systemd[1]: cri-containerd-ec47f80b972e6da7f0fe33cdc14145cb480ea59a6c472c188b661d9aba498ec3.scope: Deactivated successfully.
Jan 23 23:56:57.618399 containerd[2035]: time="2026-01-23T23:56:57.618250820Z" level=info msg="shim disconnected" id=ec47f80b972e6da7f0fe33cdc14145cb480ea59a6c472c188b661d9aba498ec3 namespace=k8s.io
Jan 23 23:56:57.618399 containerd[2035]: time="2026-01-23T23:56:57.618397748Z" level=warning msg="cleaning up after shim disconnected" id=ec47f80b972e6da7f0fe33cdc14145cb480ea59a6c472c188b661d9aba498ec3 namespace=k8s.io
Jan 23 23:56:57.618800 containerd[2035]: time="2026-01-23T23:56:57.618422156Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 23 23:56:57.919671 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ec47f80b972e6da7f0fe33cdc14145cb480ea59a6c472c188b661d9aba498ec3-rootfs.mount: Deactivated successfully.
Jan 23 23:56:58.307677 containerd[2035]: time="2026-01-23T23:56:58.307527475Z" level=info msg="CreateContainer within sandbox \"6859d70936a82cf3f59703a46f50cf65d612d3256e725a7dd2a32dbf8242b499\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jan 23 23:56:58.349819 containerd[2035]: time="2026-01-23T23:56:58.346963232Z" level=info msg="CreateContainer within sandbox \"6859d70936a82cf3f59703a46f50cf65d612d3256e725a7dd2a32dbf8242b499\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"fcc55157a756393fa1f0347e5203fa7ba8c908d85bbdd84b7958146206c339eb\""
Jan 23 23:56:58.349819 containerd[2035]: time="2026-01-23T23:56:58.348337532Z" level=info msg="StartContainer for \"fcc55157a756393fa1f0347e5203fa7ba8c908d85bbdd84b7958146206c339eb\""
Jan 23 23:56:58.407114 systemd[1]: Started cri-containerd-fcc55157a756393fa1f0347e5203fa7ba8c908d85bbdd84b7958146206c339eb.scope - libcontainer container fcc55157a756393fa1f0347e5203fa7ba8c908d85bbdd84b7958146206c339eb.
Jan 23 23:56:58.464492 systemd[1]: cri-containerd-fcc55157a756393fa1f0347e5203fa7ba8c908d85bbdd84b7958146206c339eb.scope: Deactivated successfully.
Jan 23 23:56:58.472881 containerd[2035]: time="2026-01-23T23:56:58.472800452Z" level=info msg="StartContainer for \"fcc55157a756393fa1f0347e5203fa7ba8c908d85bbdd84b7958146206c339eb\" returns successfully"
Jan 23 23:56:58.514275 containerd[2035]: time="2026-01-23T23:56:58.513945236Z" level=info msg="shim disconnected" id=fcc55157a756393fa1f0347e5203fa7ba8c908d85bbdd84b7958146206c339eb namespace=k8s.io
Jan 23 23:56:58.514275 containerd[2035]: time="2026-01-23T23:56:58.514080692Z" level=warning msg="cleaning up after shim disconnected" id=fcc55157a756393fa1f0347e5203fa7ba8c908d85bbdd84b7958146206c339eb namespace=k8s.io
Jan 23 23:56:58.514275 containerd[2035]: time="2026-01-23T23:56:58.514103108Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 23 23:56:58.919128 systemd[1]: run-containerd-runc-k8s.io-fcc55157a756393fa1f0347e5203fa7ba8c908d85bbdd84b7958146206c339eb-runc.4lQJvT.mount: Deactivated successfully.
Jan 23 23:56:58.919306 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fcc55157a756393fa1f0347e5203fa7ba8c908d85bbdd84b7958146206c339eb-rootfs.mount: Deactivated successfully.
Jan 23 23:56:59.315827 containerd[2035]: time="2026-01-23T23:56:59.315614888Z" level=info msg="CreateContainer within sandbox \"6859d70936a82cf3f59703a46f50cf65d612d3256e725a7dd2a32dbf8242b499\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jan 23 23:56:59.351135 containerd[2035]: time="2026-01-23T23:56:59.351008181Z" level=info msg="CreateContainer within sandbox \"6859d70936a82cf3f59703a46f50cf65d612d3256e725a7dd2a32dbf8242b499\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"3ee98c511812bc1582dabdda5602dae70af0ab2852f97ad1fd3d1d1d10d91648\""
Jan 23 23:56:59.353269 containerd[2035]: time="2026-01-23T23:56:59.353205381Z" level=info msg="StartContainer for \"3ee98c511812bc1582dabdda5602dae70af0ab2852f97ad1fd3d1d1d10d91648\""
Jan 23 23:56:59.408099 systemd[1]: Started cri-containerd-3ee98c511812bc1582dabdda5602dae70af0ab2852f97ad1fd3d1d1d10d91648.scope - libcontainer container 3ee98c511812bc1582dabdda5602dae70af0ab2852f97ad1fd3d1d1d10d91648.
Jan 23 23:56:59.468014 containerd[2035]: time="2026-01-23T23:56:59.467939241Z" level=info msg="StartContainer for \"3ee98c511812bc1582dabdda5602dae70af0ab2852f97ad1fd3d1d1d10d91648\" returns successfully"
Jan 23 23:56:59.598241 containerd[2035]: time="2026-01-23T23:56:59.598025938Z" level=info msg="StopPodSandbox for \"5fc9690ef5f15fec46150eb490f9ceb7e17af0f5ebbf1c541e76b4bb45aa3482\""
Jan 23 23:56:59.598382 containerd[2035]: time="2026-01-23T23:56:59.598272622Z" level=info msg="TearDown network for sandbox \"5fc9690ef5f15fec46150eb490f9ceb7e17af0f5ebbf1c541e76b4bb45aa3482\" successfully"
Jan 23 23:56:59.598382 containerd[2035]: time="2026-01-23T23:56:59.598301878Z" level=info msg="StopPodSandbox for \"5fc9690ef5f15fec46150eb490f9ceb7e17af0f5ebbf1c541e76b4bb45aa3482\" returns successfully"
Jan 23 23:56:59.599272 containerd[2035]: time="2026-01-23T23:56:59.599204962Z" level=info msg="RemovePodSandbox for \"5fc9690ef5f15fec46150eb490f9ceb7e17af0f5ebbf1c541e76b4bb45aa3482\""
Jan 23 23:56:59.599272 containerd[2035]: time="2026-01-23T23:56:59.599267098Z" level=info msg="Forcibly stopping sandbox \"5fc9690ef5f15fec46150eb490f9ceb7e17af0f5ebbf1c541e76b4bb45aa3482\""
Jan 23 23:56:59.599478 containerd[2035]: time="2026-01-23T23:56:59.599368234Z" level=info msg="TearDown network for sandbox \"5fc9690ef5f15fec46150eb490f9ceb7e17af0f5ebbf1c541e76b4bb45aa3482\" successfully"
Jan 23 23:56:59.607866 containerd[2035]: time="2026-01-23T23:56:59.607790950Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5fc9690ef5f15fec46150eb490f9ceb7e17af0f5ebbf1c541e76b4bb45aa3482\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 23 23:56:59.608038 containerd[2035]: time="2026-01-23T23:56:59.607925314Z" level=info msg="RemovePodSandbox \"5fc9690ef5f15fec46150eb490f9ceb7e17af0f5ebbf1c541e76b4bb45aa3482\" returns successfully"
Jan 23 23:56:59.609748 containerd[2035]: time="2026-01-23T23:56:59.609660982Z" level=info msg="StopPodSandbox for \"3bd13dd45a5118a50ac77e92e6cdd341e90c151dba3b5e76045cf8cac0a3e557\""
Jan 23 23:56:59.609935 containerd[2035]: time="2026-01-23T23:56:59.609903538Z" level=info msg="TearDown network for sandbox \"3bd13dd45a5118a50ac77e92e6cdd341e90c151dba3b5e76045cf8cac0a3e557\" successfully"
Jan 23 23:56:59.610939 containerd[2035]: time="2026-01-23T23:56:59.609931426Z" level=info msg="StopPodSandbox for \"3bd13dd45a5118a50ac77e92e6cdd341e90c151dba3b5e76045cf8cac0a3e557\" returns successfully"
Jan 23 23:56:59.612140 containerd[2035]: time="2026-01-23T23:56:59.612053266Z" level=info msg="RemovePodSandbox for \"3bd13dd45a5118a50ac77e92e6cdd341e90c151dba3b5e76045cf8cac0a3e557\""
Jan 23 23:56:59.612275 containerd[2035]: time="2026-01-23T23:56:59.612142774Z" level=info msg="Forcibly stopping sandbox \"3bd13dd45a5118a50ac77e92e6cdd341e90c151dba3b5e76045cf8cac0a3e557\""
Jan 23 23:56:59.612335 containerd[2035]: time="2026-01-23T23:56:59.612297982Z" level=info msg="TearDown network for sandbox \"3bd13dd45a5118a50ac77e92e6cdd341e90c151dba3b5e76045cf8cac0a3e557\" successfully"
Jan 23 23:56:59.622828 containerd[2035]: time="2026-01-23T23:56:59.622683490Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3bd13dd45a5118a50ac77e92e6cdd341e90c151dba3b5e76045cf8cac0a3e557\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 23 23:56:59.623031 containerd[2035]: time="2026-01-23T23:56:59.622831834Z" level=info msg="RemovePodSandbox \"3bd13dd45a5118a50ac77e92e6cdd341e90c151dba3b5e76045cf8cac0a3e557\" returns successfully"
Jan 23 23:57:00.336845 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce))
Jan 23 23:57:03.269293 systemd[1]: run-containerd-runc-k8s.io-3ee98c511812bc1582dabdda5602dae70af0ab2852f97ad1fd3d1d1d10d91648-runc.etnJiS.mount: Deactivated successfully.
Jan 23 23:57:04.764489 (udev-worker)[6033]: Network interface NamePolicy= disabled on kernel command line.
Jan 23 23:57:04.770668 (udev-worker)[6034]: Network interface NamePolicy= disabled on kernel command line.
Jan 23 23:57:04.771521 systemd-networkd[1930]: lxc_health: Link UP
Jan 23 23:57:04.795698 systemd-networkd[1930]: lxc_health: Gained carrier
Jan 23 23:57:05.034750 kubelet[3406]: I0123 23:57:05.034564 3406 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-rs2xb" podStartSLOduration=11.034542769 podStartE2EDuration="11.034542769s" podCreationTimestamp="2026-01-23 23:56:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 23:57:00.363648178 +0000 UTC m=+121.196519931" watchObservedRunningTime="2026-01-23 23:57:05.034542769 +0000 UTC m=+125.867414378"
Jan 23 23:57:06.548175 systemd-networkd[1930]: lxc_health: Gained IPv6LL
Jan 23 23:57:08.870252 ntpd[1992]: Listen normally on 15 lxc_health [fe80::94e8:aaff:fed5:841c%14]:123
Jan 23 23:57:08.870935 ntpd[1992]: 23 Jan 23:57:08 ntpd[1992]: Listen normally on 15 lxc_health [fe80::94e8:aaff:fed5:841c%14]:123
Jan 23 23:57:10.194561 systemd[1]: run-containerd-runc-k8s.io-3ee98c511812bc1582dabdda5602dae70af0ab2852f97ad1fd3d1d1d10d91648-runc.GpmAns.mount: Deactivated successfully.
Jan 23 23:57:10.393911 sshd[5305]: pam_unix(sshd:session): session closed for user core
Jan 23 23:57:10.402513 systemd-logind[1998]: Session 27 logged out. Waiting for processes to exit.
Jan 23 23:57:10.402904 systemd[1]: sshd@26-172.31.20.17:22-4.153.228.146:53578.service: Deactivated successfully.
Jan 23 23:57:10.410037 systemd[1]: session-27.scope: Deactivated successfully.
Jan 23 23:57:10.419672 systemd-logind[1998]: Removed session 27.
Jan 23 23:57:59.733479 systemd[1]: cri-containerd-ac721f7eee77158770bf2a43acfef2d8d9e758d8b3b6de1ebb7c29de78c4613c.scope: Deactivated successfully.
Jan 23 23:57:59.734741 systemd[1]: cri-containerd-ac721f7eee77158770bf2a43acfef2d8d9e758d8b3b6de1ebb7c29de78c4613c.scope: Consumed 5.479s CPU time, 17.2M memory peak, 0B memory swap peak.
Jan 23 23:57:59.783454 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ac721f7eee77158770bf2a43acfef2d8d9e758d8b3b6de1ebb7c29de78c4613c-rootfs.mount: Deactivated successfully.
Jan 23 23:57:59.790168 containerd[2035]: time="2026-01-23T23:57:59.789929805Z" level=info msg="shim disconnected" id=ac721f7eee77158770bf2a43acfef2d8d9e758d8b3b6de1ebb7c29de78c4613c namespace=k8s.io
Jan 23 23:57:59.790168 containerd[2035]: time="2026-01-23T23:57:59.790032765Z" level=warning msg="cleaning up after shim disconnected" id=ac721f7eee77158770bf2a43acfef2d8d9e758d8b3b6de1ebb7c29de78c4613c namespace=k8s.io
Jan 23 23:57:59.790168 containerd[2035]: time="2026-01-23T23:57:59.790053765Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 23 23:58:00.490585 kubelet[3406]: I0123 23:58:00.489536 3406 scope.go:117] "RemoveContainer" containerID="ac721f7eee77158770bf2a43acfef2d8d9e758d8b3b6de1ebb7c29de78c4613c"
Jan 23 23:58:00.494133 containerd[2035]: time="2026-01-23T23:58:00.494060588Z" level=info msg="CreateContainer within sandbox \"068d607168c6f71e429158530f45515c7b6ce95ddd649b609399cecd0954183a\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Jan 23 23:58:00.529298 containerd[2035]: time="2026-01-23T23:58:00.529100660Z" level=info msg="CreateContainer within sandbox \"068d607168c6f71e429158530f45515c7b6ce95ddd649b609399cecd0954183a\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"366ad270d56a4f441c143e6ef6f4f96b048c36d5f68d9c22914526dd8a4454d6\""
Jan 23 23:58:00.530286 containerd[2035]: time="2026-01-23T23:58:00.530227593Z" level=info msg="StartContainer for \"366ad270d56a4f441c143e6ef6f4f96b048c36d5f68d9c22914526dd8a4454d6\""
Jan 23 23:58:00.584116 systemd[1]: Started cri-containerd-366ad270d56a4f441c143e6ef6f4f96b048c36d5f68d9c22914526dd8a4454d6.scope - libcontainer container 366ad270d56a4f441c143e6ef6f4f96b048c36d5f68d9c22914526dd8a4454d6.
Jan 23 23:58:00.661748 containerd[2035]: time="2026-01-23T23:58:00.661397613Z" level=info msg="StartContainer for \"366ad270d56a4f441c143e6ef6f4f96b048c36d5f68d9c22914526dd8a4454d6\" returns successfully"
Jan 23 23:58:02.682325 kubelet[3406]: E0123 23:58:02.681553 3406 controller.go:195] "Failed to update lease" err="Put \"https://172.31.20.17:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-20-17?timeout=10s\": context deadline exceeded"
Jan 23 23:58:03.295839 systemd[1]: cri-containerd-7bb70e405829d3afd90421867b7870b3c90afe89751ee2dcb58abef092959961.scope: Deactivated successfully.
Jan 23 23:58:03.298918 systemd[1]: cri-containerd-7bb70e405829d3afd90421867b7870b3c90afe89751ee2dcb58abef092959961.scope: Consumed 5.382s CPU time, 15.9M memory peak, 0B memory swap peak.
Jan 23 23:58:03.350860 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7bb70e405829d3afd90421867b7870b3c90afe89751ee2dcb58abef092959961-rootfs.mount: Deactivated successfully.
Jan 23 23:58:03.362104 containerd[2035]: time="2026-01-23T23:58:03.361735643Z" level=info msg="shim disconnected" id=7bb70e405829d3afd90421867b7870b3c90afe89751ee2dcb58abef092959961 namespace=k8s.io
Jan 23 23:58:03.362104 containerd[2035]: time="2026-01-23T23:58:03.361897607Z" level=warning msg="cleaning up after shim disconnected" id=7bb70e405829d3afd90421867b7870b3c90afe89751ee2dcb58abef092959961 namespace=k8s.io
Jan 23 23:58:03.362104 containerd[2035]: time="2026-01-23T23:58:03.361918787Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 23 23:58:03.504599 kubelet[3406]: I0123 23:58:03.504479 3406 scope.go:117] "RemoveContainer" containerID="7bb70e405829d3afd90421867b7870b3c90afe89751ee2dcb58abef092959961"
Jan 23 23:58:03.507966 containerd[2035]: time="2026-01-23T23:58:03.507895163Z" level=info msg="CreateContainer within sandbox \"a69287d3df9b887f5d2f62278f9b6ae5e3c0e2361fece950f76121b8a5cd1d95\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Jan 23 23:58:03.532030 containerd[2035]: time="2026-01-23T23:58:03.531950771Z" level=info msg="CreateContainer within sandbox \"a69287d3df9b887f5d2f62278f9b6ae5e3c0e2361fece950f76121b8a5cd1d95\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"e89a76af24427154228fb01217540a1ba2986495bca06a5055e903d9af210c62\""
Jan 23 23:58:03.532807 containerd[2035]: time="2026-01-23T23:58:03.532743659Z" level=info msg="StartContainer for \"e89a76af24427154228fb01217540a1ba2986495bca06a5055e903d9af210c62\""
Jan 23 23:58:03.598093 systemd[1]: Started cri-containerd-e89a76af24427154228fb01217540a1ba2986495bca06a5055e903d9af210c62.scope - libcontainer container e89a76af24427154228fb01217540a1ba2986495bca06a5055e903d9af210c62.
Jan 23 23:58:03.677586 containerd[2035]: time="2026-01-23T23:58:03.677513592Z" level=info msg="StartContainer for \"e89a76af24427154228fb01217540a1ba2986495bca06a5055e903d9af210c62\" returns successfully"
Jan 23 23:58:12.682465 kubelet[3406]: E0123 23:58:12.682267 3406 controller.go:195] "Failed to update lease" err="Put \"https://172.31.20.17:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-20-17?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"