Jan 23 23:55:37.255782 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083] Jan 23 23:55:37.255827 kernel: Linux version 6.6.119-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Fri Jan 23 22:26:47 -00 2026 Jan 23 23:55:37.255852 kernel: KASLR disabled due to lack of seed Jan 23 23:55:37.255869 kernel: efi: EFI v2.7 by EDK II Jan 23 23:55:37.255886 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7b001a98 MEMRESERVE=0x7852ee18 Jan 23 23:55:37.255901 kernel: ACPI: Early table checksum verification disabled Jan 23 23:55:37.255919 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON) Jan 23 23:55:37.255936 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001 01000013) Jan 23 23:55:37.255952 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001) Jan 23 23:55:37.255968 kernel: ACPI: DSDT 0x0000000078640000 0013D2 (v02 AMAZON AMZNDSDT 00000001 AMZN 00000001) Jan 23 23:55:37.255989 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001) Jan 23 23:55:37.256005 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001) Jan 23 23:55:37.256021 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001) Jan 23 23:55:37.256037 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001) Jan 23 23:55:37.256056 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001) Jan 23 23:55:37.256078 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001) Jan 23 23:55:37.256096 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001) Jan 23 23:55:37.256113 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200 Jan 23 
23:55:37.256130 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200') Jan 23 23:55:37.256147 kernel: printk: bootconsole [uart0] enabled Jan 23 23:55:37.256165 kernel: NUMA: Failed to initialise from firmware Jan 23 23:55:37.256183 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff] Jan 23 23:55:37.256224 kernel: NUMA: NODE_DATA [mem 0x4b583f800-0x4b5844fff] Jan 23 23:55:37.256243 kernel: Zone ranges: Jan 23 23:55:37.256260 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff] Jan 23 23:55:37.256277 kernel: DMA32 empty Jan 23 23:55:37.256300 kernel: Normal [mem 0x0000000100000000-0x00000004b5ffffff] Jan 23 23:55:37.256318 kernel: Movable zone start for each node Jan 23 23:55:37.256335 kernel: Early memory node ranges Jan 23 23:55:37.256353 kernel: node 0: [mem 0x0000000040000000-0x000000007862ffff] Jan 23 23:55:37.256370 kernel: node 0: [mem 0x0000000078630000-0x000000007863ffff] Jan 23 23:55:37.256387 kernel: node 0: [mem 0x0000000078640000-0x00000000786effff] Jan 23 23:55:37.256403 kernel: node 0: [mem 0x00000000786f0000-0x000000007872ffff] Jan 23 23:55:37.256420 kernel: node 0: [mem 0x0000000078730000-0x000000007bbfffff] Jan 23 23:55:37.256437 kernel: node 0: [mem 0x000000007bc00000-0x000000007bfdffff] Jan 23 23:55:37.256453 kernel: node 0: [mem 0x000000007bfe0000-0x000000007fffffff] Jan 23 23:55:37.256493 kernel: node 0: [mem 0x0000000400000000-0x00000004b5ffffff] Jan 23 23:55:37.256513 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000004b5ffffff] Jan 23 23:55:37.256536 kernel: On node 0, zone Normal: 8192 pages in unavailable ranges Jan 23 23:55:37.256553 kernel: psci: probing for conduit method from ACPI. Jan 23 23:55:37.256577 kernel: psci: PSCIv1.0 detected in firmware. 
Jan 23 23:55:37.256595 kernel: psci: Using standard PSCI v0.2 function IDs Jan 23 23:55:37.256613 kernel: psci: Trusted OS migration not required Jan 23 23:55:37.256635 kernel: psci: SMC Calling Convention v1.1 Jan 23 23:55:37.256653 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000001) Jan 23 23:55:37.256670 kernel: percpu: Embedded 30 pages/cpu s85672 r8192 d29016 u122880 Jan 23 23:55:37.256687 kernel: pcpu-alloc: s85672 r8192 d29016 u122880 alloc=30*4096 Jan 23 23:55:37.256705 kernel: pcpu-alloc: [0] 0 [0] 1 Jan 23 23:55:37.256723 kernel: Detected PIPT I-cache on CPU0 Jan 23 23:55:37.256740 kernel: CPU features: detected: GIC system register CPU interface Jan 23 23:55:37.256758 kernel: CPU features: detected: Spectre-v2 Jan 23 23:55:37.256775 kernel: CPU features: detected: Spectre-v3a Jan 23 23:55:37.256792 kernel: CPU features: detected: Spectre-BHB Jan 23 23:55:37.256810 kernel: CPU features: detected: ARM erratum 1742098 Jan 23 23:55:37.256831 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923 Jan 23 23:55:37.256849 kernel: alternatives: applying boot alternatives Jan 23 23:55:37.256869 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=a01f25d0714a86cf8b897276230b4ac71c04b1d69bd03a1f6d2ef96f59ef0f09 Jan 23 23:55:37.256887 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jan 23 23:55:37.256905 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jan 23 23:55:37.256923 kernel: Fallback order for Node 0: 0 Jan 23 23:55:37.256940 kernel: Built 1 zonelists, mobility grouping on. 
Total pages: 991872 Jan 23 23:55:37.256958 kernel: Policy zone: Normal Jan 23 23:55:37.256975 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jan 23 23:55:37.256992 kernel: software IO TLB: area num 2. Jan 23 23:55:37.257010 kernel: software IO TLB: mapped [mem 0x000000007c000000-0x0000000080000000] (64MB) Jan 23 23:55:37.257032 kernel: Memory: 3820096K/4030464K available (10304K kernel code, 2180K rwdata, 8112K rodata, 39424K init, 897K bss, 210368K reserved, 0K cma-reserved) Jan 23 23:55:37.257050 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Jan 23 23:55:37.257068 kernel: rcu: Preemptible hierarchical RCU implementation. Jan 23 23:55:37.257086 kernel: rcu: RCU event tracing is enabled. Jan 23 23:55:37.257104 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Jan 23 23:55:37.257122 kernel: Trampoline variant of Tasks RCU enabled. Jan 23 23:55:37.257139 kernel: Tracing variant of Tasks RCU enabled. Jan 23 23:55:37.257157 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Jan 23 23:55:37.257174 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Jan 23 23:55:37.257192 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Jan 23 23:55:37.257209 kernel: GICv3: 96 SPIs implemented Jan 23 23:55:37.257231 kernel: GICv3: 0 Extended SPIs implemented Jan 23 23:55:37.257248 kernel: Root IRQ handler: gic_handle_irq Jan 23 23:55:37.257265 kernel: GICv3: GICv3 features: 16 PPIs Jan 23 23:55:37.257283 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000 Jan 23 23:55:37.257300 kernel: ITS [mem 0x10080000-0x1009ffff] Jan 23 23:55:37.257317 kernel: ITS@0x0000000010080000: allocated 8192 Devices @4000b0000 (indirect, esz 8, psz 64K, shr 1) Jan 23 23:55:37.257336 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @4000c0000 (flat, esz 8, psz 64K, shr 1) Jan 23 23:55:37.257353 kernel: GICv3: using LPI property table @0x00000004000d0000 Jan 23 23:55:37.257370 kernel: ITS: Using hypervisor restricted LPI range [128] Jan 23 23:55:37.257388 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000004000e0000 Jan 23 23:55:37.257405 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jan 23 23:55:37.257423 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt). Jan 23 23:55:37.257445 kernel: clocksource: arch_sys_counter: mask: 0x1ffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns Jan 23 23:55:37.259518 kernel: sched_clock: 57 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns Jan 23 23:55:37.259560 kernel: Console: colour dummy device 80x25 Jan 23 23:55:37.259579 kernel: printk: console [tty1] enabled Jan 23 23:55:37.259598 kernel: ACPI: Core revision 20230628 Jan 23 23:55:37.259617 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 
166.66 BogoMIPS (lpj=83333) Jan 23 23:55:37.259636 kernel: pid_max: default: 32768 minimum: 301 Jan 23 23:55:37.259654 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Jan 23 23:55:37.259672 kernel: landlock: Up and running. Jan 23 23:55:37.259699 kernel: SELinux: Initializing. Jan 23 23:55:37.259718 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 23 23:55:37.259736 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 23 23:55:37.259755 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 23 23:55:37.259774 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 23 23:55:37.259792 kernel: rcu: Hierarchical SRCU implementation. Jan 23 23:55:37.259811 kernel: rcu: Max phase no-delay instances is 400. Jan 23 23:55:37.259829 kernel: Platform MSI: ITS@0x10080000 domain created Jan 23 23:55:37.259847 kernel: PCI/MSI: ITS@0x10080000 domain created Jan 23 23:55:37.259869 kernel: Remapping and enabling EFI services. Jan 23 23:55:37.259887 kernel: smp: Bringing up secondary CPUs ... Jan 23 23:55:37.259905 kernel: Detected PIPT I-cache on CPU1 Jan 23 23:55:37.259923 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000 Jan 23 23:55:37.259941 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000004000f0000 Jan 23 23:55:37.259959 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083] Jan 23 23:55:37.259977 kernel: smp: Brought up 1 node, 2 CPUs Jan 23 23:55:37.259994 kernel: SMP: Total of 2 processors activated. 
Jan 23 23:55:37.260012 kernel: CPU features: detected: 32-bit EL0 Support Jan 23 23:55:37.260034 kernel: CPU features: detected: 32-bit EL1 Support Jan 23 23:55:37.260052 kernel: CPU features: detected: CRC32 instructions Jan 23 23:55:37.260070 kernel: CPU: All CPU(s) started at EL1 Jan 23 23:55:37.260098 kernel: alternatives: applying system-wide alternatives Jan 23 23:55:37.260121 kernel: devtmpfs: initialized Jan 23 23:55:37.260140 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 23 23:55:37.260158 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Jan 23 23:55:37.260177 kernel: pinctrl core: initialized pinctrl subsystem Jan 23 23:55:37.260217 kernel: SMBIOS 3.0.0 present. Jan 23 23:55:37.260244 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018 Jan 23 23:55:37.260263 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 23 23:55:37.260281 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Jan 23 23:55:37.260300 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Jan 23 23:55:37.260319 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Jan 23 23:55:37.260338 kernel: audit: initializing netlink subsys (disabled) Jan 23 23:55:37.260357 kernel: audit: type=2000 audit(0.284:1): state=initialized audit_enabled=0 res=1 Jan 23 23:55:37.260376 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 23 23:55:37.260399 kernel: cpuidle: using governor menu Jan 23 23:55:37.260417 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. 
Jan 23 23:55:37.260436 kernel: ASID allocator initialised with 65536 entries Jan 23 23:55:37.260455 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 23 23:55:37.260496 kernel: Serial: AMBA PL011 UART driver Jan 23 23:55:37.260517 kernel: Modules: 17488 pages in range for non-PLT usage Jan 23 23:55:37.262536 kernel: Modules: 509008 pages in range for PLT usage Jan 23 23:55:37.262577 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jan 23 23:55:37.262597 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Jan 23 23:55:37.262630 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Jan 23 23:55:37.262649 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Jan 23 23:55:37.262669 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 23 23:55:37.262689 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Jan 23 23:55:37.262708 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Jan 23 23:55:37.262727 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Jan 23 23:55:37.262746 kernel: ACPI: Added _OSI(Module Device) Jan 23 23:55:37.262765 kernel: ACPI: Added _OSI(Processor Device) Jan 23 23:55:37.262784 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 23 23:55:37.262808 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jan 23 23:55:37.262827 kernel: ACPI: Interpreter enabled Jan 23 23:55:37.262846 kernel: ACPI: Using GIC for interrupt routing Jan 23 23:55:37.262865 kernel: ACPI: MCFG table detected, 1 entries Jan 23 23:55:37.262884 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00]) Jan 23 23:55:37.263214 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jan 23 23:55:37.263435 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] Jan 23 23:55:37.264726 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Jan 23 
23:55:37.264954 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x200fffff] reserved by PNP0C02:00 Jan 23 23:55:37.265168 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x200fffff] for [bus 00] Jan 23 23:55:37.265195 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io 0x0000-0xffff window] Jan 23 23:55:37.265214 kernel: acpiphp: Slot [1] registered Jan 23 23:55:37.265234 kernel: acpiphp: Slot [2] registered Jan 23 23:55:37.265253 kernel: acpiphp: Slot [3] registered Jan 23 23:55:37.265271 kernel: acpiphp: Slot [4] registered Jan 23 23:55:37.265290 kernel: acpiphp: Slot [5] registered Jan 23 23:55:37.265314 kernel: acpiphp: Slot [6] registered Jan 23 23:55:37.265333 kernel: acpiphp: Slot [7] registered Jan 23 23:55:37.265351 kernel: acpiphp: Slot [8] registered Jan 23 23:55:37.265370 kernel: acpiphp: Slot [9] registered Jan 23 23:55:37.265388 kernel: acpiphp: Slot [10] registered Jan 23 23:55:37.265407 kernel: acpiphp: Slot [11] registered Jan 23 23:55:37.265426 kernel: acpiphp: Slot [12] registered Jan 23 23:55:37.265444 kernel: acpiphp: Slot [13] registered Jan 23 23:55:37.265480 kernel: acpiphp: Slot [14] registered Jan 23 23:55:37.266560 kernel: acpiphp: Slot [15] registered Jan 23 23:55:37.266594 kernel: acpiphp: Slot [16] registered Jan 23 23:55:37.266614 kernel: acpiphp: Slot [17] registered Jan 23 23:55:37.266634 kernel: acpiphp: Slot [18] registered Jan 23 23:55:37.266653 kernel: acpiphp: Slot [19] registered Jan 23 23:55:37.266671 kernel: acpiphp: Slot [20] registered Jan 23 23:55:37.266690 kernel: acpiphp: Slot [21] registered Jan 23 23:55:37.266709 kernel: acpiphp: Slot [22] registered Jan 23 23:55:37.266727 kernel: acpiphp: Slot [23] registered Jan 23 23:55:37.266746 kernel: acpiphp: Slot [24] registered Jan 23 23:55:37.266769 kernel: acpiphp: Slot [25] registered Jan 23 23:55:37.266789 kernel: acpiphp: Slot [26] registered Jan 23 23:55:37.266808 kernel: acpiphp: Slot [27] registered Jan 23 23:55:37.266826 kernel: acpiphp: Slot [28] registered Jan 
23 23:55:37.266845 kernel: acpiphp: Slot [29] registered Jan 23 23:55:37.266863 kernel: acpiphp: Slot [30] registered Jan 23 23:55:37.266882 kernel: acpiphp: Slot [31] registered Jan 23 23:55:37.266901 kernel: PCI host bridge to bus 0000:00 Jan 23 23:55:37.267303 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window] Jan 23 23:55:37.267544 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window] Jan 23 23:55:37.267744 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window] Jan 23 23:55:37.267938 kernel: pci_bus 0000:00: root bus resource [bus 00] Jan 23 23:55:37.268182 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000 Jan 23 23:55:37.268438 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003 Jan 23 23:55:37.268681 kernel: pci 0000:00:01.0: reg 0x10: [mem 0x80118000-0x80118fff] Jan 23 23:55:37.268918 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802 Jan 23 23:55:37.269135 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80114000-0x80117fff] Jan 23 23:55:37.269343 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold Jan 23 23:55:37.269592 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000 Jan 23 23:55:37.269812 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80110000-0x80113fff] Jan 23 23:55:37.270023 kernel: pci 0000:00:05.0: reg 0x18: [mem 0x80000000-0x800fffff pref] Jan 23 23:55:37.270233 kernel: pci 0000:00:05.0: reg 0x20: [mem 0x80100000-0x8010ffff] Jan 23 23:55:37.270451 kernel: pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold Jan 23 23:55:37.271515 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window] Jan 23 23:55:37.271711 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] Jan 23 23:55:37.271901 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window] Jan 23 23:55:37.271927 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 Jan 23 23:55:37.271947 kernel: ACPI: PCI: Interrupt link 
GSI1 configured for IRQ 36 Jan 23 23:55:37.271966 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 Jan 23 23:55:37.271985 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 Jan 23 23:55:37.272013 kernel: iommu: Default domain type: Translated Jan 23 23:55:37.272032 kernel: iommu: DMA domain TLB invalidation policy: strict mode Jan 23 23:55:37.272051 kernel: efivars: Registered efivars operations Jan 23 23:55:37.272069 kernel: vgaarb: loaded Jan 23 23:55:37.272087 kernel: clocksource: Switched to clocksource arch_sys_counter Jan 23 23:55:37.272106 kernel: VFS: Disk quotas dquot_6.6.0 Jan 23 23:55:37.272124 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 23 23:55:37.272143 kernel: pnp: PnP ACPI init Jan 23 23:55:37.272391 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved Jan 23 23:55:37.272425 kernel: pnp: PnP ACPI: found 1 devices Jan 23 23:55:37.272445 kernel: NET: Registered PF_INET protocol family Jan 23 23:55:37.272479 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Jan 23 23:55:37.272504 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Jan 23 23:55:37.272523 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 23 23:55:37.272542 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Jan 23 23:55:37.272561 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Jan 23 23:55:37.272580 kernel: TCP: Hash tables configured (established 32768 bind 32768) Jan 23 23:55:37.272605 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 23 23:55:37.272624 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 23 23:55:37.272643 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 23 23:55:37.272661 kernel: PCI: CLS 0 bytes, default 64 Jan 23 23:55:37.272680 kernel: kvm [1]: HYP mode not available Jan 23 
23:55:37.272698 kernel: Initialise system trusted keyrings Jan 23 23:55:37.272717 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Jan 23 23:55:37.272735 kernel: Key type asymmetric registered Jan 23 23:55:37.272754 kernel: Asymmetric key parser 'x509' registered Jan 23 23:55:37.272777 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Jan 23 23:55:37.272796 kernel: io scheduler mq-deadline registered Jan 23 23:55:37.272814 kernel: io scheduler kyber registered Jan 23 23:55:37.272833 kernel: io scheduler bfq registered Jan 23 23:55:37.273058 kernel: pl061_gpio ARMH0061:00: PL061 GPIO chip registered Jan 23 23:55:37.273087 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Jan 23 23:55:37.273107 kernel: ACPI: button: Power Button [PWRB] Jan 23 23:55:37.273126 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input1 Jan 23 23:55:37.273145 kernel: ACPI: button: Sleep Button [SLPB] Jan 23 23:55:37.273170 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 23 23:55:37.273190 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37 Jan 23 23:55:37.276040 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012) Jan 23 23:55:37.276072 kernel: printk: console [ttyS0] disabled Jan 23 23:55:37.276092 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A Jan 23 23:55:37.276111 kernel: printk: console [ttyS0] enabled Jan 23 23:55:37.276130 kernel: printk: bootconsole [uart0] disabled Jan 23 23:55:37.276149 kernel: thunder_xcv, ver 1.0 Jan 23 23:55:37.276168 kernel: thunder_bgx, ver 1.0 Jan 23 23:55:37.276214 kernel: nicpf, ver 1.0 Jan 23 23:55:37.276235 kernel: nicvf, ver 1.0 Jan 23 23:55:37.276460 kernel: rtc-efi rtc-efi.0: registered as rtc0 Jan 23 23:55:37.276723 kernel: rtc-efi rtc-efi.0: setting system clock to 2026-01-23T23:55:36 UTC (1769212536) Jan 23 23:55:37.276750 kernel: hid: raw HID events driver (C) Jiri 
Kosina Jan 23 23:55:37.276769 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 counters available Jan 23 23:55:37.276788 kernel: watchdog: Delayed init of the lockup detector failed: -19 Jan 23 23:55:37.276807 kernel: watchdog: Hard watchdog permanently disabled Jan 23 23:55:37.276834 kernel: NET: Registered PF_INET6 protocol family Jan 23 23:55:37.276853 kernel: Segment Routing with IPv6 Jan 23 23:55:37.276871 kernel: In-situ OAM (IOAM) with IPv6 Jan 23 23:55:37.276890 kernel: NET: Registered PF_PACKET protocol family Jan 23 23:55:37.276908 kernel: Key type dns_resolver registered Jan 23 23:55:37.276927 kernel: registered taskstats version 1 Jan 23 23:55:37.276945 kernel: Loading compiled-in X.509 certificates Jan 23 23:55:37.276964 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.119-flatcar: e1080b1efd8e2d5332b6814128fba42796535445' Jan 23 23:55:37.276982 kernel: Key type .fscrypt registered Jan 23 23:55:37.277006 kernel: Key type fscrypt-provisioning registered Jan 23 23:55:37.277025 kernel: ima: No TPM chip found, activating TPM-bypass! 
Jan 23 23:55:37.277043 kernel: ima: Allocated hash algorithm: sha1 Jan 23 23:55:37.277062 kernel: ima: No architecture policies found Jan 23 23:55:37.277080 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Jan 23 23:55:37.277099 kernel: clk: Disabling unused clocks Jan 23 23:55:37.277117 kernel: Freeing unused kernel memory: 39424K Jan 23 23:55:37.277136 kernel: Run /init as init process Jan 23 23:55:37.277154 kernel: with arguments: Jan 23 23:55:37.277177 kernel: /init Jan 23 23:55:37.277195 kernel: with environment: Jan 23 23:55:37.277213 kernel: HOME=/ Jan 23 23:55:37.277232 kernel: TERM=linux Jan 23 23:55:37.277255 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 23 23:55:37.277279 systemd[1]: Detected virtualization amazon. Jan 23 23:55:37.277300 systemd[1]: Detected architecture arm64. Jan 23 23:55:37.277319 systemd[1]: Running in initrd. Jan 23 23:55:37.277344 systemd[1]: No hostname configured, using default hostname. Jan 23 23:55:37.277364 systemd[1]: Hostname set to . Jan 23 23:55:37.277384 systemd[1]: Initializing machine ID from VM UUID. Jan 23 23:55:37.277404 systemd[1]: Queued start job for default target initrd.target. Jan 23 23:55:37.277424 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 23 23:55:37.277445 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 23 23:55:37.277495 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 23 23:55:37.277521 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... 
Jan 23 23:55:37.277549 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 23 23:55:37.277570 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 23 23:55:37.277593 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jan 23 23:55:37.277614 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jan 23 23:55:37.277635 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 23 23:55:37.277655 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 23 23:55:37.277680 systemd[1]: Reached target paths.target - Path Units. Jan 23 23:55:37.277701 systemd[1]: Reached target slices.target - Slice Units. Jan 23 23:55:37.277721 systemd[1]: Reached target swap.target - Swaps. Jan 23 23:55:37.277741 systemd[1]: Reached target timers.target - Timer Units. Jan 23 23:55:37.277761 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 23 23:55:37.277781 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 23 23:55:37.277802 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 23 23:55:37.277822 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jan 23 23:55:37.277842 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 23 23:55:37.277867 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 23 23:55:37.277888 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 23 23:55:37.277908 systemd[1]: Reached target sockets.target - Socket Units. Jan 23 23:55:37.277928 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... 
Jan 23 23:55:37.277948 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 23 23:55:37.277968 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 23 23:55:37.277989 systemd[1]: Starting systemd-fsck-usr.service... Jan 23 23:55:37.278009 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 23 23:55:37.278029 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 23 23:55:37.278054 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 23 23:55:37.278074 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 23 23:55:37.278094 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 23 23:55:37.278114 systemd[1]: Finished systemd-fsck-usr.service. Jan 23 23:55:37.278170 systemd-journald[252]: Collecting audit messages is disabled. Jan 23 23:55:37.278220 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 23 23:55:37.278242 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 23 23:55:37.278263 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 23 23:55:37.278288 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 23 23:55:37.278308 systemd-journald[252]: Journal started Jan 23 23:55:37.278427 systemd-journald[252]: Runtime Journal (/run/log/journal/ec2e3daeced8aecaf35f98cd3a487e28) is 8.0M, max 75.3M, 67.3M free. Jan 23 23:55:37.233196 systemd-modules-load[253]: Inserted module 'overlay' Jan 23 23:55:37.284517 systemd[1]: Started systemd-journald.service - Journal Service. 
Jan 23 23:55:37.290305 kernel: Bridge firewalling registered Jan 23 23:55:37.289450 systemd-modules-load[253]: Inserted module 'br_netfilter' Jan 23 23:55:37.293133 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 23 23:55:37.305788 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 23 23:55:37.317738 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 23 23:55:37.324755 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 23 23:55:37.330745 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 23 23:55:37.363941 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 23 23:55:37.376270 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 23 23:55:37.387899 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 23 23:55:37.400330 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 23 23:55:37.407764 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 23 23:55:37.429746 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... 
Jan 23 23:55:37.466065 dracut-cmdline[291]: dracut-dracut-053 Jan 23 23:55:37.473194 dracut-cmdline[291]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=a01f25d0714a86cf8b897276230b4ac71c04b1d69bd03a1f6d2ef96f59ef0f09 Jan 23 23:55:37.503196 systemd-resolved[285]: Positive Trust Anchors: Jan 23 23:55:37.504159 systemd-resolved[285]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 23 23:55:37.505380 systemd-resolved[285]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 23 23:55:37.631916 kernel: SCSI subsystem initialized Jan 23 23:55:37.639590 kernel: Loading iSCSI transport class v2.0-870. Jan 23 23:55:37.651581 kernel: iscsi: registered transport (tcp) Jan 23 23:55:37.674579 kernel: iscsi: registered transport (qla4xxx) Jan 23 23:55:37.674652 kernel: QLogic iSCSI HBA Driver Jan 23 23:55:37.757613 kernel: random: crng init done Jan 23 23:55:37.756807 systemd-resolved[285]: Defaulting to hostname 'linux'. Jan 23 23:55:37.761187 systemd[1]: Started systemd-resolved.service - Network Name Resolution. 
Jan 23 23:55:37.763918 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 23 23:55:37.791416 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 23 23:55:37.802724 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 23 23:55:37.841371 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jan 23 23:55:37.841448 kernel: device-mapper: uevent: version 1.0.3 Jan 23 23:55:37.841498 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jan 23 23:55:37.909510 kernel: raid6: neonx8 gen() 6666 MB/s Jan 23 23:55:37.927499 kernel: raid6: neonx4 gen() 6496 MB/s Jan 23 23:55:37.945499 kernel: raid6: neonx2 gen() 5438 MB/s Jan 23 23:55:37.963500 kernel: raid6: neonx1 gen() 3955 MB/s Jan 23 23:55:37.980498 kernel: raid6: int64x8 gen() 3804 MB/s Jan 23 23:55:37.997498 kernel: raid6: int64x4 gen() 3723 MB/s Jan 23 23:55:38.015499 kernel: raid6: int64x2 gen() 3594 MB/s Jan 23 23:55:38.033726 kernel: raid6: int64x1 gen() 2772 MB/s Jan 23 23:55:38.033770 kernel: raid6: using algorithm neonx8 gen() 6666 MB/s Jan 23 23:55:38.052519 kernel: raid6: .... xor() 4920 MB/s, rmw enabled Jan 23 23:55:38.052556 kernel: raid6: using neon recovery algorithm Jan 23 23:55:38.060503 kernel: xor: measuring software checksum speed Jan 23 23:55:38.062762 kernel: 8regs : 10280 MB/sec Jan 23 23:55:38.062795 kernel: 32regs : 11900 MB/sec Jan 23 23:55:38.064087 kernel: arm64_neon : 9502 MB/sec Jan 23 23:55:38.064120 kernel: xor: using function: 32regs (11900 MB/sec) Jan 23 23:55:38.148525 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 23 23:55:38.167615 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 23 23:55:38.178823 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 23 23:55:38.220098 systemd-udevd[472]: Using default interface naming scheme 'v255'. 
Jan 23 23:55:38.229857 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 23 23:55:38.246718 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 23 23:55:38.283487 dracut-pre-trigger[479]: rd.md=0: removing MD RAID activation
Jan 23 23:55:38.340272 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 23 23:55:38.349772 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 23 23:55:38.471294 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 23 23:55:38.485955 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 23 23:55:38.534697 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 23 23:55:38.537574 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 23 23:55:38.540496 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 23 23:55:38.545410 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 23 23:55:38.566065 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 23 23:55:38.608537 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 23 23:55:38.673525 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Jan 23 23:55:38.673592 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012)
Jan 23 23:55:38.680355 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 23 23:55:38.685528 kernel: ena 0000:00:05.0: ENA device version: 0.10
Jan 23 23:55:38.685831 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
Jan 23 23:55:38.681643 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 23 23:55:38.692229 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 23 23:55:38.710584 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80110000, mac addr 06:93:9a:bb:3e:1d
Jan 23 23:55:38.694933 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 23 23:55:38.695260 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 23 23:55:38.701856 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 23 23:55:38.713242 (udev-worker)[529]: Network interface NamePolicy= disabled on kernel command line.
Jan 23 23:55:38.728549 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 23 23:55:38.750525 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35
Jan 23 23:55:38.750586 kernel: nvme nvme0: pci function 0000:00:04.0
Jan 23 23:55:38.763739 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Jan 23 23:55:38.775280 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jan 23 23:55:38.775360 kernel: GPT:9289727 != 33554431
Jan 23 23:55:38.775387 kernel: GPT:Alternate GPT header not at the end of the disk.
Jan 23 23:55:38.776348 kernel: GPT:9289727 != 33554431
Jan 23 23:55:38.777798 kernel: GPT: Use GNU Parted to correct GPT errors.
Jan 23 23:55:38.777846 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jan 23 23:55:38.778376 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 23 23:55:38.796771 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 23 23:55:38.843294 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 23 23:55:38.857346 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/nvme0n1p6 scanned by (udev-worker) (520)
Jan 23 23:55:38.923512 kernel: BTRFS: device fsid 6d31cc5b-4da2-4320-9991-d4bd2fc0f7fe devid 1 transid 34 /dev/nvme0n1p3 scanned by (udev-worker) (530)
Jan 23 23:55:38.970872 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT.
Jan 23 23:55:38.992964 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM.
Jan 23 23:55:39.023048 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Jan 23 23:55:39.037989 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A.
Jan 23 23:55:39.045030 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A.
Jan 23 23:55:39.067779 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 23 23:55:39.080697 disk-uuid[663]: Primary Header is updated.
Jan 23 23:55:39.080697 disk-uuid[663]: Secondary Entries is updated.
Jan 23 23:55:39.080697 disk-uuid[663]: Secondary Header is updated.
Jan 23 23:55:39.091536 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jan 23 23:55:39.101500 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jan 23 23:55:39.108506 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jan 23 23:55:40.112489 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jan 23 23:55:40.112560 disk-uuid[664]: The operation has completed successfully.
Jan 23 23:55:40.296265 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 23 23:55:40.298687 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 23 23:55:40.346776 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
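The `GPT:9289727 != 33554431` warnings above mean the primary GPT header points at a backup header left where the original disk image ended, not at the last LBA of the (larger) EBS volume; disk-uuid.service then rewrites the secondary header and entries, after which the warnings stop. A minimal sketch of the arithmetic behind those two numbers, assuming 512-byte logical sectors (the log does not state the sector size):

```python
# Sketch of the numbers in "GPT:9289727 != 33554431", assuming 512-byte
# logical sectors (an assumption; not stated in the log).
SECTOR = 512

def backup_header_lba(disk_bytes: int) -> int:
    """A valid backup GPT header occupies the very last logical sector."""
    return disk_bytes // SECTOR - 1

# Where the primary header currently points the backup header
# (i.e. the last LBA of the original image):
image_backup_lba = 9289727            # from the log
image_bytes = (image_backup_lba + 1) * SECTOR   # ~4.4 GiB image

# Where the backup header should live on the grown volume:
disk_last_lba = backup_header_lba(16 * 1024**3)  # a 16 GiB volume

print(image_backup_lba, disk_last_lba)  # 9289727 33554431
```

Outside of Flatcar's disk-uuid.service, the conventional manual fix is to relocate the backup structures to the end of the disk (e.g. `sgdisk -e /dev/nvme0n1`), as the kernel's "Use GNU Parted to correct GPT errors" hint suggests.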
Jan 23 23:55:40.370111 sh[1006]: Success
Jan 23 23:55:40.396507 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Jan 23 23:55:40.498280 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 23 23:55:40.515686 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 23 23:55:40.522135 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 23 23:55:40.550972 kernel: BTRFS info (device dm-0): first mount of filesystem 6d31cc5b-4da2-4320-9991-d4bd2fc0f7fe
Jan 23 23:55:40.551033 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Jan 23 23:55:40.553211 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jan 23 23:55:40.553247 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 23 23:55:40.554826 kernel: BTRFS info (device dm-0): using free space tree
Jan 23 23:55:40.650512 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Jan 23 23:55:40.664454 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 23 23:55:40.669090 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 23 23:55:40.682718 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 23 23:55:40.693261 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 23 23:55:40.713867 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 821118c6-9a33-48e6-be55-657b51b768c7
Jan 23 23:55:40.713937 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Jan 23 23:55:40.713965 kernel: BTRFS info (device nvme0n1p6): using free space tree
Jan 23 23:55:40.721522 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Jan 23 23:55:40.740857 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jan 23 23:55:40.744707 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 821118c6-9a33-48e6-be55-657b51b768c7
Jan 23 23:55:40.754384 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 23 23:55:40.766636 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 23 23:55:40.892774 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 23 23:55:40.908789 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 23 23:55:40.971691 systemd-networkd[1206]: lo: Link UP
Jan 23 23:55:40.972144 systemd-networkd[1206]: lo: Gained carrier
Jan 23 23:55:40.975630 systemd-networkd[1206]: Enumeration completed
Jan 23 23:55:40.976335 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 23 23:55:40.977945 systemd-networkd[1206]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 23 23:55:40.977952 systemd-networkd[1206]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 23 23:55:40.980321 systemd[1]: Reached target network.target - Network.
Jan 23 23:55:40.988866 systemd-networkd[1206]: eth0: Link UP
Jan 23 23:55:40.988874 systemd-networkd[1206]: eth0: Gained carrier
Jan 23 23:55:40.988891 systemd-networkd[1206]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
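The "based on potentially unpredictable interface name" warnings above come from networkd matching the catch-all zz-default.network by interface name, which can change across reboots or instance types. A unit matching on the adapter's MAC address (06:93:9a:bb:3e:1d from the ENA line earlier in this log) sidesteps that; this is a hypothetical sketch, not a unit present on the instance:

```ini
# /etc/systemd/network/10-ena.network -- hypothetical; pins the match to the
# ENA adapter's MAC address instead of a potentially unpredictable name.
[Match]
MACAddress=06:93:9a:bb:3e:1d

[Network]
DHCP=yes
```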
Jan 23 23:55:41.017245 systemd-networkd[1206]: eth0: DHCPv4 address 172.31.27.234/20, gateway 172.31.16.1 acquired from 172.31.16.1
Jan 23 23:55:41.210650 ignition[1101]: Ignition 2.19.0
Jan 23 23:55:41.210670 ignition[1101]: Stage: fetch-offline
Jan 23 23:55:41.212312 ignition[1101]: no configs at "/usr/lib/ignition/base.d"
Jan 23 23:55:41.212336 ignition[1101]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 23 23:55:41.213350 ignition[1101]: Ignition finished successfully
Jan 23 23:55:41.225649 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 23 23:55:41.242735 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jan 23 23:55:41.268151 ignition[1216]: Ignition 2.19.0
Jan 23 23:55:41.268191 ignition[1216]: Stage: fetch
Jan 23 23:55:41.269346 ignition[1216]: no configs at "/usr/lib/ignition/base.d"
Jan 23 23:55:41.269372 ignition[1216]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 23 23:55:41.269574 ignition[1216]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 23 23:55:41.295495 ignition[1216]: PUT result: OK
Jan 23 23:55:41.298834 ignition[1216]: parsed url from cmdline: ""
Jan 23 23:55:41.298857 ignition[1216]: no config URL provided
Jan 23 23:55:41.298877 ignition[1216]: reading system config file "/usr/lib/ignition/user.ign"
Jan 23 23:55:41.298930 ignition[1216]: no config at "/usr/lib/ignition/user.ign"
Jan 23 23:55:41.298967 ignition[1216]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 23 23:55:41.303300 ignition[1216]: PUT result: OK
Jan 23 23:55:41.303375 ignition[1216]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1
Jan 23 23:55:41.305865 ignition[1216]: GET result: OK
Jan 23 23:55:41.308259 ignition[1216]: parsing config with SHA512: cceefddc9771b0e4deade21529221cb002bd67297065e12cfc6a8294ca3646876120bd9eef0468312c3a955efed3cb34c6260edc62d06cab284469ef54b1ee35
Jan 23 23:55:41.319227 unknown[1216]: fetched base config from "system"
Jan 23 23:55:41.319260 unknown[1216]: fetched base config from "system"
Jan 23 23:55:41.319275 unknown[1216]: fetched user config from "aws"
Jan 23 23:55:41.326602 ignition[1216]: fetch: fetch complete
Jan 23 23:55:41.326630 ignition[1216]: fetch: fetch passed
Jan 23 23:55:41.326735 ignition[1216]: Ignition finished successfully
Jan 23 23:55:41.332846 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jan 23 23:55:41.349693 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 23 23:55:41.371992 ignition[1223]: Ignition 2.19.0
Jan 23 23:55:41.372743 ignition[1223]: Stage: kargs
Jan 23 23:55:41.373387 ignition[1223]: no configs at "/usr/lib/ignition/base.d"
Jan 23 23:55:41.373411 ignition[1223]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 23 23:55:41.373588 ignition[1223]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 23 23:55:41.382809 ignition[1223]: PUT result: OK
Jan 23 23:55:41.387616 ignition[1223]: kargs: kargs passed
Jan 23 23:55:41.387901 ignition[1223]: Ignition finished successfully
Jan 23 23:55:41.394532 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 23 23:55:41.404882 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 23 23:55:41.428853 ignition[1229]: Ignition 2.19.0
Jan 23 23:55:41.428873 ignition[1229]: Stage: disks
Jan 23 23:55:41.429996 ignition[1229]: no configs at "/usr/lib/ignition/base.d"
Jan 23 23:55:41.430024 ignition[1229]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 23 23:55:41.430189 ignition[1229]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 23 23:55:41.434904 ignition[1229]: PUT result: OK
Jan 23 23:55:41.443995 ignition[1229]: disks: disks passed
Jan 23 23:55:41.444099 ignition[1229]: Ignition finished successfully
Jan 23 23:55:41.447918 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 23 23:55:41.460626 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
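The PUT-then-GET pattern in the Ignition fetch stage above is the IMDSv2 flow: obtain a session token with a PUT, present it on the user-data GET, then identify the fetched config by its SHA512. A hedged sketch of that flow (the endpoint paths are from the log; the header names are the standard IMDSv2 ones, which the log itself does not show):

```python
import hashlib
import urllib.request

IMDS = "http://169.254.169.254"  # link-local metadata endpoint, as in the log

def token_request(ttl_seconds: int = 300) -> urllib.request.Request:
    """Build the IMDSv2 session-token request (the PUT in the log)."""
    return urllib.request.Request(
        f"{IMDS}/latest/api/token",
        method="PUT",
        headers={"X-aws-ec2-metadata-token-ttl-seconds": str(ttl_seconds)},
    )

def userdata_request(token: str) -> urllib.request.Request:
    """Build the user-data GET, presenting the session token."""
    return urllib.request.Request(
        f"{IMDS}/2019-10-01/user-data",
        headers={"X-aws-ec2-metadata-token": token},
    )

def config_digest(config: bytes) -> str:
    """SHA512 of the fetched config, as Ignition logs it before parsing."""
    return hashlib.sha512(config).hexdigest()

# On an actual EC2 instance (not runnable elsewhere):
#   token = urllib.request.urlopen(token_request()).read().decode()
#   data  = urllib.request.urlopen(userdata_request(token)).read()
#   print(config_digest(data))
```

The later kargs, disks, mount, files, and umount stages each repeat the token PUT, which is why "PUT http://169.254.169.254/latest/api/token: attempt #1" recurs throughout the log.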
Jan 23 23:55:41.463278 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 23 23:55:41.465986 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 23 23:55:41.468310 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 23 23:55:41.473481 systemd[1]: Reached target basic.target - Basic System.
Jan 23 23:55:41.491820 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 23 23:55:41.532885 systemd-fsck[1237]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Jan 23 23:55:41.537335 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 23 23:55:41.549800 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 23 23:55:41.632522 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 4f5f6971-6639-4171-835a-63d34aadb0e5 r/w with ordered data mode. Quota mode: none.
Jan 23 23:55:41.632723 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 23 23:55:41.636908 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 23 23:55:41.651634 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 23 23:55:41.659653 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 23 23:55:41.673054 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jan 23 23:55:41.673156 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 23 23:55:41.673208 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 23 23:55:41.681924 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 23 23:55:41.698667 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 23 23:55:41.707500 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/nvme0n1p6 scanned by mount (1256)
Jan 23 23:55:41.711554 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 821118c6-9a33-48e6-be55-657b51b768c7
Jan 23 23:55:41.711624 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Jan 23 23:55:41.711652 kernel: BTRFS info (device nvme0n1p6): using free space tree
Jan 23 23:55:41.721516 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Jan 23 23:55:41.723888 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 23 23:55:41.985789 initrd-setup-root[1280]: cut: /sysroot/etc/passwd: No such file or directory
Jan 23 23:55:42.006456 initrd-setup-root[1287]: cut: /sysroot/etc/group: No such file or directory
Jan 23 23:55:42.015378 initrd-setup-root[1294]: cut: /sysroot/etc/shadow: No such file or directory
Jan 23 23:55:42.024650 initrd-setup-root[1301]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 23 23:55:42.340856 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 23 23:55:42.351666 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 23 23:55:42.367710 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 23 23:55:42.387020 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 23 23:55:42.389600 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 821118c6-9a33-48e6-be55-657b51b768c7
Jan 23 23:55:42.411704 systemd-networkd[1206]: eth0: Gained IPv6LL
Jan 23 23:55:42.422707 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 23 23:55:42.440362 ignition[1370]: INFO : Ignition 2.19.0
Jan 23 23:55:42.442602 ignition[1370]: INFO : Stage: mount
Jan 23 23:55:42.444345 ignition[1370]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 23 23:55:42.444345 ignition[1370]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 23 23:55:42.444345 ignition[1370]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 23 23:55:42.452208 ignition[1370]: INFO : PUT result: OK
Jan 23 23:55:42.456877 ignition[1370]: INFO : mount: mount passed
Jan 23 23:55:42.458683 ignition[1370]: INFO : Ignition finished successfully
Jan 23 23:55:42.462767 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 23 23:55:42.472702 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 23 23:55:42.645102 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 23 23:55:42.667496 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 scanned by mount (1380)
Jan 23 23:55:42.671649 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 821118c6-9a33-48e6-be55-657b51b768c7
Jan 23 23:55:42.671699 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Jan 23 23:55:42.671725 kernel: BTRFS info (device nvme0n1p6): using free space tree
Jan 23 23:55:42.678522 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Jan 23 23:55:42.682036 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 23 23:55:42.725105 ignition[1397]: INFO : Ignition 2.19.0
Jan 23 23:55:42.727275 ignition[1397]: INFO : Stage: files
Jan 23 23:55:42.727275 ignition[1397]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 23 23:55:42.727275 ignition[1397]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 23 23:55:42.727275 ignition[1397]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 23 23:55:42.736734 ignition[1397]: INFO : PUT result: OK
Jan 23 23:55:42.741824 ignition[1397]: DEBUG : files: compiled without relabeling support, skipping
Jan 23 23:55:42.744918 ignition[1397]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jan 23 23:55:42.744918 ignition[1397]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 23 23:55:42.806252 ignition[1397]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 23 23:55:42.809772 ignition[1397]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jan 23 23:55:42.813192 unknown[1397]: wrote ssh authorized keys file for user: core
Jan 23 23:55:42.816687 ignition[1397]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 23 23:55:42.820724 ignition[1397]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Jan 23 23:55:42.825207 ignition[1397]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1
Jan 23 23:55:42.921145 ignition[1397]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jan 23 23:55:43.081992 ignition[1397]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Jan 23 23:55:43.081992 ignition[1397]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jan 23 23:55:43.081992 ignition[1397]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1
Jan 23 23:55:43.278497 ignition[1397]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Jan 23 23:55:43.387010 ignition[1397]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jan 23 23:55:43.391142 ignition[1397]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Jan 23 23:55:43.391142 ignition[1397]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Jan 23 23:55:43.391142 ignition[1397]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Jan 23 23:55:43.391142 ignition[1397]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jan 23 23:55:43.391142 ignition[1397]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 23 23:55:43.391142 ignition[1397]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 23 23:55:43.391142 ignition[1397]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 23 23:55:43.391142 ignition[1397]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 23 23:55:43.391142 ignition[1397]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jan 23 23:55:43.391142 ignition[1397]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 23 23:55:43.391142 ignition[1397]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Jan 23 23:55:43.391142 ignition[1397]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Jan 23 23:55:43.391142 ignition[1397]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Jan 23 23:55:43.391142 ignition[1397]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-arm64.raw: attempt #1
Jan 23 23:55:43.794125 ignition[1397]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Jan 23 23:55:44.142575 ignition[1397]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Jan 23 23:55:44.147672 ignition[1397]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Jan 23 23:55:44.147672 ignition[1397]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 23 23:55:44.147672 ignition[1397]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 23 23:55:44.147672 ignition[1397]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Jan 23 23:55:44.147672 ignition[1397]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service"
Jan 23 23:55:44.147672 ignition[1397]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service"
Jan 23 23:55:44.147672 ignition[1397]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json"
Jan 23 23:55:44.147672 ignition[1397]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jan 23 23:55:44.147672 ignition[1397]: INFO : files: files passed
Jan 23 23:55:44.147672 ignition[1397]: INFO : Ignition finished successfully
Jan 23 23:55:44.180958 systemd[1]: Finished ignition-files.service - Ignition (files).
Jan 23 23:55:44.191948 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jan 23 23:55:44.199118 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jan 23 23:55:44.209206 systemd[1]: ignition-quench.service: Deactivated successfully.
Jan 23 23:55:44.209454 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jan 23 23:55:44.236166 initrd-setup-root-after-ignition[1426]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 23 23:55:44.236166 initrd-setup-root-after-ignition[1426]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jan 23 23:55:44.246041 initrd-setup-root-after-ignition[1430]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 23 23:55:44.251854 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 23 23:55:44.259173 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jan 23 23:55:44.272732 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jan 23 23:55:44.321953 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 23 23:55:44.322300 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jan 23 23:55:44.327812 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
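The files-stage operations above (ssh keys for user "core", the helm and cilium downloads, the prepare-helm.service unit with its enable preset) correspond to a user config roughly like the following Butane sketch. This is a hypothetical reconstruction of the shape of that config, not the actual config the instance fetched; the key and unit contents are placeholders:

```yaml
# Hypothetical Butane config (compiled to Ignition) matching the ops logged
# above. Placeholder values are marked; only the paths/URLs come from the log.
variant: flatcar
version: 1.0.0
passwd:
  users:
    - name: core
      ssh_authorized_keys:
        - ssh-ed25519 AAAA... # placeholder key
storage:
  files:
    - path: /opt/helm-v3.17.3-linux-arm64.tar.gz        # op(3)
      contents:
        source: https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz
    - path: /opt/bin/cilium.tar.gz                      # op(4)
      contents:
        source: https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz
  links:
    - path: /etc/extensions/kubernetes.raw              # op(a)
      target: /opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw
systemd:
  units:
    - name: prepare-helm.service                        # op(c)/op(e)
      enabled: true
      contents: |
        [Unit]
        Description=Unpack helm (placeholder contents)
        [Install]
        WantedBy=multi-user.target
```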
Jan 23 23:55:44.331144 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jan 23 23:55:44.335447 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jan 23 23:55:44.348301 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jan 23 23:55:44.388983 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 23 23:55:44.401911 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jan 23 23:55:44.425739 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jan 23 23:55:44.431204 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 23 23:55:44.436743 systemd[1]: Stopped target timers.target - Timer Units.
Jan 23 23:55:44.442085 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 23 23:55:44.443587 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 23 23:55:44.449976 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jan 23 23:55:44.450277 systemd[1]: Stopped target basic.target - Basic System.
Jan 23 23:55:44.456784 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jan 23 23:55:44.459581 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 23 23:55:44.467350 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jan 23 23:55:44.474262 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jan 23 23:55:44.477439 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 23 23:55:44.485060 systemd[1]: Stopped target sysinit.target - System Initialization.
Jan 23 23:55:44.488026 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jan 23 23:55:44.494670 systemd[1]: Stopped target swap.target - Swaps.
Jan 23 23:55:44.496617 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 23 23:55:44.496843 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jan 23 23:55:44.499596 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jan 23 23:55:44.502274 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 23 23:55:44.516275 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jan 23 23:55:44.516444 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 23 23:55:44.521456 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 23 23:55:44.521804 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jan 23 23:55:44.531654 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jan 23 23:55:44.534291 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 23 23:55:44.540164 systemd[1]: ignition-files.service: Deactivated successfully.
Jan 23 23:55:44.540377 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jan 23 23:55:44.552860 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jan 23 23:55:44.557570 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jan 23 23:55:44.557858 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 23 23:55:44.571219 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jan 23 23:55:44.575654 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jan 23 23:55:44.576599 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 23 23:55:44.586335 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jan 23 23:55:44.587448 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 23 23:55:44.611883 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jan 23 23:55:44.613564 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jan 23 23:55:44.632674 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jan 23 23:55:44.638082 ignition[1450]: INFO : Ignition 2.19.0
Jan 23 23:55:44.638082 ignition[1450]: INFO : Stage: umount
Jan 23 23:55:44.638082 ignition[1450]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 23 23:55:44.638082 ignition[1450]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 23 23:55:44.638082 ignition[1450]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 23 23:55:44.638082 ignition[1450]: INFO : PUT result: OK
Jan 23 23:55:44.651601 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jan 23 23:55:44.671810 ignition[1450]: INFO : umount: umount passed
Jan 23 23:55:44.671810 ignition[1450]: INFO : Ignition finished successfully
Jan 23 23:55:44.651881 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jan 23 23:55:44.662685 systemd[1]: ignition-mount.service: Deactivated successfully.
Jan 23 23:55:44.663081 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jan 23 23:55:44.673650 systemd[1]: ignition-disks.service: Deactivated successfully.
Jan 23 23:55:44.673824 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jan 23 23:55:44.675819 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jan 23 23:55:44.675899 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jan 23 23:55:44.682584 systemd[1]: ignition-fetch.service: Deactivated successfully.
Jan 23 23:55:44.682672 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Jan 23 23:55:44.686629 systemd[1]: Stopped target network.target - Network.
Jan 23 23:55:44.688621 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jan 23 23:55:44.688706 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 23 23:55:44.691435 systemd[1]: Stopped target paths.target - Path Units.
Jan 23 23:55:44.693518 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jan 23 23:55:44.701801 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 23 23:55:44.705274 systemd[1]: Stopped target slices.target - Slice Units.
Jan 23 23:55:44.707392 systemd[1]: Stopped target sockets.target - Socket Units.
Jan 23 23:55:44.711493 systemd[1]: iscsid.socket: Deactivated successfully.
Jan 23 23:55:44.711572 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jan 23 23:55:44.718035 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jan 23 23:55:44.718106 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 23 23:55:44.722023 systemd[1]: ignition-setup.service: Deactivated successfully.
Jan 23 23:55:44.722569 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jan 23 23:55:44.724311 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jan 23 23:55:44.724389 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jan 23 23:55:44.729056 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jan 23 23:55:44.729133 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jan 23 23:55:44.735941 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jan 23 23:55:44.739187 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jan 23 23:55:44.753515 systemd-networkd[1206]: eth0: DHCPv6 lease lost
Jan 23 23:55:44.759424 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jan 23 23:55:44.774647 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jan 23 23:55:44.779383 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jan 23 23:55:44.779526 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jan 23 23:55:44.802438 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 23 23:55:44.806551 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 23 23:55:44.808519 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 23 23:55:44.812107 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 23 23:55:44.822594 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 23 23:55:44.827977 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 23 23:55:44.844310 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 23 23:55:44.846668 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 23 23:55:44.852693 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 23 23:55:44.852816 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 23 23:55:44.859918 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 23 23:55:44.860153 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 23 23:55:44.868915 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 23 23:55:44.869729 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 23 23:55:44.892719 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 23 23:55:44.893435 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 23 23:55:44.903049 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 23 23:55:44.903125 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 23 23:55:44.905879 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 23 23:55:44.905976 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 23 23:55:44.908682 systemd[1]: dracut-cmdline.service: Deactivated successfully. 
Jan 23 23:55:44.908766 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 23 23:55:44.916116 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 23 23:55:44.916217 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 23 23:55:44.935284 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 23 23:55:44.946030 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 23 23:55:44.946144 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 23 23:55:44.949202 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 23 23:55:44.949293 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 23 23:55:44.962903 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 23 23:55:44.963165 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 23 23:55:44.981519 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 23 23:55:44.981930 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 23 23:55:44.991301 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 23 23:55:45.004782 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 23 23:55:45.023268 systemd[1]: Switching root. Jan 23 23:55:45.059381 systemd-journald[252]: Journal stopped Jan 23 23:55:47.088825 systemd-journald[252]: Received SIGTERM from PID 1 (systemd). 
Jan 23 23:55:47.088972 kernel: SELinux: policy capability network_peer_controls=1 Jan 23 23:55:47.089022 kernel: SELinux: policy capability open_perms=1 Jan 23 23:55:47.089052 kernel: SELinux: policy capability extended_socket_class=1 Jan 23 23:55:47.089083 kernel: SELinux: policy capability always_check_network=0 Jan 23 23:55:47.089113 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 23 23:55:47.089145 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 23 23:55:47.089175 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 23 23:55:47.089205 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 23 23:55:47.089238 kernel: audit: type=1403 audit(1769212545.516:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 23 23:55:47.089282 systemd[1]: Successfully loaded SELinux policy in 50.675ms. Jan 23 23:55:47.089332 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 23.159ms. Jan 23 23:55:47.089368 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 23 23:55:47.089402 systemd[1]: Detected virtualization amazon. Jan 23 23:55:47.089435 systemd[1]: Detected architecture arm64. Jan 23 23:55:47.089884 systemd[1]: Detected first boot. Jan 23 23:55:47.089930 systemd[1]: Initializing machine ID from VM UUID. Jan 23 23:55:47.089965 zram_generator::config[1492]: No configuration found. Jan 23 23:55:47.090010 systemd[1]: Populated /etc with preset unit settings. Jan 23 23:55:47.090042 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 23 23:55:47.090075 systemd[1]: Stopped initrd-switch-root.service - Switch Root. 
Jan 23 23:55:47.090110 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 23 23:55:47.090143 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 23 23:55:47.090186 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 23 23:55:47.090219 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 23 23:55:47.090251 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 23 23:55:47.090286 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 23 23:55:47.090320 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 23 23:55:47.090352 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 23 23:55:47.090384 systemd[1]: Created slice user.slice - User and Session Slice. Jan 23 23:55:47.090415 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 23 23:55:47.090447 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 23 23:55:47.090502 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 23 23:55:47.090537 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 23 23:55:47.090568 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 23 23:55:47.090605 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 23 23:55:47.090637 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 23 23:55:47.090668 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 23 23:55:47.090697 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. 
Jan 23 23:55:47.090726 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 23 23:55:47.090756 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 23 23:55:47.090785 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 23 23:55:47.090824 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 23 23:55:47.090857 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 23 23:55:47.090888 systemd[1]: Reached target slices.target - Slice Units. Jan 23 23:55:47.090919 systemd[1]: Reached target swap.target - Swaps. Jan 23 23:55:47.090951 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 23 23:55:47.090981 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 23 23:55:47.091010 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 23 23:55:47.091040 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 23 23:55:47.091071 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 23 23:55:47.091101 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 23 23:55:47.091135 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 23 23:55:47.091167 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 23 23:55:47.091196 systemd[1]: Mounting media.mount - External Media Directory... Jan 23 23:55:47.091225 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 23 23:55:47.091257 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 23 23:55:47.091287 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... 
Jan 23 23:55:47.091319 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 23 23:55:47.091351 systemd[1]: Reached target machines.target - Containers. Jan 23 23:55:47.091386 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 23 23:55:47.091419 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 23 23:55:47.091451 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 23 23:55:47.091532 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 23 23:55:47.091568 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 23 23:55:47.091599 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 23 23:55:47.091631 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 23 23:55:47.091664 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 23 23:55:47.091694 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 23 23:55:47.091729 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 23 23:55:47.091761 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 23 23:55:47.091791 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 23 23:55:47.091821 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 23 23:55:47.091854 systemd[1]: Stopped systemd-fsck-usr.service. Jan 23 23:55:47.091886 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 23 23:55:47.091916 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... 
Jan 23 23:55:47.091945 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 23 23:55:47.091979 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 23 23:55:47.092011 kernel: ACPI: bus type drm_connector registered Jan 23 23:55:47.092043 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 23 23:55:47.092077 systemd[1]: verity-setup.service: Deactivated successfully. Jan 23 23:55:47.092107 systemd[1]: Stopped verity-setup.service. Jan 23 23:55:47.092156 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 23 23:55:47.092191 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 23 23:55:47.092221 systemd[1]: Mounted media.mount - External Media Directory. Jan 23 23:55:47.092252 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 23 23:55:47.092289 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 23 23:55:47.092319 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 23 23:55:47.092349 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 23 23:55:47.092379 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 23 23:55:47.092408 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 23 23:55:47.092443 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 23 23:55:47.092500 kernel: fuse: init (API version 7.39) Jan 23 23:55:47.092534 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 23 23:55:47.092566 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 23 23:55:47.092597 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 23 23:55:47.092674 systemd-journald[1570]: Collecting audit messages is disabled. Jan 23 23:55:47.092735 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. 
Jan 23 23:55:47.092765 systemd-journald[1570]: Journal started Jan 23 23:55:47.092819 systemd-journald[1570]: Runtime Journal (/run/log/journal/ec2e3daeced8aecaf35f98cd3a487e28) is 8.0M, max 75.3M, 67.3M free. Jan 23 23:55:46.530407 systemd[1]: Queued start job for default target multi-user.target. Jan 23 23:55:46.556405 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. Jan 23 23:55:46.557202 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 23 23:55:47.101394 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 23 23:55:47.101482 kernel: loop: module loaded Jan 23 23:55:47.102493 systemd[1]: Started systemd-journald.service - Journal Service. Jan 23 23:55:47.109904 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 23 23:55:47.111594 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 23 23:55:47.114824 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 23 23:55:47.117097 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 23 23:55:47.134296 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 23 23:55:47.140695 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 23 23:55:47.154225 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 23 23:55:47.178622 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 23 23:55:47.192702 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 23 23:55:47.209610 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 23 23:55:47.213674 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 23 23:55:47.213735 systemd[1]: Reached target local-fs.target - Local File Systems. 
Jan 23 23:55:47.218098 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jan 23 23:55:47.227724 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 23 23:55:47.242820 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 23 23:55:47.245822 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 23 23:55:47.253803 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 23 23:55:47.259687 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 23 23:55:47.262385 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 23 23:55:47.267809 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 23 23:55:47.270953 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 23 23:55:47.275833 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 23 23:55:47.283773 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 23 23:55:47.293955 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 23 23:55:47.297230 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 23 23:55:47.300273 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 23 23:55:47.303399 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 23 23:55:47.332774 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 23 23:55:47.336666 systemd-journald[1570]: Time spent on flushing to /var/log/journal/ec2e3daeced8aecaf35f98cd3a487e28 is 44.735ms for 903 entries. 
Jan 23 23:55:47.336666 systemd-journald[1570]: System Journal (/var/log/journal/ec2e3daeced8aecaf35f98cd3a487e28) is 8.0M, max 195.6M, 187.6M free. Jan 23 23:55:47.391791 systemd-journald[1570]: Received client request to flush runtime journal. Jan 23 23:55:47.391989 kernel: loop0: detected capacity change from 0 to 52536 Jan 23 23:55:47.398272 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 23 23:55:47.407663 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 23 23:55:47.410975 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 23 23:55:47.437509 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 23 23:55:47.425975 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jan 23 23:55:47.475542 kernel: loop1: detected capacity change from 0 to 211168 Jan 23 23:55:47.495841 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 23 23:55:47.529106 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 23 23:55:47.530313 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jan 23 23:55:47.561600 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 23 23:55:47.572585 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jan 23 23:55:47.595325 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 23 23:55:47.611786 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 23 23:55:47.638161 udevadm[1638]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Jan 23 23:55:47.653526 kernel: loop2: detected capacity change from 0 to 114432 Jan 23 23:55:47.706005 systemd-tmpfiles[1641]: ACLs are not supported, ignoring. 
Jan 23 23:55:47.707098 systemd-tmpfiles[1641]: ACLs are not supported, ignoring. Jan 23 23:55:47.729629 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 23 23:55:47.755702 kernel: loop3: detected capacity change from 0 to 114328 Jan 23 23:55:47.818894 kernel: loop4: detected capacity change from 0 to 52536 Jan 23 23:55:47.840734 kernel: loop5: detected capacity change from 0 to 211168 Jan 23 23:55:47.882191 kernel: loop6: detected capacity change from 0 to 114432 Jan 23 23:55:47.901048 kernel: loop7: detected capacity change from 0 to 114328 Jan 23 23:55:47.919345 (sd-merge)[1646]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'. Jan 23 23:55:47.921797 (sd-merge)[1646]: Merged extensions into '/usr'. Jan 23 23:55:47.935726 systemd[1]: Reloading requested from client PID 1620 ('systemd-sysext') (unit systemd-sysext.service)... Jan 23 23:55:47.935757 systemd[1]: Reloading... Jan 23 23:55:48.141496 zram_generator::config[1675]: No configuration found. Jan 23 23:55:48.230551 ldconfig[1615]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 23 23:55:48.446169 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 23 23:55:48.566612 systemd[1]: Reloading finished in 629 ms. Jan 23 23:55:48.614640 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 23 23:55:48.618210 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 23 23:55:48.621748 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 23 23:55:48.640874 systemd[1]: Starting ensure-sysext.service... Jan 23 23:55:48.645835 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... 
Jan 23 23:55:48.654882 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 23 23:55:48.677723 systemd[1]: Reloading requested from client PID 1725 ('systemctl') (unit ensure-sysext.service)... Jan 23 23:55:48.677760 systemd[1]: Reloading... Jan 23 23:55:48.724792 systemd-udevd[1727]: Using default interface naming scheme 'v255'. Jan 23 23:55:48.736454 systemd-tmpfiles[1726]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 23 23:55:48.739211 systemd-tmpfiles[1726]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 23 23:55:48.746269 systemd-tmpfiles[1726]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 23 23:55:48.747980 systemd-tmpfiles[1726]: ACLs are not supported, ignoring. Jan 23 23:55:48.748167 systemd-tmpfiles[1726]: ACLs are not supported, ignoring. Jan 23 23:55:48.761819 systemd-tmpfiles[1726]: Detected autofs mount point /boot during canonicalization of boot. Jan 23 23:55:48.761845 systemd-tmpfiles[1726]: Skipping /boot Jan 23 23:55:48.823259 systemd-tmpfiles[1726]: Detected autofs mount point /boot during canonicalization of boot. Jan 23 23:55:48.823296 systemd-tmpfiles[1726]: Skipping /boot Jan 23 23:55:48.848510 zram_generator::config[1764]: No configuration found. Jan 23 23:55:49.082681 (udev-worker)[1751]: Network interface NamePolicy= disabled on kernel command line. Jan 23 23:55:49.166537 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 34 scanned by (udev-worker) (1749) Jan 23 23:55:49.232174 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 23 23:55:49.370155 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. 
Jan 23 23:55:49.370758 systemd[1]: Reloading finished in 692 ms. Jan 23 23:55:49.408099 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 23 23:55:49.423132 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 23 23:55:49.462823 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 23 23:55:49.468767 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 23 23:55:49.476865 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 23 23:55:49.487887 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 23 23:55:49.501747 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 23 23:55:49.509835 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 23 23:55:49.527360 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 23 23:55:49.535688 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 23 23:55:49.570443 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 23 23:55:49.579799 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 23 23:55:49.582696 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 23 23:55:49.596177 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 23 23:55:49.615808 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 23 23:55:49.616216 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
Jan 23 23:55:49.633210 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 23 23:55:49.639552 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 23 23:55:49.642203 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 23 23:55:49.642634 systemd[1]: Reached target time-set.target - System Time Set. Jan 23 23:55:49.657254 systemd[1]: Finished ensure-sysext.service. Jan 23 23:55:49.699295 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 23 23:55:49.703268 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 23 23:55:49.704823 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 23 23:55:49.708982 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 23 23:55:49.710566 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 23 23:55:49.748407 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 23 23:55:49.755398 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 23 23:55:49.757971 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 23 23:55:49.765489 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 23 23:55:49.826903 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 23 23:55:49.840978 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 23 23:55:49.846590 systemd[1]: modprobe@loop.service: Deactivated successfully. 
Jan 23 23:55:49.847064 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 23 23:55:49.850292 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 23 23:55:49.853611 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 23 23:55:49.874791 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Jan 23 23:55:49.890035 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 23 23:55:49.893429 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 23 23:55:49.929072 augenrules[1959]: No rules Jan 23 23:55:49.932309 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 23 23:55:49.936558 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 23 23:55:49.937923 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 23 23:55:49.951540 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 23 23:55:49.963729 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 23 23:55:49.987079 lvm[1969]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 23 23:55:49.999731 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 23 23:55:50.037577 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 23 23:55:50.041811 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 23 23:55:50.058712 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 23 23:55:50.078607 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 23 23:55:50.096627 lvm[1979]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. 
Jan 23 23:55:50.149706 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 23 23:55:50.163206 systemd-resolved[1919]: Positive Trust Anchors: Jan 23 23:55:50.163757 systemd-resolved[1919]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 23 23:55:50.163920 systemd-resolved[1919]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 23 23:55:50.178046 systemd-resolved[1919]: Defaulting to hostname 'linux'. Jan 23 23:55:50.181009 systemd-networkd[1917]: lo: Link UP Jan 23 23:55:50.181029 systemd-networkd[1917]: lo: Gained carrier Jan 23 23:55:50.181562 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 23 23:55:50.184635 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 23 23:55:50.187393 systemd[1]: Reached target sysinit.target - System Initialization. Jan 23 23:55:50.190008 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 23 23:55:50.193069 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 23 23:55:50.196235 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 23 23:55:50.198851 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 23 23:55:50.201812 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. 
Jan 23 23:55:50.204756 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 23 23:55:50.204804 systemd-networkd[1917]: Enumeration completed Jan 23 23:55:50.205032 systemd[1]: Reached target paths.target - Path Units. Jan 23 23:55:50.207126 systemd[1]: Reached target timers.target - Timer Units. Jan 23 23:55:50.210281 systemd-networkd[1917]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 23 23:55:50.210302 systemd-networkd[1917]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 23 23:55:50.210506 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 23 23:55:50.216062 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 23 23:55:50.224306 systemd-networkd[1917]: eth0: Link UP Jan 23 23:55:50.227741 systemd-networkd[1917]: eth0: Gained carrier Jan 23 23:55:50.227765 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 23 23:55:50.227791 systemd-networkd[1917]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 23 23:55:50.233644 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 23 23:55:50.236548 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 23 23:55:50.239244 systemd[1]: Reached target network.target - Network. Jan 23 23:55:50.244283 systemd[1]: Reached target sockets.target - Socket Units. Jan 23 23:55:50.246608 systemd[1]: Reached target basic.target - Basic System. Jan 23 23:55:50.248859 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 23 23:55:50.248922 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. 
Jan 23 23:55:50.262676 systemd-networkd[1917]: eth0: DHCPv4 address 172.31.27.234/20, gateway 172.31.16.1 acquired from 172.31.16.1 Jan 23 23:55:50.264088 systemd[1]: Starting containerd.service - containerd container runtime... Jan 23 23:55:50.270788 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 23 23:55:50.280900 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 23 23:55:50.289628 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 23 23:55:50.297644 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 23 23:55:50.300555 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 23 23:55:50.310831 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 23 23:55:50.318830 systemd[1]: Started ntpd.service - Network Time Service. Jan 23 23:55:50.329682 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 23 23:55:50.352742 systemd[1]: Starting setup-oem.service - Setup OEM... Jan 23 23:55:50.362897 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 23 23:55:50.367694 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 23 23:55:50.378865 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 23 23:55:50.384777 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 23 23:55:50.389345 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 23 23:55:50.393688 jq[1988]: false Jan 23 23:55:50.391326 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. 
Jan 23 23:55:50.395785 systemd[1]: Starting update-engine.service - Update Engine... Jan 23 23:55:50.401615 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 23 23:55:50.430825 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 23 23:55:50.433646 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 23 23:55:50.437341 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 23 23:55:50.440601 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 23 23:55:50.473996 dbus-daemon[1987]: [system] SELinux support is enabled Jan 23 23:55:50.479801 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 23 23:55:50.487780 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 23 23:55:50.487830 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 23 23:55:50.488677 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 23 23:55:50.488713 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. 
Jan 23 23:55:50.501788 dbus-daemon[1987]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1917 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Jan 23 23:55:50.521297 dbus-daemon[1987]: [system] Successfully activated service 'org.freedesktop.systemd1' Jan 23 23:55:50.526920 jq[2000]: true Jan 23 23:55:50.536764 ntpd[1991]: ntpd 4.2.8p17@1.4004-o Fri Jan 23 21:53:23 UTC 2026 (1): Starting Jan 23 23:55:50.537411 ntpd[1991]: 23 Jan 23:55:50 ntpd[1991]: ntpd 4.2.8p17@1.4004-o Fri Jan 23 21:53:23 UTC 2026 (1): Starting Jan 23 23:55:50.537411 ntpd[1991]: 23 Jan 23:55:50 ntpd[1991]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jan 23 23:55:50.537411 ntpd[1991]: 23 Jan 23:55:50 ntpd[1991]: ---------------------------------------------------- Jan 23 23:55:50.537411 ntpd[1991]: 23 Jan 23:55:50 ntpd[1991]: ntp-4 is maintained by Network Time Foundation, Jan 23 23:55:50.537411 ntpd[1991]: 23 Jan 23:55:50 ntpd[1991]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jan 23 23:55:50.537411 ntpd[1991]: 23 Jan 23:55:50 ntpd[1991]: corporation. Support and training for ntp-4 are Jan 23 23:55:50.537411 ntpd[1991]: 23 Jan 23:55:50 ntpd[1991]: available at https://www.nwtime.org/support Jan 23 23:55:50.537411 ntpd[1991]: 23 Jan 23:55:50 ntpd[1991]: ---------------------------------------------------- Jan 23 23:55:50.536825 ntpd[1991]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jan 23 23:55:50.536847 ntpd[1991]: ---------------------------------------------------- Jan 23 23:55:50.536866 ntpd[1991]: ntp-4 is maintained by Network Time Foundation, Jan 23 23:55:50.536884 ntpd[1991]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jan 23 23:55:50.536903 ntpd[1991]: corporation. Support and training for ntp-4 are Jan 23 23:55:50.538771 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... 
Jan 23 23:55:50.570815 tar[2002]: linux-arm64/LICENSE Jan 23 23:55:50.570815 tar[2002]: linux-arm64/helm Jan 23 23:55:50.571243 extend-filesystems[1989]: Found loop4 Jan 23 23:55:50.571243 extend-filesystems[1989]: Found loop5 Jan 23 23:55:50.571243 extend-filesystems[1989]: Found loop6 Jan 23 23:55:50.601002 ntpd[1991]: 23 Jan 23:55:50 ntpd[1991]: proto: precision = 0.096 usec (-23) Jan 23 23:55:50.601002 ntpd[1991]: 23 Jan 23:55:50 ntpd[1991]: basedate set to 2026-01-11 Jan 23 23:55:50.601002 ntpd[1991]: 23 Jan 23:55:50 ntpd[1991]: gps base set to 2026-01-11 (week 2401) Jan 23 23:55:50.601002 ntpd[1991]: 23 Jan 23:55:50 ntpd[1991]: Listen and drop on 0 v6wildcard [::]:123 Jan 23 23:55:50.601002 ntpd[1991]: 23 Jan 23:55:50 ntpd[1991]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jan 23 23:55:50.601002 ntpd[1991]: 23 Jan 23:55:50 ntpd[1991]: Listen normally on 2 lo 127.0.0.1:123 Jan 23 23:55:50.601002 ntpd[1991]: 23 Jan 23:55:50 ntpd[1991]: Listen normally on 3 eth0 172.31.27.234:123 Jan 23 23:55:50.601002 ntpd[1991]: 23 Jan 23:55:50 ntpd[1991]: Listen normally on 4 lo [::1]:123 Jan 23 23:55:50.601002 ntpd[1991]: 23 Jan 23:55:50 ntpd[1991]: bind(21) AF_INET6 fe80::493:9aff:febb:3e1d%2#123 flags 0x11 failed: Cannot assign requested address Jan 23 23:55:50.601002 ntpd[1991]: 23 Jan 23:55:50 ntpd[1991]: unable to create socket on eth0 (5) for fe80::493:9aff:febb:3e1d%2#123 Jan 23 23:55:50.601002 ntpd[1991]: 23 Jan 23:55:50 ntpd[1991]: failed to init interface for address fe80::493:9aff:febb:3e1d%2 Jan 23 23:55:50.601002 ntpd[1991]: 23 Jan 23:55:50 ntpd[1991]: Listening on routing socket on fd #21 for interface updates Jan 23 23:55:50.601002 ntpd[1991]: 23 Jan 23:55:50 ntpd[1991]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 23 23:55:50.601002 ntpd[1991]: 23 Jan 23:55:50 ntpd[1991]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 23 23:55:50.536922 ntpd[1991]: available at https://www.nwtime.org/support Jan 23 23:55:50.601917 
extend-filesystems[1989]: Found loop7 Jan 23 23:55:50.601917 extend-filesystems[1989]: Found nvme0n1 Jan 23 23:55:50.601917 extend-filesystems[1989]: Found nvme0n1p1 Jan 23 23:55:50.601917 extend-filesystems[1989]: Found nvme0n1p2 Jan 23 23:55:50.601917 extend-filesystems[1989]: Found nvme0n1p3 Jan 23 23:55:50.601917 extend-filesystems[1989]: Found usr Jan 23 23:55:50.601917 extend-filesystems[1989]: Found nvme0n1p4 Jan 23 23:55:50.601917 extend-filesystems[1989]: Found nvme0n1p6 Jan 23 23:55:50.601917 extend-filesystems[1989]: Found nvme0n1p7 Jan 23 23:55:50.601917 extend-filesystems[1989]: Found nvme0n1p9 Jan 23 23:55:50.601917 extend-filesystems[1989]: Checking size of /dev/nvme0n1p9 Jan 23 23:55:50.536941 ntpd[1991]: ---------------------------------------------------- Jan 23 23:55:50.644808 systemd[1]: motdgen.service: Deactivated successfully. Jan 23 23:55:50.542539 ntpd[1991]: proto: precision = 0.096 usec (-23) Jan 23 23:55:50.657576 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. 
Jan 23 23:55:50.544746 ntpd[1991]: basedate set to 2026-01-11 Jan 23 23:55:50.544782 ntpd[1991]: gps base set to 2026-01-11 (week 2401) Jan 23 23:55:50.547317 ntpd[1991]: Listen and drop on 0 v6wildcard [::]:123 Jan 23 23:55:50.547395 ntpd[1991]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jan 23 23:55:50.547727 ntpd[1991]: Listen normally on 2 lo 127.0.0.1:123 Jan 23 23:55:50.547792 ntpd[1991]: Listen normally on 3 eth0 172.31.27.234:123 Jan 23 23:55:50.547858 ntpd[1991]: Listen normally on 4 lo [::1]:123 Jan 23 23:55:50.547938 ntpd[1991]: bind(21) AF_INET6 fe80::493:9aff:febb:3e1d%2#123 flags 0x11 failed: Cannot assign requested address Jan 23 23:55:50.547978 ntpd[1991]: unable to create socket on eth0 (5) for fe80::493:9aff:febb:3e1d%2#123 Jan 23 23:55:50.548006 ntpd[1991]: failed to init interface for address fe80::493:9aff:febb:3e1d%2 Jan 23 23:55:50.548059 ntpd[1991]: Listening on routing socket on fd #21 for interface updates Jan 23 23:55:50.559166 ntpd[1991]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 23 23:55:50.559216 ntpd[1991]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 23 23:55:50.696446 (ntainerd)[2027]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 23 23:55:50.701718 extend-filesystems[1989]: Resized partition /dev/nvme0n1p9 Jan 23 23:55:50.709656 extend-filesystems[2035]: resize2fs 1.47.1 (20-May-2024) Jan 23 23:55:50.727579 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 3587067 blocks Jan 23 23:55:50.742679 jq[2024]: true Jan 23 23:55:50.846304 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 3587067 Jan 23 23:55:50.863635 update_engine[1999]: I20260123 23:55:50.854324 1999 main.cc:92] Flatcar Update Engine starting Jan 23 23:55:50.869109 extend-filesystems[2035]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Jan 23 23:55:50.869109 extend-filesystems[2035]: old_desc_blocks = 1, new_desc_blocks 
= 2 Jan 23 23:55:50.869109 extend-filesystems[2035]: The filesystem on /dev/nvme0n1p9 is now 3587067 (4k) blocks long. Jan 23 23:55:50.909523 extend-filesystems[1989]: Resized filesystem in /dev/nvme0n1p9 Jan 23 23:55:50.873178 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 23 23:55:50.875559 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 23 23:55:50.881538 systemd[1]: Finished setup-oem.service - Setup OEM. Jan 23 23:55:50.895604 systemd[1]: Started update-engine.service - Update Engine. Jan 23 23:55:50.923654 update_engine[1999]: I20260123 23:55:50.915342 1999 update_check_scheduler.cc:74] Next update check in 10m1s Jan 23 23:55:50.925769 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 23 23:55:50.972635 systemd-logind[1997]: Watching system buttons on /dev/input/event0 (Power Button) Jan 23 23:55:50.972687 systemd-logind[1997]: Watching system buttons on /dev/input/event1 (Sleep Button) Jan 23 23:55:50.973014 systemd-logind[1997]: New seat seat0. Jan 23 23:55:50.983617 systemd[1]: Started systemd-logind.service - User Login Management. 
Jan 23 23:55:51.010632 coreos-metadata[1986]: Jan 23 23:55:51.010 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Jan 23 23:55:51.013526 coreos-metadata[1986]: Jan 23 23:55:51.013 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Jan 23 23:55:51.020027 coreos-metadata[1986]: Jan 23 23:55:51.016 INFO Fetch successful Jan 23 23:55:51.020027 coreos-metadata[1986]: Jan 23 23:55:51.016 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Jan 23 23:55:51.026179 coreos-metadata[1986]: Jan 23 23:55:51.026 INFO Fetch successful Jan 23 23:55:51.026179 coreos-metadata[1986]: Jan 23 23:55:51.026 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Jan 23 23:55:51.033418 coreos-metadata[1986]: Jan 23 23:55:51.032 INFO Fetch successful Jan 23 23:55:51.033418 coreos-metadata[1986]: Jan 23 23:55:51.032 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Jan 23 23:55:51.037294 coreos-metadata[1986]: Jan 23 23:55:51.037 INFO Fetch successful Jan 23 23:55:51.037294 coreos-metadata[1986]: Jan 23 23:55:51.037 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Jan 23 23:55:51.042682 coreos-metadata[1986]: Jan 23 23:55:51.042 INFO Fetch failed with 404: resource not found Jan 23 23:55:51.042682 coreos-metadata[1986]: Jan 23 23:55:51.042 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Jan 23 23:55:51.047112 coreos-metadata[1986]: Jan 23 23:55:51.046 INFO Fetch successful Jan 23 23:55:51.047112 coreos-metadata[1986]: Jan 23 23:55:51.047 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Jan 23 23:55:51.056163 coreos-metadata[1986]: Jan 23 23:55:51.052 INFO Fetch successful Jan 23 23:55:51.056163 coreos-metadata[1986]: Jan 23 23:55:51.052 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Jan 23 
23:55:51.056163 coreos-metadata[1986]: Jan 23 23:55:51.055 INFO Fetch successful Jan 23 23:55:51.056163 coreos-metadata[1986]: Jan 23 23:55:51.055 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Jan 23 23:55:51.057004 coreos-metadata[1986]: Jan 23 23:55:51.056 INFO Fetch successful Jan 23 23:55:51.057004 coreos-metadata[1986]: Jan 23 23:55:51.056 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Jan 23 23:55:51.058956 coreos-metadata[1986]: Jan 23 23:55:51.058 INFO Fetch successful Jan 23 23:55:51.104081 bash[2068]: Updated "/home/core/.ssh/authorized_keys" Jan 23 23:55:51.112605 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 23 23:55:51.137118 systemd[1]: Starting sshkeys.service... Jan 23 23:55:51.173440 dbus-daemon[1987]: [system] Successfully activated service 'org.freedesktop.hostname1' Jan 23 23:55:51.174040 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Jan 23 23:55:51.175914 dbus-daemon[1987]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.5' (uid=0 pid=2023 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Jan 23 23:55:51.182924 systemd[1]: Starting polkit.service - Authorization Manager... Jan 23 23:55:51.228551 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 23 23:55:51.242530 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jan 23 23:55:51.252084 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jan 23 23:55:51.255212 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. 
Jan 23 23:55:51.274820 polkitd[2082]: Started polkitd version 121 Jan 23 23:55:51.308490 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 34 scanned by (udev-worker) (1768) Jan 23 23:55:51.306287 polkitd[2082]: Loading rules from directory /etc/polkit-1/rules.d Jan 23 23:55:51.306401 polkitd[2082]: Loading rules from directory /usr/share/polkit-1/rules.d Jan 23 23:55:51.310589 polkitd[2082]: Finished loading, compiling and executing 2 rules Jan 23 23:55:51.313443 dbus-daemon[1987]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Jan 23 23:55:51.313744 systemd[1]: Started polkit.service - Authorization Manager. Jan 23 23:55:51.317098 polkitd[2082]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Jan 23 23:55:51.376448 systemd-hostnamed[2023]: Hostname set to (transient) Jan 23 23:55:51.376641 systemd-resolved[1919]: System hostname changed to 'ip-172-31-27-234'. Jan 23 23:55:51.461490 containerd[2027]: time="2026-01-23T23:55:51.455504770Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jan 23 23:55:51.538403 ntpd[1991]: bind(24) AF_INET6 fe80::493:9aff:febb:3e1d%2#123 flags 0x11 failed: Cannot assign requested address Jan 23 23:55:51.542427 ntpd[1991]: 23 Jan 23:55:51 ntpd[1991]: bind(24) AF_INET6 fe80::493:9aff:febb:3e1d%2#123 flags 0x11 failed: Cannot assign requested address Jan 23 23:55:51.542427 ntpd[1991]: 23 Jan 23:55:51 ntpd[1991]: unable to create socket on eth0 (6) for fe80::493:9aff:febb:3e1d%2#123 Jan 23 23:55:51.542427 ntpd[1991]: 23 Jan 23:55:51 ntpd[1991]: failed to init interface for address fe80::493:9aff:febb:3e1d%2 Jan 23 23:55:51.538489 ntpd[1991]: unable to create socket on eth0 (6) for fe80::493:9aff:febb:3e1d%2#123 Jan 23 23:55:51.538523 ntpd[1991]: failed to init interface for address fe80::493:9aff:febb:3e1d%2 Jan 23 23:55:51.595492 coreos-metadata[2086]: Jan 23 23:55:51.594 INFO Putting 
http://169.254.169.254/latest/api/token: Attempt #1 Jan 23 23:55:51.604655 coreos-metadata[2086]: Jan 23 23:55:51.604 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Jan 23 23:55:51.604787 coreos-metadata[2086]: Jan 23 23:55:51.604 INFO Fetch successful Jan 23 23:55:51.604787 coreos-metadata[2086]: Jan 23 23:55:51.604 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Jan 23 23:55:51.609673 coreos-metadata[2086]: Jan 23 23:55:51.607 INFO Fetch successful Jan 23 23:55:51.613131 unknown[2086]: wrote ssh authorized keys file for user: core Jan 23 23:55:51.621100 locksmithd[2052]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 23 23:55:51.665016 containerd[2027]: time="2026-01-23T23:55:51.664953923Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 23 23:55:51.673434 containerd[2027]: time="2026-01-23T23:55:51.673363775Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.119-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 23 23:55:51.675937 containerd[2027]: time="2026-01-23T23:55:51.675682367Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 23 23:55:51.675937 containerd[2027]: time="2026-01-23T23:55:51.675742835Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 23 23:55:51.677492 containerd[2027]: time="2026-01-23T23:55:51.677431547Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 23 23:55:51.677758 containerd[2027]: time="2026-01-23T23:55:51.677625203Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." 
type=io.containerd.snapshotter.v1 Jan 23 23:55:51.677977 containerd[2027]: time="2026-01-23T23:55:51.677940443Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 23 23:55:51.678088 containerd[2027]: time="2026-01-23T23:55:51.678058751Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 23 23:55:51.679691 containerd[2027]: time="2026-01-23T23:55:51.679638323Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 23 23:55:51.680585 containerd[2027]: time="2026-01-23T23:55:51.680541131Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 23 23:55:51.680947 containerd[2027]: time="2026-01-23T23:55:51.680909147Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 23 23:55:51.682825 containerd[2027]: time="2026-01-23T23:55:51.681230327Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 23 23:55:51.685498 containerd[2027]: time="2026-01-23T23:55:51.681452519Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 23 23:55:51.685670 containerd[2027]: time="2026-01-23T23:55:51.684826739Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 23 23:55:51.686066 containerd[2027]: time="2026-01-23T23:55:51.686023631Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 23 23:55:51.689541 containerd[2027]: time="2026-01-23T23:55:51.687820427Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 23 23:55:51.689541 containerd[2027]: time="2026-01-23T23:55:51.688089275Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 23 23:55:51.689541 containerd[2027]: time="2026-01-23T23:55:51.688226543Z" level=info msg="metadata content store policy set" policy=shared Jan 23 23:55:51.703035 update-ssh-keys[2176]: Updated "/home/core/.ssh/authorized_keys" Jan 23 23:55:51.723626 containerd[2027]: time="2026-01-23T23:55:51.721347899Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 23 23:55:51.723626 containerd[2027]: time="2026-01-23T23:55:51.721598999Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 23 23:55:51.723626 containerd[2027]: time="2026-01-23T23:55:51.721641647Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 23 23:55:51.723626 containerd[2027]: time="2026-01-23T23:55:51.721676363Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 23 23:55:51.723626 containerd[2027]: time="2026-01-23T23:55:51.721710095Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 23 23:55:51.723626 containerd[2027]: time="2026-01-23T23:55:51.721969187Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 23 23:55:51.723626 containerd[2027]: time="2026-01-23T23:55:51.722833163Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." 
type=io.containerd.runtime.v2 Jan 23 23:55:51.723626 containerd[2027]: time="2026-01-23T23:55:51.723044639Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 23 23:55:51.723626 containerd[2027]: time="2026-01-23T23:55:51.723077363Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 23 23:55:51.723626 containerd[2027]: time="2026-01-23T23:55:51.723107795Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 23 23:55:51.723626 containerd[2027]: time="2026-01-23T23:55:51.723138731Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 23 23:55:51.723626 containerd[2027]: time="2026-01-23T23:55:51.723168659Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 23 23:55:51.723626 containerd[2027]: time="2026-01-23T23:55:51.723198383Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 23 23:55:51.723626 containerd[2027]: time="2026-01-23T23:55:51.723228563Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 23 23:55:51.714538 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jan 23 23:55:51.724641 containerd[2027]: time="2026-01-23T23:55:51.723259871Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 23 23:55:51.724641 containerd[2027]: time="2026-01-23T23:55:51.723295439Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." 
type=io.containerd.service.v1 Jan 23 23:55:51.724641 containerd[2027]: time="2026-01-23T23:55:51.723324071Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 23 23:55:51.724641 containerd[2027]: time="2026-01-23T23:55:51.723352523Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 23 23:55:51.724641 containerd[2027]: time="2026-01-23T23:55:51.723391379Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 23 23:55:51.724641 containerd[2027]: time="2026-01-23T23:55:51.723426143Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 23 23:55:51.731864 containerd[2027]: time="2026-01-23T23:55:51.723456095Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 23 23:55:51.731864 containerd[2027]: time="2026-01-23T23:55:51.729389795Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 23 23:55:51.731864 containerd[2027]: time="2026-01-23T23:55:51.729430547Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 23 23:55:51.731864 containerd[2027]: time="2026-01-23T23:55:51.729501227Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 23 23:55:51.731864 containerd[2027]: time="2026-01-23T23:55:51.729548627Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 23 23:55:51.731864 containerd[2027]: time="2026-01-23T23:55:51.729581471Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 23 23:55:51.731864 containerd[2027]: time="2026-01-23T23:55:51.729613127Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." 
type=io.containerd.grpc.v1 Jan 23 23:55:51.731864 containerd[2027]: time="2026-01-23T23:55:51.729649991Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 23 23:55:51.731864 containerd[2027]: time="2026-01-23T23:55:51.729680255Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 23 23:55:51.731864 containerd[2027]: time="2026-01-23T23:55:51.729712607Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 23 23:55:51.731864 containerd[2027]: time="2026-01-23T23:55:51.729750767Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 23 23:55:51.731864 containerd[2027]: time="2026-01-23T23:55:51.729786047Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 23 23:55:51.731864 containerd[2027]: time="2026-01-23T23:55:51.729833123Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 23 23:55:51.731864 containerd[2027]: time="2026-01-23T23:55:51.729863243Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 23 23:55:51.731864 containerd[2027]: time="2026-01-23T23:55:51.729891863Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 23 23:55:51.739517 containerd[2027]: time="2026-01-23T23:55:51.734318867Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 23 23:55:51.739517 containerd[2027]: time="2026-01-23T23:55:51.734391731Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 23 23:55:51.739517 containerd[2027]: time="2026-01-23T23:55:51.734420183Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 23 23:55:51.739517 containerd[2027]: time="2026-01-23T23:55:51.734448911Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 23 23:55:51.739517 containerd[2027]: time="2026-01-23T23:55:51.734493899Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 23 23:55:51.739517 containerd[2027]: time="2026-01-23T23:55:51.734530271Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 23 23:55:51.739517 containerd[2027]: time="2026-01-23T23:55:51.734555747Z" level=info msg="NRI interface is disabled by configuration." Jan 23 23:55:51.739517 containerd[2027]: time="2026-01-23T23:55:51.734583263Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Jan 23 23:55:51.735536 systemd[1]: Finished sshkeys.service. 
Jan 23 23:55:51.740062 containerd[2027]: time="2026-01-23T23:55:51.735217211Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: 
TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 23 23:55:51.740062 containerd[2027]: time="2026-01-23T23:55:51.735323087Z" level=info msg="Connect containerd service" Jan 23 23:55:51.740062 containerd[2027]: time="2026-01-23T23:55:51.735387215Z" level=info msg="using legacy CRI server" Jan 23 23:55:51.740062 containerd[2027]: time="2026-01-23T23:55:51.735404987Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 23 23:55:51.740062 containerd[2027]: time="2026-01-23T23:55:51.735578903Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 23 23:55:51.750369 containerd[2027]: time="2026-01-23T23:55:51.746930339Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 23 23:55:51.752507 containerd[2027]: time="2026-01-23T23:55:51.750879803Z" level=info msg="Start subscribing containerd event" Jan 23 23:55:51.752507 containerd[2027]: time="2026-01-23T23:55:51.750968963Z" level=info msg="Start recovering state" Jan 23 23:55:51.752507 containerd[2027]: time="2026-01-23T23:55:51.751096943Z" level=info msg="Start event monitor" Jan 23 23:55:51.752507 containerd[2027]: time="2026-01-23T23:55:51.751127711Z" level=info msg="Start snapshots syncer" Jan 
23 23:55:51.752507 containerd[2027]: time="2026-01-23T23:55:51.751150439Z" level=info msg="Start cni network conf syncer for default" Jan 23 23:55:51.752507 containerd[2027]: time="2026-01-23T23:55:51.751171847Z" level=info msg="Start streaming server" Jan 23 23:55:51.756552 containerd[2027]: time="2026-01-23T23:55:51.756500327Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 23 23:55:51.758500 containerd[2027]: time="2026-01-23T23:55:51.756769475Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 23 23:55:51.758500 containerd[2027]: time="2026-01-23T23:55:51.756877295Z" level=info msg="containerd successfully booted in 0.303117s" Jan 23 23:55:51.762406 systemd[1]: Started containerd.service - containerd container runtime. Jan 23 23:55:52.138670 systemd-networkd[1917]: eth0: Gained IPv6LL Jan 23 23:55:52.149525 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 23 23:55:52.153293 systemd[1]: Reached target network-online.target - Network is Online. Jan 23 23:55:52.164983 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. Jan 23 23:55:52.180941 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 23:55:52.188090 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 23 23:55:52.312766 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 23 23:55:52.347490 amazon-ssm-agent[2193]: Initializing new seelog logger Jan 23 23:55:52.347490 amazon-ssm-agent[2193]: New Seelog Logger Creation Complete Jan 23 23:55:52.347490 amazon-ssm-agent[2193]: 2026/01/23 23:55:52 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 23 23:55:52.347490 amazon-ssm-agent[2193]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. 
Jan 23 23:55:52.348406 amazon-ssm-agent[2193]: 2026/01/23 23:55:52 processing appconfig overrides Jan 23 23:55:52.351019 amazon-ssm-agent[2193]: 2026/01/23 23:55:52 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 23 23:55:52.351019 amazon-ssm-agent[2193]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 23 23:55:52.351019 amazon-ssm-agent[2193]: 2026/01/23 23:55:52 processing appconfig overrides Jan 23 23:55:52.351019 amazon-ssm-agent[2193]: 2026/01/23 23:55:52 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 23 23:55:52.351019 amazon-ssm-agent[2193]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 23 23:55:52.351019 amazon-ssm-agent[2193]: 2026/01/23 23:55:52 processing appconfig overrides Jan 23 23:55:52.351019 amazon-ssm-agent[2193]: 2026-01-23 23:55:52 INFO Proxy environment variables: Jan 23 23:55:52.354509 amazon-ssm-agent[2193]: 2026/01/23 23:55:52 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 23 23:55:52.354509 amazon-ssm-agent[2193]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 23 23:55:52.354509 amazon-ssm-agent[2193]: 2026/01/23 23:55:52 processing appconfig overrides Jan 23 23:55:52.414832 tar[2002]: linux-arm64/README.md Jan 23 23:55:52.452539 amazon-ssm-agent[2193]: 2026-01-23 23:55:52 INFO https_proxy: Jan 23 23:55:52.454733 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 23 23:55:52.558965 amazon-ssm-agent[2193]: 2026-01-23 23:55:52 INFO http_proxy: Jan 23 23:55:52.586003 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. 
Jan 23 23:55:52.658079 amazon-ssm-agent[2193]: 2026-01-23 23:55:52 INFO no_proxy: Jan 23 23:55:52.755408 amazon-ssm-agent[2193]: 2026-01-23 23:55:52 INFO Checking if agent identity type OnPrem can be assumed Jan 23 23:55:52.853600 amazon-ssm-agent[2193]: 2026-01-23 23:55:52 INFO Checking if agent identity type EC2 can be assumed Jan 23 23:55:52.952453 amazon-ssm-agent[2193]: 2026-01-23 23:55:52 INFO Agent will take identity from EC2 Jan 23 23:55:52.963246 amazon-ssm-agent[2193]: 2026-01-23 23:55:52 INFO [amazon-ssm-agent] using named pipe channel for IPC Jan 23 23:55:52.963246 amazon-ssm-agent[2193]: 2026-01-23 23:55:52 INFO [amazon-ssm-agent] using named pipe channel for IPC Jan 23 23:55:52.963246 amazon-ssm-agent[2193]: 2026-01-23 23:55:52 INFO [amazon-ssm-agent] using named pipe channel for IPC Jan 23 23:55:52.963246 amazon-ssm-agent[2193]: 2026-01-23 23:55:52 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0 Jan 23 23:55:52.963246 amazon-ssm-agent[2193]: 2026-01-23 23:55:52 INFO [amazon-ssm-agent] OS: linux, Arch: arm64 Jan 23 23:55:52.963246 amazon-ssm-agent[2193]: 2026-01-23 23:55:52 INFO [amazon-ssm-agent] Starting Core Agent Jan 23 23:55:52.963246 amazon-ssm-agent[2193]: 2026-01-23 23:55:52 INFO [amazon-ssm-agent] registrar detected. Attempting registration Jan 23 23:55:52.963246 amazon-ssm-agent[2193]: 2026-01-23 23:55:52 INFO [Registrar] Starting registrar module Jan 23 23:55:52.963246 amazon-ssm-agent[2193]: 2026-01-23 23:55:52 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration Jan 23 23:55:52.963246 amazon-ssm-agent[2193]: 2026-01-23 23:55:52 INFO [EC2Identity] EC2 registration was successful. 
Jan 23 23:55:52.963246 amazon-ssm-agent[2193]: 2026-01-23 23:55:52 INFO [CredentialRefresher] credentialRefresher has started Jan 23 23:55:52.963246 amazon-ssm-agent[2193]: 2026-01-23 23:55:52 INFO [CredentialRefresher] Starting credentials refresher loop Jan 23 23:55:52.963246 amazon-ssm-agent[2193]: 2026-01-23 23:55:52 INFO EC2RoleProvider Successfully connected with instance profile role credentials Jan 23 23:55:52.979165 sshd_keygen[2030]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 23 23:55:53.020300 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 23 23:55:53.036050 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 23 23:55:53.045209 systemd[1]: Started sshd@0-172.31.27.234:22-4.153.228.146:47274.service - OpenSSH per-connection server daemon (4.153.228.146:47274). Jan 23 23:55:53.051503 amazon-ssm-agent[2193]: 2026-01-23 23:55:52 INFO [CredentialRefresher] Next credential rotation will be in 31.783319970233332 minutes Jan 23 23:55:53.053838 systemd[1]: issuegen.service: Deactivated successfully. Jan 23 23:55:53.054188 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 23 23:55:53.066994 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 23 23:55:53.110446 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 23 23:55:53.125428 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 23 23:55:53.132006 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 23 23:55:53.136595 systemd[1]: Reached target getty.target - Login Prompts. Jan 23 23:55:53.575632 sshd[2225]: Accepted publickey for core from 4.153.228.146 port 47274 ssh2: RSA SHA256:5AacvNrSqCkKxn2Zg0NJQDqu4zeDYsnVAvlYY2DAYUI Jan 23 23:55:53.578668 sshd[2225]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:55:53.597817 systemd[1]: Created slice user-500.slice - User Slice of UID 500. 
Jan 23 23:55:53.609116 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 23 23:55:53.616539 systemd-logind[1997]: New session 1 of user core. Jan 23 23:55:53.645563 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 23 23:55:53.659100 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 23 23:55:53.681935 (systemd)[2236]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 23 23:55:53.910235 systemd[2236]: Queued start job for default target default.target. Jan 23 23:55:53.919017 systemd[2236]: Created slice app.slice - User Application Slice. Jan 23 23:55:53.919081 systemd[2236]: Reached target paths.target - Paths. Jan 23 23:55:53.919114 systemd[2236]: Reached target timers.target - Timers. Jan 23 23:55:53.921649 systemd[2236]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 23 23:55:53.955248 systemd[2236]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 23 23:55:53.955721 systemd[2236]: Reached target sockets.target - Sockets. Jan 23 23:55:53.955862 systemd[2236]: Reached target basic.target - Basic System. Jan 23 23:55:53.956110 systemd[2236]: Reached target default.target - Main User Target. Jan 23 23:55:53.956316 systemd[2236]: Startup finished in 262ms. Jan 23 23:55:53.956636 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 23 23:55:53.964801 systemd[1]: Started session-1.scope - Session 1 of User core. 
Jan 23 23:55:53.999749 amazon-ssm-agent[2193]: 2026-01-23 23:55:53 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Jan 23 23:55:54.100991 amazon-ssm-agent[2193]: 2026-01-23 23:55:54 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2246) started Jan 23 23:55:54.202983 amazon-ssm-agent[2193]: 2026-01-23 23:55:54 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Jan 23 23:55:54.360963 systemd[1]: Started sshd@1-172.31.27.234:22-4.153.228.146:47284.service - OpenSSH per-connection server daemon (4.153.228.146:47284). Jan 23 23:55:54.538036 ntpd[1991]: Listen normally on 7 eth0 [fe80::493:9aff:febb:3e1d%2]:123 Jan 23 23:55:54.539371 ntpd[1991]: 23 Jan 23:55:54 ntpd[1991]: Listen normally on 7 eth0 [fe80::493:9aff:febb:3e1d%2]:123 Jan 23 23:55:54.606738 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 23:55:54.614178 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 23 23:55:54.616958 systemd[1]: Startup finished in 1.165s (kernel) + 8.704s (initrd) + 9.151s (userspace) = 19.021s. Jan 23 23:55:54.624025 (kubelet)[2264]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 23 23:55:54.856848 sshd[2257]: Accepted publickey for core from 4.153.228.146 port 47284 ssh2: RSA SHA256:5AacvNrSqCkKxn2Zg0NJQDqu4zeDYsnVAvlYY2DAYUI Jan 23 23:55:54.860206 sshd[2257]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:55:54.869867 systemd-logind[1997]: New session 2 of user core. Jan 23 23:55:54.874751 systemd[1]: Started session-2.scope - Session 2 of User core. 
Jan 23 23:55:55.211561 sshd[2257]: pam_unix(sshd:session): session closed for user core Jan 23 23:55:55.220306 systemd[1]: sshd@1-172.31.27.234:22-4.153.228.146:47284.service: Deactivated successfully. Jan 23 23:55:55.223293 systemd[1]: session-2.scope: Deactivated successfully. Jan 23 23:55:55.225808 systemd-logind[1997]: Session 2 logged out. Waiting for processes to exit. Jan 23 23:55:55.229773 systemd-logind[1997]: Removed session 2. Jan 23 23:55:55.320967 systemd[1]: Started sshd@2-172.31.27.234:22-4.153.228.146:58896.service - OpenSSH per-connection server daemon (4.153.228.146:58896). Jan 23 23:55:55.807341 kubelet[2264]: E0123 23:55:55.807222 2264 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 23 23:55:55.812118 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 23 23:55:55.812870 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 23 23:55:55.814626 systemd[1]: kubelet.service: Consumed 1.387s CPU time. Jan 23 23:55:55.856178 sshd[2279]: Accepted publickey for core from 4.153.228.146 port 58896 ssh2: RSA SHA256:5AacvNrSqCkKxn2Zg0NJQDqu4zeDYsnVAvlYY2DAYUI Jan 23 23:55:55.858793 sshd[2279]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:55:55.866652 systemd-logind[1997]: New session 3 of user core. Jan 23 23:55:55.878721 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 23 23:55:56.228798 sshd[2279]: pam_unix(sshd:session): session closed for user core Jan 23 23:55:56.235695 systemd[1]: sshd@2-172.31.27.234:22-4.153.228.146:58896.service: Deactivated successfully. Jan 23 23:55:56.239277 systemd[1]: session-3.scope: Deactivated successfully. 
Jan 23 23:55:56.241010 systemd-logind[1997]: Session 3 logged out. Waiting for processes to exit. Jan 23 23:55:56.242649 systemd-logind[1997]: Removed session 3. Jan 23 23:55:56.322920 systemd[1]: Started sshd@3-172.31.27.234:22-4.153.228.146:58902.service - OpenSSH per-connection server daemon (4.153.228.146:58902). Jan 23 23:55:56.809125 sshd[2288]: Accepted publickey for core from 4.153.228.146 port 58902 ssh2: RSA SHA256:5AacvNrSqCkKxn2Zg0NJQDqu4zeDYsnVAvlYY2DAYUI Jan 23 23:55:56.811734 sshd[2288]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:55:56.819225 systemd-logind[1997]: New session 4 of user core. Jan 23 23:55:56.831730 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 23 23:55:57.162909 sshd[2288]: pam_unix(sshd:session): session closed for user core Jan 23 23:55:57.169585 systemd[1]: sshd@3-172.31.27.234:22-4.153.228.146:58902.service: Deactivated successfully. Jan 23 23:55:57.173141 systemd[1]: session-4.scope: Deactivated successfully. Jan 23 23:55:57.174375 systemd-logind[1997]: Session 4 logged out. Waiting for processes to exit. Jan 23 23:55:57.176036 systemd-logind[1997]: Removed session 4. Jan 23 23:55:57.266590 systemd[1]: Started sshd@4-172.31.27.234:22-4.153.228.146:58904.service - OpenSSH per-connection server daemon (4.153.228.146:58904). Jan 23 23:55:57.179693 systemd-resolved[1919]: Clock change detected. Flushing caches. Jan 23 23:55:57.187157 systemd-journald[1570]: Time jumped backwards, rotating. Jan 23 23:55:57.448078 sshd[2295]: Accepted publickey for core from 4.153.228.146 port 58904 ssh2: RSA SHA256:5AacvNrSqCkKxn2Zg0NJQDqu4zeDYsnVAvlYY2DAYUI Jan 23 23:55:57.450848 sshd[2295]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:55:57.458744 systemd-logind[1997]: New session 5 of user core. Jan 23 23:55:57.470489 systemd[1]: Started session-5.scope - Session 5 of User core. 
Jan 23 23:55:57.760543 sudo[2299]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 23 23:55:57.761162 sudo[2299]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 23 23:55:57.777725 sudo[2299]: pam_unix(sudo:session): session closed for user root Jan 23 23:55:57.861642 sshd[2295]: pam_unix(sshd:session): session closed for user core Jan 23 23:55:57.868117 systemd[1]: sshd@4-172.31.27.234:22-4.153.228.146:58904.service: Deactivated successfully. Jan 23 23:55:57.871778 systemd[1]: session-5.scope: Deactivated successfully. Jan 23 23:55:57.873553 systemd-logind[1997]: Session 5 logged out. Waiting for processes to exit. Jan 23 23:55:57.876040 systemd-logind[1997]: Removed session 5. Jan 23 23:55:57.957609 systemd[1]: Started sshd@5-172.31.27.234:22-4.153.228.146:58908.service - OpenSSH per-connection server daemon (4.153.228.146:58908). Jan 23 23:55:58.506168 sshd[2304]: Accepted publickey for core from 4.153.228.146 port 58908 ssh2: RSA SHA256:5AacvNrSqCkKxn2Zg0NJQDqu4zeDYsnVAvlYY2DAYUI Jan 23 23:55:58.508841 sshd[2304]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:55:58.515963 systemd-logind[1997]: New session 6 of user core. Jan 23 23:55:58.528732 systemd[1]: Started session-6.scope - Session 6 of User core. 
Jan 23 23:55:58.807883 sudo[2308]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 23 23:55:58.808578 sudo[2308]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 23 23:55:58.815060 sudo[2308]: pam_unix(sudo:session): session closed for user root Jan 23 23:55:58.824971 sudo[2307]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jan 23 23:55:58.825637 sudo[2307]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 23 23:55:58.854702 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jan 23 23:55:58.857176 auditctl[2311]: No rules Jan 23 23:55:58.858987 systemd[1]: audit-rules.service: Deactivated successfully. Jan 23 23:55:58.859461 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jan 23 23:55:58.865580 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 23 23:55:58.918536 augenrules[2329]: No rules Jan 23 23:55:58.921259 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 23 23:55:58.923669 sudo[2307]: pam_unix(sudo:session): session closed for user root Jan 23 23:55:59.009543 sshd[2304]: pam_unix(sshd:session): session closed for user core Jan 23 23:55:59.015789 systemd-logind[1997]: Session 6 logged out. Waiting for processes to exit. Jan 23 23:55:59.016767 systemd[1]: sshd@5-172.31.27.234:22-4.153.228.146:58908.service: Deactivated successfully. Jan 23 23:55:59.020100 systemd[1]: session-6.scope: Deactivated successfully. Jan 23 23:55:59.021700 systemd-logind[1997]: Removed session 6. Jan 23 23:55:59.110714 systemd[1]: Started sshd@6-172.31.27.234:22-4.153.228.146:58910.service - OpenSSH per-connection server daemon (4.153.228.146:58910). 
Jan 23 23:55:59.636442 sshd[2337]: Accepted publickey for core from 4.153.228.146 port 58910 ssh2: RSA SHA256:5AacvNrSqCkKxn2Zg0NJQDqu4zeDYsnVAvlYY2DAYUI Jan 23 23:55:59.639015 sshd[2337]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:55:59.648406 systemd-logind[1997]: New session 7 of user core. Jan 23 23:55:59.657695 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 23 23:55:59.932362 sudo[2340]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 23 23:55:59.932969 sudo[2340]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 23 23:56:00.434758 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 23 23:56:00.436970 (dockerd)[2356]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 23 23:56:00.847636 dockerd[2356]: time="2026-01-23T23:56:00.847457786Z" level=info msg="Starting up" Jan 23 23:56:00.974111 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport1268114561-merged.mount: Deactivated successfully. Jan 23 23:56:01.016176 dockerd[2356]: time="2026-01-23T23:56:01.016072043Z" level=info msg="Loading containers: start." Jan 23 23:56:01.181254 kernel: Initializing XFRM netlink socket Jan 23 23:56:01.214404 (udev-worker)[2378]: Network interface NamePolicy= disabled on kernel command line. Jan 23 23:56:01.301432 systemd-networkd[1917]: docker0: Link UP Jan 23 23:56:01.337713 dockerd[2356]: time="2026-01-23T23:56:01.337653492Z" level=info msg="Loading containers: done." Jan 23 23:56:01.362449 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3712225778-merged.mount: Deactivated successfully. 
Jan 23 23:56:01.370827 dockerd[2356]: time="2026-01-23T23:56:01.370746708Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 23 23:56:01.371704 dockerd[2356]: time="2026-01-23T23:56:01.371095104Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Jan 23 23:56:01.371704 dockerd[2356]: time="2026-01-23T23:56:01.371337624Z" level=info msg="Daemon has completed initialization" Jan 23 23:56:01.445732 dockerd[2356]: time="2026-01-23T23:56:01.444254053Z" level=info msg="API listen on /run/docker.sock" Jan 23 23:56:01.444511 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 23 23:56:02.778620 containerd[2027]: time="2026-01-23T23:56:02.778077387Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.7\"" Jan 23 23:56:03.432052 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1202669075.mount: Deactivated successfully. 
Jan 23 23:56:04.949275 containerd[2027]: time="2026-01-23T23:56:04.949093458Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:56:04.951382 containerd[2027]: time="2026-01-23T23:56:04.951314214Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.7: active requests=0, bytes read=27387281" Jan 23 23:56:04.953526 containerd[2027]: time="2026-01-23T23:56:04.953471262Z" level=info msg="ImageCreate event name:\"sha256:6d7bc8e445519fe4d49eee834f33f3e165eef4d3c0919ba08c67cdf8db905b7e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:56:04.960246 containerd[2027]: time="2026-01-23T23:56:04.959591130Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:9585226cb85d1dc0f0ef5f7a75f04e4bc91ddd82de249533bd293aa3cf958dab\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:56:04.962388 containerd[2027]: time="2026-01-23T23:56:04.961980342Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.7\" with image id \"sha256:6d7bc8e445519fe4d49eee834f33f3e165eef4d3c0919ba08c67cdf8db905b7e\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.7\", repo digest \"registry.k8s.io/kube-apiserver@sha256:9585226cb85d1dc0f0ef5f7a75f04e4bc91ddd82de249533bd293aa3cf958dab\", size \"27383880\" in 2.183845211s" Jan 23 23:56:04.962388 containerd[2027]: time="2026-01-23T23:56:04.962041542Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.7\" returns image reference \"sha256:6d7bc8e445519fe4d49eee834f33f3e165eef4d3c0919ba08c67cdf8db905b7e\"" Jan 23 23:56:04.964777 containerd[2027]: time="2026-01-23T23:56:04.964621146Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.7\"" Jan 23 23:56:05.664171 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. 
Jan 23 23:56:05.671579 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 23:56:06.094689 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 23:56:06.108537 (kubelet)[2564]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 23 23:56:06.213335 kubelet[2564]: E0123 23:56:06.213253 2564 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 23 23:56:06.222469 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 23 23:56:06.223795 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 23 23:56:06.666977 containerd[2027]: time="2026-01-23T23:56:06.666897379Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:56:06.670473 containerd[2027]: time="2026-01-23T23:56:06.669896623Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.7: active requests=0, bytes read=23553081" Jan 23 23:56:06.670473 containerd[2027]: time="2026-01-23T23:56:06.670134283Z" level=info msg="ImageCreate event name:\"sha256:a94595d0240bcc5e538b4b33bbc890512a731425be69643cbee284072f7d8f64\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:56:06.679846 containerd[2027]: time="2026-01-23T23:56:06.679786579Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:f69d77ca0626b5a4b7b432c18de0952941181db7341c80eb89731f46d1d0c230\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:56:06.682306 containerd[2027]: time="2026-01-23T23:56:06.682255879Z" 
level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.7\" with image id \"sha256:a94595d0240bcc5e538b4b33bbc890512a731425be69643cbee284072f7d8f64\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.7\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:f69d77ca0626b5a4b7b432c18de0952941181db7341c80eb89731f46d1d0c230\", size \"25137562\" in 1.717273833s" Jan 23 23:56:06.682843 containerd[2027]: time="2026-01-23T23:56:06.682462783Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.7\" returns image reference \"sha256:a94595d0240bcc5e538b4b33bbc890512a731425be69643cbee284072f7d8f64\"" Jan 23 23:56:06.683351 containerd[2027]: time="2026-01-23T23:56:06.683109307Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.7\"" Jan 23 23:56:07.866248 containerd[2027]: time="2026-01-23T23:56:07.864847713Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:56:07.867003 containerd[2027]: time="2026-01-23T23:56:07.866949417Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.7: active requests=0, bytes read=18298067" Jan 23 23:56:07.868400 containerd[2027]: time="2026-01-23T23:56:07.867481893Z" level=info msg="ImageCreate event name:\"sha256:94005b6be50f054c8a4ef3f0d6976644e8b3c6a8bf15a9e8a2eeac3e8331b010\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:56:07.873441 containerd[2027]: time="2026-01-23T23:56:07.873377721Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:21bda321d8b4d48eb059fbc1593203d55d8b3bc7acd0584e04e55504796d78d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:56:07.875980 containerd[2027]: time="2026-01-23T23:56:07.875931885Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.7\" with image id 
\"sha256:94005b6be50f054c8a4ef3f0d6976644e8b3c6a8bf15a9e8a2eeac3e8331b010\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.7\", repo digest \"registry.k8s.io/kube-scheduler@sha256:21bda321d8b4d48eb059fbc1593203d55d8b3bc7acd0584e04e55504796d78d0\", size \"19882566\" in 1.192768734s"
Jan 23 23:56:07.876143 containerd[2027]: time="2026-01-23T23:56:07.876114081Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.7\" returns image reference \"sha256:94005b6be50f054c8a4ef3f0d6976644e8b3c6a8bf15a9e8a2eeac3e8331b010\""
Jan 23 23:56:07.877108 containerd[2027]: time="2026-01-23T23:56:07.877050549Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.7\""
Jan 23 23:56:09.092361 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4287368871.mount: Deactivated successfully.
Jan 23 23:56:09.673818 containerd[2027]: time="2026-01-23T23:56:09.672426874Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 23:56:09.673818 containerd[2027]: time="2026-01-23T23:56:09.673769086Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.7: active requests=0, bytes read=28258673"
Jan 23 23:56:09.674659 containerd[2027]: time="2026-01-23T23:56:09.674612254Z" level=info msg="ImageCreate event name:\"sha256:78ccb937011a53894db229033fd54e237d478ec85315f8b08e5dcaa0f737111b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 23:56:09.678002 containerd[2027]: time="2026-01-23T23:56:09.677949934Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ec25702b19026e9c0d339bc1c3bd231435a59f28b5fccb21e1b1078a357380f5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 23:56:09.679536 containerd[2027]: time="2026-01-23T23:56:09.679490614Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.7\" with image id \"sha256:78ccb937011a53894db229033fd54e237d478ec85315f8b08e5dcaa0f737111b\", repo tag \"registry.k8s.io/kube-proxy:v1.33.7\", repo digest \"registry.k8s.io/kube-proxy@sha256:ec25702b19026e9c0d339bc1c3bd231435a59f28b5fccb21e1b1078a357380f5\", size \"28257692\" in 1.802380485s"
Jan 23 23:56:09.679723 containerd[2027]: time="2026-01-23T23:56:09.679691398Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.7\" returns image reference \"sha256:78ccb937011a53894db229033fd54e237d478ec85315f8b08e5dcaa0f737111b\""
Jan 23 23:56:09.680458 containerd[2027]: time="2026-01-23T23:56:09.680412514Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\""
Jan 23 23:56:10.207984 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2651196949.mount: Deactivated successfully.
Jan 23 23:56:11.450433 containerd[2027]: time="2026-01-23T23:56:11.450334258Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 23:56:11.452896 containerd[2027]: time="2026-01-23T23:56:11.452657362Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=19152117"
Jan 23 23:56:11.455378 containerd[2027]: time="2026-01-23T23:56:11.455316574Z" level=info msg="ImageCreate event name:\"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 23:56:11.466553 containerd[2027]: time="2026-01-23T23:56:11.466466626Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 23:56:11.469160 containerd[2027]: time="2026-01-23T23:56:11.469086562Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"19148915\" in 1.788615704s"
Jan 23 23:56:11.469476 containerd[2027]: time="2026-01-23T23:56:11.469314202Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\""
Jan 23 23:56:11.470589 containerd[2027]: time="2026-01-23T23:56:11.470255014Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Jan 23 23:56:11.987964 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount137387993.mount: Deactivated successfully.
Jan 23 23:56:12.002260 containerd[2027]: time="2026-01-23T23:56:12.002175165Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 23:56:12.004155 containerd[2027]: time="2026-01-23T23:56:12.004083633Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268703"
Jan 23 23:56:12.006793 containerd[2027]: time="2026-01-23T23:56:12.006721197Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 23:56:12.011846 containerd[2027]: time="2026-01-23T23:56:12.011780577Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 23:56:12.013688 containerd[2027]: time="2026-01-23T23:56:12.013521717Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 543.207819ms"
Jan 23 23:56:12.013688 containerd[2027]: time="2026-01-23T23:56:12.013571493Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\""
Jan 23 23:56:12.015321 containerd[2027]: time="2026-01-23T23:56:12.014926881Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\""
Jan 23 23:56:12.626086 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3815421087.mount: Deactivated successfully.
Jan 23 23:56:15.198407 containerd[2027]: time="2026-01-23T23:56:15.198347425Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 23:56:15.201252 containerd[2027]: time="2026-01-23T23:56:15.201166537Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=70013651"
Jan 23 23:56:15.202914 containerd[2027]: time="2026-01-23T23:56:15.202845517Z" level=info msg="ImageCreate event name:\"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 23:56:15.209386 containerd[2027]: time="2026-01-23T23:56:15.209310253Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 23:56:15.211942 containerd[2027]: time="2026-01-23T23:56:15.211892233Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"70026017\" in 3.196913584s"
Jan 23 23:56:15.212245 containerd[2027]: time="2026-01-23T23:56:15.212078905Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\""
Jan 23 23:56:16.413345 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Jan 23 23:56:16.424714 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 23 23:56:16.781743 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 23 23:56:16.785579 (kubelet)[2724]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 23 23:56:16.861239 kubelet[2724]: E0123 23:56:16.859159 2724 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 23 23:56:16.863846 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 23 23:56:16.864167 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 23 23:56:21.054827 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Jan 23 23:56:23.423245 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 23 23:56:23.433748 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 23 23:56:23.495576 systemd[1]: Reloading requested from client PID 2741 ('systemctl') (unit session-7.scope)...
Jan 23 23:56:23.495770 systemd[1]: Reloading...
Jan 23 23:56:23.726279 zram_generator::config[2786]: No configuration found.
Jan 23 23:56:23.965687 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 23 23:56:24.138477 systemd[1]: Reloading finished in 641 ms.
Jan 23 23:56:24.225416 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 23 23:56:24.243763 (kubelet)[2834]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jan 23 23:56:24.246768 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 23 23:56:24.247576 systemd[1]: kubelet.service: Deactivated successfully.
Jan 23 23:56:24.247950 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 23 23:56:24.256823 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 23 23:56:24.576305 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 23 23:56:24.590048 (kubelet)[2847]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jan 23 23:56:24.662641 kubelet[2847]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 23 23:56:24.663651 kubelet[2847]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Jan 23 23:56:24.663883 kubelet[2847]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 23 23:56:24.664127 kubelet[2847]: I0123 23:56:24.664074 2847 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jan 23 23:56:26.871532 kubelet[2847]: I0123 23:56:26.871485 2847 server.go:530] "Kubelet version" kubeletVersion="v1.33.0"
Jan 23 23:56:26.873233 kubelet[2847]: I0123 23:56:26.872081 2847 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jan 23 23:56:26.873233 kubelet[2847]: I0123 23:56:26.872498 2847 server.go:956] "Client rotation is on, will bootstrap in background"
Jan 23 23:56:26.905471 kubelet[2847]: E0123 23:56:26.905404 2847 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://172.31.27.234:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.27.234:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Jan 23 23:56:26.907615 kubelet[2847]: I0123 23:56:26.907577 2847 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jan 23 23:56:26.922253 kubelet[2847]: E0123 23:56:26.922167 2847 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Jan 23 23:56:26.922501 kubelet[2847]: I0123 23:56:26.922476 2847 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Jan 23 23:56:26.929157 kubelet[2847]: I0123 23:56:26.929120 2847 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jan 23 23:56:26.930010 kubelet[2847]: I0123 23:56:26.929970 2847 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jan 23 23:56:26.930434 kubelet[2847]: I0123 23:56:26.930118 2847 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-27-234","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jan 23 23:56:26.930768 kubelet[2847]: I0123 23:56:26.930745 2847 topology_manager.go:138] "Creating topology manager with none policy"
Jan 23 23:56:26.930865 kubelet[2847]: I0123 23:56:26.930848 2847 container_manager_linux.go:303] "Creating device plugin manager"
Jan 23 23:56:26.931304 kubelet[2847]: I0123 23:56:26.931281 2847 state_mem.go:36] "Initialized new in-memory state store"
Jan 23 23:56:26.937070 kubelet[2847]: I0123 23:56:26.937037 2847 kubelet.go:480] "Attempting to sync node with API server"
Jan 23 23:56:26.937281 kubelet[2847]: I0123 23:56:26.937256 2847 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
Jan 23 23:56:26.939388 kubelet[2847]: I0123 23:56:26.939362 2847 kubelet.go:386] "Adding apiserver pod source"
Jan 23 23:56:26.941977 kubelet[2847]: I0123 23:56:26.941592 2847 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jan 23 23:56:26.948257 kubelet[2847]: E0123 23:56:26.947567 2847 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.31.27.234:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-27-234&limit=500&resourceVersion=0\": dial tcp 172.31.27.234:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Jan 23 23:56:26.949523 kubelet[2847]: I0123 23:56:26.949480 2847 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Jan 23 23:56:26.950978 kubelet[2847]: I0123 23:56:26.950947 2847 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Jan 23 23:56:26.951367 kubelet[2847]: W0123 23:56:26.951345 2847 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Jan 23 23:56:26.956065 kubelet[2847]: E0123 23:56:26.955982 2847 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.31.27.234:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.27.234:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Jan 23 23:56:26.962903 kubelet[2847]: I0123 23:56:26.962853 2847 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Jan 23 23:56:26.963035 kubelet[2847]: I0123 23:56:26.962944 2847 server.go:1289] "Started kubelet"
Jan 23 23:56:26.966558 kubelet[2847]: I0123 23:56:26.966512 2847 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jan 23 23:56:26.972984 kubelet[2847]: E0123 23:56:26.970833 2847 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.27.234:6443/api/v1/namespaces/default/events\": dial tcp 172.31.27.234:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-27-234.188d81796212a74b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-27-234,UID:ip-172-31-27-234,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-27-234,},FirstTimestamp:2026-01-23 23:56:26.962888523 +0000 UTC m=+2.365488636,LastTimestamp:2026-01-23 23:56:26.962888523 +0000 UTC m=+2.365488636,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-27-234,}"
Jan 23 23:56:26.978078 kubelet[2847]: I0123 23:56:26.977587 2847 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Jan 23 23:56:26.978584 kubelet[2847]: I0123 23:56:26.978542 2847 volume_manager.go:297] "Starting Kubelet Volume Manager"
Jan 23 23:56:26.978912 kubelet[2847]: E0123 23:56:26.978866 2847 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-27-234\" not found"
Jan 23 23:56:26.979412 kubelet[2847]: I0123 23:56:26.979374 2847 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Jan 23 23:56:26.979511 kubelet[2847]: I0123 23:56:26.979488 2847 reconciler.go:26] "Reconciler: start to sync state"
Jan 23 23:56:26.981326 kubelet[2847]: I0123 23:56:26.980482 2847 server.go:317] "Adding debug handlers to kubelet server"
Jan 23 23:56:26.986969 kubelet[2847]: I0123 23:56:26.986891 2847 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jan 23 23:56:26.987485 kubelet[2847]: I0123 23:56:26.987456 2847 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jan 23 23:56:26.987964 kubelet[2847]: I0123 23:56:26.987932 2847 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Jan 23 23:56:26.988934 kubelet[2847]: E0123 23:56:26.988891 2847 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.31.27.234:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.27.234:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Jan 23 23:56:26.989284 kubelet[2847]: E0123 23:56:26.989195 2847 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.27.234:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-27-234?timeout=10s\": dial tcp 172.31.27.234:6443: connect: connection refused" interval="200ms"
Jan 23 23:56:26.989768 kubelet[2847]: I0123 23:56:26.989734 2847 factory.go:223] Registration of the systemd container factory successfully
Jan 23 23:56:26.990052 kubelet[2847]: I0123 23:56:26.990022 2847 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jan 23 23:56:26.992619 kubelet[2847]: E0123 23:56:26.992576 2847 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jan 23 23:56:26.993838 kubelet[2847]: I0123 23:56:26.993804 2847 factory.go:223] Registration of the containerd container factory successfully
Jan 23 23:56:27.015493 kubelet[2847]: I0123 23:56:27.015428 2847 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
Jan 23 23:56:27.019523 kubelet[2847]: I0123 23:56:27.019469 2847 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
Jan 23 23:56:27.019726 kubelet[2847]: I0123 23:56:27.019708 2847 status_manager.go:230] "Starting to sync pod status with apiserver"
Jan 23 23:56:27.020247 kubelet[2847]: I0123 23:56:27.020079 2847 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Jan 23 23:56:27.020247 kubelet[2847]: I0123 23:56:27.020102 2847 kubelet.go:2436] "Starting kubelet main sync loop"
Jan 23 23:56:27.020581 kubelet[2847]: E0123 23:56:27.020181 2847 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jan 23 23:56:27.024268 kubelet[2847]: E0123 23:56:27.024178 2847 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.31.27.234:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.27.234:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Jan 23 23:56:27.037012 kubelet[2847]: I0123 23:56:27.036979 2847 cpu_manager.go:221] "Starting CPU manager" policy="none"
Jan 23 23:56:27.037591 kubelet[2847]: I0123 23:56:27.037191 2847 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Jan 23 23:56:27.037591 kubelet[2847]: I0123 23:56:27.037274 2847 state_mem.go:36] "Initialized new in-memory state store"
Jan 23 23:56:27.041913 kubelet[2847]: I0123 23:56:27.041525 2847 policy_none.go:49] "None policy: Start"
Jan 23 23:56:27.041913 kubelet[2847]: I0123 23:56:27.041559 2847 memory_manager.go:186] "Starting memorymanager" policy="None"
Jan 23 23:56:27.041913 kubelet[2847]: I0123 23:56:27.041582 2847 state_mem.go:35] "Initializing new in-memory state store"
Jan 23 23:56:27.054620 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Jan 23 23:56:27.069918 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Jan 23 23:56:27.076570 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Jan 23 23:56:27.079455 kubelet[2847]: E0123 23:56:27.079030 2847 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-27-234\" not found"
Jan 23 23:56:27.086827 kubelet[2847]: E0123 23:56:27.086792 2847 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Jan 23 23:56:27.088039 kubelet[2847]: I0123 23:56:27.088010 2847 eviction_manager.go:189] "Eviction manager: starting control loop"
Jan 23 23:56:27.088708 kubelet[2847]: I0123 23:56:27.088362 2847 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jan 23 23:56:27.090236 kubelet[2847]: I0123 23:56:27.089863 2847 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jan 23 23:56:27.092926 kubelet[2847]: E0123 23:56:27.092495 2847 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Jan 23 23:56:27.092926 kubelet[2847]: E0123 23:56:27.092555 2847 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-27-234\" not found"
Jan 23 23:56:27.156910 systemd[1]: Created slice kubepods-burstable-pod71d1a5b91bc7c24487a1387c244ff526.slice - libcontainer container kubepods-burstable-pod71d1a5b91bc7c24487a1387c244ff526.slice.
Jan 23 23:56:27.176778 kubelet[2847]: E0123 23:56:27.176722 2847 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-27-234\" not found" node="ip-172-31-27-234"
Jan 23 23:56:27.181018 kubelet[2847]: I0123 23:56:27.179846 2847 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/71d1a5b91bc7c24487a1387c244ff526-k8s-certs\") pod \"kube-apiserver-ip-172-31-27-234\" (UID: \"71d1a5b91bc7c24487a1387c244ff526\") " pod="kube-system/kube-apiserver-ip-172-31-27-234"
Jan 23 23:56:27.181018 kubelet[2847]: I0123 23:56:27.179922 2847 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/71d1a5b91bc7c24487a1387c244ff526-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-27-234\" (UID: \"71d1a5b91bc7c24487a1387c244ff526\") " pod="kube-system/kube-apiserver-ip-172-31-27-234"
Jan 23 23:56:27.181018 kubelet[2847]: I0123 23:56:27.179964 2847 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/80c223e0b938044f90caeb17a1e7e2ee-ca-certs\") pod \"kube-controller-manager-ip-172-31-27-234\" (UID: \"80c223e0b938044f90caeb17a1e7e2ee\") " pod="kube-system/kube-controller-manager-ip-172-31-27-234"
Jan 23 23:56:27.181018 kubelet[2847]: I0123 23:56:27.180001 2847 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/80c223e0b938044f90caeb17a1e7e2ee-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-27-234\" (UID: \"80c223e0b938044f90caeb17a1e7e2ee\") " pod="kube-system/kube-controller-manager-ip-172-31-27-234"
Jan 23 23:56:27.181018 kubelet[2847]: I0123 23:56:27.180040 2847 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/80c223e0b938044f90caeb17a1e7e2ee-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-27-234\" (UID: \"80c223e0b938044f90caeb17a1e7e2ee\") " pod="kube-system/kube-controller-manager-ip-172-31-27-234"
Jan 23 23:56:27.181400 kubelet[2847]: I0123 23:56:27.180077 2847 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/583f7b1cd3f07bfde2198ce1de7970a7-kubeconfig\") pod \"kube-scheduler-ip-172-31-27-234\" (UID: \"583f7b1cd3f07bfde2198ce1de7970a7\") " pod="kube-system/kube-scheduler-ip-172-31-27-234"
Jan 23 23:56:27.181400 kubelet[2847]: I0123 23:56:27.180112 2847 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/71d1a5b91bc7c24487a1387c244ff526-ca-certs\") pod \"kube-apiserver-ip-172-31-27-234\" (UID: \"71d1a5b91bc7c24487a1387c244ff526\") " pod="kube-system/kube-apiserver-ip-172-31-27-234"
Jan 23 23:56:27.181400 kubelet[2847]: I0123 23:56:27.180146 2847 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/80c223e0b938044f90caeb17a1e7e2ee-k8s-certs\") pod \"kube-controller-manager-ip-172-31-27-234\" (UID: \"80c223e0b938044f90caeb17a1e7e2ee\") " pod="kube-system/kube-controller-manager-ip-172-31-27-234"
Jan 23 23:56:27.181400 kubelet[2847]: I0123 23:56:27.180180 2847 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/80c223e0b938044f90caeb17a1e7e2ee-kubeconfig\") pod \"kube-controller-manager-ip-172-31-27-234\" (UID: \"80c223e0b938044f90caeb17a1e7e2ee\") " pod="kube-system/kube-controller-manager-ip-172-31-27-234"
Jan 23 23:56:27.181017 systemd[1]: Created slice kubepods-burstable-pod80c223e0b938044f90caeb17a1e7e2ee.slice - libcontainer container kubepods-burstable-pod80c223e0b938044f90caeb17a1e7e2ee.slice.
Jan 23 23:56:27.186148 kubelet[2847]: E0123 23:56:27.186112 2847 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-27-234\" not found" node="ip-172-31-27-234"
Jan 23 23:56:27.190182 systemd[1]: Created slice kubepods-burstable-pod583f7b1cd3f07bfde2198ce1de7970a7.slice - libcontainer container kubepods-burstable-pod583f7b1cd3f07bfde2198ce1de7970a7.slice.
Jan 23 23:56:27.190765 kubelet[2847]: E0123 23:56:27.190722 2847 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.27.234:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-27-234?timeout=10s\": dial tcp 172.31.27.234:6443: connect: connection refused" interval="400ms"
Jan 23 23:56:27.196999 kubelet[2847]: I0123 23:56:27.196956 2847 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-27-234"
Jan 23 23:56:27.197954 kubelet[2847]: E0123 23:56:27.197550 2847 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-27-234\" not found" node="ip-172-31-27-234"
Jan 23 23:56:27.198189 kubelet[2847]: E0123 23:56:27.197920 2847 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.27.234:6443/api/v1/nodes\": dial tcp 172.31.27.234:6443: connect: connection refused" node="ip-172-31-27-234"
Jan 23 23:56:27.401172 kubelet[2847]: I0123 23:56:27.400708 2847 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-27-234"
Jan 23 23:56:27.401172 kubelet[2847]: E0123 23:56:27.401118 2847 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.27.234:6443/api/v1/nodes\": dial tcp 172.31.27.234:6443: connect: connection refused" node="ip-172-31-27-234"
Jan 23 23:56:27.478774 containerd[2027]: time="2026-01-23T23:56:27.478704902Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-27-234,Uid:71d1a5b91bc7c24487a1387c244ff526,Namespace:kube-system,Attempt:0,}"
Jan 23 23:56:27.487799 containerd[2027]: time="2026-01-23T23:56:27.487670654Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-27-234,Uid:80c223e0b938044f90caeb17a1e7e2ee,Namespace:kube-system,Attempt:0,}"
Jan 23 23:56:27.500194 containerd[2027]: time="2026-01-23T23:56:27.499834670Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-27-234,Uid:583f7b1cd3f07bfde2198ce1de7970a7,Namespace:kube-system,Attempt:0,}"
Jan 23 23:56:27.592026 kubelet[2847]: E0123 23:56:27.591970 2847 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.27.234:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-27-234?timeout=10s\": dial tcp 172.31.27.234:6443: connect: connection refused" interval="800ms"
Jan 23 23:56:27.806253 kubelet[2847]: I0123 23:56:27.805517 2847 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-27-234"
Jan 23 23:56:27.806253 kubelet[2847]: E0123 23:56:27.806041 2847 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.27.234:6443/api/v1/nodes\": dial tcp 172.31.27.234:6443: connect: connection refused" node="ip-172-31-27-234"
Jan 23 23:56:27.815580 kubelet[2847]: E0123 23:56:27.815534 2847 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.31.27.234:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.27.234:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Jan 23 23:56:27.994173 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount60187647.mount: Deactivated successfully.
Jan 23 23:56:28.008638 containerd[2027]: time="2026-01-23T23:56:28.008549377Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 23 23:56:28.016048 containerd[2027]: time="2026-01-23T23:56:28.015981541Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173"
Jan 23 23:56:28.017826 containerd[2027]: time="2026-01-23T23:56:28.017762953Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 23 23:56:28.020820 containerd[2027]: time="2026-01-23T23:56:28.020755609Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Jan 23 23:56:28.024168 containerd[2027]: time="2026-01-23T23:56:28.024113209Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 23 23:56:28.027692 containerd[2027]: time="2026-01-23T23:56:28.026026621Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 23 23:56:28.027692 containerd[2027]: time="2026-01-23T23:56:28.027594505Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Jan 23 23:56:28.031242 containerd[2027]: time="2026-01-23T23:56:28.031159657Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 23 23:56:28.033730 containerd[2027]: time="2026-01-23T23:56:28.033680785Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 554.839539ms"
Jan 23 23:56:28.041713 containerd[2027]: time="2026-01-23T23:56:28.041651581Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 541.713123ms"
Jan 23 23:56:28.043469 containerd[2027]: time="2026-01-23T23:56:28.043378477Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 555.599979ms"
Jan 23 23:56:28.257953 containerd[2027]: time="2026-01-23T23:56:28.257626538Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 23 23:56:28.258329 containerd[2027]: time="2026-01-23T23:56:28.258148850Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 23 23:56:28.258699 containerd[2027]: time="2026-01-23T23:56:28.258562538Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 23 23:56:28.260688 containerd[2027]: time="2026-01-23T23:56:28.260528414Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 23 23:56:28.265765 containerd[2027]: time="2026-01-23T23:56:28.265452950Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 23 23:56:28.265765 containerd[2027]: time="2026-01-23T23:56:28.265633106Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 23 23:56:28.269204 containerd[2027]: time="2026-01-23T23:56:28.265742894Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 23 23:56:28.269204 containerd[2027]: time="2026-01-23T23:56:28.268180322Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 23 23:56:28.274891 containerd[2027]: time="2026-01-23T23:56:28.274395506Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 23 23:56:28.274891 containerd[2027]: time="2026-01-23T23:56:28.274527482Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 23 23:56:28.274891 containerd[2027]: time="2026-01-23T23:56:28.274556306Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 23 23:56:28.274891 containerd[2027]: time="2026-01-23T23:56:28.274731998Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 23 23:56:28.313540 systemd[1]: Started cri-containerd-12bfad05ce3db81d270313b691fd3521df6e0d10f2ccc9d99d5c35e285c4c9f4.scope - libcontainer container 12bfad05ce3db81d270313b691fd3521df6e0d10f2ccc9d99d5c35e285c4c9f4.
Jan 23 23:56:28.325646 kubelet[2847]: E0123 23:56:28.324861 2847 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.31.27.234:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.27.234:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Jan 23 23:56:28.328577 systemd[1]: Started cri-containerd-559d1fdcedee2aa535d74480967d11d8b93a716885a49f3ee988be825f325d6e.scope - libcontainer container 559d1fdcedee2aa535d74480967d11d8b93a716885a49f3ee988be825f325d6e.
Jan 23 23:56:28.341558 systemd[1]: Started cri-containerd-60ede130fc4d1c37e4848937efcad9f591dd492271eb2be9714f15af4da9fdd5.scope - libcontainer container 60ede130fc4d1c37e4848937efcad9f591dd492271eb2be9714f15af4da9fdd5.
Jan 23 23:56:28.352811 kubelet[2847]: E0123 23:56:28.352629 2847 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.31.27.234:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-27-234&limit=500&resourceVersion=0\": dial tcp 172.31.27.234:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 23 23:56:28.393309 kubelet[2847]: E0123 23:56:28.393190 2847 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.27.234:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-27-234?timeout=10s\": dial tcp 172.31.27.234:6443: connect: connection refused" interval="1.6s" Jan 23 23:56:28.443183 containerd[2027]: time="2026-01-23T23:56:28.443131863Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-27-234,Uid:80c223e0b938044f90caeb17a1e7e2ee,Namespace:kube-system,Attempt:0,} returns sandbox id \"12bfad05ce3db81d270313b691fd3521df6e0d10f2ccc9d99d5c35e285c4c9f4\"" Jan 23 23:56:28.459322 containerd[2027]: time="2026-01-23T23:56:28.458983779Z" level=info msg="CreateContainer within sandbox \"12bfad05ce3db81d270313b691fd3521df6e0d10f2ccc9d99d5c35e285c4c9f4\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 23 23:56:28.471513 containerd[2027]: time="2026-01-23T23:56:28.471343443Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-27-234,Uid:71d1a5b91bc7c24487a1387c244ff526,Namespace:kube-system,Attempt:0,} returns sandbox id \"60ede130fc4d1c37e4848937efcad9f591dd492271eb2be9714f15af4da9fdd5\"" Jan 23 23:56:28.484251 containerd[2027]: time="2026-01-23T23:56:28.483994287Z" level=info msg="CreateContainer within sandbox \"60ede130fc4d1c37e4848937efcad9f591dd492271eb2be9714f15af4da9fdd5\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 23 23:56:28.493161 containerd[2027]: 
time="2026-01-23T23:56:28.492632763Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-27-234,Uid:583f7b1cd3f07bfde2198ce1de7970a7,Namespace:kube-system,Attempt:0,} returns sandbox id \"559d1fdcedee2aa535d74480967d11d8b93a716885a49f3ee988be825f325d6e\"" Jan 23 23:56:28.501357 containerd[2027]: time="2026-01-23T23:56:28.501159219Z" level=info msg="CreateContainer within sandbox \"559d1fdcedee2aa535d74480967d11d8b93a716885a49f3ee988be825f325d6e\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 23 23:56:28.505081 containerd[2027]: time="2026-01-23T23:56:28.505027131Z" level=info msg="CreateContainer within sandbox \"12bfad05ce3db81d270313b691fd3521df6e0d10f2ccc9d99d5c35e285c4c9f4\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"17e7b1c7b1de9f60cc97422c2c1b0092cbc9da648b3c3436999e20c7971e706a\"" Jan 23 23:56:28.507268 containerd[2027]: time="2026-01-23T23:56:28.506352087Z" level=info msg="StartContainer for \"17e7b1c7b1de9f60cc97422c2c1b0092cbc9da648b3c3436999e20c7971e706a\"" Jan 23 23:56:28.531322 kubelet[2847]: E0123 23:56:28.531140 2847 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.31.27.234:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.27.234:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 23 23:56:28.532909 containerd[2027]: time="2026-01-23T23:56:28.532408275Z" level=info msg="CreateContainer within sandbox \"60ede130fc4d1c37e4848937efcad9f591dd492271eb2be9714f15af4da9fdd5\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"8637248e5b21a024e74252e042d1073ec5ff070b2afc9c7209f05ebcc9006fc7\"" Jan 23 23:56:28.535579 containerd[2027]: time="2026-01-23T23:56:28.535528719Z" level=info msg="StartContainer for \"8637248e5b21a024e74252e042d1073ec5ff070b2afc9c7209f05ebcc9006fc7\"" 
Jan 23 23:56:28.539593 containerd[2027]: time="2026-01-23T23:56:28.539513715Z" level=info msg="CreateContainer within sandbox \"559d1fdcedee2aa535d74480967d11d8b93a716885a49f3ee988be825f325d6e\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"d0705cc27a9140f555ce4e503225ba3a22d5de749738ddfa194829fc5a87576d\"" Jan 23 23:56:28.540470 containerd[2027]: time="2026-01-23T23:56:28.540408891Z" level=info msg="StartContainer for \"d0705cc27a9140f555ce4e503225ba3a22d5de749738ddfa194829fc5a87576d\"" Jan 23 23:56:28.566591 systemd[1]: Started cri-containerd-17e7b1c7b1de9f60cc97422c2c1b0092cbc9da648b3c3436999e20c7971e706a.scope - libcontainer container 17e7b1c7b1de9f60cc97422c2c1b0092cbc9da648b3c3436999e20c7971e706a. Jan 23 23:56:28.601550 systemd[1]: Started cri-containerd-8637248e5b21a024e74252e042d1073ec5ff070b2afc9c7209f05ebcc9006fc7.scope - libcontainer container 8637248e5b21a024e74252e042d1073ec5ff070b2afc9c7209f05ebcc9006fc7. Jan 23 23:56:28.613257 kubelet[2847]: I0123 23:56:28.612536 2847 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-27-234" Jan 23 23:56:28.613257 kubelet[2847]: E0123 23:56:28.613037 2847 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.27.234:6443/api/v1/nodes\": dial tcp 172.31.27.234:6443: connect: connection refused" node="ip-172-31-27-234" Jan 23 23:56:28.663628 systemd[1]: Started cri-containerd-d0705cc27a9140f555ce4e503225ba3a22d5de749738ddfa194829fc5a87576d.scope - libcontainer container d0705cc27a9140f555ce4e503225ba3a22d5de749738ddfa194829fc5a87576d. 
Jan 23 23:56:28.734836 containerd[2027]: time="2026-01-23T23:56:28.734009752Z" level=info msg="StartContainer for \"8637248e5b21a024e74252e042d1073ec5ff070b2afc9c7209f05ebcc9006fc7\" returns successfully" Jan 23 23:56:28.734836 containerd[2027]: time="2026-01-23T23:56:28.734181988Z" level=info msg="StartContainer for \"17e7b1c7b1de9f60cc97422c2c1b0092cbc9da648b3c3436999e20c7971e706a\" returns successfully" Jan 23 23:56:28.834992 containerd[2027]: time="2026-01-23T23:56:28.834756449Z" level=info msg="StartContainer for \"d0705cc27a9140f555ce4e503225ba3a22d5de749738ddfa194829fc5a87576d\" returns successfully" Jan 23 23:56:29.040004 kubelet[2847]: E0123 23:56:29.039594 2847 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-27-234\" not found" node="ip-172-31-27-234" Jan 23 23:56:29.050883 kubelet[2847]: E0123 23:56:29.050430 2847 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-27-234\" not found" node="ip-172-31-27-234" Jan 23 23:56:29.053758 kubelet[2847]: E0123 23:56:29.053707 2847 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-27-234\" not found" node="ip-172-31-27-234" Jan 23 23:56:30.057275 kubelet[2847]: E0123 23:56:30.056847 2847 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-27-234\" not found" node="ip-172-31-27-234" Jan 23 23:56:30.058832 kubelet[2847]: E0123 23:56:30.058802 2847 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-27-234\" not found" node="ip-172-31-27-234" Jan 23 23:56:30.219029 kubelet[2847]: I0123 23:56:30.217825 2847 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-27-234" Jan 23 23:56:30.466330 kubelet[2847]: E0123 23:56:30.465944 2847 kubelet.go:3305] "No need to 
create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-27-234\" not found" node="ip-172-31-27-234" Jan 23 23:56:32.293575 kubelet[2847]: E0123 23:56:32.293511 2847 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-27-234\" not found" node="ip-172-31-27-234" Jan 23 23:56:32.484866 kubelet[2847]: I0123 23:56:32.484197 2847 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-27-234" Jan 23 23:56:32.484866 kubelet[2847]: E0123 23:56:32.484274 2847 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ip-172-31-27-234\": node \"ip-172-31-27-234\" not found" Jan 23 23:56:32.579532 kubelet[2847]: I0123 23:56:32.579381 2847 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-27-234" Jan 23 23:56:32.621710 kubelet[2847]: E0123 23:56:32.621664 2847 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-27-234\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ip-172-31-27-234" Jan 23 23:56:32.623245 kubelet[2847]: I0123 23:56:32.621897 2847 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-27-234" Jan 23 23:56:32.640891 kubelet[2847]: E0123 23:56:32.640844 2847 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ip-172-31-27-234\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ip-172-31-27-234" Jan 23 23:56:32.641102 kubelet[2847]: I0123 23:56:32.641077 2847 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-27-234" Jan 23 23:56:32.646303 kubelet[2847]: E0123 23:56:32.646254 2847 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-27-234\" is forbidden: no 
PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ip-172-31-27-234" Jan 23 23:56:32.953882 kubelet[2847]: I0123 23:56:32.953759 2847 apiserver.go:52] "Watching apiserver" Jan 23 23:56:32.980278 kubelet[2847]: I0123 23:56:32.980225 2847 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 23 23:56:35.153880 kubelet[2847]: I0123 23:56:35.153830 2847 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-27-234" Jan 23 23:56:36.238676 update_engine[1999]: I20260123 23:56:36.238353 1999 update_attempter.cc:509] Updating boot flags... Jan 23 23:56:36.379856 systemd[1]: Reloading requested from client PID 3153 ('systemctl') (unit session-7.scope)... Jan 23 23:56:36.385674 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 34 scanned by (udev-worker) (3148) Jan 23 23:56:36.379881 systemd[1]: Reloading... Jan 23 23:56:36.719269 zram_generator::config[3274]: No configuration found. Jan 23 23:56:36.829297 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 34 scanned by (udev-worker) (3148) Jan 23 23:56:37.086427 kubelet[2847]: I0123 23:56:37.086252 2847 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-27-234" podStartSLOduration=2.08620237 podStartE2EDuration="2.08620237s" podCreationTimestamp="2026-01-23 23:56:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 23:56:37.078604858 +0000 UTC m=+12.481204971" watchObservedRunningTime="2026-01-23 23:56:37.08620237 +0000 UTC m=+12.488802459" Jan 23 23:56:37.155927 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
Jan 23 23:56:37.363415 systemd[1]: Reloading finished in 980 ms. Jan 23 23:56:37.443323 kubelet[2847]: I0123 23:56:37.442729 2847 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-27-234" Jan 23 23:56:37.543981 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 23:56:37.572933 systemd[1]: kubelet.service: Deactivated successfully. Jan 23 23:56:37.573663 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 23:56:37.573835 systemd[1]: kubelet.service: Consumed 3.188s CPU time, 127.5M memory peak, 0B memory swap peak. Jan 23 23:56:37.585926 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 23:56:37.945577 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 23:56:37.958831 (kubelet)[3417]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 23 23:56:38.081450 kubelet[3417]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 23 23:56:38.081450 kubelet[3417]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 23 23:56:38.081450 kubelet[3417]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 23 23:56:38.081450 kubelet[3417]: I0123 23:56:38.078475 3417 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 23 23:56:38.103357 kubelet[3417]: I0123 23:56:38.103293 3417 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Jan 23 23:56:38.103357 kubelet[3417]: I0123 23:56:38.103342 3417 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 23 23:56:38.105268 kubelet[3417]: I0123 23:56:38.103784 3417 server.go:956] "Client rotation is on, will bootstrap in background" Jan 23 23:56:38.108430 kubelet[3417]: I0123 23:56:38.108372 3417 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Jan 23 23:56:38.119802 kubelet[3417]: I0123 23:56:38.119556 3417 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 23 23:56:38.135549 kubelet[3417]: E0123 23:56:38.135474 3417 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 23 23:56:38.135549 kubelet[3417]: I0123 23:56:38.135542 3417 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 23 23:56:38.147975 kubelet[3417]: I0123 23:56:38.147913 3417 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 23 23:56:38.148848 kubelet[3417]: I0123 23:56:38.148766 3417 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 23 23:56:38.149100 kubelet[3417]: I0123 23:56:38.148826 3417 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-27-234","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 23 23:56:38.149271 kubelet[3417]: I0123 23:56:38.149112 3417 topology_manager.go:138] "Creating topology manager with none policy" Jan 23 
23:56:38.149271 kubelet[3417]: I0123 23:56:38.149132 3417 container_manager_linux.go:303] "Creating device plugin manager" Jan 23 23:56:38.151326 kubelet[3417]: I0123 23:56:38.149224 3417 state_mem.go:36] "Initialized new in-memory state store" Jan 23 23:56:38.151688 kubelet[3417]: I0123 23:56:38.151647 3417 kubelet.go:480] "Attempting to sync node with API server" Jan 23 23:56:38.152366 kubelet[3417]: I0123 23:56:38.152317 3417 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 23 23:56:38.152472 kubelet[3417]: I0123 23:56:38.152394 3417 kubelet.go:386] "Adding apiserver pod source" Jan 23 23:56:38.152472 kubelet[3417]: I0123 23:56:38.152430 3417 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 23 23:56:38.177031 kubelet[3417]: I0123 23:56:38.176973 3417 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 23 23:56:38.187554 kubelet[3417]: I0123 23:56:38.187492 3417 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jan 23 23:56:38.190547 sudo[3432]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jan 23 23:56:38.192286 sudo[3432]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Jan 23 23:56:38.206764 kubelet[3417]: I0123 23:56:38.206646 3417 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 23 23:56:38.206764 kubelet[3417]: I0123 23:56:38.206719 3417 server.go:1289] "Started kubelet" Jan 23 23:56:38.212901 kubelet[3417]: I0123 23:56:38.212854 3417 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 23 23:56:38.221812 kubelet[3417]: I0123 23:56:38.221708 3417 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jan 23 23:56:38.225179 kubelet[3417]: I0123 23:56:38.222831 3417 ratelimit.go:55] "Setting rate limiting for endpoint" 
service="podresources" qps=100 burstTokens=10 Jan 23 23:56:38.225179 kubelet[3417]: I0123 23:56:38.223322 3417 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 23 23:56:38.235249 kubelet[3417]: I0123 23:56:38.234716 3417 server.go:317] "Adding debug handlers to kubelet server" Jan 23 23:56:38.235675 kubelet[3417]: I0123 23:56:38.235640 3417 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 23 23:56:38.254529 kubelet[3417]: I0123 23:56:38.250488 3417 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 23 23:56:38.258237 kubelet[3417]: E0123 23:56:38.255068 3417 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-27-234\" not found" Jan 23 23:56:38.259871 kubelet[3417]: I0123 23:56:38.259808 3417 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Jan 23 23:56:38.262245 kubelet[3417]: I0123 23:56:38.262156 3417 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Jan 23 23:56:38.262447 kubelet[3417]: I0123 23:56:38.262410 3417 status_manager.go:230] "Starting to sync pod status with apiserver" Jan 23 23:56:38.262507 kubelet[3417]: I0123 23:56:38.262488 3417 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jan 23 23:56:38.262600 kubelet[3417]: I0123 23:56:38.262508 3417 kubelet.go:2436] "Starting kubelet main sync loop" Jan 23 23:56:38.262600 kubelet[3417]: E0123 23:56:38.262580 3417 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 23 23:56:38.266256 kubelet[3417]: I0123 23:56:38.265621 3417 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 23 23:56:38.269259 kubelet[3417]: I0123 23:56:38.268071 3417 reconciler.go:26] "Reconciler: start to sync state" Jan 23 23:56:38.282267 kubelet[3417]: I0123 23:56:38.282197 3417 factory.go:223] Registration of the systemd container factory successfully Jan 23 23:56:38.286293 kubelet[3417]: I0123 23:56:38.283592 3417 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 23 23:56:38.286927 kubelet[3417]: E0123 23:56:38.286870 3417 kubelet.go:1600] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 23 23:56:38.291360 kubelet[3417]: I0123 23:56:38.291325 3417 factory.go:223] Registration of the containerd container factory successfully Jan 23 23:56:38.363973 kubelet[3417]: E0123 23:56:38.363003 3417 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 23 23:56:38.448685 kubelet[3417]: I0123 23:56:38.448633 3417 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 23 23:56:38.448685 kubelet[3417]: I0123 23:56:38.448670 3417 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 23 23:56:38.448887 kubelet[3417]: I0123 23:56:38.448709 3417 state_mem.go:36] "Initialized new in-memory state store" Jan 23 23:56:38.452269 kubelet[3417]: I0123 23:56:38.449857 3417 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 23 23:56:38.452269 kubelet[3417]: I0123 23:56:38.449886 3417 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 23 23:56:38.452269 kubelet[3417]: I0123 23:56:38.449934 3417 policy_none.go:49] "None policy: Start" Jan 23 23:56:38.452269 kubelet[3417]: I0123 23:56:38.449954 3417 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 23 23:56:38.452269 kubelet[3417]: I0123 23:56:38.449982 3417 state_mem.go:35] "Initializing new in-memory state store" Jan 23 23:56:38.452269 kubelet[3417]: I0123 23:56:38.450172 3417 state_mem.go:75] "Updated machine memory state" Jan 23 23:56:38.463660 kubelet[3417]: E0123 23:56:38.463070 3417 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jan 23 23:56:38.463660 kubelet[3417]: I0123 23:56:38.463386 3417 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 23 23:56:38.463660 kubelet[3417]: I0123 23:56:38.463406 3417 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 23 
23:56:38.473376 kubelet[3417]: E0123 23:56:38.473313 3417 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jan 23 23:56:38.475826 kubelet[3417]: I0123 23:56:38.475439 3417 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 23 23:56:38.565026 kubelet[3417]: I0123 23:56:38.564953 3417 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-27-234" Jan 23 23:56:38.566291 kubelet[3417]: I0123 23:56:38.565914 3417 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-27-234" Jan 23 23:56:38.566424 kubelet[3417]: I0123 23:56:38.566318 3417 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-27-234" Jan 23 23:56:38.574244 kubelet[3417]: I0123 23:56:38.572272 3417 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/71d1a5b91bc7c24487a1387c244ff526-ca-certs\") pod \"kube-apiserver-ip-172-31-27-234\" (UID: \"71d1a5b91bc7c24487a1387c244ff526\") " pod="kube-system/kube-apiserver-ip-172-31-27-234" Jan 23 23:56:38.574244 kubelet[3417]: I0123 23:56:38.572341 3417 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/71d1a5b91bc7c24487a1387c244ff526-k8s-certs\") pod \"kube-apiserver-ip-172-31-27-234\" (UID: \"71d1a5b91bc7c24487a1387c244ff526\") " pod="kube-system/kube-apiserver-ip-172-31-27-234" Jan 23 23:56:38.574244 kubelet[3417]: I0123 23:56:38.572385 3417 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/80c223e0b938044f90caeb17a1e7e2ee-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-27-234\" (UID: 
\"80c223e0b938044f90caeb17a1e7e2ee\") " pod="kube-system/kube-controller-manager-ip-172-31-27-234" Jan 23 23:56:38.574244 kubelet[3417]: I0123 23:56:38.572428 3417 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/80c223e0b938044f90caeb17a1e7e2ee-k8s-certs\") pod \"kube-controller-manager-ip-172-31-27-234\" (UID: \"80c223e0b938044f90caeb17a1e7e2ee\") " pod="kube-system/kube-controller-manager-ip-172-31-27-234" Jan 23 23:56:38.574244 kubelet[3417]: I0123 23:56:38.572467 3417 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/80c223e0b938044f90caeb17a1e7e2ee-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-27-234\" (UID: \"80c223e0b938044f90caeb17a1e7e2ee\") " pod="kube-system/kube-controller-manager-ip-172-31-27-234" Jan 23 23:56:38.574598 kubelet[3417]: I0123 23:56:38.572505 3417 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/71d1a5b91bc7c24487a1387c244ff526-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-27-234\" (UID: \"71d1a5b91bc7c24487a1387c244ff526\") " pod="kube-system/kube-apiserver-ip-172-31-27-234" Jan 23 23:56:38.574598 kubelet[3417]: I0123 23:56:38.572555 3417 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/80c223e0b938044f90caeb17a1e7e2ee-ca-certs\") pod \"kube-controller-manager-ip-172-31-27-234\" (UID: \"80c223e0b938044f90caeb17a1e7e2ee\") " pod="kube-system/kube-controller-manager-ip-172-31-27-234" Jan 23 23:56:38.574598 kubelet[3417]: I0123 23:56:38.572590 3417 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/80c223e0b938044f90caeb17a1e7e2ee-kubeconfig\") pod \"kube-controller-manager-ip-172-31-27-234\" (UID: \"80c223e0b938044f90caeb17a1e7e2ee\") " pod="kube-system/kube-controller-manager-ip-172-31-27-234" Jan 23 23:56:38.574598 kubelet[3417]: I0123 23:56:38.572627 3417 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/583f7b1cd3f07bfde2198ce1de7970a7-kubeconfig\") pod \"kube-scheduler-ip-172-31-27-234\" (UID: \"583f7b1cd3f07bfde2198ce1de7970a7\") " pod="kube-system/kube-scheduler-ip-172-31-27-234" Jan 23 23:56:38.582624 kubelet[3417]: E0123 23:56:38.582569 3417 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-27-234\" already exists" pod="kube-system/kube-scheduler-ip-172-31-27-234" Jan 23 23:56:38.585715 kubelet[3417]: E0123 23:56:38.585662 3417 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-27-234\" already exists" pod="kube-system/kube-apiserver-ip-172-31-27-234" Jan 23 23:56:38.596059 kubelet[3417]: I0123 23:56:38.593942 3417 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-27-234" Jan 23 23:56:38.614973 kubelet[3417]: I0123 23:56:38.614918 3417 kubelet_node_status.go:124] "Node was previously registered" node="ip-172-31-27-234" Jan 23 23:56:38.615120 kubelet[3417]: I0123 23:56:38.615035 3417 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-27-234" Jan 23 23:56:39.156008 kubelet[3417]: I0123 23:56:39.155951 3417 apiserver.go:52] "Watching apiserver" Jan 23 23:56:39.167728 sudo[3432]: pam_unix(sudo:session): session closed for user root Jan 23 23:56:39.168184 kubelet[3417]: I0123 23:56:39.168131 3417 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 23 23:56:39.444979 kubelet[3417]: I0123 23:56:39.444663 3417 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="kube-system/kube-scheduler-ip-172-31-27-234" podStartSLOduration=2.444640513 podStartE2EDuration="2.444640513s" podCreationTimestamp="2026-01-23 23:56:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 23:56:39.425091661 +0000 UTC m=+1.451908088" watchObservedRunningTime="2026-01-23 23:56:39.444640513 +0000 UTC m=+1.471456892" Jan 23 23:56:39.467511 kubelet[3417]: I0123 23:56:39.466612 3417 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-27-234" podStartSLOduration=1.46659287 podStartE2EDuration="1.46659287s" podCreationTimestamp="2026-01-23 23:56:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 23:56:39.446057221 +0000 UTC m=+1.472873612" watchObservedRunningTime="2026-01-23 23:56:39.46659287 +0000 UTC m=+1.493409273" Jan 23 23:56:40.956866 kubelet[3417]: I0123 23:56:40.955719 3417 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 23 23:56:40.956866 kubelet[3417]: I0123 23:56:40.956569 3417 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 23 23:56:40.957522 containerd[2027]: time="2026-01-23T23:56:40.956303477Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 23 23:56:41.568936 sudo[2340]: pam_unix(sudo:session): session closed for user root Jan 23 23:56:41.652684 sshd[2337]: pam_unix(sshd:session): session closed for user core Jan 23 23:56:41.661375 systemd[1]: sshd@6-172.31.27.234:22-4.153.228.146:58910.service: Deactivated successfully. Jan 23 23:56:41.666809 systemd[1]: session-7.scope: Deactivated successfully. 
Jan 23 23:56:41.667433 systemd[1]: session-7.scope: Consumed 11.552s CPU time, 153.4M memory peak, 0B memory swap peak. Jan 23 23:56:41.668711 systemd-logind[1997]: Session 7 logged out. Waiting for processes to exit. Jan 23 23:56:41.671084 systemd-logind[1997]: Removed session 7. Jan 23 23:56:41.840178 systemd[1]: Created slice kubepods-burstable-pode8e792fa_7fa3_4407_8794_6c13475b955d.slice - libcontainer container kubepods-burstable-pode8e792fa_7fa3_4407_8794_6c13475b955d.slice. Jan 23 23:56:41.871901 systemd[1]: Created slice kubepods-besteffort-podf2571db0_6ab9_4e08_ac80_d1482f077764.slice - libcontainer container kubepods-besteffort-podf2571db0_6ab9_4e08_ac80_d1482f077764.slice. Jan 23 23:56:41.894401 kubelet[3417]: I0123 23:56:41.893570 3417 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e8e792fa-7fa3-4407-8794-6c13475b955d-host-proc-sys-kernel\") pod \"cilium-x2mb5\" (UID: \"e8e792fa-7fa3-4407-8794-6c13475b955d\") " pod="kube-system/cilium-x2mb5" Jan 23 23:56:41.894401 kubelet[3417]: I0123 23:56:41.893642 3417 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f2571db0-6ab9-4e08-ac80-d1482f077764-lib-modules\") pod \"kube-proxy-rqgsz\" (UID: \"f2571db0-6ab9-4e08-ac80-d1482f077764\") " pod="kube-system/kube-proxy-rqgsz" Jan 23 23:56:41.894401 kubelet[3417]: I0123 23:56:41.893681 3417 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z4qsl\" (UniqueName: \"kubernetes.io/projected/f2571db0-6ab9-4e08-ac80-d1482f077764-kube-api-access-z4qsl\") pod \"kube-proxy-rqgsz\" (UID: \"f2571db0-6ab9-4e08-ac80-d1482f077764\") " pod="kube-system/kube-proxy-rqgsz" Jan 23 23:56:41.894401 kubelet[3417]: I0123 23:56:41.893781 3417 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e8e792fa-7fa3-4407-8794-6c13475b955d-cilium-run\") pod \"cilium-x2mb5\" (UID: \"e8e792fa-7fa3-4407-8794-6c13475b955d\") " pod="kube-system/cilium-x2mb5" Jan 23 23:56:41.894401 kubelet[3417]: I0123 23:56:41.893829 3417 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e8e792fa-7fa3-4407-8794-6c13475b955d-clustermesh-secrets\") pod \"cilium-x2mb5\" (UID: \"e8e792fa-7fa3-4407-8794-6c13475b955d\") " pod="kube-system/cilium-x2mb5" Jan 23 23:56:41.894992 kubelet[3417]: I0123 23:56:41.893872 3417 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e8e792fa-7fa3-4407-8794-6c13475b955d-cilium-config-path\") pod \"cilium-x2mb5\" (UID: \"e8e792fa-7fa3-4407-8794-6c13475b955d\") " pod="kube-system/cilium-x2mb5" Jan 23 23:56:41.894992 kubelet[3417]: I0123 23:56:41.893907 3417 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e8e792fa-7fa3-4407-8794-6c13475b955d-hubble-tls\") pod \"cilium-x2mb5\" (UID: \"e8e792fa-7fa3-4407-8794-6c13475b955d\") " pod="kube-system/cilium-x2mb5" Jan 23 23:56:41.894992 kubelet[3417]: I0123 23:56:41.893942 3417 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e8e792fa-7fa3-4407-8794-6c13475b955d-hostproc\") pod \"cilium-x2mb5\" (UID: \"e8e792fa-7fa3-4407-8794-6c13475b955d\") " pod="kube-system/cilium-x2mb5" Jan 23 23:56:41.894992 kubelet[3417]: I0123 23:56:41.893976 3417 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e8e792fa-7fa3-4407-8794-6c13475b955d-cilium-cgroup\") pod 
\"cilium-x2mb5\" (UID: \"e8e792fa-7fa3-4407-8794-6c13475b955d\") " pod="kube-system/cilium-x2mb5" Jan 23 23:56:41.894992 kubelet[3417]: I0123 23:56:41.894013 3417 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e8e792fa-7fa3-4407-8794-6c13475b955d-cni-path\") pod \"cilium-x2mb5\" (UID: \"e8e792fa-7fa3-4407-8794-6c13475b955d\") " pod="kube-system/cilium-x2mb5" Jan 23 23:56:41.894992 kubelet[3417]: I0123 23:56:41.894047 3417 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e8e792fa-7fa3-4407-8794-6c13475b955d-host-proc-sys-net\") pod \"cilium-x2mb5\" (UID: \"e8e792fa-7fa3-4407-8794-6c13475b955d\") " pod="kube-system/cilium-x2mb5" Jan 23 23:56:41.895320 kubelet[3417]: I0123 23:56:41.894080 3417 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2skq4\" (UniqueName: \"kubernetes.io/projected/e8e792fa-7fa3-4407-8794-6c13475b955d-kube-api-access-2skq4\") pod \"cilium-x2mb5\" (UID: \"e8e792fa-7fa3-4407-8794-6c13475b955d\") " pod="kube-system/cilium-x2mb5" Jan 23 23:56:41.895320 kubelet[3417]: I0123 23:56:41.894120 3417 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/f2571db0-6ab9-4e08-ac80-d1482f077764-kube-proxy\") pod \"kube-proxy-rqgsz\" (UID: \"f2571db0-6ab9-4e08-ac80-d1482f077764\") " pod="kube-system/kube-proxy-rqgsz" Jan 23 23:56:41.895320 kubelet[3417]: I0123 23:56:41.894156 3417 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f2571db0-6ab9-4e08-ac80-d1482f077764-xtables-lock\") pod \"kube-proxy-rqgsz\" (UID: \"f2571db0-6ab9-4e08-ac80-d1482f077764\") " pod="kube-system/kube-proxy-rqgsz" Jan 23 
23:56:41.895320 kubelet[3417]: I0123 23:56:41.894191 3417 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e8e792fa-7fa3-4407-8794-6c13475b955d-bpf-maps\") pod \"cilium-x2mb5\" (UID: \"e8e792fa-7fa3-4407-8794-6c13475b955d\") " pod="kube-system/cilium-x2mb5" Jan 23 23:56:41.895320 kubelet[3417]: I0123 23:56:41.894249 3417 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e8e792fa-7fa3-4407-8794-6c13475b955d-etc-cni-netd\") pod \"cilium-x2mb5\" (UID: \"e8e792fa-7fa3-4407-8794-6c13475b955d\") " pod="kube-system/cilium-x2mb5" Jan 23 23:56:41.895320 kubelet[3417]: I0123 23:56:41.894312 3417 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e8e792fa-7fa3-4407-8794-6c13475b955d-lib-modules\") pod \"cilium-x2mb5\" (UID: \"e8e792fa-7fa3-4407-8794-6c13475b955d\") " pod="kube-system/cilium-x2mb5" Jan 23 23:56:41.895597 kubelet[3417]: I0123 23:56:41.894349 3417 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e8e792fa-7fa3-4407-8794-6c13475b955d-xtables-lock\") pod \"cilium-x2mb5\" (UID: \"e8e792fa-7fa3-4407-8794-6c13475b955d\") " pod="kube-system/cilium-x2mb5" Jan 23 23:56:42.154640 containerd[2027]: time="2026-01-23T23:56:42.154482375Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-x2mb5,Uid:e8e792fa-7fa3-4407-8794-6c13475b955d,Namespace:kube-system,Attempt:0,}" Jan 23 23:56:42.189970 containerd[2027]: time="2026-01-23T23:56:42.187809243Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-rqgsz,Uid:f2571db0-6ab9-4e08-ac80-d1482f077764,Namespace:kube-system,Attempt:0,}" Jan 23 23:56:42.235942 systemd[1]: Created slice 
kubepods-besteffort-pod8e90d03f_b3a7_42c9_8360_7fd9e1863b90.slice - libcontainer container kubepods-besteffort-pod8e90d03f_b3a7_42c9_8360_7fd9e1863b90.slice. Jan 23 23:56:42.256039 containerd[2027]: time="2026-01-23T23:56:42.255764967Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 23 23:56:42.256039 containerd[2027]: time="2026-01-23T23:56:42.255953643Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 23 23:56:42.256349 containerd[2027]: time="2026-01-23T23:56:42.256191063Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 23 23:56:42.261363 containerd[2027]: time="2026-01-23T23:56:42.258453387Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 23 23:56:42.298099 kubelet[3417]: I0123 23:56:42.298048 3417 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8e90d03f-b3a7-42c9-8360-7fd9e1863b90-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-tppkh\" (UID: \"8e90d03f-b3a7-42c9-8360-7fd9e1863b90\") " pod="kube-system/cilium-operator-6c4d7847fc-tppkh" Jan 23 23:56:42.299946 kubelet[3417]: I0123 23:56:42.299156 3417 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6v5zq\" (UniqueName: \"kubernetes.io/projected/8e90d03f-b3a7-42c9-8360-7fd9e1863b90-kube-api-access-6v5zq\") pod \"cilium-operator-6c4d7847fc-tppkh\" (UID: \"8e90d03f-b3a7-42c9-8360-7fd9e1863b90\") " pod="kube-system/cilium-operator-6c4d7847fc-tppkh" Jan 23 23:56:42.310711 containerd[2027]: time="2026-01-23T23:56:42.304895440Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 23 23:56:42.310711 containerd[2027]: time="2026-01-23T23:56:42.304996096Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 23 23:56:42.310711 containerd[2027]: time="2026-01-23T23:56:42.305034988Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 23 23:56:42.310711 containerd[2027]: time="2026-01-23T23:56:42.305195548Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 23 23:56:42.329589 systemd[1]: Started cri-containerd-4f2f24e9f806dee2f95caf40f200b10db943e13055436e21e96b099d9ae8ae9c.scope - libcontainer container 4f2f24e9f806dee2f95caf40f200b10db943e13055436e21e96b099d9ae8ae9c. Jan 23 23:56:42.373557 systemd[1]: Started cri-containerd-c1faf680e5ea008210a9a2b0c97d6963a051b2827dbc530b2c51a28c5f1ebeab.scope - libcontainer container c1faf680e5ea008210a9a2b0c97d6963a051b2827dbc530b2c51a28c5f1ebeab. 
Jan 23 23:56:42.437923 containerd[2027]: time="2026-01-23T23:56:42.437710408Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-x2mb5,Uid:e8e792fa-7fa3-4407-8794-6c13475b955d,Namespace:kube-system,Attempt:0,} returns sandbox id \"4f2f24e9f806dee2f95caf40f200b10db943e13055436e21e96b099d9ae8ae9c\"" Jan 23 23:56:42.459934 containerd[2027]: time="2026-01-23T23:56:42.458409676Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jan 23 23:56:42.476564 containerd[2027]: time="2026-01-23T23:56:42.476505688Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-rqgsz,Uid:f2571db0-6ab9-4e08-ac80-d1482f077764,Namespace:kube-system,Attempt:0,} returns sandbox id \"c1faf680e5ea008210a9a2b0c97d6963a051b2827dbc530b2c51a28c5f1ebeab\"" Jan 23 23:56:42.488421 containerd[2027]: time="2026-01-23T23:56:42.488363945Z" level=info msg="CreateContainer within sandbox \"c1faf680e5ea008210a9a2b0c97d6963a051b2827dbc530b2c51a28c5f1ebeab\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 23 23:56:42.524610 containerd[2027]: time="2026-01-23T23:56:42.524453993Z" level=info msg="CreateContainer within sandbox \"c1faf680e5ea008210a9a2b0c97d6963a051b2827dbc530b2c51a28c5f1ebeab\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"c858fe7d09ad81f6c3bd5a55ab2b3e6296b36aa96b58ceb6a0571e82981a4aeb\"" Jan 23 23:56:42.526316 containerd[2027]: time="2026-01-23T23:56:42.525983981Z" level=info msg="StartContainer for \"c858fe7d09ad81f6c3bd5a55ab2b3e6296b36aa96b58ceb6a0571e82981a4aeb\"" Jan 23 23:56:42.556261 containerd[2027]: time="2026-01-23T23:56:42.556002857Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-tppkh,Uid:8e90d03f-b3a7-42c9-8360-7fd9e1863b90,Namespace:kube-system,Attempt:0,}" Jan 23 23:56:42.573780 systemd[1]: Started cri-containerd-c858fe7d09ad81f6c3bd5a55ab2b3e6296b36aa96b58ceb6a0571e82981a4aeb.scope 
- libcontainer container c858fe7d09ad81f6c3bd5a55ab2b3e6296b36aa96b58ceb6a0571e82981a4aeb. Jan 23 23:56:42.622263 containerd[2027]: time="2026-01-23T23:56:42.621390317Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 23 23:56:42.622263 containerd[2027]: time="2026-01-23T23:56:42.621524525Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 23 23:56:42.622875 containerd[2027]: time="2026-01-23T23:56:42.621595481Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 23 23:56:42.622875 containerd[2027]: time="2026-01-23T23:56:42.622658369Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 23 23:56:42.658589 containerd[2027]: time="2026-01-23T23:56:42.658446761Z" level=info msg="StartContainer for \"c858fe7d09ad81f6c3bd5a55ab2b3e6296b36aa96b58ceb6a0571e82981a4aeb\" returns successfully" Jan 23 23:56:42.667282 systemd[1]: Started cri-containerd-8dd7bb0741f7fe664c89ccf02beacbfe2f6521fe55d3e2351a68088c4d6aec3e.scope - libcontainer container 8dd7bb0741f7fe664c89ccf02beacbfe2f6521fe55d3e2351a68088c4d6aec3e. 
Jan 23 23:56:42.745633 containerd[2027]: time="2026-01-23T23:56:42.745260486Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-tppkh,Uid:8e90d03f-b3a7-42c9-8360-7fd9e1863b90,Namespace:kube-system,Attempt:0,} returns sandbox id \"8dd7bb0741f7fe664c89ccf02beacbfe2f6521fe55d3e2351a68088c4d6aec3e\"" Jan 23 23:56:45.358755 kubelet[3417]: I0123 23:56:45.358655 3417 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-rqgsz" podStartSLOduration=4.358630567 podStartE2EDuration="4.358630567s" podCreationTimestamp="2026-01-23 23:56:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 23:56:43.406424477 +0000 UTC m=+5.433240868" watchObservedRunningTime="2026-01-23 23:56:45.358630567 +0000 UTC m=+7.385446982" Jan 23 23:56:47.250018 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3235684652.mount: Deactivated successfully. 
Jan 23 23:56:49.983431 containerd[2027]: time="2026-01-23T23:56:49.983361542Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:56:49.986457 containerd[2027]: time="2026-01-23T23:56:49.986395838Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710" Jan 23 23:56:49.988961 containerd[2027]: time="2026-01-23T23:56:49.988884134Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:56:49.992431 containerd[2027]: time="2026-01-23T23:56:49.992351114Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 7.533839414s" Jan 23 23:56:49.992773 containerd[2027]: time="2026-01-23T23:56:49.992626838Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Jan 23 23:56:49.996727 containerd[2027]: time="2026-01-23T23:56:49.995367566Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jan 23 23:56:50.003512 containerd[2027]: time="2026-01-23T23:56:50.002846422Z" level=info msg="CreateContainer within sandbox \"4f2f24e9f806dee2f95caf40f200b10db943e13055436e21e96b099d9ae8ae9c\" for container 
&ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 23 23:56:50.036467 containerd[2027]: time="2026-01-23T23:56:50.036412282Z" level=info msg="CreateContainer within sandbox \"4f2f24e9f806dee2f95caf40f200b10db943e13055436e21e96b099d9ae8ae9c\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"30aa7754ff5e044944072029b3ef80be4401120832b2fbeecc3e0295b8ef741a\"" Jan 23 23:56:50.039193 containerd[2027]: time="2026-01-23T23:56:50.037597726Z" level=info msg="StartContainer for \"30aa7754ff5e044944072029b3ef80be4401120832b2fbeecc3e0295b8ef741a\"" Jan 23 23:56:50.101545 systemd[1]: Started cri-containerd-30aa7754ff5e044944072029b3ef80be4401120832b2fbeecc3e0295b8ef741a.scope - libcontainer container 30aa7754ff5e044944072029b3ef80be4401120832b2fbeecc3e0295b8ef741a. Jan 23 23:56:50.155289 containerd[2027]: time="2026-01-23T23:56:50.154733207Z" level=info msg="StartContainer for \"30aa7754ff5e044944072029b3ef80be4401120832b2fbeecc3e0295b8ef741a\" returns successfully" Jan 23 23:56:50.186488 systemd[1]: cri-containerd-30aa7754ff5e044944072029b3ef80be4401120832b2fbeecc3e0295b8ef741a.scope: Deactivated successfully. Jan 23 23:56:51.025647 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-30aa7754ff5e044944072029b3ef80be4401120832b2fbeecc3e0295b8ef741a-rootfs.mount: Deactivated successfully. 
Jan 23 23:56:51.377081 containerd[2027]: time="2026-01-23T23:56:51.376704301Z" level=info msg="shim disconnected" id=30aa7754ff5e044944072029b3ef80be4401120832b2fbeecc3e0295b8ef741a namespace=k8s.io Jan 23 23:56:51.377081 containerd[2027]: time="2026-01-23T23:56:51.376879309Z" level=warning msg="cleaning up after shim disconnected" id=30aa7754ff5e044944072029b3ef80be4401120832b2fbeecc3e0295b8ef741a namespace=k8s.io Jan 23 23:56:51.378880 containerd[2027]: time="2026-01-23T23:56:51.376902337Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 23 23:56:51.434627 containerd[2027]: time="2026-01-23T23:56:51.434558257Z" level=info msg="CreateContainer within sandbox \"4f2f24e9f806dee2f95caf40f200b10db943e13055436e21e96b099d9ae8ae9c\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 23 23:56:51.470952 containerd[2027]: time="2026-01-23T23:56:51.470874133Z" level=info msg="CreateContainer within sandbox \"4f2f24e9f806dee2f95caf40f200b10db943e13055436e21e96b099d9ae8ae9c\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"6228bd07f99c4e27bdc2ae45d5937bc6236bb9bb3aa68f75db09ae84429e4eb0\"" Jan 23 23:56:51.471782 containerd[2027]: time="2026-01-23T23:56:51.471734833Z" level=info msg="StartContainer for \"6228bd07f99c4e27bdc2ae45d5937bc6236bb9bb3aa68f75db09ae84429e4eb0\"" Jan 23 23:56:51.530542 systemd[1]: Started cri-containerd-6228bd07f99c4e27bdc2ae45d5937bc6236bb9bb3aa68f75db09ae84429e4eb0.scope - libcontainer container 6228bd07f99c4e27bdc2ae45d5937bc6236bb9bb3aa68f75db09ae84429e4eb0. Jan 23 23:56:51.589284 containerd[2027]: time="2026-01-23T23:56:51.585549746Z" level=info msg="StartContainer for \"6228bd07f99c4e27bdc2ae45d5937bc6236bb9bb3aa68f75db09ae84429e4eb0\" returns successfully" Jan 23 23:56:51.615405 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 23 23:56:51.616171 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. 
Jan 23 23:56:51.616328 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jan 23 23:56:51.626418 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 23 23:56:51.627123 systemd[1]: cri-containerd-6228bd07f99c4e27bdc2ae45d5937bc6236bb9bb3aa68f75db09ae84429e4eb0.scope: Deactivated successfully. Jan 23 23:56:51.673249 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 23 23:56:51.696449 containerd[2027]: time="2026-01-23T23:56:51.696351818Z" level=info msg="shim disconnected" id=6228bd07f99c4e27bdc2ae45d5937bc6236bb9bb3aa68f75db09ae84429e4eb0 namespace=k8s.io Jan 23 23:56:51.696449 containerd[2027]: time="2026-01-23T23:56:51.696431654Z" level=warning msg="cleaning up after shim disconnected" id=6228bd07f99c4e27bdc2ae45d5937bc6236bb9bb3aa68f75db09ae84429e4eb0 namespace=k8s.io Jan 23 23:56:51.696449 containerd[2027]: time="2026-01-23T23:56:51.696454118Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 23 23:56:52.026042 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6228bd07f99c4e27bdc2ae45d5937bc6236bb9bb3aa68f75db09ae84429e4eb0-rootfs.mount: Deactivated successfully. Jan 23 23:56:52.443511 containerd[2027]: time="2026-01-23T23:56:52.443298734Z" level=info msg="CreateContainer within sandbox \"4f2f24e9f806dee2f95caf40f200b10db943e13055436e21e96b099d9ae8ae9c\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jan 23 23:56:52.493053 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2992241673.mount: Deactivated successfully. 
Jan 23 23:56:52.507052 containerd[2027]: time="2026-01-23T23:56:52.506975870Z" level=info msg="CreateContainer within sandbox \"4f2f24e9f806dee2f95caf40f200b10db943e13055436e21e96b099d9ae8ae9c\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"1ddaeef2e9992f2d22507b0b23b3f0aaaefe2948eaacded4590d96fa12ecc543\"" Jan 23 23:56:52.508441 containerd[2027]: time="2026-01-23T23:56:52.508381190Z" level=info msg="StartContainer for \"1ddaeef2e9992f2d22507b0b23b3f0aaaefe2948eaacded4590d96fa12ecc543\"" Jan 23 23:56:52.583106 systemd[1]: Started cri-containerd-1ddaeef2e9992f2d22507b0b23b3f0aaaefe2948eaacded4590d96fa12ecc543.scope - libcontainer container 1ddaeef2e9992f2d22507b0b23b3f0aaaefe2948eaacded4590d96fa12ecc543. Jan 23 23:56:52.654422 containerd[2027]: time="2026-01-23T23:56:52.653853567Z" level=info msg="StartContainer for \"1ddaeef2e9992f2d22507b0b23b3f0aaaefe2948eaacded4590d96fa12ecc543\" returns successfully" Jan 23 23:56:52.670892 containerd[2027]: time="2026-01-23T23:56:52.668360691Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:56:52.668822 systemd[1]: cri-containerd-1ddaeef2e9992f2d22507b0b23b3f0aaaefe2948eaacded4590d96fa12ecc543.scope: Deactivated successfully. 
Jan 23 23:56:52.674105 containerd[2027]: time="2026-01-23T23:56:52.674014839Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306" Jan 23 23:56:52.676424 containerd[2027]: time="2026-01-23T23:56:52.676287207Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:56:52.687364 containerd[2027]: time="2026-01-23T23:56:52.687284679Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 2.691835837s" Jan 23 23:56:52.687568 containerd[2027]: time="2026-01-23T23:56:52.687368127Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Jan 23 23:56:52.697699 containerd[2027]: time="2026-01-23T23:56:52.697431747Z" level=info msg="CreateContainer within sandbox \"8dd7bb0741f7fe664c89ccf02beacbfe2f6521fe55d3e2351a68088c4d6aec3e\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jan 23 23:56:52.738475 containerd[2027]: time="2026-01-23T23:56:52.738395319Z" level=info msg="CreateContainer within sandbox \"8dd7bb0741f7fe664c89ccf02beacbfe2f6521fe55d3e2351a68088c4d6aec3e\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"a0307767ff961e63dc304f4dda23a88b0485e586cae4264e38b0207ed1671b06\"" Jan 23 23:56:52.740583 containerd[2027]: 
time="2026-01-23T23:56:52.740510151Z" level=info msg="StartContainer for \"a0307767ff961e63dc304f4dda23a88b0485e586cae4264e38b0207ed1671b06\"" Jan 23 23:56:52.797538 systemd[1]: Started cri-containerd-a0307767ff961e63dc304f4dda23a88b0485e586cae4264e38b0207ed1671b06.scope - libcontainer container a0307767ff961e63dc304f4dda23a88b0485e586cae4264e38b0207ed1671b06. Jan 23 23:56:52.813958 containerd[2027]: time="2026-01-23T23:56:52.813629920Z" level=info msg="shim disconnected" id=1ddaeef2e9992f2d22507b0b23b3f0aaaefe2948eaacded4590d96fa12ecc543 namespace=k8s.io Jan 23 23:56:52.813958 containerd[2027]: time="2026-01-23T23:56:52.813710344Z" level=warning msg="cleaning up after shim disconnected" id=1ddaeef2e9992f2d22507b0b23b3f0aaaefe2948eaacded4590d96fa12ecc543 namespace=k8s.io Jan 23 23:56:52.813958 containerd[2027]: time="2026-01-23T23:56:52.813734188Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 23 23:56:52.866844 containerd[2027]: time="2026-01-23T23:56:52.866306728Z" level=info msg="StartContainer for \"a0307767ff961e63dc304f4dda23a88b0485e586cae4264e38b0207ed1671b06\" returns successfully" Jan 23 23:56:53.032062 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1ddaeef2e9992f2d22507b0b23b3f0aaaefe2948eaacded4590d96fa12ecc543-rootfs.mount: Deactivated successfully. 
Jan 23 23:56:53.447190 containerd[2027]: time="2026-01-23T23:56:53.446967555Z" level=info msg="CreateContainer within sandbox \"4f2f24e9f806dee2f95caf40f200b10db943e13055436e21e96b099d9ae8ae9c\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jan 23 23:56:53.485486 containerd[2027]: time="2026-01-23T23:56:53.483498699Z" level=info msg="CreateContainer within sandbox \"4f2f24e9f806dee2f95caf40f200b10db943e13055436e21e96b099d9ae8ae9c\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"994cf3838b595c80061871b1b0da2c0be7c9d18fba461018de5bf0a549b2d452\"" Jan 23 23:56:53.485486 containerd[2027]: time="2026-01-23T23:56:53.484420791Z" level=info msg="StartContainer for \"994cf3838b595c80061871b1b0da2c0be7c9d18fba461018de5bf0a549b2d452\"" Jan 23 23:56:53.566560 systemd[1]: Started cri-containerd-994cf3838b595c80061871b1b0da2c0be7c9d18fba461018de5bf0a549b2d452.scope - libcontainer container 994cf3838b595c80061871b1b0da2c0be7c9d18fba461018de5bf0a549b2d452. Jan 23 23:56:53.679933 containerd[2027]: time="2026-01-23T23:56:53.679847908Z" level=info msg="StartContainer for \"994cf3838b595c80061871b1b0da2c0be7c9d18fba461018de5bf0a549b2d452\" returns successfully" Jan 23 23:56:53.681544 systemd[1]: cri-containerd-994cf3838b595c80061871b1b0da2c0be7c9d18fba461018de5bf0a549b2d452.scope: Deactivated successfully. Jan 23 23:56:53.748075 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-994cf3838b595c80061871b1b0da2c0be7c9d18fba461018de5bf0a549b2d452-rootfs.mount: Deactivated successfully. 
Jan 23 23:56:53.753899 containerd[2027]: time="2026-01-23T23:56:53.753756509Z" level=info msg="shim disconnected" id=994cf3838b595c80061871b1b0da2c0be7c9d18fba461018de5bf0a549b2d452 namespace=k8s.io
Jan 23 23:56:53.754353 containerd[2027]: time="2026-01-23T23:56:53.753860765Z" level=warning msg="cleaning up after shim disconnected" id=994cf3838b595c80061871b1b0da2c0be7c9d18fba461018de5bf0a549b2d452 namespace=k8s.io
Jan 23 23:56:53.754353 containerd[2027]: time="2026-01-23T23:56:53.754125245Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 23 23:56:54.464385 containerd[2027]: time="2026-01-23T23:56:54.464317072Z" level=info msg="CreateContainer within sandbox \"4f2f24e9f806dee2f95caf40f200b10db943e13055436e21e96b099d9ae8ae9c\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jan 23 23:56:54.507784 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount477467603.mount: Deactivated successfully.
Jan 23 23:56:54.519860 containerd[2027]: time="2026-01-23T23:56:54.519675520Z" level=info msg="CreateContainer within sandbox \"4f2f24e9f806dee2f95caf40f200b10db943e13055436e21e96b099d9ae8ae9c\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"50550b578b3201a56ee75fec197e351620b95e049360b4110af9ed5bfe16b329\""
Jan 23 23:56:54.522341 containerd[2027]: time="2026-01-23T23:56:54.522051064Z" level=info msg="StartContainer for \"50550b578b3201a56ee75fec197e351620b95e049360b4110af9ed5bfe16b329\""
Jan 23 23:56:54.603572 kubelet[3417]: I0123 23:56:54.603193 3417 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-tppkh" podStartSLOduration=2.664402172 podStartE2EDuration="12.603170393s" podCreationTimestamp="2026-01-23 23:56:42 +0000 UTC" firstStartedPulling="2026-01-23 23:56:42.749918214 +0000 UTC m=+4.776734605" lastFinishedPulling="2026-01-23 23:56:52.688686435 +0000 UTC m=+14.715502826" observedRunningTime="2026-01-23 23:56:53.689937088 +0000 UTC m=+15.716753479" watchObservedRunningTime="2026-01-23 23:56:54.603170393 +0000 UTC m=+16.629986796"
Jan 23 23:56:54.615610 systemd[1]: Started cri-containerd-50550b578b3201a56ee75fec197e351620b95e049360b4110af9ed5bfe16b329.scope - libcontainer container 50550b578b3201a56ee75fec197e351620b95e049360b4110af9ed5bfe16b329.
Jan 23 23:56:54.682270 containerd[2027]: time="2026-01-23T23:56:54.681750581Z" level=info msg="StartContainer for \"50550b578b3201a56ee75fec197e351620b95e049360b4110af9ed5bfe16b329\" returns successfully"
Jan 23 23:56:54.861364 kubelet[3417]: I0123 23:56:54.860925 3417 kubelet_node_status.go:501] "Fast updating node status as it just became ready"
Jan 23 23:56:54.945041 systemd[1]: Created slice kubepods-burstable-pod68e33b05_d007_4c43_9132_23af17f73307.slice - libcontainer container kubepods-burstable-pod68e33b05_d007_4c43_9132_23af17f73307.slice.
Jan 23 23:56:54.966410 systemd[1]: Created slice kubepods-burstable-podec1321bc_efa8_4216_823a_c57d3afbb8c4.slice - libcontainer container kubepods-burstable-podec1321bc_efa8_4216_823a_c57d3afbb8c4.slice.
Jan 23 23:56:55.005917 kubelet[3417]: I0123 23:56:55.005863 3417 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qxt78\" (UniqueName: \"kubernetes.io/projected/68e33b05-d007-4c43-9132-23af17f73307-kube-api-access-qxt78\") pod \"coredns-674b8bbfcf-vbd5w\" (UID: \"68e33b05-d007-4c43-9132-23af17f73307\") " pod="kube-system/coredns-674b8bbfcf-vbd5w"
Jan 23 23:56:55.007159 kubelet[3417]: I0123 23:56:55.006161 3417 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ec1321bc-efa8-4216-823a-c57d3afbb8c4-config-volume\") pod \"coredns-674b8bbfcf-4csqz\" (UID: \"ec1321bc-efa8-4216-823a-c57d3afbb8c4\") " pod="kube-system/coredns-674b8bbfcf-4csqz"
Jan 23 23:56:55.007159 kubelet[3417]: I0123 23:56:55.006235 3417 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-56xb7\" (UniqueName: \"kubernetes.io/projected/ec1321bc-efa8-4216-823a-c57d3afbb8c4-kube-api-access-56xb7\") pod \"coredns-674b8bbfcf-4csqz\" (UID: \"ec1321bc-efa8-4216-823a-c57d3afbb8c4\") " pod="kube-system/coredns-674b8bbfcf-4csqz"
Jan 23 23:56:55.007159 kubelet[3417]: I0123 23:56:55.006288 3417 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/68e33b05-d007-4c43-9132-23af17f73307-config-volume\") pod \"coredns-674b8bbfcf-vbd5w\" (UID: \"68e33b05-d007-4c43-9132-23af17f73307\") " pod="kube-system/coredns-674b8bbfcf-vbd5w"
Jan 23 23:56:55.253481 containerd[2027]: time="2026-01-23T23:56:55.253178776Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-vbd5w,Uid:68e33b05-d007-4c43-9132-23af17f73307,Namespace:kube-system,Attempt:0,}"
Jan 23 23:56:55.294267 containerd[2027]: time="2026-01-23T23:56:55.293362216Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-4csqz,Uid:ec1321bc-efa8-4216-823a-c57d3afbb8c4,Namespace:kube-system,Attempt:0,}"
Jan 23 23:56:57.911480 (udev-worker)[4211]: Network interface NamePolicy= disabled on kernel command line.
Jan 23 23:56:57.913068 (udev-worker)[4209]: Network interface NamePolicy= disabled on kernel command line.
Jan 23 23:56:57.913454 systemd-networkd[1917]: cilium_host: Link UP
Jan 23 23:56:57.916752 systemd-networkd[1917]: cilium_net: Link UP
Jan 23 23:56:57.917184 systemd-networkd[1917]: cilium_net: Gained carrier
Jan 23 23:56:57.917566 systemd-networkd[1917]: cilium_host: Gained carrier
Jan 23 23:56:58.093930 systemd-networkd[1917]: cilium_vxlan: Link UP
Jan 23 23:56:58.093944 systemd-networkd[1917]: cilium_vxlan: Gained carrier
Jan 23 23:56:58.468449 systemd-networkd[1917]: cilium_net: Gained IPv6LL
Jan 23 23:56:58.702557 kernel: NET: Registered PF_ALG protocol family
Jan 23 23:56:58.917737 systemd-networkd[1917]: cilium_host: Gained IPv6LL
Jan 23 23:56:59.237175 systemd-networkd[1917]: cilium_vxlan: Gained IPv6LL
Jan 23 23:57:00.034701 systemd-networkd[1917]: lxc_health: Link UP
Jan 23 23:57:00.053067 systemd-networkd[1917]: lxc_health: Gained carrier
Jan 23 23:57:00.202384 kubelet[3417]: I0123 23:57:00.202277 3417 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-x2mb5" podStartSLOduration=11.662096979 podStartE2EDuration="19.201206157s" podCreationTimestamp="2026-01-23 23:56:41 +0000 UTC" firstStartedPulling="2026-01-23 23:56:42.455877892 +0000 UTC m=+4.482694271" lastFinishedPulling="2026-01-23 23:56:49.994986914 +0000 UTC m=+12.021803449" observedRunningTime="2026-01-23 23:56:55.565011702 +0000 UTC m=+17.591828105" watchObservedRunningTime="2026-01-23 23:57:00.201206157 +0000 UTC m=+22.228022560"
Jan 23 23:57:00.862549 systemd-networkd[1917]: lxc90a9307adcde: Link UP
Jan 23 23:57:00.878517 kernel: eth0: renamed from tmp05d24
Jan 23 23:57:00.884814 systemd-networkd[1917]: lxc90a9307adcde: Gained carrier
Jan 23 23:57:00.933431 (udev-worker)[4255]: Network interface NamePolicy= disabled on kernel command line.
Jan 23 23:57:00.936065 (udev-worker)[4258]: Network interface NamePolicy= disabled on kernel command line.
Jan 23 23:57:00.938342 systemd-networkd[1917]: lxca82f3c8c0f68: Link UP
Jan 23 23:57:00.949272 kernel: eth0: renamed from tmp2741f
Jan 23 23:57:00.954170 systemd-networkd[1917]: lxca82f3c8c0f68: Gained carrier
Jan 23 23:57:02.053431 systemd-networkd[1917]: lxc_health: Gained IPv6LL
Jan 23 23:57:02.436811 systemd-networkd[1917]: lxc90a9307adcde: Gained IPv6LL
Jan 23 23:57:02.564604 systemd-networkd[1917]: lxca82f3c8c0f68: Gained IPv6LL
Jan 23 23:57:05.179714 ntpd[1991]: Listen normally on 8 cilium_host 192.168.0.24:123
Jan 23 23:57:05.180579 ntpd[1991]: 23 Jan 23:57:05 ntpd[1991]: Listen normally on 8 cilium_host 192.168.0.24:123
Jan 23 23:57:05.180579 ntpd[1991]: 23 Jan 23:57:05 ntpd[1991]: Listen normally on 9 cilium_net [fe80::945e:e0ff:fed7:9893%4]:123
Jan 23 23:57:05.179851 ntpd[1991]: Listen normally on 9 cilium_net [fe80::945e:e0ff:fed7:9893%4]:123
Jan 23 23:57:05.180778 ntpd[1991]: Listen normally on 10 cilium_host [fe80::38d0:70ff:fe90:aa4b%5]:123
Jan 23 23:57:05.181764 ntpd[1991]: 23 Jan 23:57:05 ntpd[1991]: Listen normally on 10 cilium_host [fe80::38d0:70ff:fe90:aa4b%5]:123
Jan 23 23:57:05.181764 ntpd[1991]: 23 Jan 23:57:05 ntpd[1991]: Listen normally on 11 cilium_vxlan [fe80::f029:27ff:fe28:4c19%6]:123
Jan 23 23:57:05.181764 ntpd[1991]: 23 Jan 23:57:05 ntpd[1991]: Listen normally on 12 lxc_health [fe80::50c1:36ff:fe3d:1afb%8]:123
Jan 23 23:57:05.181764 ntpd[1991]: 23 Jan 23:57:05 ntpd[1991]: Listen normally on 13 lxc90a9307adcde [fe80::d4e0:2fff:fe34:afe0%10]:123
Jan 23 23:57:05.181764 ntpd[1991]: 23 Jan 23:57:05 ntpd[1991]: Listen normally on 14 lxca82f3c8c0f68 [fe80::44e1:9eff:fe44:c675%12]:123
Jan 23 23:57:05.180935 ntpd[1991]: Listen normally on 11 cilium_vxlan [fe80::f029:27ff:fe28:4c19%6]:123
Jan 23 23:57:05.181012 ntpd[1991]: Listen normally on 12 lxc_health [fe80::50c1:36ff:fe3d:1afb%8]:123
Jan 23 23:57:05.181089 ntpd[1991]: Listen normally on 13 lxc90a9307adcde [fe80::d4e0:2fff:fe34:afe0%10]:123
Jan 23 23:57:05.181159 ntpd[1991]: Listen normally on 14 lxca82f3c8c0f68 [fe80::44e1:9eff:fe44:c675%12]:123
Jan 23 23:57:09.862623 containerd[2027]: time="2026-01-23T23:57:09.860424933Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 23 23:57:09.862623 containerd[2027]: time="2026-01-23T23:57:09.860667777Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 23 23:57:09.862623 containerd[2027]: time="2026-01-23T23:57:09.860703957Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 23 23:57:09.869444 containerd[2027]: time="2026-01-23T23:57:09.862543389Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 23 23:57:09.883331 containerd[2027]: time="2026-01-23T23:57:09.874659105Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 23 23:57:09.883331 containerd[2027]: time="2026-01-23T23:57:09.874767249Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 23 23:57:09.883331 containerd[2027]: time="2026-01-23T23:57:09.874795689Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 23 23:57:09.883331 containerd[2027]: time="2026-01-23T23:57:09.874968813Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 23 23:57:09.952321 systemd[1]: run-containerd-runc-k8s.io-05d2406cbd6d06d63196004dbccbc0e646aa3ca4030cc5862b15dc6f7e9d95ae-runc.FU2f3u.mount: Deactivated successfully.
Jan 23 23:57:09.970589 systemd[1]: Started cri-containerd-05d2406cbd6d06d63196004dbccbc0e646aa3ca4030cc5862b15dc6f7e9d95ae.scope - libcontainer container 05d2406cbd6d06d63196004dbccbc0e646aa3ca4030cc5862b15dc6f7e9d95ae.
Jan 23 23:57:09.975795 systemd[1]: Started cri-containerd-2741fb3a69edd0934374644d7dc578c3bc32548eed8bbcb019349017ba00fae2.scope - libcontainer container 2741fb3a69edd0934374644d7dc578c3bc32548eed8bbcb019349017ba00fae2.
Jan 23 23:57:10.115925 containerd[2027]: time="2026-01-23T23:57:10.114296058Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-vbd5w,Uid:68e33b05-d007-4c43-9132-23af17f73307,Namespace:kube-system,Attempt:0,} returns sandbox id \"05d2406cbd6d06d63196004dbccbc0e646aa3ca4030cc5862b15dc6f7e9d95ae\""
Jan 23 23:57:10.133610 containerd[2027]: time="2026-01-23T23:57:10.133541058Z" level=info msg="CreateContainer within sandbox \"05d2406cbd6d06d63196004dbccbc0e646aa3ca4030cc5862b15dc6f7e9d95ae\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Jan 23 23:57:10.134002 containerd[2027]: time="2026-01-23T23:57:10.133906830Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-4csqz,Uid:ec1321bc-efa8-4216-823a-c57d3afbb8c4,Namespace:kube-system,Attempt:0,} returns sandbox id \"2741fb3a69edd0934374644d7dc578c3bc32548eed8bbcb019349017ba00fae2\""
Jan 23 23:57:10.147557 containerd[2027]: time="2026-01-23T23:57:10.147498762Z" level=info msg="CreateContainer within sandbox \"2741fb3a69edd0934374644d7dc578c3bc32548eed8bbcb019349017ba00fae2\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Jan 23 23:57:10.188872 containerd[2027]: time="2026-01-23T23:57:10.188693478Z" level=info msg="CreateContainer within sandbox \"2741fb3a69edd0934374644d7dc578c3bc32548eed8bbcb019349017ba00fae2\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"657a449f12ed13b41b9e7efc490741775df1d27f785b4819e94267726264f731\""
Jan 23 23:57:10.190581 containerd[2027]: time="2026-01-23T23:57:10.190366206Z" level=info msg="StartContainer for \"657a449f12ed13b41b9e7efc490741775df1d27f785b4819e94267726264f731\""
Jan 23 23:57:10.202370 containerd[2027]: time="2026-01-23T23:57:10.202179282Z" level=info msg="CreateContainer within sandbox \"05d2406cbd6d06d63196004dbccbc0e646aa3ca4030cc5862b15dc6f7e9d95ae\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"f99b42b6b41cad0521955c572c5171c6bb52d47180f91560a764a79dd342252a\""
Jan 23 23:57:10.206429 containerd[2027]: time="2026-01-23T23:57:10.206082414Z" level=info msg="StartContainer for \"f99b42b6b41cad0521955c572c5171c6bb52d47180f91560a764a79dd342252a\""
Jan 23 23:57:10.286687 systemd[1]: Started cri-containerd-f99b42b6b41cad0521955c572c5171c6bb52d47180f91560a764a79dd342252a.scope - libcontainer container f99b42b6b41cad0521955c572c5171c6bb52d47180f91560a764a79dd342252a.
Jan 23 23:57:10.319592 systemd[1]: Started cri-containerd-657a449f12ed13b41b9e7efc490741775df1d27f785b4819e94267726264f731.scope - libcontainer container 657a449f12ed13b41b9e7efc490741775df1d27f785b4819e94267726264f731.
Jan 23 23:57:10.440870 containerd[2027]: time="2026-01-23T23:57:10.440807299Z" level=info msg="StartContainer for \"f99b42b6b41cad0521955c572c5171c6bb52d47180f91560a764a79dd342252a\" returns successfully"
Jan 23 23:57:10.461588 containerd[2027]: time="2026-01-23T23:57:10.461528743Z" level=info msg="StartContainer for \"657a449f12ed13b41b9e7efc490741775df1d27f785b4819e94267726264f731\" returns successfully"
Jan 23 23:57:10.562249 kubelet[3417]: I0123 23:57:10.561639 3417 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-vbd5w" podStartSLOduration=28.561617036 podStartE2EDuration="28.561617036s" podCreationTimestamp="2026-01-23 23:56:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 23:57:10.55745084 +0000 UTC m=+32.584267327" watchObservedRunningTime="2026-01-23 23:57:10.561617036 +0000 UTC m=+32.588433439"
Jan 23 23:57:10.598927 kubelet[3417]: I0123 23:57:10.598831 3417 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-4csqz" podStartSLOduration=28.598808828 podStartE2EDuration="28.598808828s" podCreationTimestamp="2026-01-23 23:56:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 23:57:10.596437988 +0000 UTC m=+32.623254643" watchObservedRunningTime="2026-01-23 23:57:10.598808828 +0000 UTC m=+32.625625219"
Jan 23 23:57:20.527848 systemd[1]: Started sshd@7-172.31.27.234:22-4.153.228.146:49930.service - OpenSSH per-connection server daemon (4.153.228.146:49930).
Jan 23 23:57:21.033556 sshd[4796]: Accepted publickey for core from 4.153.228.146 port 49930 ssh2: RSA SHA256:5AacvNrSqCkKxn2Zg0NJQDqu4zeDYsnVAvlYY2DAYUI
Jan 23 23:57:21.036608 sshd[4796]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 23:57:21.047265 systemd-logind[1997]: New session 8 of user core.
Jan 23 23:57:21.055910 systemd[1]: Started session-8.scope - Session 8 of User core.
Jan 23 23:57:21.550711 sshd[4796]: pam_unix(sshd:session): session closed for user core
Jan 23 23:57:21.556977 systemd[1]: sshd@7-172.31.27.234:22-4.153.228.146:49930.service: Deactivated successfully.
Jan 23 23:57:21.562005 systemd[1]: session-8.scope: Deactivated successfully.
Jan 23 23:57:21.566762 systemd-logind[1997]: Session 8 logged out. Waiting for processes to exit.
Jan 23 23:57:21.569588 systemd-logind[1997]: Removed session 8.
Jan 23 23:57:26.656863 systemd[1]: Started sshd@8-172.31.27.234:22-4.153.228.146:37428.service - OpenSSH per-connection server daemon (4.153.228.146:37428).
Jan 23 23:57:27.184632 sshd[4809]: Accepted publickey for core from 4.153.228.146 port 37428 ssh2: RSA SHA256:5AacvNrSqCkKxn2Zg0NJQDqu4zeDYsnVAvlYY2DAYUI
Jan 23 23:57:27.187311 sshd[4809]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 23:57:27.194811 systemd-logind[1997]: New session 9 of user core.
Jan 23 23:57:27.207476 systemd[1]: Started session-9.scope - Session 9 of User core.
Jan 23 23:57:27.679609 sshd[4809]: pam_unix(sshd:session): session closed for user core
Jan 23 23:57:27.687167 systemd[1]: sshd@8-172.31.27.234:22-4.153.228.146:37428.service: Deactivated successfully.
Jan 23 23:57:27.691392 systemd[1]: session-9.scope: Deactivated successfully.
Jan 23 23:57:27.693632 systemd-logind[1997]: Session 9 logged out. Waiting for processes to exit.
Jan 23 23:57:27.695848 systemd-logind[1997]: Removed session 9.
Jan 23 23:57:32.781860 systemd[1]: Started sshd@9-172.31.27.234:22-4.153.228.146:37434.service - OpenSSH per-connection server daemon (4.153.228.146:37434).
Jan 23 23:57:33.318941 sshd[4823]: Accepted publickey for core from 4.153.228.146 port 37434 ssh2: RSA SHA256:5AacvNrSqCkKxn2Zg0NJQDqu4zeDYsnVAvlYY2DAYUI
Jan 23 23:57:33.322742 sshd[4823]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 23:57:33.332411 systemd-logind[1997]: New session 10 of user core.
Jan 23 23:57:33.341502 systemd[1]: Started session-10.scope - Session 10 of User core.
Jan 23 23:57:33.823572 sshd[4823]: pam_unix(sshd:session): session closed for user core
Jan 23 23:57:33.831841 systemd[1]: sshd@9-172.31.27.234:22-4.153.228.146:37434.service: Deactivated successfully.
Jan 23 23:57:33.836610 systemd[1]: session-10.scope: Deactivated successfully.
Jan 23 23:57:33.839629 systemd-logind[1997]: Session 10 logged out. Waiting for processes to exit.
Jan 23 23:57:33.841871 systemd-logind[1997]: Removed session 10.
Jan 23 23:57:38.918780 systemd[1]: Started sshd@10-172.31.27.234:22-4.153.228.146:58862.service - OpenSSH per-connection server daemon (4.153.228.146:58862).
Jan 23 23:57:39.429675 sshd[4838]: Accepted publickey for core from 4.153.228.146 port 58862 ssh2: RSA SHA256:5AacvNrSqCkKxn2Zg0NJQDqu4zeDYsnVAvlYY2DAYUI
Jan 23 23:57:39.433193 sshd[4838]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 23:57:39.442793 systemd-logind[1997]: New session 11 of user core.
Jan 23 23:57:39.450559 systemd[1]: Started session-11.scope - Session 11 of User core.
Jan 23 23:57:39.914294 sshd[4838]: pam_unix(sshd:session): session closed for user core
Jan 23 23:57:39.920522 systemd[1]: sshd@10-172.31.27.234:22-4.153.228.146:58862.service: Deactivated successfully.
Jan 23 23:57:39.924729 systemd[1]: session-11.scope: Deactivated successfully.
Jan 23 23:57:39.930115 systemd-logind[1997]: Session 11 logged out. Waiting for processes to exit.
Jan 23 23:57:39.932806 systemd-logind[1997]: Removed session 11.
Jan 23 23:57:40.014890 systemd[1]: Started sshd@11-172.31.27.234:22-4.153.228.146:58874.service - OpenSSH per-connection server daemon (4.153.228.146:58874).
Jan 23 23:57:40.517183 sshd[4851]: Accepted publickey for core from 4.153.228.146 port 58874 ssh2: RSA SHA256:5AacvNrSqCkKxn2Zg0NJQDqu4zeDYsnVAvlYY2DAYUI
Jan 23 23:57:40.520374 sshd[4851]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 23:57:40.531380 systemd-logind[1997]: New session 12 of user core.
Jan 23 23:57:40.538598 systemd[1]: Started session-12.scope - Session 12 of User core.
Jan 23 23:57:41.084652 sshd[4851]: pam_unix(sshd:session): session closed for user core
Jan 23 23:57:41.093044 systemd[1]: sshd@11-172.31.27.234:22-4.153.228.146:58874.service: Deactivated successfully.
Jan 23 23:57:41.097246 systemd[1]: session-12.scope: Deactivated successfully.
Jan 23 23:57:41.099383 systemd-logind[1997]: Session 12 logged out. Waiting for processes to exit.
Jan 23 23:57:41.102557 systemd-logind[1997]: Removed session 12.
Jan 23 23:57:41.182834 systemd[1]: Started sshd@12-172.31.27.234:22-4.153.228.146:58880.service - OpenSSH per-connection server daemon (4.153.228.146:58880).
Jan 23 23:57:41.688866 sshd[4862]: Accepted publickey for core from 4.153.228.146 port 58880 ssh2: RSA SHA256:5AacvNrSqCkKxn2Zg0NJQDqu4zeDYsnVAvlYY2DAYUI
Jan 23 23:57:41.692009 sshd[4862]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 23:57:41.702331 systemd-logind[1997]: New session 13 of user core.
Jan 23 23:57:41.709544 systemd[1]: Started session-13.scope - Session 13 of User core.
Jan 23 23:57:42.169981 sshd[4862]: pam_unix(sshd:session): session closed for user core
Jan 23 23:57:42.175626 systemd[1]: sshd@12-172.31.27.234:22-4.153.228.146:58880.service: Deactivated successfully.
Jan 23 23:57:42.179590 systemd[1]: session-13.scope: Deactivated successfully.
Jan 23 23:57:42.183963 systemd-logind[1997]: Session 13 logged out. Waiting for processes to exit.
Jan 23 23:57:42.186630 systemd-logind[1997]: Removed session 13.
Jan 23 23:57:47.285753 systemd[1]: Started sshd@13-172.31.27.234:22-4.153.228.146:33064.service - OpenSSH per-connection server daemon (4.153.228.146:33064).
Jan 23 23:57:47.821078 sshd[4878]: Accepted publickey for core from 4.153.228.146 port 33064 ssh2: RSA SHA256:5AacvNrSqCkKxn2Zg0NJQDqu4zeDYsnVAvlYY2DAYUI
Jan 23 23:57:47.823956 sshd[4878]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 23:57:47.833830 systemd-logind[1997]: New session 14 of user core.
Jan 23 23:57:47.842882 systemd[1]: Started session-14.scope - Session 14 of User core.
Jan 23 23:57:48.325174 sshd[4878]: pam_unix(sshd:session): session closed for user core
Jan 23 23:57:48.331743 systemd[1]: sshd@13-172.31.27.234:22-4.153.228.146:33064.service: Deactivated successfully.
Jan 23 23:57:48.335733 systemd[1]: session-14.scope: Deactivated successfully.
Jan 23 23:57:48.338773 systemd-logind[1997]: Session 14 logged out. Waiting for processes to exit.
Jan 23 23:57:48.341078 systemd-logind[1997]: Removed session 14.
Jan 23 23:57:53.426809 systemd[1]: Started sshd@14-172.31.27.234:22-4.153.228.146:33070.service - OpenSSH per-connection server daemon (4.153.228.146:33070).
Jan 23 23:57:53.966495 sshd[4891]: Accepted publickey for core from 4.153.228.146 port 33070 ssh2: RSA SHA256:5AacvNrSqCkKxn2Zg0NJQDqu4zeDYsnVAvlYY2DAYUI
Jan 23 23:57:53.969376 sshd[4891]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 23:57:53.977769 systemd-logind[1997]: New session 15 of user core.
Jan 23 23:57:53.987545 systemd[1]: Started session-15.scope - Session 15 of User core.
Jan 23 23:57:54.491474 sshd[4891]: pam_unix(sshd:session): session closed for user core
Jan 23 23:57:54.497409 systemd[1]: sshd@14-172.31.27.234:22-4.153.228.146:33070.service: Deactivated successfully.
Jan 23 23:57:54.504816 systemd[1]: session-15.scope: Deactivated successfully.
Jan 23 23:57:54.510334 systemd-logind[1997]: Session 15 logged out. Waiting for processes to exit.
Jan 23 23:57:54.512545 systemd-logind[1997]: Removed session 15.
Jan 23 23:57:59.589765 systemd[1]: Started sshd@15-172.31.27.234:22-4.153.228.146:43882.service - OpenSSH per-connection server daemon (4.153.228.146:43882).
Jan 23 23:58:00.130076 sshd[4905]: Accepted publickey for core from 4.153.228.146 port 43882 ssh2: RSA SHA256:5AacvNrSqCkKxn2Zg0NJQDqu4zeDYsnVAvlYY2DAYUI
Jan 23 23:58:00.133087 sshd[4905]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 23:58:00.142739 systemd-logind[1997]: New session 16 of user core.
Jan 23 23:58:00.148556 systemd[1]: Started session-16.scope - Session 16 of User core.
Jan 23 23:58:00.646447 sshd[4905]: pam_unix(sshd:session): session closed for user core
Jan 23 23:58:00.654158 systemd-logind[1997]: Session 16 logged out. Waiting for processes to exit.
Jan 23 23:58:00.655990 systemd[1]: sshd@15-172.31.27.234:22-4.153.228.146:43882.service: Deactivated successfully.
Jan 23 23:58:00.661571 systemd[1]: session-16.scope: Deactivated successfully.
Jan 23 23:58:00.665375 systemd-logind[1997]: Removed session 16.
Jan 23 23:58:00.739683 systemd[1]: Started sshd@16-172.31.27.234:22-4.153.228.146:43884.service - OpenSSH per-connection server daemon (4.153.228.146:43884).
Jan 23 23:58:01.251468 sshd[4918]: Accepted publickey for core from 4.153.228.146 port 43884 ssh2: RSA SHA256:5AacvNrSqCkKxn2Zg0NJQDqu4zeDYsnVAvlYY2DAYUI
Jan 23 23:58:01.253997 sshd[4918]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 23:58:01.262947 systemd-logind[1997]: New session 17 of user core.
Jan 23 23:58:01.273494 systemd[1]: Started session-17.scope - Session 17 of User core.
Jan 23 23:58:01.815704 sshd[4918]: pam_unix(sshd:session): session closed for user core
Jan 23 23:58:01.822036 systemd[1]: sshd@16-172.31.27.234:22-4.153.228.146:43884.service: Deactivated successfully.
Jan 23 23:58:01.829476 systemd[1]: session-17.scope: Deactivated successfully.
Jan 23 23:58:01.831078 systemd-logind[1997]: Session 17 logged out. Waiting for processes to exit.
Jan 23 23:58:01.833080 systemd-logind[1997]: Removed session 17.
Jan 23 23:58:01.925814 systemd[1]: Started sshd@17-172.31.27.234:22-4.153.228.146:43886.service - OpenSSH per-connection server daemon (4.153.228.146:43886).
Jan 23 23:58:02.471646 sshd[4929]: Accepted publickey for core from 4.153.228.146 port 43886 ssh2: RSA SHA256:5AacvNrSqCkKxn2Zg0NJQDqu4zeDYsnVAvlYY2DAYUI
Jan 23 23:58:02.474644 sshd[4929]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 23:58:02.483268 systemd-logind[1997]: New session 18 of user core.
Jan 23 23:58:02.492554 systemd[1]: Started session-18.scope - Session 18 of User core.
Jan 23 23:58:03.757607 sshd[4929]: pam_unix(sshd:session): session closed for user core
Jan 23 23:58:03.764934 systemd[1]: sshd@17-172.31.27.234:22-4.153.228.146:43886.service: Deactivated successfully.
Jan 23 23:58:03.771304 systemd[1]: session-18.scope: Deactivated successfully.
Jan 23 23:58:03.774789 systemd-logind[1997]: Session 18 logged out. Waiting for processes to exit.
Jan 23 23:58:03.777338 systemd-logind[1997]: Removed session 18.
Jan 23 23:58:03.848772 systemd[1]: Started sshd@18-172.31.27.234:22-4.153.228.146:43902.service - OpenSSH per-connection server daemon (4.153.228.146:43902).
Jan 23 23:58:04.355379 sshd[4947]: Accepted publickey for core from 4.153.228.146 port 43902 ssh2: RSA SHA256:5AacvNrSqCkKxn2Zg0NJQDqu4zeDYsnVAvlYY2DAYUI
Jan 23 23:58:04.359646 sshd[4947]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 23:58:04.368184 systemd-logind[1997]: New session 19 of user core.
Jan 23 23:58:04.377556 systemd[1]: Started session-19.scope - Session 19 of User core.
Jan 23 23:58:05.104163 sshd[4947]: pam_unix(sshd:session): session closed for user core
Jan 23 23:58:05.113130 systemd-logind[1997]: Session 19 logged out. Waiting for processes to exit.
Jan 23 23:58:05.114607 systemd[1]: sshd@18-172.31.27.234:22-4.153.228.146:43902.service: Deactivated successfully.
Jan 23 23:58:05.119064 systemd[1]: session-19.scope: Deactivated successfully.
Jan 23 23:58:05.126783 systemd-logind[1997]: Removed session 19.
Jan 23 23:58:05.201835 systemd[1]: Started sshd@19-172.31.27.234:22-4.153.228.146:38622.service - OpenSSH per-connection server daemon (4.153.228.146:38622).
Jan 23 23:58:05.707806 sshd[4958]: Accepted publickey for core from 4.153.228.146 port 38622 ssh2: RSA SHA256:5AacvNrSqCkKxn2Zg0NJQDqu4zeDYsnVAvlYY2DAYUI
Jan 23 23:58:05.710562 sshd[4958]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 23:58:05.720342 systemd-logind[1997]: New session 20 of user core.
Jan 23 23:58:05.730510 systemd[1]: Started session-20.scope - Session 20 of User core.
Jan 23 23:58:06.174383 sshd[4958]: pam_unix(sshd:session): session closed for user core
Jan 23 23:58:06.180415 systemd[1]: sshd@19-172.31.27.234:22-4.153.228.146:38622.service: Deactivated successfully.
Jan 23 23:58:06.185410 systemd[1]: session-20.scope: Deactivated successfully.
Jan 23 23:58:06.186979 systemd-logind[1997]: Session 20 logged out. Waiting for processes to exit.
Jan 23 23:58:06.189655 systemd-logind[1997]: Removed session 20.
Jan 23 23:58:11.267798 systemd[1]: Started sshd@20-172.31.27.234:22-4.153.228.146:38630.service - OpenSSH per-connection server daemon (4.153.228.146:38630).
Jan 23 23:58:11.771086 sshd[4972]: Accepted publickey for core from 4.153.228.146 port 38630 ssh2: RSA SHA256:5AacvNrSqCkKxn2Zg0NJQDqu4zeDYsnVAvlYY2DAYUI
Jan 23 23:58:11.773831 sshd[4972]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 23:58:11.783365 systemd-logind[1997]: New session 21 of user core.
Jan 23 23:58:11.794546 systemd[1]: Started session-21.scope - Session 21 of User core.
Jan 23 23:58:12.241509 sshd[4972]: pam_unix(sshd:session): session closed for user core
Jan 23 23:58:12.248723 systemd[1]: sshd@20-172.31.27.234:22-4.153.228.146:38630.service: Deactivated successfully.
Jan 23 23:58:12.253562 systemd[1]: session-21.scope: Deactivated successfully.
Jan 23 23:58:12.259062 systemd-logind[1997]: Session 21 logged out. Waiting for processes to exit.
Jan 23 23:58:12.261475 systemd-logind[1997]: Removed session 21.
Jan 23 23:58:17.354681 systemd[1]: Started sshd@21-172.31.27.234:22-4.153.228.146:43968.service - OpenSSH per-connection server daemon (4.153.228.146:43968).
Jan 23 23:58:17.896615 sshd[4987]: Accepted publickey for core from 4.153.228.146 port 43968 ssh2: RSA SHA256:5AacvNrSqCkKxn2Zg0NJQDqu4zeDYsnVAvlYY2DAYUI
Jan 23 23:58:17.899752 sshd[4987]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 23:58:17.908315 systemd-logind[1997]: New session 22 of user core.
Jan 23 23:58:17.917532 systemd[1]: Started session-22.scope - Session 22 of User core.
Jan 23 23:58:18.387136 sshd[4987]: pam_unix(sshd:session): session closed for user core
Jan 23 23:58:18.394569 systemd-logind[1997]: Session 22 logged out. Waiting for processes to exit.
Jan 23 23:58:18.397461 systemd[1]: sshd@21-172.31.27.234:22-4.153.228.146:43968.service: Deactivated successfully.
Jan 23 23:58:18.403335 systemd[1]: session-22.scope: Deactivated successfully.
Jan 23 23:58:18.410539 systemd-logind[1997]: Removed session 22.
Jan 23 23:58:23.489726 systemd[1]: Started sshd@22-172.31.27.234:22-4.153.228.146:43976.service - OpenSSH per-connection server daemon (4.153.228.146:43976).
Jan 23 23:58:24.017897 sshd[5000]: Accepted publickey for core from 4.153.228.146 port 43976 ssh2: RSA SHA256:5AacvNrSqCkKxn2Zg0NJQDqu4zeDYsnVAvlYY2DAYUI
Jan 23 23:58:24.020418 sshd[5000]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 23:58:24.028371 systemd-logind[1997]: New session 23 of user core.
Jan 23 23:58:24.041499 systemd[1]: Started session-23.scope - Session 23 of User core.
Jan 23 23:58:24.530463 sshd[5000]: pam_unix(sshd:session): session closed for user core
Jan 23 23:58:24.537038 systemd[1]: sshd@22-172.31.27.234:22-4.153.228.146:43976.service: Deactivated successfully.
Jan 23 23:58:24.540333 systemd[1]: session-23.scope: Deactivated successfully.
Jan 23 23:58:24.542322 systemd-logind[1997]: Session 23 logged out. Waiting for processes to exit.
Jan 23 23:58:24.545293 systemd-logind[1997]: Removed session 23.
Jan 23 23:58:24.622720 systemd[1]: Started sshd@23-172.31.27.234:22-4.153.228.146:52830.service - OpenSSH per-connection server daemon (4.153.228.146:52830).
Jan 23 23:58:25.111305 sshd[5012]: Accepted publickey for core from 4.153.228.146 port 52830 ssh2: RSA SHA256:5AacvNrSqCkKxn2Zg0NJQDqu4zeDYsnVAvlYY2DAYUI
Jan 23 23:58:25.114060 sshd[5012]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 23:58:25.123296 systemd-logind[1997]: New session 24 of user core.
Jan 23 23:58:25.125505 systemd[1]: Started session-24.scope - Session 24 of User core.
Jan 23 23:58:28.694887 containerd[2027]: time="2026-01-23T23:58:28.694748580Z" level=info msg="StopContainer for \"a0307767ff961e63dc304f4dda23a88b0485e586cae4264e38b0207ed1671b06\" with timeout 30 (s)"
Jan 23 23:58:28.697553 containerd[2027]: time="2026-01-23T23:58:28.697114692Z" level=info msg="Stop container \"a0307767ff961e63dc304f4dda23a88b0485e586cae4264e38b0207ed1671b06\" with signal terminated"
Jan 23 23:58:28.748689 containerd[2027]: time="2026-01-23T23:58:28.748593396Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jan 23 23:58:28.762572 systemd[1]: cri-containerd-a0307767ff961e63dc304f4dda23a88b0485e586cae4264e38b0207ed1671b06.scope: Deactivated successfully.
Jan 23 23:58:28.768604 containerd[2027]: time="2026-01-23T23:58:28.768117216Z" level=info msg="StopContainer for \"50550b578b3201a56ee75fec197e351620b95e049360b4110af9ed5bfe16b329\" with timeout 2 (s)"
Jan 23 23:58:28.768759 containerd[2027]: time="2026-01-23T23:58:28.768630612Z" level=info msg="Stop container \"50550b578b3201a56ee75fec197e351620b95e049360b4110af9ed5bfe16b329\" with signal terminated"
Jan 23 23:58:28.785636 systemd-networkd[1917]: lxc_health: Link DOWN
Jan 23 23:58:28.785650 systemd-networkd[1917]: lxc_health: Lost carrier
Jan 23 23:58:28.825571 systemd[1]: cri-containerd-50550b578b3201a56ee75fec197e351620b95e049360b4110af9ed5bfe16b329.scope: Deactivated successfully.
Jan 23 23:58:28.827711 systemd[1]: cri-containerd-50550b578b3201a56ee75fec197e351620b95e049360b4110af9ed5bfe16b329.scope: Consumed 15.253s CPU time.
Jan 23 23:58:28.850815 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a0307767ff961e63dc304f4dda23a88b0485e586cae4264e38b0207ed1671b06-rootfs.mount: Deactivated successfully.
Jan 23 23:58:28.865738 containerd[2027]: time="2026-01-23T23:58:28.865388401Z" level=info msg="shim disconnected" id=a0307767ff961e63dc304f4dda23a88b0485e586cae4264e38b0207ed1671b06 namespace=k8s.io Jan 23 23:58:28.865738 containerd[2027]: time="2026-01-23T23:58:28.865464625Z" level=warning msg="cleaning up after shim disconnected" id=a0307767ff961e63dc304f4dda23a88b0485e586cae4264e38b0207ed1671b06 namespace=k8s.io Jan 23 23:58:28.865738 containerd[2027]: time="2026-01-23T23:58:28.865485649Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 23 23:58:28.885832 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-50550b578b3201a56ee75fec197e351620b95e049360b4110af9ed5bfe16b329-rootfs.mount: Deactivated successfully. Jan 23 23:58:28.893020 containerd[2027]: time="2026-01-23T23:58:28.892938253Z" level=info msg="shim disconnected" id=50550b578b3201a56ee75fec197e351620b95e049360b4110af9ed5bfe16b329 namespace=k8s.io Jan 23 23:58:28.893020 containerd[2027]: time="2026-01-23T23:58:28.893015185Z" level=warning msg="cleaning up after shim disconnected" id=50550b578b3201a56ee75fec197e351620b95e049360b4110af9ed5bfe16b329 namespace=k8s.io Jan 23 23:58:28.893591 containerd[2027]: time="2026-01-23T23:58:28.893038585Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 23 23:58:28.910867 containerd[2027]: time="2026-01-23T23:58:28.910781749Z" level=info msg="StopContainer for \"a0307767ff961e63dc304f4dda23a88b0485e586cae4264e38b0207ed1671b06\" returns successfully" Jan 23 23:58:28.912246 containerd[2027]: time="2026-01-23T23:58:28.911606917Z" level=info msg="StopPodSandbox for \"8dd7bb0741f7fe664c89ccf02beacbfe2f6521fe55d3e2351a68088c4d6aec3e\"" Jan 23 23:58:28.912246 containerd[2027]: time="2026-01-23T23:58:28.911679493Z" level=info msg="Container to stop \"a0307767ff961e63dc304f4dda23a88b0485e586cae4264e38b0207ed1671b06\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 23 23:58:28.919949 systemd[1]: 
run-containerd-io.containerd.grpc.v1.cri-sandboxes-8dd7bb0741f7fe664c89ccf02beacbfe2f6521fe55d3e2351a68088c4d6aec3e-shm.mount: Deactivated successfully. Jan 23 23:58:28.938634 containerd[2027]: time="2026-01-23T23:58:28.938393737Z" level=info msg="StopContainer for \"50550b578b3201a56ee75fec197e351620b95e049360b4110af9ed5bfe16b329\" returns successfully" Jan 23 23:58:28.939132 systemd[1]: cri-containerd-8dd7bb0741f7fe664c89ccf02beacbfe2f6521fe55d3e2351a68088c4d6aec3e.scope: Deactivated successfully. Jan 23 23:58:28.944041 containerd[2027]: time="2026-01-23T23:58:28.943976581Z" level=info msg="StopPodSandbox for \"4f2f24e9f806dee2f95caf40f200b10db943e13055436e21e96b099d9ae8ae9c\"" Jan 23 23:58:28.944185 containerd[2027]: time="2026-01-23T23:58:28.944054101Z" level=info msg="Container to stop \"6228bd07f99c4e27bdc2ae45d5937bc6236bb9bb3aa68f75db09ae84429e4eb0\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 23 23:58:28.944185 containerd[2027]: time="2026-01-23T23:58:28.944084281Z" level=info msg="Container to stop \"994cf3838b595c80061871b1b0da2c0be7c9d18fba461018de5bf0a549b2d452\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 23 23:58:28.944185 containerd[2027]: time="2026-01-23T23:58:28.944107537Z" level=info msg="Container to stop \"50550b578b3201a56ee75fec197e351620b95e049360b4110af9ed5bfe16b329\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 23 23:58:28.944185 containerd[2027]: time="2026-01-23T23:58:28.944136337Z" level=info msg="Container to stop \"30aa7754ff5e044944072029b3ef80be4401120832b2fbeecc3e0295b8ef741a\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 23 23:58:28.944185 containerd[2027]: time="2026-01-23T23:58:28.944159425Z" level=info msg="Container to stop \"1ddaeef2e9992f2d22507b0b23b3f0aaaefe2948eaacded4590d96fa12ecc543\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 23 23:58:28.954587 systemd[1]: 
run-containerd-io.containerd.grpc.v1.cri-sandboxes-4f2f24e9f806dee2f95caf40f200b10db943e13055436e21e96b099d9ae8ae9c-shm.mount: Deactivated successfully. Jan 23 23:58:28.993498 systemd[1]: cri-containerd-4f2f24e9f806dee2f95caf40f200b10db943e13055436e21e96b099d9ae8ae9c.scope: Deactivated successfully. Jan 23 23:58:29.025896 containerd[2027]: time="2026-01-23T23:58:29.025787590Z" level=info msg="shim disconnected" id=8dd7bb0741f7fe664c89ccf02beacbfe2f6521fe55d3e2351a68088c4d6aec3e namespace=k8s.io Jan 23 23:58:29.025896 containerd[2027]: time="2026-01-23T23:58:29.025866550Z" level=warning msg="cleaning up after shim disconnected" id=8dd7bb0741f7fe664c89ccf02beacbfe2f6521fe55d3e2351a68088c4d6aec3e namespace=k8s.io Jan 23 23:58:29.025896 containerd[2027]: time="2026-01-23T23:58:29.025891438Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 23 23:58:29.053776 containerd[2027]: time="2026-01-23T23:58:29.053695630Z" level=info msg="shim disconnected" id=4f2f24e9f806dee2f95caf40f200b10db943e13055436e21e96b099d9ae8ae9c namespace=k8s.io Jan 23 23:58:29.054099 containerd[2027]: time="2026-01-23T23:58:29.054053914Z" level=warning msg="cleaning up after shim disconnected" id=4f2f24e9f806dee2f95caf40f200b10db943e13055436e21e96b099d9ae8ae9c namespace=k8s.io Jan 23 23:58:29.054332 containerd[2027]: time="2026-01-23T23:58:29.054194974Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 23 23:58:29.064581 containerd[2027]: time="2026-01-23T23:58:29.064504858Z" level=info msg="TearDown network for sandbox \"8dd7bb0741f7fe664c89ccf02beacbfe2f6521fe55d3e2351a68088c4d6aec3e\" successfully" Jan 23 23:58:29.064581 containerd[2027]: time="2026-01-23T23:58:29.064567174Z" level=info msg="StopPodSandbox for \"8dd7bb0741f7fe664c89ccf02beacbfe2f6521fe55d3e2351a68088c4d6aec3e\" returns successfully" Jan 23 23:58:29.093928 containerd[2027]: time="2026-01-23T23:58:29.093834766Z" level=info msg="TearDown network for sandbox 
\"4f2f24e9f806dee2f95caf40f200b10db943e13055436e21e96b099d9ae8ae9c\" successfully" Jan 23 23:58:29.094203 containerd[2027]: time="2026-01-23T23:58:29.094068730Z" level=info msg="StopPodSandbox for \"4f2f24e9f806dee2f95caf40f200b10db943e13055436e21e96b099d9ae8ae9c\" returns successfully" Jan 23 23:58:29.175634 kubelet[3417]: I0123 23:58:29.175546 3417 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e8e792fa-7fa3-4407-8794-6c13475b955d-lib-modules\") pod \"e8e792fa-7fa3-4407-8794-6c13475b955d\" (UID: \"e8e792fa-7fa3-4407-8794-6c13475b955d\") " Jan 23 23:58:29.177030 kubelet[3417]: I0123 23:58:29.175764 3417 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8e90d03f-b3a7-42c9-8360-7fd9e1863b90-cilium-config-path\") pod \"8e90d03f-b3a7-42c9-8360-7fd9e1863b90\" (UID: \"8e90d03f-b3a7-42c9-8360-7fd9e1863b90\") " Jan 23 23:58:29.177030 kubelet[3417]: I0123 23:58:29.175800 3417 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e8e792fa-7fa3-4407-8794-6c13475b955d-cilium-run\") pod \"e8e792fa-7fa3-4407-8794-6c13475b955d\" (UID: \"e8e792fa-7fa3-4407-8794-6c13475b955d\") " Jan 23 23:58:29.177030 kubelet[3417]: I0123 23:58:29.175851 3417 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e8e792fa-7fa3-4407-8794-6c13475b955d-hubble-tls\") pod \"e8e792fa-7fa3-4407-8794-6c13475b955d\" (UID: \"e8e792fa-7fa3-4407-8794-6c13475b955d\") " Jan 23 23:58:29.177030 kubelet[3417]: I0123 23:58:29.175885 3417 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e8e792fa-7fa3-4407-8794-6c13475b955d-etc-cni-netd\") pod \"e8e792fa-7fa3-4407-8794-6c13475b955d\" (UID: 
\"e8e792fa-7fa3-4407-8794-6c13475b955d\") " Jan 23 23:58:29.177030 kubelet[3417]: I0123 23:58:29.175921 3417 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2skq4\" (UniqueName: \"kubernetes.io/projected/e8e792fa-7fa3-4407-8794-6c13475b955d-kube-api-access-2skq4\") pod \"e8e792fa-7fa3-4407-8794-6c13475b955d\" (UID: \"e8e792fa-7fa3-4407-8794-6c13475b955d\") " Jan 23 23:58:29.177030 kubelet[3417]: I0123 23:58:29.175961 3417 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e8e792fa-7fa3-4407-8794-6c13475b955d-cilium-config-path\") pod \"e8e792fa-7fa3-4407-8794-6c13475b955d\" (UID: \"e8e792fa-7fa3-4407-8794-6c13475b955d\") " Jan 23 23:58:29.177467 kubelet[3417]: I0123 23:58:29.175995 3417 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e8e792fa-7fa3-4407-8794-6c13475b955d-xtables-lock\") pod \"e8e792fa-7fa3-4407-8794-6c13475b955d\" (UID: \"e8e792fa-7fa3-4407-8794-6c13475b955d\") " Jan 23 23:58:29.177467 kubelet[3417]: I0123 23:58:29.176028 3417 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e8e792fa-7fa3-4407-8794-6c13475b955d-cilium-cgroup\") pod \"e8e792fa-7fa3-4407-8794-6c13475b955d\" (UID: \"e8e792fa-7fa3-4407-8794-6c13475b955d\") " Jan 23 23:58:29.177467 kubelet[3417]: I0123 23:58:29.176074 3417 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e8e792fa-7fa3-4407-8794-6c13475b955d-clustermesh-secrets\") pod \"e8e792fa-7fa3-4407-8794-6c13475b955d\" (UID: \"e8e792fa-7fa3-4407-8794-6c13475b955d\") " Jan 23 23:58:29.177467 kubelet[3417]: I0123 23:58:29.176108 3417 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: 
\"kubernetes.io/host-path/e8e792fa-7fa3-4407-8794-6c13475b955d-host-proc-sys-net\") pod \"e8e792fa-7fa3-4407-8794-6c13475b955d\" (UID: \"e8e792fa-7fa3-4407-8794-6c13475b955d\") " Jan 23 23:58:29.177467 kubelet[3417]: I0123 23:58:29.176145 3417 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e8e792fa-7fa3-4407-8794-6c13475b955d-cni-path\") pod \"e8e792fa-7fa3-4407-8794-6c13475b955d\" (UID: \"e8e792fa-7fa3-4407-8794-6c13475b955d\") " Jan 23 23:58:29.177467 kubelet[3417]: I0123 23:58:29.176179 3417 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e8e792fa-7fa3-4407-8794-6c13475b955d-hostproc\") pod \"e8e792fa-7fa3-4407-8794-6c13475b955d\" (UID: \"e8e792fa-7fa3-4407-8794-6c13475b955d\") " Jan 23 23:58:29.177780 kubelet[3417]: I0123 23:58:29.176262 3417 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e8e792fa-7fa3-4407-8794-6c13475b955d-host-proc-sys-kernel\") pod \"e8e792fa-7fa3-4407-8794-6c13475b955d\" (UID: \"e8e792fa-7fa3-4407-8794-6c13475b955d\") " Jan 23 23:58:29.177780 kubelet[3417]: I0123 23:58:29.175698 3417 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e8e792fa-7fa3-4407-8794-6c13475b955d-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "e8e792fa-7fa3-4407-8794-6c13475b955d" (UID: "e8e792fa-7fa3-4407-8794-6c13475b955d"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 23 23:58:29.177780 kubelet[3417]: I0123 23:58:29.176317 3417 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e8e792fa-7fa3-4407-8794-6c13475b955d-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "e8e792fa-7fa3-4407-8794-6c13475b955d" (UID: "e8e792fa-7fa3-4407-8794-6c13475b955d"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 23 23:58:29.179890 kubelet[3417]: I0123 23:58:29.178888 3417 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e8e792fa-7fa3-4407-8794-6c13475b955d-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "e8e792fa-7fa3-4407-8794-6c13475b955d" (UID: "e8e792fa-7fa3-4407-8794-6c13475b955d"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 23 23:58:29.179890 kubelet[3417]: I0123 23:58:29.179771 3417 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e8e792fa-7fa3-4407-8794-6c13475b955d-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "e8e792fa-7fa3-4407-8794-6c13475b955d" (UID: "e8e792fa-7fa3-4407-8794-6c13475b955d"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 23 23:58:29.183712 kubelet[3417]: I0123 23:58:29.183436 3417 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e8e792fa-7fa3-4407-8794-6c13475b955d-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "e8e792fa-7fa3-4407-8794-6c13475b955d" (UID: "e8e792fa-7fa3-4407-8794-6c13475b955d"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 23 23:58:29.183712 kubelet[3417]: I0123 23:58:29.183524 3417 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e8e792fa-7fa3-4407-8794-6c13475b955d-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "e8e792fa-7fa3-4407-8794-6c13475b955d" (UID: "e8e792fa-7fa3-4407-8794-6c13475b955d"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 23 23:58:29.189260 kubelet[3417]: I0123 23:58:29.188293 3417 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e8e792fa-7fa3-4407-8794-6c13475b955d-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "e8e792fa-7fa3-4407-8794-6c13475b955d" (UID: "e8e792fa-7fa3-4407-8794-6c13475b955d"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 23 23:58:29.189260 kubelet[3417]: I0123 23:58:29.188379 3417 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e8e792fa-7fa3-4407-8794-6c13475b955d-cni-path" (OuterVolumeSpecName: "cni-path") pod "e8e792fa-7fa3-4407-8794-6c13475b955d" (UID: "e8e792fa-7fa3-4407-8794-6c13475b955d"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 23 23:58:29.189260 kubelet[3417]: I0123 23:58:29.188419 3417 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e8e792fa-7fa3-4407-8794-6c13475b955d-hostproc" (OuterVolumeSpecName: "hostproc") pod "e8e792fa-7fa3-4407-8794-6c13475b955d" (UID: "e8e792fa-7fa3-4407-8794-6c13475b955d"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 23 23:58:29.189260 kubelet[3417]: I0123 23:58:29.176326 3417 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6v5zq\" (UniqueName: \"kubernetes.io/projected/8e90d03f-b3a7-42c9-8360-7fd9e1863b90-kube-api-access-6v5zq\") pod \"8e90d03f-b3a7-42c9-8360-7fd9e1863b90\" (UID: \"8e90d03f-b3a7-42c9-8360-7fd9e1863b90\") " Jan 23 23:58:29.189260 kubelet[3417]: I0123 23:58:29.188492 3417 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e8e792fa-7fa3-4407-8794-6c13475b955d-bpf-maps\") pod \"e8e792fa-7fa3-4407-8794-6c13475b955d\" (UID: \"e8e792fa-7fa3-4407-8794-6c13475b955d\") " Jan 23 23:58:29.189626 kubelet[3417]: I0123 23:58:29.188566 3417 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e8e792fa-7fa3-4407-8794-6c13475b955d-etc-cni-netd\") on node \"ip-172-31-27-234\" DevicePath \"\"" Jan 23 23:58:29.189626 kubelet[3417]: I0123 23:58:29.188591 3417 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e8e792fa-7fa3-4407-8794-6c13475b955d-xtables-lock\") on node \"ip-172-31-27-234\" DevicePath \"\"" Jan 23 23:58:29.189626 kubelet[3417]: I0123 23:58:29.188619 3417 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e8e792fa-7fa3-4407-8794-6c13475b955d-cilium-cgroup\") on node \"ip-172-31-27-234\" DevicePath \"\"" Jan 23 23:58:29.189626 kubelet[3417]: I0123 23:58:29.188642 3417 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e8e792fa-7fa3-4407-8794-6c13475b955d-host-proc-sys-net\") on node \"ip-172-31-27-234\" DevicePath \"\"" Jan 23 23:58:29.189626 kubelet[3417]: I0123 23:58:29.188666 3417 reconciler_common.go:299] "Volume detached for volume \"cni-path\" 
(UniqueName: \"kubernetes.io/host-path/e8e792fa-7fa3-4407-8794-6c13475b955d-cni-path\") on node \"ip-172-31-27-234\" DevicePath \"\"" Jan 23 23:58:29.189626 kubelet[3417]: I0123 23:58:29.188687 3417 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e8e792fa-7fa3-4407-8794-6c13475b955d-hostproc\") on node \"ip-172-31-27-234\" DevicePath \"\"" Jan 23 23:58:29.189626 kubelet[3417]: I0123 23:58:29.188708 3417 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e8e792fa-7fa3-4407-8794-6c13475b955d-host-proc-sys-kernel\") on node \"ip-172-31-27-234\" DevicePath \"\"" Jan 23 23:58:29.189626 kubelet[3417]: I0123 23:58:29.188729 3417 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e8e792fa-7fa3-4407-8794-6c13475b955d-lib-modules\") on node \"ip-172-31-27-234\" DevicePath \"\"" Jan 23 23:58:29.190015 kubelet[3417]: I0123 23:58:29.188750 3417 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e8e792fa-7fa3-4407-8794-6c13475b955d-cilium-run\") on node \"ip-172-31-27-234\" DevicePath \"\"" Jan 23 23:58:29.190015 kubelet[3417]: I0123 23:58:29.188787 3417 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e8e792fa-7fa3-4407-8794-6c13475b955d-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "e8e792fa-7fa3-4407-8794-6c13475b955d" (UID: "e8e792fa-7fa3-4407-8794-6c13475b955d"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 23 23:58:29.190015 kubelet[3417]: I0123 23:58:29.188930 3417 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8e90d03f-b3a7-42c9-8360-7fd9e1863b90-kube-api-access-6v5zq" (OuterVolumeSpecName: "kube-api-access-6v5zq") pod "8e90d03f-b3a7-42c9-8360-7fd9e1863b90" (UID: "8e90d03f-b3a7-42c9-8360-7fd9e1863b90"). InnerVolumeSpecName "kube-api-access-6v5zq". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 23 23:58:29.195097 kubelet[3417]: I0123 23:58:29.195037 3417 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8e90d03f-b3a7-42c9-8360-7fd9e1863b90-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "8e90d03f-b3a7-42c9-8360-7fd9e1863b90" (UID: "8e90d03f-b3a7-42c9-8360-7fd9e1863b90"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 23 23:58:29.197272 kubelet[3417]: I0123 23:58:29.197175 3417 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e8e792fa-7fa3-4407-8794-6c13475b955d-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "e8e792fa-7fa3-4407-8794-6c13475b955d" (UID: "e8e792fa-7fa3-4407-8794-6c13475b955d"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 23 23:58:29.197590 kubelet[3417]: I0123 23:58:29.197537 3417 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e8e792fa-7fa3-4407-8794-6c13475b955d-kube-api-access-2skq4" (OuterVolumeSpecName: "kube-api-access-2skq4") pod "e8e792fa-7fa3-4407-8794-6c13475b955d" (UID: "e8e792fa-7fa3-4407-8794-6c13475b955d"). InnerVolumeSpecName "kube-api-access-2skq4". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 23 23:58:29.197703 kubelet[3417]: I0123 23:58:29.197568 3417 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e8e792fa-7fa3-4407-8794-6c13475b955d-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "e8e792fa-7fa3-4407-8794-6c13475b955d" (UID: "e8e792fa-7fa3-4407-8794-6c13475b955d"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 23 23:58:29.199353 kubelet[3417]: I0123 23:58:29.199300 3417 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e8e792fa-7fa3-4407-8794-6c13475b955d-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "e8e792fa-7fa3-4407-8794-6c13475b955d" (UID: "e8e792fa-7fa3-4407-8794-6c13475b955d"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 23 23:58:29.289384 kubelet[3417]: I0123 23:58:29.289203 3417 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8e90d03f-b3a7-42c9-8360-7fd9e1863b90-cilium-config-path\") on node \"ip-172-31-27-234\" DevicePath \"\"" Jan 23 23:58:29.289384 kubelet[3417]: I0123 23:58:29.289288 3417 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e8e792fa-7fa3-4407-8794-6c13475b955d-hubble-tls\") on node \"ip-172-31-27-234\" DevicePath \"\"" Jan 23 23:58:29.289384 kubelet[3417]: I0123 23:58:29.289311 3417 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-2skq4\" (UniqueName: \"kubernetes.io/projected/e8e792fa-7fa3-4407-8794-6c13475b955d-kube-api-access-2skq4\") on node \"ip-172-31-27-234\" DevicePath \"\"" Jan 23 23:58:29.289384 kubelet[3417]: I0123 23:58:29.289338 3417 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: 
\"kubernetes.io/configmap/e8e792fa-7fa3-4407-8794-6c13475b955d-cilium-config-path\") on node \"ip-172-31-27-234\" DevicePath \"\"" Jan 23 23:58:29.289671 kubelet[3417]: I0123 23:58:29.289361 3417 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e8e792fa-7fa3-4407-8794-6c13475b955d-clustermesh-secrets\") on node \"ip-172-31-27-234\" DevicePath \"\"" Jan 23 23:58:29.289752 kubelet[3417]: I0123 23:58:29.289675 3417 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6v5zq\" (UniqueName: \"kubernetes.io/projected/8e90d03f-b3a7-42c9-8360-7fd9e1863b90-kube-api-access-6v5zq\") on node \"ip-172-31-27-234\" DevicePath \"\"" Jan 23 23:58:29.289752 kubelet[3417]: I0123 23:58:29.289701 3417 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e8e792fa-7fa3-4407-8794-6c13475b955d-bpf-maps\") on node \"ip-172-31-27-234\" DevicePath \"\"" Jan 23 23:58:29.704156 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8dd7bb0741f7fe664c89ccf02beacbfe2f6521fe55d3e2351a68088c4d6aec3e-rootfs.mount: Deactivated successfully. Jan 23 23:58:29.704376 systemd[1]: var-lib-kubelet-pods-8e90d03f\x2db3a7\x2d42c9\x2d8360\x2d7fd9e1863b90-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d6v5zq.mount: Deactivated successfully. Jan 23 23:58:29.704516 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4f2f24e9f806dee2f95caf40f200b10db943e13055436e21e96b099d9ae8ae9c-rootfs.mount: Deactivated successfully. Jan 23 23:58:29.704663 systemd[1]: var-lib-kubelet-pods-e8e792fa\x2d7fa3\x2d4407\x2d8794\x2d6c13475b955d-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d2skq4.mount: Deactivated successfully. Jan 23 23:58:29.707316 systemd[1]: var-lib-kubelet-pods-e8e792fa\x2d7fa3\x2d4407\x2d8794\x2d6c13475b955d-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. 
Jan 23 23:58:29.707496 systemd[1]: var-lib-kubelet-pods-e8e792fa\x2d7fa3\x2d4407\x2d8794\x2d6c13475b955d-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jan 23 23:58:29.755801 kubelet[3417]: I0123 23:58:29.755655 3417 scope.go:117] "RemoveContainer" containerID="50550b578b3201a56ee75fec197e351620b95e049360b4110af9ed5bfe16b329" Jan 23 23:58:29.767324 containerd[2027]: time="2026-01-23T23:58:29.766770613Z" level=info msg="RemoveContainer for \"50550b578b3201a56ee75fec197e351620b95e049360b4110af9ed5bfe16b329\"" Jan 23 23:58:29.780649 containerd[2027]: time="2026-01-23T23:58:29.779335861Z" level=info msg="RemoveContainer for \"50550b578b3201a56ee75fec197e351620b95e049360b4110af9ed5bfe16b329\" returns successfully" Jan 23 23:58:29.779850 systemd[1]: Removed slice kubepods-burstable-pode8e792fa_7fa3_4407_8794_6c13475b955d.slice - libcontainer container kubepods-burstable-pode8e792fa_7fa3_4407_8794_6c13475b955d.slice. Jan 23 23:58:29.780067 systemd[1]: kubepods-burstable-pode8e792fa_7fa3_4407_8794_6c13475b955d.slice: Consumed 15.414s CPU time. Jan 23 23:58:29.801627 kubelet[3417]: I0123 23:58:29.792249 3417 scope.go:117] "RemoveContainer" containerID="994cf3838b595c80061871b1b0da2c0be7c9d18fba461018de5bf0a549b2d452" Jan 23 23:58:29.805310 systemd[1]: Removed slice kubepods-besteffort-pod8e90d03f_b3a7_42c9_8360_7fd9e1863b90.slice - libcontainer container kubepods-besteffort-pod8e90d03f_b3a7_42c9_8360_7fd9e1863b90.slice. 
Jan 23 23:58:29.809249 containerd[2027]: time="2026-01-23T23:58:29.809142722Z" level=info msg="RemoveContainer for \"994cf3838b595c80061871b1b0da2c0be7c9d18fba461018de5bf0a549b2d452\"" Jan 23 23:58:29.822242 containerd[2027]: time="2026-01-23T23:58:29.820002026Z" level=info msg="RemoveContainer for \"994cf3838b595c80061871b1b0da2c0be7c9d18fba461018de5bf0a549b2d452\" returns successfully" Jan 23 23:58:29.822378 kubelet[3417]: I0123 23:58:29.820504 3417 scope.go:117] "RemoveContainer" containerID="1ddaeef2e9992f2d22507b0b23b3f0aaaefe2948eaacded4590d96fa12ecc543" Jan 23 23:58:29.837768 containerd[2027]: time="2026-01-23T23:58:29.837506054Z" level=info msg="RemoveContainer for \"1ddaeef2e9992f2d22507b0b23b3f0aaaefe2948eaacded4590d96fa12ecc543\"" Jan 23 23:58:29.845304 containerd[2027]: time="2026-01-23T23:58:29.845253974Z" level=info msg="RemoveContainer for \"1ddaeef2e9992f2d22507b0b23b3f0aaaefe2948eaacded4590d96fa12ecc543\" returns successfully" Jan 23 23:58:29.845815 kubelet[3417]: I0123 23:58:29.845776 3417 scope.go:117] "RemoveContainer" containerID="6228bd07f99c4e27bdc2ae45d5937bc6236bb9bb3aa68f75db09ae84429e4eb0" Jan 23 23:58:29.851429 containerd[2027]: time="2026-01-23T23:58:29.851382494Z" level=info msg="RemoveContainer for \"6228bd07f99c4e27bdc2ae45d5937bc6236bb9bb3aa68f75db09ae84429e4eb0\"" Jan 23 23:58:29.860062 containerd[2027]: time="2026-01-23T23:58:29.859896638Z" level=info msg="RemoveContainer for \"6228bd07f99c4e27bdc2ae45d5937bc6236bb9bb3aa68f75db09ae84429e4eb0\" returns successfully" Jan 23 23:58:29.860335 kubelet[3417]: I0123 23:58:29.860281 3417 scope.go:117] "RemoveContainer" containerID="30aa7754ff5e044944072029b3ef80be4401120832b2fbeecc3e0295b8ef741a" Jan 23 23:58:29.862838 containerd[2027]: time="2026-01-23T23:58:29.862451666Z" level=info msg="RemoveContainer for \"30aa7754ff5e044944072029b3ef80be4401120832b2fbeecc3e0295b8ef741a\"" Jan 23 23:58:29.868932 containerd[2027]: time="2026-01-23T23:58:29.868880930Z" level=info msg="RemoveContainer 
for \"30aa7754ff5e044944072029b3ef80be4401120832b2fbeecc3e0295b8ef741a\" returns successfully" Jan 23 23:58:29.869433 kubelet[3417]: I0123 23:58:29.869403 3417 scope.go:117] "RemoveContainer" containerID="50550b578b3201a56ee75fec197e351620b95e049360b4110af9ed5bfe16b329" Jan 23 23:58:29.869979 containerd[2027]: time="2026-01-23T23:58:29.869896178Z" level=error msg="ContainerStatus for \"50550b578b3201a56ee75fec197e351620b95e049360b4110af9ed5bfe16b329\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"50550b578b3201a56ee75fec197e351620b95e049360b4110af9ed5bfe16b329\": not found" Jan 23 23:58:29.870174 kubelet[3417]: E0123 23:58:29.870143 3417 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"50550b578b3201a56ee75fec197e351620b95e049360b4110af9ed5bfe16b329\": not found" containerID="50550b578b3201a56ee75fec197e351620b95e049360b4110af9ed5bfe16b329" Jan 23 23:58:29.870292 kubelet[3417]: I0123 23:58:29.870193 3417 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"50550b578b3201a56ee75fec197e351620b95e049360b4110af9ed5bfe16b329"} err="failed to get container status \"50550b578b3201a56ee75fec197e351620b95e049360b4110af9ed5bfe16b329\": rpc error: code = NotFound desc = an error occurred when try to find container \"50550b578b3201a56ee75fec197e351620b95e049360b4110af9ed5bfe16b329\": not found" Jan 23 23:58:29.870292 kubelet[3417]: I0123 23:58:29.870286 3417 scope.go:117] "RemoveContainer" containerID="994cf3838b595c80061871b1b0da2c0be7c9d18fba461018de5bf0a549b2d452" Jan 23 23:58:29.870805 containerd[2027]: time="2026-01-23T23:58:29.870681194Z" level=error msg="ContainerStatus for \"994cf3838b595c80061871b1b0da2c0be7c9d18fba461018de5bf0a549b2d452\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container 
\"994cf3838b595c80061871b1b0da2c0be7c9d18fba461018de5bf0a549b2d452\": not found" Jan 23 23:58:29.871030 kubelet[3417]: E0123 23:58:29.870907 3417 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"994cf3838b595c80061871b1b0da2c0be7c9d18fba461018de5bf0a549b2d452\": not found" containerID="994cf3838b595c80061871b1b0da2c0be7c9d18fba461018de5bf0a549b2d452" Jan 23 23:58:29.871030 kubelet[3417]: I0123 23:58:29.870947 3417 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"994cf3838b595c80061871b1b0da2c0be7c9d18fba461018de5bf0a549b2d452"} err="failed to get container status \"994cf3838b595c80061871b1b0da2c0be7c9d18fba461018de5bf0a549b2d452\": rpc error: code = NotFound desc = an error occurred when try to find container \"994cf3838b595c80061871b1b0da2c0be7c9d18fba461018de5bf0a549b2d452\": not found" Jan 23 23:58:29.871030 kubelet[3417]: I0123 23:58:29.870979 3417 scope.go:117] "RemoveContainer" containerID="1ddaeef2e9992f2d22507b0b23b3f0aaaefe2948eaacded4590d96fa12ecc543" Jan 23 23:58:29.871549 containerd[2027]: time="2026-01-23T23:58:29.871287662Z" level=error msg="ContainerStatus for \"1ddaeef2e9992f2d22507b0b23b3f0aaaefe2948eaacded4590d96fa12ecc543\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"1ddaeef2e9992f2d22507b0b23b3f0aaaefe2948eaacded4590d96fa12ecc543\": not found" Jan 23 23:58:29.872030 kubelet[3417]: E0123 23:58:29.871759 3417 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"1ddaeef2e9992f2d22507b0b23b3f0aaaefe2948eaacded4590d96fa12ecc543\": not found" containerID="1ddaeef2e9992f2d22507b0b23b3f0aaaefe2948eaacded4590d96fa12ecc543" Jan 23 23:58:29.872030 kubelet[3417]: I0123 23:58:29.871806 3417 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"containerd","ID":"1ddaeef2e9992f2d22507b0b23b3f0aaaefe2948eaacded4590d96fa12ecc543"} err="failed to get container status \"1ddaeef2e9992f2d22507b0b23b3f0aaaefe2948eaacded4590d96fa12ecc543\": rpc error: code = NotFound desc = an error occurred when try to find container \"1ddaeef2e9992f2d22507b0b23b3f0aaaefe2948eaacded4590d96fa12ecc543\": not found" Jan 23 23:58:29.872030 kubelet[3417]: I0123 23:58:29.871838 3417 scope.go:117] "RemoveContainer" containerID="6228bd07f99c4e27bdc2ae45d5937bc6236bb9bb3aa68f75db09ae84429e4eb0" Jan 23 23:58:29.872574 containerd[2027]: time="2026-01-23T23:58:29.872460674Z" level=error msg="ContainerStatus for \"6228bd07f99c4e27bdc2ae45d5937bc6236bb9bb3aa68f75db09ae84429e4eb0\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"6228bd07f99c4e27bdc2ae45d5937bc6236bb9bb3aa68f75db09ae84429e4eb0\": not found" Jan 23 23:58:29.872705 kubelet[3417]: E0123 23:58:29.872666 3417 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"6228bd07f99c4e27bdc2ae45d5937bc6236bb9bb3aa68f75db09ae84429e4eb0\": not found" containerID="6228bd07f99c4e27bdc2ae45d5937bc6236bb9bb3aa68f75db09ae84429e4eb0" Jan 23 23:58:29.872817 kubelet[3417]: I0123 23:58:29.872715 3417 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"6228bd07f99c4e27bdc2ae45d5937bc6236bb9bb3aa68f75db09ae84429e4eb0"} err="failed to get container status \"6228bd07f99c4e27bdc2ae45d5937bc6236bb9bb3aa68f75db09ae84429e4eb0\": rpc error: code = NotFound desc = an error occurred when try to find container \"6228bd07f99c4e27bdc2ae45d5937bc6236bb9bb3aa68f75db09ae84429e4eb0\": not found" Jan 23 23:58:29.872817 kubelet[3417]: I0123 23:58:29.872746 3417 scope.go:117] "RemoveContainer" containerID="30aa7754ff5e044944072029b3ef80be4401120832b2fbeecc3e0295b8ef741a" Jan 23 23:58:29.873261 containerd[2027]: 
time="2026-01-23T23:58:29.873030614Z" level=error msg="ContainerStatus for \"30aa7754ff5e044944072029b3ef80be4401120832b2fbeecc3e0295b8ef741a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"30aa7754ff5e044944072029b3ef80be4401120832b2fbeecc3e0295b8ef741a\": not found" Jan 23 23:58:29.873412 kubelet[3417]: E0123 23:58:29.873345 3417 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"30aa7754ff5e044944072029b3ef80be4401120832b2fbeecc3e0295b8ef741a\": not found" containerID="30aa7754ff5e044944072029b3ef80be4401120832b2fbeecc3e0295b8ef741a" Jan 23 23:58:29.873483 kubelet[3417]: I0123 23:58:29.873425 3417 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"30aa7754ff5e044944072029b3ef80be4401120832b2fbeecc3e0295b8ef741a"} err="failed to get container status \"30aa7754ff5e044944072029b3ef80be4401120832b2fbeecc3e0295b8ef741a\": rpc error: code = NotFound desc = an error occurred when try to find container \"30aa7754ff5e044944072029b3ef80be4401120832b2fbeecc3e0295b8ef741a\": not found" Jan 23 23:58:29.873569 kubelet[3417]: I0123 23:58:29.873478 3417 scope.go:117] "RemoveContainer" containerID="a0307767ff961e63dc304f4dda23a88b0485e586cae4264e38b0207ed1671b06" Jan 23 23:58:29.875498 containerd[2027]: time="2026-01-23T23:58:29.875324378Z" level=info msg="RemoveContainer for \"a0307767ff961e63dc304f4dda23a88b0485e586cae4264e38b0207ed1671b06\"" Jan 23 23:58:29.881475 containerd[2027]: time="2026-01-23T23:58:29.881395838Z" level=info msg="RemoveContainer for \"a0307767ff961e63dc304f4dda23a88b0485e586cae4264e38b0207ed1671b06\" returns successfully" Jan 23 23:58:29.881882 kubelet[3417]: I0123 23:58:29.881779 3417 scope.go:117] "RemoveContainer" containerID="a0307767ff961e63dc304f4dda23a88b0485e586cae4264e38b0207ed1671b06" Jan 23 23:58:29.882478 containerd[2027]: 
time="2026-01-23T23:58:29.882351038Z" level=error msg="ContainerStatus for \"a0307767ff961e63dc304f4dda23a88b0485e586cae4264e38b0207ed1671b06\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a0307767ff961e63dc304f4dda23a88b0485e586cae4264e38b0207ed1671b06\": not found" Jan 23 23:58:29.882765 kubelet[3417]: E0123 23:58:29.882602 3417 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a0307767ff961e63dc304f4dda23a88b0485e586cae4264e38b0207ed1671b06\": not found" containerID="a0307767ff961e63dc304f4dda23a88b0485e586cae4264e38b0207ed1671b06" Jan 23 23:58:29.882765 kubelet[3417]: I0123 23:58:29.882644 3417 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a0307767ff961e63dc304f4dda23a88b0485e586cae4264e38b0207ed1671b06"} err="failed to get container status \"a0307767ff961e63dc304f4dda23a88b0485e586cae4264e38b0207ed1671b06\": rpc error: code = NotFound desc = an error occurred when try to find container \"a0307767ff961e63dc304f4dda23a88b0485e586cae4264e38b0207ed1671b06\": not found" Jan 23 23:58:30.268238 kubelet[3417]: I0123 23:58:30.267531 3417 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8e90d03f-b3a7-42c9-8360-7fd9e1863b90" path="/var/lib/kubelet/pods/8e90d03f-b3a7-42c9-8360-7fd9e1863b90/volumes" Jan 23 23:58:30.269147 kubelet[3417]: I0123 23:58:30.269113 3417 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e8e792fa-7fa3-4407-8794-6c13475b955d" path="/var/lib/kubelet/pods/e8e792fa-7fa3-4407-8794-6c13475b955d/volumes" Jan 23 23:58:30.668835 sshd[5012]: pam_unix(sshd:session): session closed for user core Jan 23 23:58:30.675530 systemd[1]: sshd@23-172.31.27.234:22-4.153.228.146:52830.service: Deactivated successfully. Jan 23 23:58:30.679576 systemd[1]: session-24.scope: Deactivated successfully. 
Jan 23 23:58:30.679968 systemd[1]: session-24.scope: Consumed 2.642s CPU time. Jan 23 23:58:30.681366 systemd-logind[1997]: Session 24 logged out. Waiting for processes to exit. Jan 23 23:58:30.683561 systemd-logind[1997]: Removed session 24. Jan 23 23:58:30.779417 systemd[1]: Started sshd@24-172.31.27.234:22-4.153.228.146:52836.service - OpenSSH per-connection server daemon (4.153.228.146:52836). Jan 23 23:58:31.179648 ntpd[1991]: Deleting interface #12 lxc_health, fe80::50c1:36ff:fe3d:1afb%8#123, interface stats: received=0, sent=0, dropped=0, active_time=86 secs Jan 23 23:58:31.180136 ntpd[1991]: 23 Jan 23:58:31 ntpd[1991]: Deleting interface #12 lxc_health, fe80::50c1:36ff:fe3d:1afb%8#123, interface stats: received=0, sent=0, dropped=0, active_time=86 secs Jan 23 23:58:31.325251 sshd[5180]: Accepted publickey for core from 4.153.228.146 port 52836 ssh2: RSA SHA256:5AacvNrSqCkKxn2Zg0NJQDqu4zeDYsnVAvlYY2DAYUI Jan 23 23:58:31.327982 sshd[5180]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:58:31.336500 systemd-logind[1997]: New session 25 of user core. Jan 23 23:58:31.341557 systemd[1]: Started session-25.scope - Session 25 of User core. Jan 23 23:58:33.029750 systemd[1]: Created slice kubepods-burstable-pod3c4fdf3b_9108_47db_959a_5e158b8057b1.slice - libcontainer container kubepods-burstable-pod3c4fdf3b_9108_47db_959a_5e158b8057b1.slice. Jan 23 23:58:33.039073 sshd[5180]: pam_unix(sshd:session): session closed for user core Jan 23 23:58:33.049107 systemd[1]: sshd@24-172.31.27.234:22-4.153.228.146:52836.service: Deactivated successfully. Jan 23 23:58:33.056461 systemd[1]: session-25.scope: Deactivated successfully. Jan 23 23:58:33.058546 systemd[1]: session-25.scope: Consumed 1.233s CPU time. Jan 23 23:58:33.065030 systemd-logind[1997]: Session 25 logged out. Waiting for processes to exit. Jan 23 23:58:33.069066 systemd-logind[1997]: Removed session 25. 
Jan 23 23:58:33.122612 kubelet[3417]: I0123 23:58:33.120716 3417 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3c4fdf3b-9108-47db-959a-5e158b8057b1-etc-cni-netd\") pod \"cilium-zfm25\" (UID: \"3c4fdf3b-9108-47db-959a-5e158b8057b1\") " pod="kube-system/cilium-zfm25" Jan 23 23:58:33.122612 kubelet[3417]: I0123 23:58:33.120794 3417 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3c4fdf3b-9108-47db-959a-5e158b8057b1-lib-modules\") pod \"cilium-zfm25\" (UID: \"3c4fdf3b-9108-47db-959a-5e158b8057b1\") " pod="kube-system/cilium-zfm25" Jan 23 23:58:33.122612 kubelet[3417]: I0123 23:58:33.120839 3417 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/3c4fdf3b-9108-47db-959a-5e158b8057b1-hostproc\") pod \"cilium-zfm25\" (UID: \"3c4fdf3b-9108-47db-959a-5e158b8057b1\") " pod="kube-system/cilium-zfm25" Jan 23 23:58:33.122612 kubelet[3417]: I0123 23:58:33.120878 3417 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3c4fdf3b-9108-47db-959a-5e158b8057b1-cilium-config-path\") pod \"cilium-zfm25\" (UID: \"3c4fdf3b-9108-47db-959a-5e158b8057b1\") " pod="kube-system/cilium-zfm25" Jan 23 23:58:33.122612 kubelet[3417]: I0123 23:58:33.120914 3417 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zbf9n\" (UniqueName: \"kubernetes.io/projected/3c4fdf3b-9108-47db-959a-5e158b8057b1-kube-api-access-zbf9n\") pod \"cilium-zfm25\" (UID: \"3c4fdf3b-9108-47db-959a-5e158b8057b1\") " pod="kube-system/cilium-zfm25" Jan 23 23:58:33.122612 kubelet[3417]: I0123 23:58:33.120953 3417 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3c4fdf3b-9108-47db-959a-5e158b8057b1-xtables-lock\") pod \"cilium-zfm25\" (UID: \"3c4fdf3b-9108-47db-959a-5e158b8057b1\") " pod="kube-system/cilium-zfm25" Jan 23 23:58:33.123599 kubelet[3417]: I0123 23:58:33.120986 3417 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/3c4fdf3b-9108-47db-959a-5e158b8057b1-host-proc-sys-net\") pod \"cilium-zfm25\" (UID: \"3c4fdf3b-9108-47db-959a-5e158b8057b1\") " pod="kube-system/cilium-zfm25" Jan 23 23:58:33.123599 kubelet[3417]: I0123 23:58:33.121021 3417 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/3c4fdf3b-9108-47db-959a-5e158b8057b1-host-proc-sys-kernel\") pod \"cilium-zfm25\" (UID: \"3c4fdf3b-9108-47db-959a-5e158b8057b1\") " pod="kube-system/cilium-zfm25" Jan 23 23:58:33.123599 kubelet[3417]: I0123 23:58:33.121057 3417 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/3c4fdf3b-9108-47db-959a-5e158b8057b1-clustermesh-secrets\") pod \"cilium-zfm25\" (UID: \"3c4fdf3b-9108-47db-959a-5e158b8057b1\") " pod="kube-system/cilium-zfm25" Jan 23 23:58:33.123599 kubelet[3417]: I0123 23:58:33.121092 3417 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/3c4fdf3b-9108-47db-959a-5e158b8057b1-cilium-ipsec-secrets\") pod \"cilium-zfm25\" (UID: \"3c4fdf3b-9108-47db-959a-5e158b8057b1\") " pod="kube-system/cilium-zfm25" Jan 23 23:58:33.123599 kubelet[3417]: I0123 23:58:33.121133 3417 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: 
\"kubernetes.io/host-path/3c4fdf3b-9108-47db-959a-5e158b8057b1-cni-path\") pod \"cilium-zfm25\" (UID: \"3c4fdf3b-9108-47db-959a-5e158b8057b1\") " pod="kube-system/cilium-zfm25" Jan 23 23:58:33.123599 kubelet[3417]: I0123 23:58:33.121171 3417 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/3c4fdf3b-9108-47db-959a-5e158b8057b1-cilium-run\") pod \"cilium-zfm25\" (UID: \"3c4fdf3b-9108-47db-959a-5e158b8057b1\") " pod="kube-system/cilium-zfm25" Jan 23 23:58:33.124378 kubelet[3417]: I0123 23:58:33.121205 3417 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/3c4fdf3b-9108-47db-959a-5e158b8057b1-cilium-cgroup\") pod \"cilium-zfm25\" (UID: \"3c4fdf3b-9108-47db-959a-5e158b8057b1\") " pod="kube-system/cilium-zfm25" Jan 23 23:58:33.124378 kubelet[3417]: I0123 23:58:33.124035 3417 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/3c4fdf3b-9108-47db-959a-5e158b8057b1-bpf-maps\") pod \"cilium-zfm25\" (UID: \"3c4fdf3b-9108-47db-959a-5e158b8057b1\") " pod="kube-system/cilium-zfm25" Jan 23 23:58:33.124378 kubelet[3417]: I0123 23:58:33.124079 3417 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/3c4fdf3b-9108-47db-959a-5e158b8057b1-hubble-tls\") pod \"cilium-zfm25\" (UID: \"3c4fdf3b-9108-47db-959a-5e158b8057b1\") " pod="kube-system/cilium-zfm25" Jan 23 23:58:33.132425 systemd[1]: Started sshd@25-172.31.27.234:22-4.153.228.146:52852.service - OpenSSH per-connection server daemon (4.153.228.146:52852). 
Jan 23 23:58:33.343993 containerd[2027]: time="2026-01-23T23:58:33.342643563Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-zfm25,Uid:3c4fdf3b-9108-47db-959a-5e158b8057b1,Namespace:kube-system,Attempt:0,}" Jan 23 23:58:33.389512 containerd[2027]: time="2026-01-23T23:58:33.387743319Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 23 23:58:33.389512 containerd[2027]: time="2026-01-23T23:58:33.387859227Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 23 23:58:33.389512 containerd[2027]: time="2026-01-23T23:58:33.387898131Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 23 23:58:33.389512 containerd[2027]: time="2026-01-23T23:58:33.388048935Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 23 23:58:33.437550 systemd[1]: Started cri-containerd-3e93ec4d2ce2206b1f4fbf2a9e1cd8422586b03703db3ccd818ea1a19d32181a.scope - libcontainer container 3e93ec4d2ce2206b1f4fbf2a9e1cd8422586b03703db3ccd818ea1a19d32181a. 
Jan 23 23:58:33.481347 containerd[2027]: time="2026-01-23T23:58:33.480106744Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-zfm25,Uid:3c4fdf3b-9108-47db-959a-5e158b8057b1,Namespace:kube-system,Attempt:0,} returns sandbox id \"3e93ec4d2ce2206b1f4fbf2a9e1cd8422586b03703db3ccd818ea1a19d32181a\"" Jan 23 23:58:33.492182 containerd[2027]: time="2026-01-23T23:58:33.492129400Z" level=info msg="CreateContainer within sandbox \"3e93ec4d2ce2206b1f4fbf2a9e1cd8422586b03703db3ccd818ea1a19d32181a\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 23 23:58:33.514666 kubelet[3417]: E0123 23:58:33.514547 3417 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 23 23:58:33.517593 containerd[2027]: time="2026-01-23T23:58:33.517513984Z" level=info msg="CreateContainer within sandbox \"3e93ec4d2ce2206b1f4fbf2a9e1cd8422586b03703db3ccd818ea1a19d32181a\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"b625c0aa60eef18e93c52472013aa5b8de21fc09cd80322e073c3467ecd0bdc2\"" Jan 23 23:58:33.520300 containerd[2027]: time="2026-01-23T23:58:33.519417892Z" level=info msg="StartContainer for \"b625c0aa60eef18e93c52472013aa5b8de21fc09cd80322e073c3467ecd0bdc2\"" Jan 23 23:58:33.565521 systemd[1]: Started cri-containerd-b625c0aa60eef18e93c52472013aa5b8de21fc09cd80322e073c3467ecd0bdc2.scope - libcontainer container b625c0aa60eef18e93c52472013aa5b8de21fc09cd80322e073c3467ecd0bdc2. Jan 23 23:58:33.623069 containerd[2027]: time="2026-01-23T23:58:33.622249181Z" level=info msg="StartContainer for \"b625c0aa60eef18e93c52472013aa5b8de21fc09cd80322e073c3467ecd0bdc2\" returns successfully" Jan 23 23:58:33.641504 systemd[1]: cri-containerd-b625c0aa60eef18e93c52472013aa5b8de21fc09cd80322e073c3467ecd0bdc2.scope: Deactivated successfully. 
Jan 23 23:58:33.647613 sshd[5192]: Accepted publickey for core from 4.153.228.146 port 52852 ssh2: RSA SHA256:5AacvNrSqCkKxn2Zg0NJQDqu4zeDYsnVAvlYY2DAYUI Jan 23 23:58:33.653651 sshd[5192]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:58:33.663581 systemd-logind[1997]: New session 26 of user core. Jan 23 23:58:33.671644 systemd[1]: Started session-26.scope - Session 26 of User core. Jan 23 23:58:33.720668 containerd[2027]: time="2026-01-23T23:58:33.720555533Z" level=info msg="shim disconnected" id=b625c0aa60eef18e93c52472013aa5b8de21fc09cd80322e073c3467ecd0bdc2 namespace=k8s.io Jan 23 23:58:33.720926 containerd[2027]: time="2026-01-23T23:58:33.720688037Z" level=warning msg="cleaning up after shim disconnected" id=b625c0aa60eef18e93c52472013aa5b8de21fc09cd80322e073c3467ecd0bdc2 namespace=k8s.io Jan 23 23:58:33.720926 containerd[2027]: time="2026-01-23T23:58:33.720711929Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 23 23:58:33.801237 containerd[2027]: time="2026-01-23T23:58:33.801143273Z" level=info msg="CreateContainer within sandbox \"3e93ec4d2ce2206b1f4fbf2a9e1cd8422586b03703db3ccd818ea1a19d32181a\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 23 23:58:33.823542 containerd[2027]: time="2026-01-23T23:58:33.823461870Z" level=info msg="CreateContainer within sandbox \"3e93ec4d2ce2206b1f4fbf2a9e1cd8422586b03703db3ccd818ea1a19d32181a\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"1f7e85ef9e67a4cfef92bf1b6e38719280d07d7984bc711d6dd973c83bcbbf64\"" Jan 23 23:58:33.825759 containerd[2027]: time="2026-01-23T23:58:33.825081318Z" level=info msg="StartContainer for \"1f7e85ef9e67a4cfef92bf1b6e38719280d07d7984bc711d6dd973c83bcbbf64\"" Jan 23 23:58:33.876567 systemd[1]: Started cri-containerd-1f7e85ef9e67a4cfef92bf1b6e38719280d07d7984bc711d6dd973c83bcbbf64.scope - libcontainer container 
1f7e85ef9e67a4cfef92bf1b6e38719280d07d7984bc711d6dd973c83bcbbf64. Jan 23 23:58:33.931772 containerd[2027]: time="2026-01-23T23:58:33.931567794Z" level=info msg="StartContainer for \"1f7e85ef9e67a4cfef92bf1b6e38719280d07d7984bc711d6dd973c83bcbbf64\" returns successfully" Jan 23 23:58:33.949460 systemd[1]: cri-containerd-1f7e85ef9e67a4cfef92bf1b6e38719280d07d7984bc711d6dd973c83bcbbf64.scope: Deactivated successfully. Jan 23 23:58:33.995383 containerd[2027]: time="2026-01-23T23:58:33.995149242Z" level=info msg="shim disconnected" id=1f7e85ef9e67a4cfef92bf1b6e38719280d07d7984bc711d6dd973c83bcbbf64 namespace=k8s.io Jan 23 23:58:33.995383 containerd[2027]: time="2026-01-23T23:58:33.995293062Z" level=warning msg="cleaning up after shim disconnected" id=1f7e85ef9e67a4cfef92bf1b6e38719280d07d7984bc711d6dd973c83bcbbf64 namespace=k8s.io Jan 23 23:58:33.995383 containerd[2027]: time="2026-01-23T23:58:33.995316426Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 23 23:58:33.998240 sshd[5192]: pam_unix(sshd:session): session closed for user core Jan 23 23:58:34.005998 systemd[1]: sshd@25-172.31.27.234:22-4.153.228.146:52852.service: Deactivated successfully. Jan 23 23:58:34.011045 systemd[1]: session-26.scope: Deactivated successfully. Jan 23 23:58:34.017828 systemd-logind[1997]: Session 26 logged out. Waiting for processes to exit. Jan 23 23:58:34.022016 systemd-logind[1997]: Removed session 26. Jan 23 23:58:34.090745 systemd[1]: Started sshd@26-172.31.27.234:22-4.153.228.146:52868.service - OpenSSH per-connection server daemon (4.153.228.146:52868). Jan 23 23:58:34.596839 sshd[5371]: Accepted publickey for core from 4.153.228.146 port 52868 ssh2: RSA SHA256:5AacvNrSqCkKxn2Zg0NJQDqu4zeDYsnVAvlYY2DAYUI Jan 23 23:58:34.599565 sshd[5371]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:58:34.610250 systemd-logind[1997]: New session 27 of user core. 
Jan 23 23:58:34.619481 systemd[1]: Started session-27.scope - Session 27 of User core. Jan 23 23:58:34.804873 containerd[2027]: time="2026-01-23T23:58:34.804547278Z" level=info msg="CreateContainer within sandbox \"3e93ec4d2ce2206b1f4fbf2a9e1cd8422586b03703db3ccd818ea1a19d32181a\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jan 23 23:58:34.852343 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1807080229.mount: Deactivated successfully. Jan 23 23:58:34.854546 containerd[2027]: time="2026-01-23T23:58:34.852982411Z" level=info msg="CreateContainer within sandbox \"3e93ec4d2ce2206b1f4fbf2a9e1cd8422586b03703db3ccd818ea1a19d32181a\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"ed4df8e7c8e3a6e8ef06f549d1319d6bed59ba08b1c01bc4acbab7d1fd927132\"" Jan 23 23:58:34.860323 containerd[2027]: time="2026-01-23T23:58:34.858959491Z" level=info msg="StartContainer for \"ed4df8e7c8e3a6e8ef06f549d1319d6bed59ba08b1c01bc4acbab7d1fd927132\"" Jan 23 23:58:34.955453 systemd[1]: Started cri-containerd-ed4df8e7c8e3a6e8ef06f549d1319d6bed59ba08b1c01bc4acbab7d1fd927132.scope - libcontainer container ed4df8e7c8e3a6e8ef06f549d1319d6bed59ba08b1c01bc4acbab7d1fd927132. Jan 23 23:58:35.045279 containerd[2027]: time="2026-01-23T23:58:35.045151480Z" level=info msg="StartContainer for \"ed4df8e7c8e3a6e8ef06f549d1319d6bed59ba08b1c01bc4acbab7d1fd927132\" returns successfully" Jan 23 23:58:35.069540 systemd[1]: cri-containerd-ed4df8e7c8e3a6e8ef06f549d1319d6bed59ba08b1c01bc4acbab7d1fd927132.scope: Deactivated successfully. 
Jan 23 23:58:35.115636 containerd[2027]: time="2026-01-23T23:58:35.115166296Z" level=info msg="shim disconnected" id=ed4df8e7c8e3a6e8ef06f549d1319d6bed59ba08b1c01bc4acbab7d1fd927132 namespace=k8s.io Jan 23 23:58:35.115636 containerd[2027]: time="2026-01-23T23:58:35.115285168Z" level=warning msg="cleaning up after shim disconnected" id=ed4df8e7c8e3a6e8ef06f549d1319d6bed59ba08b1c01bc4acbab7d1fd927132 namespace=k8s.io Jan 23 23:58:35.115636 containerd[2027]: time="2026-01-23T23:58:35.115305688Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 23 23:58:35.135872 containerd[2027]: time="2026-01-23T23:58:35.135735868Z" level=warning msg="cleanup warnings time=\"2026-01-23T23:58:35Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jan 23 23:58:35.236194 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ed4df8e7c8e3a6e8ef06f549d1319d6bed59ba08b1c01bc4acbab7d1fd927132-rootfs.mount: Deactivated successfully. 
Jan 23 23:58:35.263602 kubelet[3417]: E0123 23:58:35.263527 3417 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-674b8bbfcf-vbd5w" podUID="68e33b05-d007-4c43-9132-23af17f73307" Jan 23 23:58:35.821064 containerd[2027]: time="2026-01-23T23:58:35.820937599Z" level=info msg="CreateContainer within sandbox \"3e93ec4d2ce2206b1f4fbf2a9e1cd8422586b03703db3ccd818ea1a19d32181a\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jan 23 23:58:35.860838 containerd[2027]: time="2026-01-23T23:58:35.860633372Z" level=info msg="CreateContainer within sandbox \"3e93ec4d2ce2206b1f4fbf2a9e1cd8422586b03703db3ccd818ea1a19d32181a\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"9c39149257991f270209ae0fe9101d4cc2c1374d28f98b42de74933f2e68b77f\"" Jan 23 23:58:35.863708 containerd[2027]: time="2026-01-23T23:58:35.863358308Z" level=info msg="StartContainer for \"9c39149257991f270209ae0fe9101d4cc2c1374d28f98b42de74933f2e68b77f\"" Jan 23 23:58:35.941580 systemd[1]: Started cri-containerd-9c39149257991f270209ae0fe9101d4cc2c1374d28f98b42de74933f2e68b77f.scope - libcontainer container 9c39149257991f270209ae0fe9101d4cc2c1374d28f98b42de74933f2e68b77f. Jan 23 23:58:36.042271 containerd[2027]: time="2026-01-23T23:58:36.040866401Z" level=info msg="StartContainer for \"9c39149257991f270209ae0fe9101d4cc2c1374d28f98b42de74933f2e68b77f\" returns successfully" Jan 23 23:58:36.043776 systemd[1]: cri-containerd-9c39149257991f270209ae0fe9101d4cc2c1374d28f98b42de74933f2e68b77f.scope: Deactivated successfully. 
Jan 23 23:58:36.108362 containerd[2027]: time="2026-01-23T23:58:36.108023741Z" level=info msg="shim disconnected" id=9c39149257991f270209ae0fe9101d4cc2c1374d28f98b42de74933f2e68b77f namespace=k8s.io Jan 23 23:58:36.108362 containerd[2027]: time="2026-01-23T23:58:36.108111413Z" level=warning msg="cleaning up after shim disconnected" id=9c39149257991f270209ae0fe9101d4cc2c1374d28f98b42de74933f2e68b77f namespace=k8s.io Jan 23 23:58:36.108362 containerd[2027]: time="2026-01-23T23:58:36.108134513Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 23 23:58:36.236367 systemd[1]: run-containerd-runc-k8s.io-9c39149257991f270209ae0fe9101d4cc2c1374d28f98b42de74933f2e68b77f-runc.3GPWic.mount: Deactivated successfully. Jan 23 23:58:36.236544 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9c39149257991f270209ae0fe9101d4cc2c1374d28f98b42de74933f2e68b77f-rootfs.mount: Deactivated successfully. Jan 23 23:58:36.823611 containerd[2027]: time="2026-01-23T23:58:36.823017092Z" level=info msg="CreateContainer within sandbox \"3e93ec4d2ce2206b1f4fbf2a9e1cd8422586b03703db3ccd818ea1a19d32181a\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jan 23 23:58:36.859728 containerd[2027]: time="2026-01-23T23:58:36.859363461Z" level=info msg="CreateContainer within sandbox \"3e93ec4d2ce2206b1f4fbf2a9e1cd8422586b03703db3ccd818ea1a19d32181a\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"5efd4944743cab863fd0f8f72fcbd6576c0070c24ef1aabd90a840e4d1a66ce0\"" Jan 23 23:58:36.860503 containerd[2027]: time="2026-01-23T23:58:36.860443113Z" level=info msg="StartContainer for \"5efd4944743cab863fd0f8f72fcbd6576c0070c24ef1aabd90a840e4d1a66ce0\"" Jan 23 23:58:36.927542 systemd[1]: Started cri-containerd-5efd4944743cab863fd0f8f72fcbd6576c0070c24ef1aabd90a840e4d1a66ce0.scope - libcontainer container 5efd4944743cab863fd0f8f72fcbd6576c0070c24ef1aabd90a840e4d1a66ce0. 
Jan 23 23:58:36.986858 containerd[2027]: time="2026-01-23T23:58:36.986786085Z" level=info msg="StartContainer for \"5efd4944743cab863fd0f8f72fcbd6576c0070c24ef1aabd90a840e4d1a66ce0\" returns successfully" Jan 23 23:58:37.236358 systemd[1]: run-containerd-runc-k8s.io-5efd4944743cab863fd0f8f72fcbd6576c0070c24ef1aabd90a840e4d1a66ce0-runc.5MZlHf.mount: Deactivated successfully. Jan 23 23:58:37.263317 kubelet[3417]: E0123 23:58:37.263198 3417 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-674b8bbfcf-vbd5w" podUID="68e33b05-d007-4c43-9132-23af17f73307" Jan 23 23:58:37.816257 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce)) Jan 23 23:58:38.291530 containerd[2027]: time="2026-01-23T23:58:38.291480764Z" level=info msg="StopPodSandbox for \"4f2f24e9f806dee2f95caf40f200b10db943e13055436e21e96b099d9ae8ae9c\"" Jan 23 23:58:38.292410 containerd[2027]: time="2026-01-23T23:58:38.292256168Z" level=info msg="TearDown network for sandbox \"4f2f24e9f806dee2f95caf40f200b10db943e13055436e21e96b099d9ae8ae9c\" successfully" Jan 23 23:58:38.292410 containerd[2027]: time="2026-01-23T23:58:38.292294232Z" level=info msg="StopPodSandbox for \"4f2f24e9f806dee2f95caf40f200b10db943e13055436e21e96b099d9ae8ae9c\" returns successfully" Jan 23 23:58:38.293364 containerd[2027]: time="2026-01-23T23:58:38.293293628Z" level=info msg="RemovePodSandbox for \"4f2f24e9f806dee2f95caf40f200b10db943e13055436e21e96b099d9ae8ae9c\"" Jan 23 23:58:38.293468 containerd[2027]: time="2026-01-23T23:58:38.293373344Z" level=info msg="Forcibly stopping sandbox \"4f2f24e9f806dee2f95caf40f200b10db943e13055436e21e96b099d9ae8ae9c\"" Jan 23 23:58:38.293573 containerd[2027]: time="2026-01-23T23:58:38.293531012Z" level=info msg="TearDown network for sandbox 
\"4f2f24e9f806dee2f95caf40f200b10db943e13055436e21e96b099d9ae8ae9c\" successfully" Jan 23 23:58:38.300401 containerd[2027]: time="2026-01-23T23:58:38.300311648Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4f2f24e9f806dee2f95caf40f200b10db943e13055436e21e96b099d9ae8ae9c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 23 23:58:38.300563 containerd[2027]: time="2026-01-23T23:58:38.300410528Z" level=info msg="RemovePodSandbox \"4f2f24e9f806dee2f95caf40f200b10db943e13055436e21e96b099d9ae8ae9c\" returns successfully" Jan 23 23:58:38.301146 containerd[2027]: time="2026-01-23T23:58:38.301079744Z" level=info msg="StopPodSandbox for \"8dd7bb0741f7fe664c89ccf02beacbfe2f6521fe55d3e2351a68088c4d6aec3e\"" Jan 23 23:58:38.301349 containerd[2027]: time="2026-01-23T23:58:38.301276196Z" level=info msg="TearDown network for sandbox \"8dd7bb0741f7fe664c89ccf02beacbfe2f6521fe55d3e2351a68088c4d6aec3e\" successfully" Jan 23 23:58:38.301349 containerd[2027]: time="2026-01-23T23:58:38.301314032Z" level=info msg="StopPodSandbox for \"8dd7bb0741f7fe664c89ccf02beacbfe2f6521fe55d3e2351a68088c4d6aec3e\" returns successfully" Jan 23 23:58:38.302496 containerd[2027]: time="2026-01-23T23:58:38.301858328Z" level=info msg="RemovePodSandbox for \"8dd7bb0741f7fe664c89ccf02beacbfe2f6521fe55d3e2351a68088c4d6aec3e\"" Jan 23 23:58:38.302496 containerd[2027]: time="2026-01-23T23:58:38.301900616Z" level=info msg="Forcibly stopping sandbox \"8dd7bb0741f7fe664c89ccf02beacbfe2f6521fe55d3e2351a68088c4d6aec3e\"" Jan 23 23:58:38.302496 containerd[2027]: time="2026-01-23T23:58:38.301992632Z" level=info msg="TearDown network for sandbox \"8dd7bb0741f7fe664c89ccf02beacbfe2f6521fe55d3e2351a68088c4d6aec3e\" successfully" Jan 23 23:58:38.308891 containerd[2027]: time="2026-01-23T23:58:38.308788352Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID 
\"8dd7bb0741f7fe664c89ccf02beacbfe2f6521fe55d3e2351a68088c4d6aec3e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 23 23:58:38.308891 containerd[2027]: time="2026-01-23T23:58:38.308885936Z" level=info msg="RemovePodSandbox \"8dd7bb0741f7fe664c89ccf02beacbfe2f6521fe55d3e2351a68088c4d6aec3e\" returns successfully" Jan 23 23:58:42.278106 systemd-networkd[1917]: lxc_health: Link UP Jan 23 23:58:42.290503 (udev-worker)[6050]: Network interface NamePolicy= disabled on kernel command line. Jan 23 23:58:42.293800 systemd-networkd[1917]: lxc_health: Gained carrier Jan 23 23:58:43.393506 kubelet[3417]: I0123 23:58:43.393356 3417 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-zfm25" podStartSLOduration=11.393329533 podStartE2EDuration="11.393329533s" podCreationTimestamp="2026-01-23 23:58:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 23:58:37.86401795 +0000 UTC m=+119.890834353" watchObservedRunningTime="2026-01-23 23:58:43.393329533 +0000 UTC m=+125.420145924" Jan 23 23:58:43.902594 systemd[1]: run-containerd-runc-k8s.io-5efd4944743cab863fd0f8f72fcbd6576c0070c24ef1aabd90a840e4d1a66ce0-runc.zHNtEg.mount: Deactivated successfully. Jan 23 23:58:44.325024 systemd-networkd[1917]: lxc_health: Gained IPv6LL Jan 23 23:58:46.243490 systemd[1]: run-containerd-runc-k8s.io-5efd4944743cab863fd0f8f72fcbd6576c0070c24ef1aabd90a840e4d1a66ce0-runc.VdFWxQ.mount: Deactivated successfully. 
Jan 23 23:58:47.179740 ntpd[1991]: Listen normally on 15 lxc_health [fe80::6c1d:c1ff:fe72:aad1%14]:123
Jan 23 23:58:47.181331 ntpd[1991]: 23 Jan 23:58:47 ntpd[1991]: Listen normally on 15 lxc_health [fe80::6c1d:c1ff:fe72:aad1%14]:123
Jan 23 23:58:48.657265 kubelet[3417]: E0123 23:58:48.657179 3417 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:57840->127.0.0.1:45343: write tcp 127.0.0.1:57840->127.0.0.1:45343: write: broken pipe
Jan 23 23:58:48.738815 sshd[5371]: pam_unix(sshd:session): session closed for user core
Jan 23 23:58:48.746956 systemd[1]: sshd@26-172.31.27.234:22-4.153.228.146:52868.service: Deactivated successfully.
Jan 23 23:58:48.755294 systemd[1]: session-27.scope: Deactivated successfully.
Jan 23 23:58:48.760928 systemd-logind[1997]: Session 27 logged out. Waiting for processes to exit.
Jan 23 23:58:48.765359 systemd-logind[1997]: Removed session 27.
Jan 23 23:59:26.764249 systemd[1]: cri-containerd-17e7b1c7b1de9f60cc97422c2c1b0092cbc9da648b3c3436999e20c7971e706a.scope: Deactivated successfully.
Jan 23 23:59:26.765619 systemd[1]: cri-containerd-17e7b1c7b1de9f60cc97422c2c1b0092cbc9da648b3c3436999e20c7971e706a.scope: Consumed 4.925s CPU time, 24.3M memory peak, 0B memory swap peak.
Jan 23 23:59:26.808747 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-17e7b1c7b1de9f60cc97422c2c1b0092cbc9da648b3c3436999e20c7971e706a-rootfs.mount: Deactivated successfully.
Jan 23 23:59:26.819548 containerd[2027]: time="2026-01-23T23:59:26.819449025Z" level=info msg="shim disconnected" id=17e7b1c7b1de9f60cc97422c2c1b0092cbc9da648b3c3436999e20c7971e706a namespace=k8s.io
Jan 23 23:59:26.819548 containerd[2027]: time="2026-01-23T23:59:26.819531849Z" level=warning msg="cleaning up after shim disconnected" id=17e7b1c7b1de9f60cc97422c2c1b0092cbc9da648b3c3436999e20c7971e706a namespace=k8s.io
Jan 23 23:59:26.820308 containerd[2027]: time="2026-01-23T23:59:26.819553941Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 23 23:59:26.965494 kubelet[3417]: I0123 23:59:26.965429 3417 scope.go:117] "RemoveContainer" containerID="17e7b1c7b1de9f60cc97422c2c1b0092cbc9da648b3c3436999e20c7971e706a"
Jan 23 23:59:26.969092 containerd[2027]: time="2026-01-23T23:59:26.968886970Z" level=info msg="CreateContainer within sandbox \"12bfad05ce3db81d270313b691fd3521df6e0d10f2ccc9d99d5c35e285c4c9f4\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Jan 23 23:59:26.993196 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1560284045.mount: Deactivated successfully.
Jan 23 23:59:26.997934 containerd[2027]: time="2026-01-23T23:59:26.997855438Z" level=info msg="CreateContainer within sandbox \"12bfad05ce3db81d270313b691fd3521df6e0d10f2ccc9d99d5c35e285c4c9f4\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"3ce232fac9799279b56b680091f7de22cb5983d0a9bea2653fd3951e41c14e28\""
Jan 23 23:59:26.998598 containerd[2027]: time="2026-01-23T23:59:26.998559646Z" level=info msg="StartContainer for \"3ce232fac9799279b56b680091f7de22cb5983d0a9bea2653fd3951e41c14e28\""
Jan 23 23:59:27.055098 systemd[1]: Started cri-containerd-3ce232fac9799279b56b680091f7de22cb5983d0a9bea2653fd3951e41c14e28.scope - libcontainer container 3ce232fac9799279b56b680091f7de22cb5983d0a9bea2653fd3951e41c14e28.
Jan 23 23:59:27.122913 containerd[2027]: time="2026-01-23T23:59:27.122551878Z" level=info msg="StartContainer for \"3ce232fac9799279b56b680091f7de22cb5983d0a9bea2653fd3951e41c14e28\" returns successfully"
Jan 23 23:59:30.679401 kubelet[3417]: E0123 23:59:30.678665 3417 controller.go:195] "Failed to update lease" err="Put \"https://172.31.27.234:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-27-234?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jan 23 23:59:31.617601 systemd[1]: cri-containerd-d0705cc27a9140f555ce4e503225ba3a22d5de749738ddfa194829fc5a87576d.scope: Deactivated successfully.
Jan 23 23:59:31.618625 systemd[1]: cri-containerd-d0705cc27a9140f555ce4e503225ba3a22d5de749738ddfa194829fc5a87576d.scope: Consumed 6.756s CPU time, 16.3M memory peak, 0B memory swap peak.
Jan 23 23:59:31.659197 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d0705cc27a9140f555ce4e503225ba3a22d5de749738ddfa194829fc5a87576d-rootfs.mount: Deactivated successfully.
Jan 23 23:59:31.672462 containerd[2027]: time="2026-01-23T23:59:31.672384865Z" level=info msg="shim disconnected" id=d0705cc27a9140f555ce4e503225ba3a22d5de749738ddfa194829fc5a87576d namespace=k8s.io
Jan 23 23:59:31.673073 containerd[2027]: time="2026-01-23T23:59:31.672977857Z" level=warning msg="cleaning up after shim disconnected" id=d0705cc27a9140f555ce4e503225ba3a22d5de749738ddfa194829fc5a87576d namespace=k8s.io
Jan 23 23:59:31.673073 containerd[2027]: time="2026-01-23T23:59:31.673008757Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 23 23:59:31.985076 kubelet[3417]: I0123 23:59:31.984733 3417 scope.go:117] "RemoveContainer" containerID="d0705cc27a9140f555ce4e503225ba3a22d5de749738ddfa194829fc5a87576d"
Jan 23 23:59:31.987982 containerd[2027]: time="2026-01-23T23:59:31.987902582Z" level=info msg="CreateContainer within sandbox \"559d1fdcedee2aa535d74480967d11d8b93a716885a49f3ee988be825f325d6e\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Jan 23 23:59:32.016036 containerd[2027]: time="2026-01-23T23:59:32.015933059Z" level=info msg="CreateContainer within sandbox \"559d1fdcedee2aa535d74480967d11d8b93a716885a49f3ee988be825f325d6e\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"d7e3266baf1223ae144b86bedfba81760138096136671b9868a069d8e5b01364\""
Jan 23 23:59:32.016879 containerd[2027]: time="2026-01-23T23:59:32.016788791Z" level=info msg="StartContainer for \"d7e3266baf1223ae144b86bedfba81760138096136671b9868a069d8e5b01364\""
Jan 23 23:59:32.069545 systemd[1]: Started cri-containerd-d7e3266baf1223ae144b86bedfba81760138096136671b9868a069d8e5b01364.scope - libcontainer container d7e3266baf1223ae144b86bedfba81760138096136671b9868a069d8e5b01364.
Jan 23 23:59:32.147757 containerd[2027]: time="2026-01-23T23:59:32.147699167Z" level=info msg="StartContainer for \"d7e3266baf1223ae144b86bedfba81760138096136671b9868a069d8e5b01364\" returns successfully"
Jan 23 23:59:40.679658 kubelet[3417]: E0123 23:59:40.679571 3417 controller.go:195] "Failed to update lease" err="Put \"https://172.31.27.234:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-27-234?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"