Apr 24 23:36:52.241854 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083]
Apr 24 23:36:52.241900 kernel: Linux version 6.6.127-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Fri Apr 24 22:19:35 -00 2026
Apr 24 23:36:52.241926 kernel: KASLR disabled due to lack of seed
Apr 24 23:36:52.241943 kernel: efi: EFI v2.7 by EDK II
Apr 24 23:36:52.241959 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7b001a98 MEMRESERVE=0x7852ee18
Apr 24 23:36:52.241975 kernel: ACPI: Early table checksum verification disabled
Apr 24 23:36:52.241993 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON)
Apr 24 23:36:52.242009 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001 01000013)
Apr 24 23:36:52.242026 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001)
Apr 24 23:36:52.242041 kernel: ACPI: DSDT 0x0000000078640000 0013D2 (v02 AMAZON AMZNDSDT 00000001 AMZN 00000001)
Apr 24 23:36:52.242063 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Apr 24 23:36:52.242079 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001)
Apr 24 23:36:52.242095 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001)
Apr 24 23:36:52.242111 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001)
Apr 24 23:36:52.242130 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Apr 24 23:36:52.242202 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001)
Apr 24 23:36:52.242221 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001)
Apr 24 23:36:52.242238 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200
Apr 24 23:36:52.242255 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200')
Apr 24 23:36:52.242272 kernel: printk: bootconsole [uart0] enabled
Apr 24 23:36:52.242288 kernel: NUMA: Failed to initialise from firmware
Apr 24 23:36:52.242305 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff]
Apr 24 23:36:52.242322 kernel: NUMA: NODE_DATA [mem 0x4b583f800-0x4b5844fff]
Apr 24 23:36:52.242338 kernel: Zone ranges:
Apr 24 23:36:52.242355 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff]
Apr 24 23:36:52.242371 kernel: DMA32 empty
Apr 24 23:36:52.242392 kernel: Normal [mem 0x0000000100000000-0x00000004b5ffffff]
Apr 24 23:36:52.242409 kernel: Movable zone start for each node
Apr 24 23:36:52.242426 kernel: Early memory node ranges
Apr 24 23:36:52.242443 kernel: node 0: [mem 0x0000000040000000-0x000000007862ffff]
Apr 24 23:36:52.242459 kernel: node 0: [mem 0x0000000078630000-0x000000007863ffff]
Apr 24 23:36:52.242476 kernel: node 0: [mem 0x0000000078640000-0x00000000786effff]
Apr 24 23:36:52.242492 kernel: node 0: [mem 0x00000000786f0000-0x000000007872ffff]
Apr 24 23:36:52.242509 kernel: node 0: [mem 0x0000000078730000-0x000000007bbfffff]
Apr 24 23:36:52.242526 kernel: node 0: [mem 0x000000007bc00000-0x000000007bfdffff]
Apr 24 23:36:52.242542 kernel: node 0: [mem 0x000000007bfe0000-0x000000007fffffff]
Apr 24 23:36:52.242558 kernel: node 0: [mem 0x0000000400000000-0x00000004b5ffffff]
Apr 24 23:36:52.242575 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000004b5ffffff]
Apr 24 23:36:52.242596 kernel: On node 0, zone Normal: 8192 pages in unavailable ranges
Apr 24 23:36:52.242613 kernel: psci: probing for conduit method from ACPI.
Apr 24 23:36:52.242637 kernel: psci: PSCIv1.0 detected in firmware.
Apr 24 23:36:52.242654 kernel: psci: Using standard PSCI v0.2 function IDs
Apr 24 23:36:52.242672 kernel: psci: Trusted OS migration not required
Apr 24 23:36:52.242694 kernel: psci: SMC Calling Convention v1.1
Apr 24 23:36:52.242712 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000001)
Apr 24 23:36:52.242730 kernel: percpu: Embedded 30 pages/cpu s85736 r8192 d28952 u122880
Apr 24 23:36:52.242747 kernel: pcpu-alloc: s85736 r8192 d28952 u122880 alloc=30*4096
Apr 24 23:36:52.242765 kernel: pcpu-alloc: [0] 0 [0] 1
Apr 24 23:36:52.242783 kernel: Detected PIPT I-cache on CPU0
Apr 24 23:36:52.242801 kernel: CPU features: detected: GIC system register CPU interface
Apr 24 23:36:52.242818 kernel: CPU features: detected: Spectre-v2
Apr 24 23:36:52.242836 kernel: CPU features: detected: Spectre-v3a
Apr 24 23:36:52.242853 kernel: CPU features: detected: Spectre-BHB
Apr 24 23:36:52.242871 kernel: CPU features: detected: ARM erratum 1742098
Apr 24 23:36:52.242893 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923
Apr 24 23:36:52.242911 kernel: alternatives: applying boot alternatives
Apr 24 23:36:52.242931 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=63304dd98a277d4592d17e0085ae3f91ca70cc8ec6dedfdd357a1e9755f9a8b3
Apr 24 23:36:52.242949 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Apr 24 23:36:52.242967 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Apr 24 23:36:52.242985 kernel: Fallback order for Node 0: 0
Apr 24 23:36:52.243003 kernel: Built 1 zonelists, mobility grouping on. Total pages: 991872
Apr 24 23:36:52.243020 kernel: Policy zone: Normal
Apr 24 23:36:52.243038 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Apr 24 23:36:52.243055 kernel: software IO TLB: area num 2.
Apr 24 23:36:52.243073 kernel: software IO TLB: mapped [mem 0x000000007c000000-0x0000000080000000] (64MB)
Apr 24 23:36:52.243115 kernel: Memory: 3820096K/4030464K available (10304K kernel code, 2180K rwdata, 8116K rodata, 39424K init, 897K bss, 210368K reserved, 0K cma-reserved)
Apr 24 23:36:52.245200 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Apr 24 23:36:52.245239 kernel: rcu: Preemptible hierarchical RCU implementation.
Apr 24 23:36:52.245259 kernel: rcu: RCU event tracing is enabled.
Apr 24 23:36:52.245277 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Apr 24 23:36:52.245295 kernel: Trampoline variant of Tasks RCU enabled.
Apr 24 23:36:52.245313 kernel: Tracing variant of Tasks RCU enabled.
Apr 24 23:36:52.245331 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Apr 24 23:36:52.245349 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Apr 24 23:36:52.245367 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Apr 24 23:36:52.245384 kernel: GICv3: 96 SPIs implemented
Apr 24 23:36:52.245410 kernel: GICv3: 0 Extended SPIs implemented
Apr 24 23:36:52.245429 kernel: Root IRQ handler: gic_handle_irq
Apr 24 23:36:52.245446 kernel: GICv3: GICv3 features: 16 PPIs
Apr 24 23:36:52.245464 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000
Apr 24 23:36:52.245482 kernel: ITS [mem 0x10080000-0x1009ffff]
Apr 24 23:36:52.245500 kernel: ITS@0x0000000010080000: allocated 8192 Devices @4000b0000 (indirect, esz 8, psz 64K, shr 1)
Apr 24 23:36:52.245518 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @4000c0000 (flat, esz 8, psz 64K, shr 1)
Apr 24 23:36:52.245536 kernel: GICv3: using LPI property table @0x00000004000d0000
Apr 24 23:36:52.245554 kernel: ITS: Using hypervisor restricted LPI range [128]
Apr 24 23:36:52.245572 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000004000e0000
Apr 24 23:36:52.245590 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Apr 24 23:36:52.245608 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt).
Apr 24 23:36:52.245632 kernel: clocksource: arch_sys_counter: mask: 0x1ffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns
Apr 24 23:36:52.245651 kernel: sched_clock: 57 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns
Apr 24 23:36:52.245669 kernel: Console: colour dummy device 80x25
Apr 24 23:36:52.245688 kernel: printk: console [tty1] enabled
Apr 24 23:36:52.245707 kernel: ACPI: Core revision 20230628
Apr 24 23:36:52.245726 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 166.66 BogoMIPS (lpj=83333)
Apr 24 23:36:52.245745 kernel: pid_max: default: 32768 minimum: 301
Apr 24 23:36:52.245763 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Apr 24 23:36:52.245782 kernel: landlock: Up and running.
Apr 24 23:36:52.245804 kernel: SELinux: Initializing.
Apr 24 23:36:52.245823 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 24 23:36:52.245841 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 24 23:36:52.245859 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 24 23:36:52.245878 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 24 23:36:52.245896 kernel: rcu: Hierarchical SRCU implementation.
Apr 24 23:36:52.245914 kernel: rcu: Max phase no-delay instances is 400.
Apr 24 23:36:52.245932 kernel: Platform MSI: ITS@0x10080000 domain created
Apr 24 23:36:52.245950 kernel: PCI/MSI: ITS@0x10080000 domain created
Apr 24 23:36:52.245972 kernel: Remapping and enabling EFI services.
Apr 24 23:36:52.245990 kernel: smp: Bringing up secondary CPUs ...
Apr 24 23:36:52.246008 kernel: Detected PIPT I-cache on CPU1
Apr 24 23:36:52.246026 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000
Apr 24 23:36:52.246044 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000004000f0000
Apr 24 23:36:52.246062 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083]
Apr 24 23:36:52.246081 kernel: smp: Brought up 1 node, 2 CPUs
Apr 24 23:36:52.246098 kernel: SMP: Total of 2 processors activated.
Apr 24 23:36:52.246116 kernel: CPU features: detected: 32-bit EL0 Support
Apr 24 23:36:52.246160 kernel: CPU features: detected: 32-bit EL1 Support
Apr 24 23:36:52.246183 kernel: CPU features: detected: CRC32 instructions
Apr 24 23:36:52.246201 kernel: CPU: All CPU(s) started at EL1
Apr 24 23:36:52.246232 kernel: alternatives: applying system-wide alternatives
Apr 24 23:36:52.246255 kernel: devtmpfs: initialized
Apr 24 23:36:52.246275 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Apr 24 23:36:52.246294 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Apr 24 23:36:52.246312 kernel: pinctrl core: initialized pinctrl subsystem
Apr 24 23:36:52.246331 kernel: SMBIOS 3.0.0 present.
Apr 24 23:36:52.246354 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018
Apr 24 23:36:52.246373 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Apr 24 23:36:52.246392 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Apr 24 23:36:52.246411 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Apr 24 23:36:52.246430 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Apr 24 23:36:52.246449 kernel: audit: initializing netlink subsys (disabled)
Apr 24 23:36:52.246468 kernel: audit: type=2000 audit(0.288:1): state=initialized audit_enabled=0 res=1
Apr 24 23:36:52.246487 kernel: thermal_sys: Registered thermal governor 'step_wise'
Apr 24 23:36:52.246510 kernel: cpuidle: using governor menu
Apr 24 23:36:52.246529 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Apr 24 23:36:52.246547 kernel: ASID allocator initialised with 65536 entries
Apr 24 23:36:52.246566 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Apr 24 23:36:52.246585 kernel: Serial: AMBA PL011 UART driver
Apr 24 23:36:52.246603 kernel: Modules: 17488 pages in range for non-PLT usage
Apr 24 23:36:52.246623 kernel: Modules: 509008 pages in range for PLT usage
Apr 24 23:36:52.246642 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Apr 24 23:36:52.246661 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Apr 24 23:36:52.246683 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Apr 24 23:36:52.246703 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Apr 24 23:36:52.246721 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Apr 24 23:36:52.246740 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Apr 24 23:36:52.246759 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Apr 24 23:36:52.246777 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Apr 24 23:36:52.246796 kernel: ACPI: Added _OSI(Module Device)
Apr 24 23:36:52.246815 kernel: ACPI: Added _OSI(Processor Device)
Apr 24 23:36:52.246833 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Apr 24 23:36:52.246856 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Apr 24 23:36:52.246875 kernel: ACPI: Interpreter enabled
Apr 24 23:36:52.246894 kernel: ACPI: Using GIC for interrupt routing
Apr 24 23:36:52.246912 kernel: ACPI: MCFG table detected, 1 entries
Apr 24 23:36:52.246931 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00])
Apr 24 23:36:52.248821 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Apr 24 23:36:52.249071 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Apr 24 23:36:52.249318 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Apr 24 23:36:52.249540 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x200fffff] reserved by PNP0C02:00
Apr 24 23:36:52.249752 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x200fffff] for [bus 00]
Apr 24 23:36:52.249777 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io 0x0000-0xffff window]
Apr 24 23:36:52.249797 kernel: acpiphp: Slot [1] registered
Apr 24 23:36:52.249816 kernel: acpiphp: Slot [2] registered
Apr 24 23:36:52.249835 kernel: acpiphp: Slot [3] registered
Apr 24 23:36:52.249853 kernel: acpiphp: Slot [4] registered
Apr 24 23:36:52.249872 kernel: acpiphp: Slot [5] registered
Apr 24 23:36:52.249896 kernel: acpiphp: Slot [6] registered
Apr 24 23:36:52.249915 kernel: acpiphp: Slot [7] registered
Apr 24 23:36:52.249934 kernel: acpiphp: Slot [8] registered
Apr 24 23:36:52.249952 kernel: acpiphp: Slot [9] registered
Apr 24 23:36:52.249971 kernel: acpiphp: Slot [10] registered
Apr 24 23:36:52.249989 kernel: acpiphp: Slot [11] registered
Apr 24 23:36:52.250008 kernel: acpiphp: Slot [12] registered
Apr 24 23:36:52.250026 kernel: acpiphp: Slot [13] registered
Apr 24 23:36:52.250045 kernel: acpiphp: Slot [14] registered
Apr 24 23:36:52.250063 kernel: acpiphp: Slot [15] registered
Apr 24 23:36:52.250087 kernel: acpiphp: Slot [16] registered
Apr 24 23:36:52.250105 kernel: acpiphp: Slot [17] registered
Apr 24 23:36:52.250124 kernel: acpiphp: Slot [18] registered
Apr 24 23:36:52.250238 kernel: acpiphp: Slot [19] registered
Apr 24 23:36:52.250260 kernel: acpiphp: Slot [20] registered
Apr 24 23:36:52.250279 kernel: acpiphp: Slot [21] registered
Apr 24 23:36:52.250298 kernel: acpiphp: Slot [22] registered
Apr 24 23:36:52.250316 kernel: acpiphp: Slot [23] registered
Apr 24 23:36:52.250335 kernel: acpiphp: Slot [24] registered
Apr 24 23:36:52.250359 kernel: acpiphp: Slot [25] registered
Apr 24 23:36:52.250378 kernel: acpiphp: Slot [26] registered
Apr 24 23:36:52.250397 kernel: acpiphp: Slot [27] registered
Apr 24 23:36:52.250415 kernel: acpiphp: Slot [28] registered
Apr 24 23:36:52.250434 kernel: acpiphp: Slot [29] registered
Apr 24 23:36:52.250453 kernel: acpiphp: Slot [30] registered
Apr 24 23:36:52.250471 kernel: acpiphp: Slot [31] registered
Apr 24 23:36:52.250490 kernel: PCI host bridge to bus 0000:00
Apr 24 23:36:52.250713 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window]
Apr 24 23:36:52.250916 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Apr 24 23:36:52.251131 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window]
Apr 24 23:36:52.252544 kernel: pci_bus 0000:00: root bus resource [bus 00]
Apr 24 23:36:52.252801 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000
Apr 24 23:36:52.253036 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003
Apr 24 23:36:52.253333 kernel: pci 0000:00:01.0: reg 0x10: [mem 0x80118000-0x80118fff]
Apr 24 23:36:52.253583 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
Apr 24 23:36:52.253900 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80114000-0x80117fff]
Apr 24 23:36:52.254123 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold
Apr 24 23:36:52.254393 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
Apr 24 23:36:52.254614 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80110000-0x80113fff]
Apr 24 23:36:52.254865 kernel: pci 0000:00:05.0: reg 0x18: [mem 0x80000000-0x800fffff pref]
Apr 24 23:36:52.255120 kernel: pci 0000:00:05.0: reg 0x20: [mem 0x80100000-0x8010ffff]
Apr 24 23:36:52.255467 kernel: pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold
Apr 24 23:36:52.255665 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window]
Apr 24 23:36:52.255856 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Apr 24 23:36:52.256042 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window]
Apr 24 23:36:52.256068 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Apr 24 23:36:52.256088 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Apr 24 23:36:52.256107 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Apr 24 23:36:52.256126 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Apr 24 23:36:52.256176 kernel: iommu: Default domain type: Translated
Apr 24 23:36:52.256197 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Apr 24 23:36:52.256217 kernel: efivars: Registered efivars operations
Apr 24 23:36:52.256236 kernel: vgaarb: loaded
Apr 24 23:36:52.256255 kernel: clocksource: Switched to clocksource arch_sys_counter
Apr 24 23:36:52.256274 kernel: VFS: Disk quotas dquot_6.6.0
Apr 24 23:36:52.256293 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Apr 24 23:36:52.256312 kernel: pnp: PnP ACPI init
Apr 24 23:36:52.256532 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved
Apr 24 23:36:52.256565 kernel: pnp: PnP ACPI: found 1 devices
Apr 24 23:36:52.256585 kernel: NET: Registered PF_INET protocol family
Apr 24 23:36:52.256604 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Apr 24 23:36:52.256623 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Apr 24 23:36:52.256643 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Apr 24 23:36:52.256662 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Apr 24 23:36:52.256681 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Apr 24 23:36:52.256699 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Apr 24 23:36:52.256723 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Apr 24 23:36:52.256742 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Apr 24 23:36:52.256761 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Apr 24 23:36:52.256779 kernel: PCI: CLS 0 bytes, default 64
Apr 24 23:36:52.256798 kernel: kvm [1]: HYP mode not available
Apr 24 23:36:52.256817 kernel: Initialise system trusted keyrings
Apr 24 23:36:52.256835 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Apr 24 23:36:52.256854 kernel: Key type asymmetric registered
Apr 24 23:36:52.256873 kernel: Asymmetric key parser 'x509' registered
Apr 24 23:36:52.256895 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Apr 24 23:36:52.256915 kernel: io scheduler mq-deadline registered
Apr 24 23:36:52.256933 kernel: io scheduler kyber registered
Apr 24 23:36:52.256952 kernel: io scheduler bfq registered
Apr 24 23:36:52.258975 kernel: pl061_gpio ARMH0061:00: PL061 GPIO chip registered
Apr 24 23:36:52.259021 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Apr 24 23:36:52.259041 kernel: ACPI: button: Power Button [PWRB]
Apr 24 23:36:52.259061 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input1
Apr 24 23:36:52.259080 kernel: ACPI: button: Sleep Button [SLPB]
Apr 24 23:36:52.259130 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Apr 24 23:36:52.260601 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37
Apr 24 23:36:52.260887 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012)
Apr 24 23:36:52.260915 kernel: printk: console [ttyS0] disabled
Apr 24 23:36:52.260936 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A
Apr 24 23:36:52.260956 kernel: printk: console [ttyS0] enabled
Apr 24 23:36:52.260975 kernel: printk: bootconsole [uart0] disabled
Apr 24 23:36:52.260994 kernel: thunder_xcv, ver 1.0
Apr 24 23:36:52.261013 kernel: thunder_bgx, ver 1.0
Apr 24 23:36:52.261041 kernel: nicpf, ver 1.0
Apr 24 23:36:52.261060 kernel: nicvf, ver 1.0
Apr 24 23:36:52.261455 kernel: rtc-efi rtc-efi.0: registered as rtc0
Apr 24 23:36:52.261685 kernel: rtc-efi rtc-efi.0: setting system clock to 2026-04-24T23:36:51 UTC (1777073811)
Apr 24 23:36:52.261714 kernel: hid: raw HID events driver (C) Jiri Kosina
Apr 24 23:36:52.261735 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 counters available
Apr 24 23:36:52.261754 kernel: watchdog: Delayed init of the lockup detector failed: -19
Apr 24 23:36:52.261773 kernel: watchdog: Hard watchdog permanently disabled
Apr 24 23:36:52.261800 kernel: NET: Registered PF_INET6 protocol family
Apr 24 23:36:52.261819 kernel: Segment Routing with IPv6
Apr 24 23:36:52.261838 kernel: In-situ OAM (IOAM) with IPv6
Apr 24 23:36:52.261856 kernel: NET: Registered PF_PACKET protocol family
Apr 24 23:36:52.261875 kernel: Key type dns_resolver registered
Apr 24 23:36:52.261894 kernel: registered taskstats version 1
Apr 24 23:36:52.261913 kernel: Loading compiled-in X.509 certificates
Apr 24 23:36:52.261932 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.127-flatcar: 96a6e7da7ac9a3ef656057ccd8e13f251b310c24'
Apr 24 23:36:52.261951 kernel: Key type .fscrypt registered
Apr 24 23:36:52.261974 kernel: Key type fscrypt-provisioning registered
Apr 24 23:36:52.261993 kernel: ima: No TPM chip found, activating TPM-bypass!
Apr 24 23:36:52.262012 kernel: ima: Allocated hash algorithm: sha1
Apr 24 23:36:52.262058 kernel: ima: No architecture policies found
Apr 24 23:36:52.262079 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Apr 24 23:36:52.262098 kernel: clk: Disabling unused clocks
Apr 24 23:36:52.262116 kernel: Freeing unused kernel memory: 39424K
Apr 24 23:36:52.262153 kernel: Run /init as init process
Apr 24 23:36:52.262737 kernel: with arguments:
Apr 24 23:36:52.262766 kernel: /init
Apr 24 23:36:52.262785 kernel: with environment:
Apr 24 23:36:52.262803 kernel: HOME=/
Apr 24 23:36:52.262822 kernel: TERM=linux
Apr 24 23:36:52.262845 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Apr 24 23:36:52.262869 systemd[1]: Detected virtualization amazon.
Apr 24 23:36:52.262891 systemd[1]: Detected architecture arm64.
Apr 24 23:36:52.262910 systemd[1]: Running in initrd.
Apr 24 23:36:52.262935 systemd[1]: No hostname configured, using default hostname.
Apr 24 23:36:52.262955 systemd[1]: Hostname set to .
Apr 24 23:36:52.262976 systemd[1]: Initializing machine ID from VM UUID.
Apr 24 23:36:52.262996 systemd[1]: Queued start job for default target initrd.target.
Apr 24 23:36:52.263017 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 24 23:36:52.263037 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 24 23:36:52.263059 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Apr 24 23:36:52.263080 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 24 23:36:52.263125 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Apr 24 23:36:52.263172 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Apr 24 23:36:52.263198 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Apr 24 23:36:52.263219 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Apr 24 23:36:52.263240 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 24 23:36:52.263261 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Apr 24 23:36:52.263288 systemd[1]: Reached target paths.target - Path Units.
Apr 24 23:36:52.263309 systemd[1]: Reached target slices.target - Slice Units.
Apr 24 23:36:52.263329 systemd[1]: Reached target swap.target - Swaps.
Apr 24 23:36:52.263349 systemd[1]: Reached target timers.target - Timer Units.
Apr 24 23:36:52.263370 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Apr 24 23:36:52.263390 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 24 23:36:52.263411 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Apr 24 23:36:52.263432 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Apr 24 23:36:52.263453 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 24 23:36:52.263478 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 24 23:36:52.263499 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 24 23:36:52.263519 systemd[1]: Reached target sockets.target - Socket Units.
Apr 24 23:36:52.263539 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Apr 24 23:36:52.263560 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 24 23:36:52.263581 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Apr 24 23:36:52.263601 systemd[1]: Starting systemd-fsck-usr.service...
Apr 24 23:36:52.263621 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 24 23:36:52.263642 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 24 23:36:52.263667 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 24 23:36:52.263688 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Apr 24 23:36:52.263708 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 24 23:36:52.263729 systemd[1]: Finished systemd-fsck-usr.service.
Apr 24 23:36:52.263751 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Apr 24 23:36:52.263815 systemd-journald[251]: Collecting audit messages is disabled.
Apr 24 23:36:52.263860 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Apr 24 23:36:52.263880 kernel: Bridge firewalling registered
Apr 24 23:36:52.263905 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 24 23:36:52.263926 systemd-journald[251]: Journal started
Apr 24 23:36:52.263964 systemd-journald[251]: Runtime Journal (/run/log/journal/ec285c2da6eb51ef387a0b1df91d89c5) is 8.0M, max 75.3M, 67.3M free.
Apr 24 23:36:52.212701 systemd-modules-load[252]: Inserted module 'overlay'
Apr 24 23:36:52.270282 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Apr 24 23:36:52.254885 systemd-modules-load[252]: Inserted module 'br_netfilter'
Apr 24 23:36:52.281156 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 24 23:36:52.285184 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 24 23:36:52.297540 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 24 23:36:52.306371 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 24 23:36:52.311403 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Apr 24 23:36:52.316571 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Apr 24 23:36:52.358757 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 24 23:36:52.364397 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 24 23:36:52.378164 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 24 23:36:52.390530 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Apr 24 23:36:52.400199 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 24 23:36:52.414501 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Apr 24 23:36:52.434796 dracut-cmdline[286]: dracut-dracut-053
Apr 24 23:36:52.440646 dracut-cmdline[286]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=63304dd98a277d4592d17e0085ae3f91ca70cc8ec6dedfdd357a1e9755f9a8b3
Apr 24 23:36:52.496983 systemd-resolved[289]: Positive Trust Anchors:
Apr 24 23:36:52.497019 systemd-resolved[289]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Apr 24 23:36:52.497081 systemd-resolved[289]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Apr 24 23:36:52.601161 kernel: SCSI subsystem initialized
Apr 24 23:36:52.607175 kernel: Loading iSCSI transport class v2.0-870.
Apr 24 23:36:52.620181 kernel: iscsi: registered transport (tcp)
Apr 24 23:36:52.642235 kernel: iscsi: registered transport (qla4xxx)
Apr 24 23:36:52.642308 kernel: QLogic iSCSI HBA Driver
Apr 24 23:36:52.731161 kernel: random: crng init done
Apr 24 23:36:52.731454 systemd-resolved[289]: Defaulting to hostname 'linux'.
Apr 24 23:36:52.735482 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Apr 24 23:36:52.738238 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Apr 24 23:36:52.766644 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Apr 24 23:36:52.777645 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Apr 24 23:36:52.814721 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Apr 24 23:36:52.814808 kernel: device-mapper: uevent: version 1.0.3
Apr 24 23:36:52.814836 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Apr 24 23:36:52.882205 kernel: raid6: neonx8 gen() 6723 MB/s
Apr 24 23:36:52.899177 kernel: raid6: neonx4 gen() 6521 MB/s
Apr 24 23:36:52.916170 kernel: raid6: neonx2 gen() 5453 MB/s
Apr 24 23:36:52.933175 kernel: raid6: neonx1 gen() 3958 MB/s
Apr 24 23:36:52.950173 kernel: raid6: int64x8 gen() 3799 MB/s
Apr 24 23:36:52.967176 kernel: raid6: int64x4 gen() 3717 MB/s
Apr 24 23:36:52.984180 kernel: raid6: int64x2 gen() 3581 MB/s
Apr 24 23:36:53.002236 kernel: raid6: int64x1 gen() 2767 MB/s
Apr 24 23:36:53.002281 kernel: raid6: using algorithm neonx8 gen() 6723 MB/s
Apr 24 23:36:53.021205 kernel: raid6: .... xor() 4877 MB/s, rmw enabled
Apr 24 23:36:53.021257 kernel: raid6: using neon recovery algorithm
Apr 24 23:36:53.029175 kernel: xor: measuring software checksum speed
Apr 24 23:36:53.031621 kernel: 8regs : 10277 MB/sec
Apr 24 23:36:53.031655 kernel: 32regs : 11919 MB/sec
Apr 24 23:36:53.032966 kernel: arm64_neon : 9547 MB/sec
Apr 24 23:36:53.032998 kernel: xor: using function: 32regs (11919 MB/sec)
Apr 24 23:36:53.118181 kernel: Btrfs loaded, zoned=no, fsverity=no
Apr 24 23:36:53.138282 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Apr 24 23:36:53.149560 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 24 23:36:53.191941 systemd-udevd[472]: Using default interface naming scheme 'v255'.
Apr 24 23:36:53.200700 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 24 23:36:53.219438 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Apr 24 23:36:53.255065 dracut-pre-trigger[481]: rd.md=0: removing MD RAID activation
Apr 24 23:36:53.312269 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 24 23:36:53.325442 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Apr 24 23:36:53.446530 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 24 23:36:53.462583 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Apr 24 23:36:53.506937 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Apr 24 23:36:53.514227 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 24 23:36:53.514791 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 24 23:36:53.522006 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Apr 24 23:36:53.534434 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Apr 24 23:36:53.583594 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Apr 24 23:36:53.645525 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Apr 24 23:36:53.645600 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012)
Apr 24 23:36:53.667154 kernel: ena 0000:00:05.0: ENA device version: 0.10
Apr 24 23:36:53.667601 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
Apr 24 23:36:53.659524 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Apr 24 23:36:53.694251 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80110000, mac addr 06:7a:f1:e3:f9:e9
Apr 24 23:36:53.694593 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35
Apr 24 23:36:53.694624 kernel: nvme nvme0: pci function 0000:00:04.0
Apr 24 23:36:53.659761 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 24 23:36:53.668275 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 24 23:36:53.674227 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 24 23:36:53.674530 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 24 23:36:53.682519 (udev-worker)[543]: Network interface NamePolicy= disabled on kernel command line.
Apr 24 23:36:53.686485 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Apr 24 23:36:53.714205 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 24 23:36:53.720174 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Apr 24 23:36:53.734477 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Apr 24 23:36:53.734553 kernel: GPT:9289727 != 33554431
Apr 24 23:36:53.734590 kernel: GPT:Alternate GPT header not at the end of the disk.
Apr 24 23:36:53.736553 kernel: GPT:9289727 != 33554431
Apr 24 23:36:53.736612 kernel: GPT: Use GNU Parted to correct GPT errors.
Apr 24 23:36:53.739271 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Apr 24 23:36:53.747503 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 24 23:36:53.761525 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 24 23:36:53.809911 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 24 23:36:53.863168 kernel: BTRFS: device fsid 5f4cf890-f9e2-4e04-aa84-1bcfb6e5643e devid 1 transid 36 /dev/nvme0n1p3 scanned by (udev-worker) (521)
Apr 24 23:36:53.868327 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/nvme0n1p6 scanned by (udev-worker) (518)
Apr 24 23:36:53.927913 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT.
Apr 24 23:36:53.957904 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM.
Apr 24 23:36:53.988853 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Apr 24 23:36:54.002658 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A.
Apr 24 23:36:54.002814 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A.
Apr 24 23:36:54.021510 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Apr 24 23:36:54.035364 disk-uuid[661]: Primary Header is updated.
Apr 24 23:36:54.035364 disk-uuid[661]: Secondary Entries is updated.
Apr 24 23:36:54.035364 disk-uuid[661]: Secondary Header is updated.
Apr 24 23:36:54.049186 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Apr 24 23:36:54.060167 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Apr 24 23:36:54.067161 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Apr 24 23:36:55.069176 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Apr 24 23:36:55.071944 disk-uuid[662]: The operation has completed successfully.
Apr 24 23:36:55.243852 systemd[1]: disk-uuid.service: Deactivated successfully.
Apr 24 23:36:55.244473 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Apr 24 23:36:55.311414 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Apr 24 23:36:55.322074 sh[1006]: Success
Apr 24 23:36:55.345198 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Apr 24 23:36:55.455659 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Apr 24 23:36:55.468085 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Apr 24 23:36:55.478298 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Apr 24 23:36:55.508927 kernel: BTRFS info (device dm-0): first mount of filesystem 5f4cf890-f9e2-4e04-aa84-1bcfb6e5643e
Apr 24 23:36:55.508989 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Apr 24 23:36:55.509016 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Apr 24 23:36:55.510531 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Apr 24 23:36:55.511780 kernel: BTRFS info (device dm-0): using free space tree
Apr 24 23:36:55.577167 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Apr 24 23:36:55.597200 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Apr 24 23:36:55.601476 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Apr 24 23:36:55.616427 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Apr 24 23:36:55.626629 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Apr 24 23:36:55.650424 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 7d1fb622-285b-4375-96d6-a0d989283452
Apr 24 23:36:55.650480 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Apr 24 23:36:55.652484 kernel: BTRFS info (device nvme0n1p6): using free space tree
Apr 24 23:36:55.660189 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Apr 24 23:36:55.677667 systemd[1]: mnt-oem.mount: Deactivated successfully.
Apr 24 23:36:55.681927 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 7d1fb622-285b-4375-96d6-a0d989283452
Apr 24 23:36:55.691681 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Apr 24 23:36:55.702669 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Apr 24 23:36:55.825493 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 24 23:36:55.837460 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Apr 24 23:36:55.896052 systemd-networkd[1200]: lo: Link UP
Apr 24 23:36:55.896066 systemd-networkd[1200]: lo: Gained carrier
Apr 24 23:36:55.898996 systemd-networkd[1200]: Enumeration completed
Apr 24 23:36:55.899209 systemd[1]: Started systemd-networkd.service - Network Configuration.
Apr 24 23:36:55.900871 systemd-networkd[1200]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 24 23:36:55.900878 systemd-networkd[1200]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 24 23:36:55.905584 systemd-networkd[1200]: eth0: Link UP
Apr 24 23:36:55.905592 systemd-networkd[1200]: eth0: Gained carrier
Apr 24 23:36:55.905612 systemd-networkd[1200]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 24 23:36:55.908444 systemd[1]: Reached target network.target - Network.
Apr 24 23:36:55.932237 systemd-networkd[1200]: eth0: DHCPv4 address 172.31.21.128/20, gateway 172.31.16.1 acquired from 172.31.16.1
Apr 24 23:36:56.062371 ignition[1104]: Ignition 2.19.0
Apr 24 23:36:56.062391 ignition[1104]: Stage: fetch-offline
Apr 24 23:36:56.065108 ignition[1104]: no configs at "/usr/lib/ignition/base.d"
Apr 24 23:36:56.065164 ignition[1104]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Apr 24 23:36:56.065793 ignition[1104]: Ignition finished successfully
Apr 24 23:36:56.075492 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 24 23:36:56.088550 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Apr 24 23:36:56.114485 ignition[1210]: Ignition 2.19.0
Apr 24 23:36:56.114505 ignition[1210]: Stage: fetch
Apr 24 23:36:56.115161 ignition[1210]: no configs at "/usr/lib/ignition/base.d"
Apr 24 23:36:56.115668 ignition[1210]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Apr 24 23:36:56.115842 ignition[1210]: PUT http://169.254.169.254/latest/api/token: attempt #1
Apr 24 23:36:56.133868 ignition[1210]: PUT result: OK
Apr 24 23:36:56.136509 ignition[1210]: parsed url from cmdline: ""
Apr 24 23:36:56.136524 ignition[1210]: no config URL provided
Apr 24 23:36:56.136539 ignition[1210]: reading system config file "/usr/lib/ignition/user.ign"
Apr 24 23:36:56.136564 ignition[1210]: no config at "/usr/lib/ignition/user.ign"
Apr 24 23:36:56.136598 ignition[1210]: PUT http://169.254.169.254/latest/api/token: attempt #1
Apr 24 23:36:56.138598 ignition[1210]: PUT result: OK
Apr 24 23:36:56.140911 ignition[1210]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1
Apr 24 23:36:56.149510 ignition[1210]: GET result: OK
Apr 24 23:36:56.149841 ignition[1210]: parsing config with SHA512: 4daed10ef86238db50d64b4e03b078ec2e8ef9d1dce294c8f4b22707cdf56fa5eab604fceadfa70129c10806b596a66f56c35c909addb30256fccafa3be2ce88
Apr 24 23:36:56.158519 unknown[1210]: fetched base config from "system"
Apr 24 23:36:56.158970 unknown[1210]: fetched base config from "system"
Apr 24 23:36:56.158985 unknown[1210]: fetched user config from "aws"
Apr 24 23:36:56.163265 ignition[1210]: fetch: fetch complete
Apr 24 23:36:56.163286 ignition[1210]: fetch: fetch passed
Apr 24 23:36:56.163387 ignition[1210]: Ignition finished successfully
Apr 24 23:36:56.175203 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Apr 24 23:36:56.188536 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Apr 24 23:36:56.215971 ignition[1216]: Ignition 2.19.0
Apr 24 23:36:56.215998 ignition[1216]: Stage: kargs
Apr 24 23:36:56.217813 ignition[1216]: no configs at "/usr/lib/ignition/base.d"
Apr 24 23:36:56.217839 ignition[1216]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Apr 24 23:36:56.218621 ignition[1216]: PUT http://169.254.169.254/latest/api/token: attempt #1
Apr 24 23:36:56.219832 ignition[1216]: PUT result: OK
Apr 24 23:36:56.230698 ignition[1216]: kargs: kargs passed
Apr 24 23:36:56.230849 ignition[1216]: Ignition finished successfully
Apr 24 23:36:56.238188 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Apr 24 23:36:56.249641 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Apr 24 23:36:56.274764 ignition[1222]: Ignition 2.19.0
Apr 24 23:36:56.275325 ignition[1222]: Stage: disks
Apr 24 23:36:56.275983 ignition[1222]: no configs at "/usr/lib/ignition/base.d"
Apr 24 23:36:56.276008 ignition[1222]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Apr 24 23:36:56.276753 ignition[1222]: PUT http://169.254.169.254/latest/api/token: attempt #1
Apr 24 23:36:56.285275 ignition[1222]: PUT result: OK
Apr 24 23:36:56.289842 ignition[1222]: disks: disks passed
Apr 24 23:36:56.289953 ignition[1222]: Ignition finished successfully
Apr 24 23:36:56.294856 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Apr 24 23:36:56.299954 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Apr 24 23:36:56.302502 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Apr 24 23:36:56.307108 systemd[1]: Reached target local-fs.target - Local File Systems.
Apr 24 23:36:56.316890 systemd[1]: Reached target sysinit.target - System Initialization.
Apr 24 23:36:56.319202 systemd[1]: Reached target basic.target - Basic System.
Apr 24 23:36:56.331445 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Apr 24 23:36:56.375338 systemd-fsck[1230]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Apr 24 23:36:56.383248 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Apr 24 23:36:56.396497 systemd[1]: Mounting sysroot.mount - /sysroot...
Apr 24 23:36:56.472184 kernel: EXT4-fs (nvme0n1p9): mounted filesystem edaa698b-3baa-4242-8691-64cb9f35f18f r/w with ordered data mode. Quota mode: none.
Apr 24 23:36:56.474492 systemd[1]: Mounted sysroot.mount - /sysroot.
Apr 24 23:36:56.474933 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Apr 24 23:36:56.500297 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Apr 24 23:36:56.504359 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Apr 24 23:36:56.512623 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Apr 24 23:36:56.516631 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Apr 24 23:36:56.516688 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 24 23:36:56.536173 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/nvme0n1p6 scanned by mount (1249)
Apr 24 23:36:56.537962 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Apr 24 23:36:56.543756 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 7d1fb622-285b-4375-96d6-a0d989283452
Apr 24 23:36:56.546962 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Apr 24 23:36:56.547024 kernel: BTRFS info (device nvme0n1p6): using free space tree
Apr 24 23:36:56.550492 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Apr 24 23:36:56.558258 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Apr 24 23:36:56.561915 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Apr 24 23:36:56.864765 initrd-setup-root[1273]: cut: /sysroot/etc/passwd: No such file or directory
Apr 24 23:36:56.874387 initrd-setup-root[1280]: cut: /sysroot/etc/group: No such file or directory
Apr 24 23:36:56.883948 initrd-setup-root[1287]: cut: /sysroot/etc/shadow: No such file or directory
Apr 24 23:36:56.894051 initrd-setup-root[1294]: cut: /sysroot/etc/gshadow: No such file or directory
Apr 24 23:36:57.170107 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Apr 24 23:36:57.183365 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Apr 24 23:36:57.200391 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Apr 24 23:36:57.218231 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Apr 24 23:36:57.222303 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 7d1fb622-285b-4375-96d6-a0d989283452
Apr 24 23:36:57.252596 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Apr 24 23:36:57.265853 ignition[1363]: INFO : Ignition 2.19.0
Apr 24 23:36:57.272117 ignition[1363]: INFO : Stage: mount
Apr 24 23:36:57.274045 ignition[1363]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 24 23:36:57.276379 ignition[1363]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Apr 24 23:36:57.279198 ignition[1363]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Apr 24 23:36:57.283303 ignition[1363]: INFO : PUT result: OK
Apr 24 23:36:57.288273 ignition[1363]: INFO : mount: mount passed
Apr 24 23:36:57.290047 ignition[1363]: INFO : Ignition finished successfully
Apr 24 23:36:57.294205 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Apr 24 23:36:57.303302 systemd[1]: Starting ignition-files.service - Ignition (files)...
Apr 24 23:36:57.334363 systemd-networkd[1200]: eth0: Gained IPv6LL
Apr 24 23:36:57.335562 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Apr 24 23:36:57.363765 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 scanned by mount (1373)
Apr 24 23:36:57.363826 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 7d1fb622-285b-4375-96d6-a0d989283452
Apr 24 23:36:57.363853 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Apr 24 23:36:57.366836 kernel: BTRFS info (device nvme0n1p6): using free space tree
Apr 24 23:36:57.372178 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Apr 24 23:36:57.376194 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Apr 24 23:36:57.410900 ignition[1391]: INFO : Ignition 2.19.0
Apr 24 23:36:57.414431 ignition[1391]: INFO : Stage: files
Apr 24 23:36:57.414431 ignition[1391]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 24 23:36:57.414431 ignition[1391]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Apr 24 23:36:57.414431 ignition[1391]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Apr 24 23:36:57.414431 ignition[1391]: INFO : PUT result: OK
Apr 24 23:36:57.429829 ignition[1391]: DEBUG : files: compiled without relabeling support, skipping
Apr 24 23:36:57.432826 ignition[1391]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Apr 24 23:36:57.432826 ignition[1391]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Apr 24 23:36:57.471977 ignition[1391]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Apr 24 23:36:57.475252 ignition[1391]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Apr 24 23:36:57.478529 unknown[1391]: wrote ssh authorized keys file for user: core
Apr 24 23:36:57.481219 ignition[1391]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Apr 24 23:36:57.486203 ignition[1391]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
Apr 24 23:36:57.486203 ignition[1391]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
Apr 24 23:36:57.486203 ignition[1391]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Apr 24 23:36:57.486203 ignition[1391]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1
Apr 24 23:36:57.581009 ignition[1391]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Apr 24 23:36:57.802672 ignition[1391]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Apr 24 23:36:57.802672 ignition[1391]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Apr 24 23:36:57.802672 ignition[1391]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1
Apr 24 23:36:58.118120 ignition[1391]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK
Apr 24 23:36:58.265928 ignition[1391]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Apr 24 23:36:58.270090 ignition[1391]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/install.sh"
Apr 24 23:36:58.270090 ignition[1391]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/install.sh"
Apr 24 23:36:58.270090 ignition[1391]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nginx.yaml"
Apr 24 23:36:58.270090 ignition[1391]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nginx.yaml"
Apr 24 23:36:58.270090 ignition[1391]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 24 23:36:58.270090 ignition[1391]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 24 23:36:58.270090 ignition[1391]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 24 23:36:58.270090 ignition[1391]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 24 23:36:58.270090 ignition[1391]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/flatcar/update.conf"
Apr 24 23:36:58.270090 ignition[1391]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Apr 24 23:36:58.270090 ignition[1391]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-arm64.raw"
Apr 24 23:36:58.270090 ignition[1391]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-arm64.raw"
Apr 24 23:36:58.270090 ignition[1391]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-arm64.raw"
Apr 24 23:36:58.270090 ignition[1391]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.8-arm64.raw: attempt #1
Apr 24 23:36:58.719753 ignition[1391]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET result: OK
Apr 24 23:36:59.113592 ignition[1391]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-arm64.raw"
Apr 24 23:36:59.113592 ignition[1391]: INFO : files: op(d): [started] processing unit "containerd.service"
Apr 24 23:36:59.120899 ignition[1391]: INFO : files: op(d): op(e): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Apr 24 23:36:59.120899 ignition[1391]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Apr 24 23:36:59.120899 ignition[1391]: INFO : files: op(d): [finished] processing unit "containerd.service"
Apr 24 23:36:59.120899 ignition[1391]: INFO : files: op(f): [started] processing unit "prepare-helm.service"
Apr 24 23:36:59.120899 ignition[1391]: INFO : files: op(f): op(10): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 24 23:36:59.120899 ignition[1391]: INFO : files: op(f): op(10): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 24 23:36:59.120899 ignition[1391]: INFO : files: op(f): [finished] processing unit "prepare-helm.service"
Apr 24 23:36:59.120899 ignition[1391]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Apr 24 23:36:59.120899 ignition[1391]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Apr 24 23:36:59.120899 ignition[1391]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Apr 24 23:36:59.120899 ignition[1391]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Apr 24 23:36:59.120899 ignition[1391]: INFO : files: files passed
Apr 24 23:36:59.120899 ignition[1391]: INFO : Ignition finished successfully
Apr 24 23:36:59.133487 systemd[1]: Finished ignition-files.service - Ignition (files).
Apr 24 23:36:59.175627 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Apr 24 23:36:59.187855 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Apr 24 23:36:59.197528 systemd[1]: ignition-quench.service: Deactivated successfully.
Apr 24 23:36:59.199244 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Apr 24 23:36:59.223745 initrd-setup-root-after-ignition[1418]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 24 23:36:59.223745 initrd-setup-root-after-ignition[1418]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Apr 24 23:36:59.230909 initrd-setup-root-after-ignition[1422]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 24 23:36:59.237200 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 24 23:36:59.241162 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Apr 24 23:36:59.252589 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Apr 24 23:36:59.299962 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Apr 24 23:36:59.300800 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Apr 24 23:36:59.305784 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Apr 24 23:36:59.308465 systemd[1]: Reached target initrd.target - Initrd Default Target.
Apr 24 23:36:59.313626 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Apr 24 23:36:59.325543 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Apr 24 23:36:59.354601 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 24 23:36:59.367503 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Apr 24 23:36:59.396660 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Apr 24 23:36:59.402342 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 24 23:36:59.405567 systemd[1]: Stopped target timers.target - Timer Units.
Apr 24 23:36:59.407946 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Apr 24 23:36:59.408550 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 24 23:36:59.421054 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Apr 24 23:36:59.424060 systemd[1]: Stopped target basic.target - Basic System.
Apr 24 23:36:59.430909 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Apr 24 23:36:59.433565 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 24 23:36:59.436384 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Apr 24 23:36:59.446111 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Apr 24 23:36:59.448872 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 24 23:36:59.456533 systemd[1]: Stopped target sysinit.target - System Initialization.
Apr 24 23:36:59.458946 systemd[1]: Stopped target local-fs.target - Local File Systems.
Apr 24 23:36:59.461630 systemd[1]: Stopped target swap.target - Swaps.
Apr 24 23:36:59.470632 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Apr 24 23:36:59.470861 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Apr 24 23:36:59.474306 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Apr 24 23:36:59.483827 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 24 23:36:59.486742 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Apr 24 23:36:59.489501 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 24 23:36:59.497456 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Apr 24 23:36:59.497692 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Apr 24 23:36:59.500919 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Apr 24 23:36:59.501226 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 24 23:36:59.510349 systemd[1]: ignition-files.service: Deactivated successfully.
Apr 24 23:36:59.513308 systemd[1]: Stopped ignition-files.service - Ignition (files).
Apr 24 23:36:59.529730 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Apr 24 23:36:59.540416 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Apr 24 23:36:59.547388 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Apr 24 23:36:59.547751 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 24 23:36:59.559307 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Apr 24 23:36:59.559596 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 24 23:36:59.583256 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Apr 24 23:36:59.585551 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Apr 24 23:36:59.608908 ignition[1442]: INFO : Ignition 2.19.0
Apr 24 23:36:59.608908 ignition[1442]: INFO : Stage: umount
Apr 24 23:36:59.615512 ignition[1442]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 24 23:36:59.615512 ignition[1442]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Apr 24 23:36:59.615512 ignition[1442]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Apr 24 23:36:59.615512 ignition[1442]: INFO : PUT result: OK
Apr 24 23:36:59.629937 ignition[1442]: INFO : umount: umount passed
Apr 24 23:36:59.629937 ignition[1442]: INFO : Ignition finished successfully
Apr 24 23:36:59.636706 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Apr 24 23:36:59.637871 systemd[1]: ignition-mount.service: Deactivated successfully.
Apr 24 23:36:59.643270 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Apr 24 23:36:59.649234 systemd[1]: sysroot-boot.service: Deactivated successfully.
Apr 24 23:36:59.649433 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Apr 24 23:36:59.652506 systemd[1]: ignition-disks.service: Deactivated successfully.
Apr 24 23:36:59.652666 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Apr 24 23:36:59.655101 systemd[1]: ignition-kargs.service: Deactivated successfully.
Apr 24 23:36:59.655380 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Apr 24 23:36:59.664248 systemd[1]: ignition-fetch.service: Deactivated successfully.
Apr 24 23:36:59.664351 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Apr 24 23:36:59.666653 systemd[1]: Stopped target network.target - Network.
Apr 24 23:36:59.668635 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Apr 24 23:36:59.668731 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 24 23:36:59.671573 systemd[1]: Stopped target paths.target - Path Units.
Apr 24 23:36:59.678297 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Apr 24 23:36:59.680338 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 24 23:36:59.680469 systemd[1]: Stopped target slices.target - Slice Units.
Apr 24 23:36:59.685275 systemd[1]: Stopped target sockets.target - Socket Units.
Apr 24 23:36:59.689238 systemd[1]: iscsid.socket: Deactivated successfully.
Apr 24 23:36:59.689327 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Apr 24 23:36:59.693307 systemd[1]: iscsiuio.socket: Deactivated successfully.
Apr 24 23:36:59.693382 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 24 23:36:59.699625 systemd[1]: ignition-setup.service: Deactivated successfully.
Apr 24 23:36:59.699723 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Apr 24 23:36:59.703874 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Apr 24 23:36:59.703963 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Apr 24 23:36:59.708679 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Apr 24 23:36:59.708765 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Apr 24 23:36:59.719405 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Apr 24 23:36:59.724750 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Apr 24 23:36:59.732880 systemd-networkd[1200]: eth0: DHCPv6 lease lost
Apr 24 23:36:59.750246 systemd[1]: systemd-networkd.service: Deactivated successfully.
Apr 24 23:36:59.750492 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Apr 24 23:36:59.757129 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Apr 24 23:36:59.757611 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Apr 24 23:36:59.777508 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Apr 24 23:36:59.787252 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Apr 24 23:36:59.787384 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 24 23:36:59.790562 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 24 23:36:59.794351 systemd[1]: systemd-resolved.service: Deactivated successfully.
Apr 24 23:36:59.794551 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Apr 24 23:36:59.822723 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Apr 24 23:36:59.822899 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Apr 24 23:36:59.826116 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Apr 24 23:36:59.826265 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Apr 24 23:36:59.829665 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Apr 24 23:36:59.829748 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 24 23:36:59.840832 systemd[1]: systemd-udevd.service: Deactivated successfully.
Apr 24 23:36:59.842199 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 24 23:36:59.861336 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Apr 24 23:36:59.861471 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Apr 24 23:36:59.869375 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Apr 24 23:36:59.869466 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 24 23:36:59.874119 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Apr 24 23:36:59.874247 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Apr 24 23:36:59.878732 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Apr 24 23:36:59.878819 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Apr 24 23:36:59.892638 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Apr 24 23:36:59.892734 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 24 23:36:59.908669 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Apr 24 23:36:59.911205 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Apr 24 23:36:59.911312 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 24 23:36:59.911591 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Apr 24 23:36:59.911669 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 24 23:36:59.911916 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Apr 24 23:36:59.911989 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 24 23:36:59.912622 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 24 23:36:59.912699 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 24 23:36:59.913814 systemd[1]: network-cleanup.service: Deactivated successfully.
Apr 24 23:36:59.914012 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Apr 24 23:36:59.958646 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Apr 24 23:36:59.960193 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Apr 24 23:36:59.968897 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Apr 24 23:36:59.979452 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Apr 24 23:37:00.055990 systemd[1]: Switching root.
Apr 24 23:37:00.089183 systemd-journald[251]: Journal stopped
Apr 24 23:37:02.992718 systemd-journald[251]: Received SIGTERM from PID 1 (systemd).
Apr 24 23:37:02.992879 kernel: SELinux: policy capability network_peer_controls=1
Apr 24 23:37:02.992940 kernel: SELinux: policy capability open_perms=1
Apr 24 23:37:02.992983 kernel: SELinux: policy capability extended_socket_class=1
Apr 24 23:37:02.994839 kernel: SELinux: policy capability always_check_network=0
Apr 24 23:37:02.994894 kernel: SELinux: policy capability cgroup_seclabel=1
Apr 24 23:37:02.994928 kernel: SELinux: policy capability nnp_nosuid_transition=1
Apr 24 23:37:02.994968 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Apr 24 23:37:02.995001 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Apr 24 23:37:02.995045 kernel: audit: type=1403 audit(1777073821.215:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Apr 24 23:37:02.995088 systemd[1]: Successfully loaded SELinux policy in 62.419ms.
Apr 24 23:37:02.998316 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 22.910ms.
Apr 24 23:37:02.998401 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Apr 24 23:37:02.998437 systemd[1]: Detected virtualization amazon.
Apr 24 23:37:02.998469 systemd[1]: Detected architecture arm64.
Apr 24 23:37:02.998502 systemd[1]: Detected first boot.
Apr 24 23:37:02.998543 systemd[1]: Initializing machine ID from VM UUID.
Apr 24 23:37:02.998579 zram_generator::config[1501]: No configuration found.
Apr 24 23:37:02.998617 systemd[1]: Populated /etc with preset unit settings.
Apr 24 23:37:02.998652 systemd[1]: Queued start job for default target multi-user.target.
Apr 24 23:37:02.998686 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6.
Apr 24 23:37:02.998722 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Apr 24 23:37:02.998757 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Apr 24 23:37:02.998792 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Apr 24 23:37:02.998830 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Apr 24 23:37:02.998865 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Apr 24 23:37:02.998898 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Apr 24 23:37:02.998930 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Apr 24 23:37:02.998964 systemd[1]: Created slice user.slice - User and Session Slice.
Apr 24 23:37:02.998997 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 24 23:37:02.999052 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 24 23:37:02.999095 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Apr 24 23:37:02.999129 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Apr 24 23:37:02.999206 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Apr 24 23:37:02.999242 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 24 23:37:02.999277 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Apr 24 23:37:02.999309 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 24 23:37:02.999341 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Apr 24 23:37:02.999375 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 24 23:37:02.999407 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Apr 24 23:37:02.999440 systemd[1]: Reached target slices.target - Slice Units.
Apr 24 23:37:02.999479 systemd[1]: Reached target swap.target - Swaps.
Apr 24 23:37:02.999510 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Apr 24 23:37:02.999540 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Apr 24 23:37:02.999575 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Apr 24 23:37:02.999606 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Apr 24 23:37:02.999638 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 24 23:37:02.999669 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 24 23:37:02.999701 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 24 23:37:02.999735 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Apr 24 23:37:02.999772 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Apr 24 23:37:02.999813 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Apr 24 23:37:02.999847 systemd[1]: Mounting media.mount - External Media Directory...
Apr 24 23:37:02.999882 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Apr 24 23:37:02.999915 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Apr 24 23:37:02.999947 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Apr 24 23:37:02.999982 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Apr 24 23:37:03.000014 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 24 23:37:03.000045 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 24 23:37:03.000083 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Apr 24 23:37:03.000115 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 24 23:37:03.002537 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Apr 24 23:37:03.002599 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 24 23:37:03.002636 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Apr 24 23:37:03.002666 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 24 23:37:03.002697 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Apr 24 23:37:03.002729 kernel: ACPI: bus type drm_connector registered
Apr 24 23:37:03.002767 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
Apr 24 23:37:03.002803 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.)
Apr 24 23:37:03.002836 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 24 23:37:03.002865 kernel: loop: module loaded
Apr 24 23:37:03.002895 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 24 23:37:03.002926 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Apr 24 23:37:03.002955 kernel: fuse: init (API version 7.39)
Apr 24 23:37:03.002985 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Apr 24 23:37:03.003016 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Apr 24 23:37:03.003072 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Apr 24 23:37:03.003105 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Apr 24 23:37:03.003155 systemd[1]: Mounted media.mount - External Media Directory.
Apr 24 23:37:03.003285 systemd-journald[1616]: Collecting audit messages is disabled.
Apr 24 23:37:03.003353 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Apr 24 23:37:03.003385 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Apr 24 23:37:03.003416 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Apr 24 23:37:03.003445 systemd-journald[1616]: Journal started
Apr 24 23:37:03.003500 systemd-journald[1616]: Runtime Journal (/run/log/journal/ec285c2da6eb51ef387a0b1df91d89c5) is 8.0M, max 75.3M, 67.3M free.
Apr 24 23:37:03.007158 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 24 23:37:03.012733 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Apr 24 23:37:03.016043 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 24 23:37:03.019607 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Apr 24 23:37:03.019956 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Apr 24 23:37:03.023411 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 24 23:37:03.023850 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 24 23:37:03.026985 systemd[1]: modprobe@drm.service: Deactivated successfully.
Apr 24 23:37:03.027514 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Apr 24 23:37:03.030644 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 24 23:37:03.030985 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 24 23:37:03.034829 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Apr 24 23:37:03.035406 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Apr 24 23:37:03.038636 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 24 23:37:03.039075 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 24 23:37:03.042915 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Apr 24 23:37:03.046539 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Apr 24 23:37:03.054939 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Apr 24 23:37:03.085419 systemd[1]: Reached target network-pre.target - Preparation for Network.
Apr 24 23:37:03.102244 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Apr 24 23:37:03.117434 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Apr 24 23:37:03.125339 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Apr 24 23:37:03.146833 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Apr 24 23:37:03.158424 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Apr 24 23:37:03.163297 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 24 23:37:03.172438 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Apr 24 23:37:03.175763 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Apr 24 23:37:03.196389 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 24 23:37:03.209576 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Apr 24 23:37:03.219739 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Apr 24 23:37:03.225658 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Apr 24 23:37:03.247867 systemd-journald[1616]: Time spent on flushing to /var/log/journal/ec285c2da6eb51ef387a0b1df91d89c5 is 56.664ms for 893 entries.
Apr 24 23:37:03.247867 systemd-journald[1616]: System Journal (/var/log/journal/ec285c2da6eb51ef387a0b1df91d89c5) is 8.0M, max 195.6M, 187.6M free.
Apr 24 23:37:03.332491 systemd-journald[1616]: Received client request to flush runtime journal.
Apr 24 23:37:03.253668 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Apr 24 23:37:03.256981 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Apr 24 23:37:03.275999 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 24 23:37:03.293598 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Apr 24 23:37:03.336436 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Apr 24 23:37:03.356755 systemd-tmpfiles[1653]: ACLs are not supported, ignoring.
Apr 24 23:37:03.357337 systemd-tmpfiles[1653]: ACLs are not supported, ignoring.
Apr 24 23:37:03.360358 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 24 23:37:03.369040 udevadm[1662]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Apr 24 23:37:03.378412 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 24 23:37:03.390570 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Apr 24 23:37:03.453943 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Apr 24 23:37:03.469468 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Apr 24 23:37:03.512614 systemd-tmpfiles[1675]: ACLs are not supported, ignoring.
Apr 24 23:37:03.513255 systemd-tmpfiles[1675]: ACLs are not supported, ignoring.
Apr 24 23:37:03.523760 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 24 23:37:04.157889 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Apr 24 23:37:04.176483 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 24 23:37:04.224468 systemd-udevd[1681]: Using default interface naming scheme 'v255'.
Apr 24 23:37:04.259264 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 24 23:37:04.282518 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Apr 24 23:37:04.325722 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Apr 24 23:37:04.391434 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0.
Apr 24 23:37:04.449610 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Apr 24 23:37:04.484410 (udev-worker)[1683]: Network interface NamePolicy= disabled on kernel command line.
Apr 24 23:37:04.625214 systemd-networkd[1690]: lo: Link UP
Apr 24 23:37:04.625817 systemd-networkd[1690]: lo: Gained carrier
Apr 24 23:37:04.629009 systemd-networkd[1690]: Enumeration completed
Apr 24 23:37:04.630074 systemd-networkd[1690]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 24 23:37:04.630082 systemd-networkd[1690]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 24 23:37:04.630391 systemd[1]: Started systemd-networkd.service - Network Configuration.
Apr 24 23:37:04.638239 systemd-networkd[1690]: eth0: Link UP
Apr 24 23:37:04.638593 systemd-networkd[1690]: eth0: Gained carrier
Apr 24 23:37:04.638626 systemd-networkd[1690]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 24 23:37:04.643408 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Apr 24 23:37:04.653270 systemd-networkd[1690]: eth0: DHCPv4 address 172.31.21.128/20, gateway 172.31.16.1 acquired from 172.31.16.1
Apr 24 23:37:04.686200 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 36 scanned by (udev-worker) (1687)
Apr 24 23:37:04.775652 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 24 23:37:04.923997 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Apr 24 23:37:04.954971 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Apr 24 23:37:04.969401 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Apr 24 23:37:04.973202 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 24 23:37:04.999214 lvm[1807]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Apr 24 23:37:05.041888 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Apr 24 23:37:05.048097 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Apr 24 23:37:05.061759 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Apr 24 23:37:05.072383 lvm[1813]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Apr 24 23:37:05.113870 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Apr 24 23:37:05.117408 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Apr 24 23:37:05.123123 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Apr 24 23:37:05.123417 systemd[1]: Reached target local-fs.target - Local File Systems.
Apr 24 23:37:05.125890 systemd[1]: Reached target machines.target - Containers.
Apr 24 23:37:05.130004 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Apr 24 23:37:05.138460 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Apr 24 23:37:05.145654 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Apr 24 23:37:05.152329 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 24 23:37:05.162642 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Apr 24 23:37:05.175457 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Apr 24 23:37:05.185650 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Apr 24 23:37:05.193412 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Apr 24 23:37:05.202542 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Apr 24 23:37:05.230206 kernel: loop0: detected capacity change from 0 to 52536
Apr 24 23:37:05.241314 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Apr 24 23:37:05.242678 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Apr 24 23:37:05.329244 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Apr 24 23:37:05.350263 kernel: loop1: detected capacity change from 0 to 114328
Apr 24 23:37:05.431174 kernel: loop2: detected capacity change from 0 to 209336
Apr 24 23:37:05.526192 kernel: loop3: detected capacity change from 0 to 114432
Apr 24 23:37:05.617173 kernel: loop4: detected capacity change from 0 to 52536
Apr 24 23:37:05.644190 kernel: loop5: detected capacity change from 0 to 114328
Apr 24 23:37:05.671250 kernel: loop6: detected capacity change from 0 to 209336
Apr 24 23:37:05.712190 kernel: loop7: detected capacity change from 0 to 114432
Apr 24 23:37:05.734446 (sd-merge)[1834]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'.
Apr 24 23:37:05.735432 (sd-merge)[1834]: Merged extensions into '/usr'.
Apr 24 23:37:05.744592 systemd[1]: Reloading requested from client PID 1821 ('systemd-sysext') (unit systemd-sysext.service)...
Apr 24 23:37:05.744629 systemd[1]: Reloading...
Apr 24 23:37:05.881273 zram_generator::config[1865]: No configuration found.
Apr 24 23:37:06.156402 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 24 23:37:06.309953 systemd[1]: Reloading finished in 564 ms.
Apr 24 23:37:06.348279 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Apr 24 23:37:06.363450 systemd[1]: Starting ensure-sysext.service...
Apr 24 23:37:06.376421 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Apr 24 23:37:06.399376 systemd[1]: Reloading requested from client PID 1919 ('systemctl') (unit ensure-sysext.service)...
Apr 24 23:37:06.399410 systemd[1]: Reloading...
Apr 24 23:37:06.423285 systemd-networkd[1690]: eth0: Gained IPv6LL
Apr 24 23:37:06.438236 systemd-tmpfiles[1920]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Apr 24 23:37:06.439849 systemd-tmpfiles[1920]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Apr 24 23:37:06.441757 systemd-tmpfiles[1920]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Apr 24 23:37:06.442656 systemd-tmpfiles[1920]: ACLs are not supported, ignoring.
Apr 24 23:37:06.442899 systemd-tmpfiles[1920]: ACLs are not supported, ignoring.
Apr 24 23:37:06.448850 systemd-tmpfiles[1920]: Detected autofs mount point /boot during canonicalization of boot.
Apr 24 23:37:06.449047 systemd-tmpfiles[1920]: Skipping /boot
Apr 24 23:37:06.478322 ldconfig[1817]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Apr 24 23:37:06.484344 systemd-tmpfiles[1920]: Detected autofs mount point /boot during canonicalization of boot.
Apr 24 23:37:06.484481 systemd-tmpfiles[1920]: Skipping /boot
Apr 24 23:37:06.565172 zram_generator::config[1955]: No configuration found.
Apr 24 23:37:06.808888 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 24 23:37:06.961966 systemd[1]: Reloading finished in 561 ms.
Apr 24 23:37:06.991275 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Apr 24 23:37:06.994800 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Apr 24 23:37:07.007123 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 24 23:37:07.032607 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Apr 24 23:37:07.038390 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Apr 24 23:37:07.046127 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Apr 24 23:37:07.061487 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Apr 24 23:37:07.080391 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Apr 24 23:37:07.096286 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 24 23:37:07.107943 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 24 23:37:07.118871 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 24 23:37:07.126832 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 24 23:37:07.133984 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 24 23:37:07.136126 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 24 23:37:07.140626 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 24 23:37:07.174039 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 24 23:37:07.180825 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 24 23:37:07.185765 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 24 23:37:07.212727 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Apr 24 23:37:07.220712 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Apr 24 23:37:07.229450 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 24 23:37:07.229842 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Apr 24 23:37:07.239416 systemd[1]: modprobe@loop.service: Deactivated successfully. Apr 24 23:37:07.239766 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Apr 24 23:37:07.249717 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Apr 24 23:37:07.251512 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Apr 24 23:37:07.272436 augenrules[2049]: No rules Apr 24 23:37:07.277728 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 24 23:37:07.288432 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Apr 24 23:37:07.291706 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 24 23:37:07.291782 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Apr 24 23:37:07.291887 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Apr 24 23:37:07.291953 systemd[1]: Reached target time-set.target - System Time Set. Apr 24 23:37:07.304802 systemd[1]: Starting systemd-update-done.service - Update is Completed... Apr 24 23:37:07.311131 systemd[1]: Finished ensure-sysext.service. Apr 24 23:37:07.317489 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Apr 24 23:37:07.339338 systemd[1]: modprobe@drm.service: Deactivated successfully. Apr 24 23:37:07.340672 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Apr 24 23:37:07.374765 systemd[1]: Finished systemd-update-done.service - Update is Completed. Apr 24 23:37:07.399028 systemd-resolved[2022]: Positive Trust Anchors: Apr 24 23:37:07.399063 systemd-resolved[2022]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Apr 24 23:37:07.399127 systemd-resolved[2022]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Apr 24 23:37:07.411607 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Apr 24 23:37:07.415127 systemd-resolved[2022]: Defaulting to hostname 'linux'. Apr 24 23:37:07.418130 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Apr 24 23:37:07.419825 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Apr 24 23:37:07.423688 systemd[1]: Reached target network.target - Network. Apr 24 23:37:07.425888 systemd[1]: Reached target network-online.target - Network is Online. Apr 24 23:37:07.428387 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Apr 24 23:37:07.431089 systemd[1]: Reached target sysinit.target - System Initialization. Apr 24 23:37:07.433598 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Apr 24 23:37:07.436307 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Apr 24 23:37:07.439278 systemd[1]: Started logrotate.timer - Daily rotation of log files. 
Apr 24 23:37:07.441768 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Apr 24 23:37:07.444884 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Apr 24 23:37:07.447584 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Apr 24 23:37:07.447747 systemd[1]: Reached target paths.target - Path Units. Apr 24 23:37:07.449817 systemd[1]: Reached target timers.target - Timer Units. Apr 24 23:37:07.452764 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Apr 24 23:37:07.457836 systemd[1]: Starting docker.socket - Docker Socket for the API... Apr 24 23:37:07.464448 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Apr 24 23:37:07.474012 systemd[1]: Listening on docker.socket - Docker Socket for the API. Apr 24 23:37:07.476582 systemd[1]: Reached target sockets.target - Socket Units. Apr 24 23:37:07.479023 systemd[1]: Reached target basic.target - Basic System. Apr 24 23:37:07.481487 systemd[1]: System is tainted: cgroupsv1 Apr 24 23:37:07.481567 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Apr 24 23:37:07.481620 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Apr 24 23:37:07.485376 systemd[1]: Starting containerd.service - containerd container runtime... Apr 24 23:37:07.498435 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Apr 24 23:37:07.506507 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Apr 24 23:37:07.513501 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Apr 24 23:37:07.531458 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... 
Apr 24 23:37:07.535346 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Apr 24 23:37:07.549240 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 24 23:37:07.560329 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Apr 24 23:37:07.581520 systemd[1]: Started ntpd.service - Network Time Service. Apr 24 23:37:07.591193 jq[2076]: false Apr 24 23:37:07.595070 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Apr 24 23:37:07.628365 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Apr 24 23:37:07.637348 systemd[1]: Starting setup-oem.service - Setup OEM... Apr 24 23:37:07.647455 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Apr 24 23:37:07.671384 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Apr 24 23:37:07.702424 systemd[1]: Starting systemd-logind.service - User Login Management... Apr 24 23:37:07.710570 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Apr 24 23:37:07.721435 systemd[1]: Starting update-engine.service - Update Engine... Apr 24 23:37:07.748168 extend-filesystems[2077]: Found loop4 Apr 24 23:37:07.744295 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... 
Apr 24 23:37:07.757029 extend-filesystems[2077]: Found loop5 Apr 24 23:37:07.757029 extend-filesystems[2077]: Found loop6 Apr 24 23:37:07.757029 extend-filesystems[2077]: Found loop7 Apr 24 23:37:07.757029 extend-filesystems[2077]: Found nvme0n1 Apr 24 23:37:07.757029 extend-filesystems[2077]: Found nvme0n1p1 Apr 24 23:37:07.757029 extend-filesystems[2077]: Found nvme0n1p2 Apr 24 23:37:07.757029 extend-filesystems[2077]: Found nvme0n1p3 Apr 24 23:37:07.757029 extend-filesystems[2077]: Found usr Apr 24 23:37:07.757029 extend-filesystems[2077]: Found nvme0n1p4 Apr 24 23:37:07.757029 extend-filesystems[2077]: Found nvme0n1p6 Apr 24 23:37:07.821641 extend-filesystems[2077]: Found nvme0n1p7 Apr 24 23:37:07.821641 extend-filesystems[2077]: Found nvme0n1p9 Apr 24 23:37:07.821641 extend-filesystems[2077]: Checking size of /dev/nvme0n1p9 Apr 24 23:37:07.774573 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Apr 24 23:37:07.824921 dbus-daemon[2074]: [system] SELinux support is enabled Apr 24 23:37:07.855320 jq[2101]: true Apr 24 23:37:07.775512 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Apr 24 23:37:07.853691 dbus-daemon[2074]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1690 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Apr 24 23:37:07.789500 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Apr 24 23:37:07.789994 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Apr 24 23:37:07.847889 systemd[1]: Started dbus.service - D-Bus System Message Bus. Apr 24 23:37:07.906100 systemd[1]: motdgen.service: Deactivated successfully. Apr 24 23:37:07.910131 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. 
Apr 24 23:37:07.923896 ntpd[2083]: ntpd 4.2.8p17@1.4004-o Fri Apr 24 21:50:58 UTC 2026 (1): Starting Apr 24 23:37:07.924299 ntpd[2083]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Apr 24 23:37:07.985724 ntpd[2083]: 24 Apr 23:37:07 ntpd[2083]: ntpd 4.2.8p17@1.4004-o Fri Apr 24 21:50:58 UTC 2026 (1): Starting Apr 24 23:37:07.985724 ntpd[2083]: 24 Apr 23:37:07 ntpd[2083]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Apr 24 23:37:07.985724 ntpd[2083]: 24 Apr 23:37:07 ntpd[2083]: ---------------------------------------------------- Apr 24 23:37:07.985724 ntpd[2083]: 24 Apr 23:37:07 ntpd[2083]: ntp-4 is maintained by Network Time Foundation, Apr 24 23:37:07.985724 ntpd[2083]: 24 Apr 23:37:07 ntpd[2083]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Apr 24 23:37:07.985724 ntpd[2083]: 24 Apr 23:37:07 ntpd[2083]: corporation. Support and training for ntp-4 are Apr 24 23:37:07.985724 ntpd[2083]: 24 Apr 23:37:07 ntpd[2083]: available at https://www.nwtime.org/support Apr 24 23:37:07.985724 ntpd[2083]: 24 Apr 23:37:07 ntpd[2083]: ---------------------------------------------------- Apr 24 23:37:07.985724 ntpd[2083]: 24 Apr 23:37:07 ntpd[2083]: proto: precision = 0.096 usec (-23) Apr 24 23:37:07.985724 ntpd[2083]: 24 Apr 23:37:07 ntpd[2083]: basedate set to 2026-04-12 Apr 24 23:37:07.985724 ntpd[2083]: 24 Apr 23:37:07 ntpd[2083]: gps base set to 2026-04-12 (week 2414) Apr 24 23:37:07.985724 ntpd[2083]: 24 Apr 23:37:07 ntpd[2083]: Listen and drop on 0 v6wildcard [::]:123 Apr 24 23:37:07.985724 ntpd[2083]: 24 Apr 23:37:07 ntpd[2083]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Apr 24 23:37:07.985724 ntpd[2083]: 24 Apr 23:37:07 ntpd[2083]: Listen normally on 2 lo 127.0.0.1:123 Apr 24 23:37:07.985724 ntpd[2083]: 24 Apr 23:37:07 ntpd[2083]: Listen normally on 3 eth0 172.31.21.128:123 Apr 24 23:37:07.985724 ntpd[2083]: 24 Apr 23:37:07 ntpd[2083]: Listen normally on 4 lo [::1]:123 Apr 24 23:37:07.985724 ntpd[2083]: 24 Apr 23:37:07 ntpd[2083]: Listen normally on 5 eth0 
[fe80::47a:f1ff:fee3:f9e9%2]:123 Apr 24 23:37:07.985724 ntpd[2083]: 24 Apr 23:37:07 ntpd[2083]: Listening on routing socket on fd #22 for interface updates Apr 24 23:37:08.005394 coreos-metadata[2073]: Apr 24 23:37:07.968 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Apr 24 23:37:08.005394 coreos-metadata[2073]: Apr 24 23:37:07.974 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Apr 24 23:37:08.005394 coreos-metadata[2073]: Apr 24 23:37:07.977 INFO Fetch successful Apr 24 23:37:08.005394 coreos-metadata[2073]: Apr 24 23:37:07.977 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Apr 24 23:37:08.005394 coreos-metadata[2073]: Apr 24 23:37:07.981 INFO Fetch successful Apr 24 23:37:08.005394 coreos-metadata[2073]: Apr 24 23:37:07.981 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Apr 24 23:37:08.005394 coreos-metadata[2073]: Apr 24 23:37:07.982 INFO Fetch successful Apr 24 23:37:08.005394 coreos-metadata[2073]: Apr 24 23:37:07.982 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Apr 24 23:37:08.005394 coreos-metadata[2073]: Apr 24 23:37:07.983 INFO Fetch successful Apr 24 23:37:08.005394 coreos-metadata[2073]: Apr 24 23:37:07.983 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Apr 24 23:37:08.013948 jq[2118]: true Apr 24 23:37:07.924320 ntpd[2083]: ---------------------------------------------------- Apr 24 23:37:07.987805 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. 
Apr 24 23:37:08.014674 ntpd[2083]: 24 Apr 23:37:08 ntpd[2083]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Apr 24 23:37:08.014674 ntpd[2083]: 24 Apr 23:37:08 ntpd[2083]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Apr 24 23:37:08.014790 coreos-metadata[2073]: Apr 24 23:37:08.010 INFO Fetch failed with 404: resource not found Apr 24 23:37:08.014790 coreos-metadata[2073]: Apr 24 23:37:08.010 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Apr 24 23:37:08.014790 coreos-metadata[2073]: Apr 24 23:37:08.010 INFO Fetch successful Apr 24 23:37:08.014790 coreos-metadata[2073]: Apr 24 23:37:08.010 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Apr 24 23:37:08.014790 coreos-metadata[2073]: Apr 24 23:37:08.010 INFO Fetch successful Apr 24 23:37:08.014790 coreos-metadata[2073]: Apr 24 23:37:08.010 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Apr 24 23:37:08.014790 coreos-metadata[2073]: Apr 24 23:37:08.010 INFO Fetch successful Apr 24 23:37:08.014790 coreos-metadata[2073]: Apr 24 23:37:08.010 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Apr 24 23:37:08.014790 coreos-metadata[2073]: Apr 24 23:37:08.010 INFO Fetch successful Apr 24 23:37:08.014790 coreos-metadata[2073]: Apr 24 23:37:08.010 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Apr 24 23:37:08.014790 coreos-metadata[2073]: Apr 24 23:37:08.014 INFO Fetch successful Apr 24 23:37:08.019906 extend-filesystems[2077]: Resized partition /dev/nvme0n1p9 Apr 24 23:37:07.924341 ntpd[2083]: ntp-4 is maintained by Network Time Foundation, Apr 24 23:37:08.009530 (ntainerd)[2128]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Apr 24 23:37:07.924360 ntpd[2083]: Inc. 
(NTF), a non-profit 501(c)(3) public-benefit Apr 24 23:37:07.924380 ntpd[2083]: corporation. Support and training for ntp-4 are Apr 24 23:37:07.924418 ntpd[2083]: available at https://www.nwtime.org/support Apr 24 23:37:07.924439 ntpd[2083]: ---------------------------------------------------- Apr 24 23:37:07.927343 ntpd[2083]: proto: precision = 0.096 usec (-23) Apr 24 23:37:07.930858 ntpd[2083]: basedate set to 2026-04-12 Apr 24 23:37:07.930892 ntpd[2083]: gps base set to 2026-04-12 (week 2414) Apr 24 23:37:07.936157 ntpd[2083]: Listen and drop on 0 v6wildcard [::]:123 Apr 24 23:37:07.936233 ntpd[2083]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Apr 24 23:37:07.936491 ntpd[2083]: Listen normally on 2 lo 127.0.0.1:123 Apr 24 23:37:07.936557 ntpd[2083]: Listen normally on 3 eth0 172.31.21.128:123 Apr 24 23:37:07.936630 ntpd[2083]: Listen normally on 4 lo [::1]:123 Apr 24 23:37:07.936700 ntpd[2083]: Listen normally on 5 eth0 [fe80::47a:f1ff:fee3:f9e9%2]:123 Apr 24 23:37:07.936761 ntpd[2083]: Listening on routing socket on fd #22 for interface updates Apr 24 23:37:08.009388 ntpd[2083]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Apr 24 23:37:08.009441 ntpd[2083]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Apr 24 23:37:08.038032 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Apr 24 23:37:08.043165 tar[2108]: linux-arm64/LICENSE Apr 24 23:37:08.043165 tar[2108]: linux-arm64/helm Apr 24 23:37:08.045941 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Apr 24 23:37:08.046007 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. 
Apr 24 23:37:08.049383 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Apr 24 23:37:08.049419 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Apr 24 23:37:08.061129 extend-filesystems[2145]: resize2fs 1.47.1 (20-May-2024) Apr 24 23:37:08.071815 dbus-daemon[2074]: [system] Successfully activated service 'org.freedesktop.systemd1' Apr 24 23:37:08.072174 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 3587067 blocks Apr 24 23:37:08.108740 update_engine[2098]: I20260424 23:37:08.105700 2098 main.cc:92] Flatcar Update Engine starting Apr 24 23:37:08.106523 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Apr 24 23:37:08.141422 update_engine[2098]: I20260424 23:37:08.135634 2098 update_check_scheduler.cc:74] Next update check in 2m58s Apr 24 23:37:08.129771 systemd[1]: Started update-engine.service - Update Engine. Apr 24 23:37:08.133458 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Apr 24 23:37:08.142315 systemd[1]: Started locksmithd.service - Cluster reboot manager. Apr 24 23:37:08.147241 systemd[1]: Finished setup-oem.service - Setup OEM. Apr 24 23:37:08.154422 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. Apr 24 23:37:08.299806 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Apr 24 23:37:08.302602 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. 
Apr 24 23:37:08.368187 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 3587067 Apr 24 23:37:08.391681 amazon-ssm-agent[2157]: Initializing new seelog logger Apr 24 23:37:08.391681 amazon-ssm-agent[2157]: New Seelog Logger Creation Complete Apr 24 23:37:08.391681 amazon-ssm-agent[2157]: 2026/04/24 23:37:08 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Apr 24 23:37:08.391681 amazon-ssm-agent[2157]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Apr 24 23:37:08.391681 amazon-ssm-agent[2157]: 2026/04/24 23:37:08 processing appconfig overrides Apr 24 23:37:08.393522 amazon-ssm-agent[2157]: 2026/04/24 23:37:08 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Apr 24 23:37:08.393522 amazon-ssm-agent[2157]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Apr 24 23:37:08.393522 amazon-ssm-agent[2157]: 2026/04/24 23:37:08 processing appconfig overrides Apr 24 23:37:08.393522 amazon-ssm-agent[2157]: 2026/04/24 23:37:08 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Apr 24 23:37:08.393522 amazon-ssm-agent[2157]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Apr 24 23:37:08.393522 amazon-ssm-agent[2157]: 2026/04/24 23:37:08 processing appconfig overrides Apr 24 23:37:08.409535 extend-filesystems[2145]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Apr 24 23:37:08.409535 extend-filesystems[2145]: old_desc_blocks = 1, new_desc_blocks = 2 Apr 24 23:37:08.409535 extend-filesystems[2145]: The filesystem on /dev/nvme0n1p9 is now 3587067 (4k) blocks long. Apr 24 23:37:08.427443 amazon-ssm-agent[2157]: 2026-04-24 23:37:08 INFO Proxy environment variables: Apr 24 23:37:08.427443 amazon-ssm-agent[2157]: 2026/04/24 23:37:08 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Apr 24 23:37:08.427443 amazon-ssm-agent[2157]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. 
Apr 24 23:37:08.427443 amazon-ssm-agent[2157]: 2026/04/24 23:37:08 processing appconfig overrides Apr 24 23:37:08.412864 systemd[1]: extend-filesystems.service: Deactivated successfully. Apr 24 23:37:08.428044 extend-filesystems[2077]: Resized filesystem in /dev/nvme0n1p9 Apr 24 23:37:08.414464 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Apr 24 23:37:08.456119 bash[2188]: Updated "/home/core/.ssh/authorized_keys" Apr 24 23:37:08.465989 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Apr 24 23:37:08.503274 amazon-ssm-agent[2157]: 2026-04-24 23:37:08 INFO https_proxy: Apr 24 23:37:08.512845 systemd[1]: Starting sshkeys.service... Apr 24 23:37:08.544102 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Apr 24 23:37:08.558788 systemd-logind[2096]: Watching system buttons on /dev/input/event0 (Power Button) Apr 24 23:37:08.558834 systemd-logind[2096]: Watching system buttons on /dev/input/event1 (Sleep Button) Apr 24 23:37:08.559431 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Apr 24 23:37:08.569704 systemd-logind[2096]: New seat seat0. Apr 24 23:37:08.589497 systemd[1]: Started systemd-logind.service - User Login Management. 
Apr 24 23:37:08.609270 amazon-ssm-agent[2157]: 2026-04-24 23:37:08 INFO http_proxy: Apr 24 23:37:08.635083 locksmithd[2156]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Apr 24 23:37:08.646270 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 36 scanned by (udev-worker) (2207) Apr 24 23:37:08.702132 amazon-ssm-agent[2157]: 2026-04-24 23:37:08 INFO no_proxy: Apr 24 23:37:08.811498 amazon-ssm-agent[2157]: 2026-04-24 23:37:08 INFO Checking if agent identity type OnPrem can be assumed Apr 24 23:37:08.912423 amazon-ssm-agent[2157]: 2026-04-24 23:37:08 INFO Checking if agent identity type EC2 can be assumed Apr 24 23:37:08.991618 coreos-metadata[2203]: Apr 24 23:37:08.988 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Apr 24 23:37:08.995266 coreos-metadata[2203]: Apr 24 23:37:08.993 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Apr 24 23:37:08.999461 coreos-metadata[2203]: Apr 24 23:37:08.995 INFO Fetch successful Apr 24 23:37:08.999461 coreos-metadata[2203]: Apr 24 23:37:08.995 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Apr 24 23:37:08.995854 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Apr 24 23:37:08.995603 dbus-daemon[2074]: [system] Successfully activated service 'org.freedesktop.hostname1' Apr 24 23:37:09.001768 dbus-daemon[2074]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.7' (uid=0 pid=2152 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Apr 24 23:37:09.006018 coreos-metadata[2203]: Apr 24 23:37:09.002 INFO Fetch successful Apr 24 23:37:09.007538 unknown[2203]: wrote ssh authorized keys file for user: core Apr 24 23:37:09.021682 systemd[1]: Starting polkit.service - Authorization Manager... 
Apr 24 23:37:09.030012 amazon-ssm-agent[2157]: 2026-04-24 23:37:08 INFO Agent will take identity from EC2 Apr 24 23:37:09.055558 containerd[2128]: time="2026-04-24T23:37:09.055402883Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Apr 24 23:37:09.118172 amazon-ssm-agent[2157]: 2026-04-24 23:37:08 INFO [amazon-ssm-agent] using named pipe channel for IPC Apr 24 23:37:09.199163 update-ssh-keys[2274]: Updated "/home/core/.ssh/authorized_keys" Apr 24 23:37:09.210211 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Apr 24 23:37:09.232195 systemd[1]: Finished sshkeys.service. Apr 24 23:37:09.233276 amazon-ssm-agent[2157]: 2026-04-24 23:37:08 INFO [amazon-ssm-agent] using named pipe channel for IPC Apr 24 23:37:09.249091 polkitd[2271]: Started polkitd version 121 Apr 24 23:37:09.333291 containerd[2128]: time="2026-04-24T23:37:09.330828300Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Apr 24 23:37:09.336537 amazon-ssm-agent[2157]: 2026-04-24 23:37:08 INFO [amazon-ssm-agent] using named pipe channel for IPC Apr 24 23:37:09.351166 containerd[2128]: time="2026-04-24T23:37:09.346164960Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.127-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Apr 24 23:37:09.351166 containerd[2128]: time="2026-04-24T23:37:09.346240836Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Apr 24 23:37:09.351166 containerd[2128]: time="2026-04-24T23:37:09.346277028Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." 
type=io.containerd.internal.v1 Apr 24 23:37:09.351166 containerd[2128]: time="2026-04-24T23:37:09.346592748Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Apr 24 23:37:09.351166 containerd[2128]: time="2026-04-24T23:37:09.346634196Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Apr 24 23:37:09.351166 containerd[2128]: time="2026-04-24T23:37:09.346754112Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Apr 24 23:37:09.351166 containerd[2128]: time="2026-04-24T23:37:09.346783452Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Apr 24 23:37:09.351166 containerd[2128]: time="2026-04-24T23:37:09.347191308Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Apr 24 23:37:09.351166 containerd[2128]: time="2026-04-24T23:37:09.347228988Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Apr 24 23:37:09.351166 containerd[2128]: time="2026-04-24T23:37:09.347275728Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Apr 24 23:37:09.351166 containerd[2128]: time="2026-04-24T23:37:09.347301108Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." 
type=io.containerd.snapshotter.v1 Apr 24 23:37:09.350153 polkitd[2271]: Loading rules from directory /etc/polkit-1/rules.d Apr 24 23:37:09.351782 containerd[2128]: time="2026-04-24T23:37:09.347485080Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Apr 24 23:37:09.351782 containerd[2128]: time="2026-04-24T23:37:09.347884272Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Apr 24 23:37:09.350262 polkitd[2271]: Loading rules from directory /usr/share/polkit-1/rules.d Apr 24 23:37:09.357084 polkitd[2271]: Finished loading, compiling and executing 2 rules Apr 24 23:37:09.358783 containerd[2128]: time="2026-04-24T23:37:09.358599672Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Apr 24 23:37:09.358783 containerd[2128]: time="2026-04-24T23:37:09.358651500Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Apr 24 23:37:09.359072 containerd[2128]: time="2026-04-24T23:37:09.358983960Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Apr 24 23:37:09.361189 containerd[2128]: time="2026-04-24T23:37:09.359209884Z" level=info msg="metadata content store policy set" policy=shared Apr 24 23:37:09.375082 dbus-daemon[2074]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Apr 24 23:37:09.378803 systemd[1]: Started polkit.service - Authorization Manager. Apr 24 23:37:09.381710 containerd[2128]: time="2026-04-24T23:37:09.381410388Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." 
type=io.containerd.gc.v1 Apr 24 23:37:09.381710 containerd[2128]: time="2026-04-24T23:37:09.381516216Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Apr 24 23:37:09.381710 containerd[2128]: time="2026-04-24T23:37:09.381553656Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Apr 24 23:37:09.381710 containerd[2128]: time="2026-04-24T23:37:09.381590076Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Apr 24 23:37:09.381710 containerd[2128]: time="2026-04-24T23:37:09.381622920Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Apr 24 23:37:09.381992 containerd[2128]: time="2026-04-24T23:37:09.381903948Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Apr 24 23:37:09.386870 polkitd[2271]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Apr 24 23:37:09.392165 containerd[2128]: time="2026-04-24T23:37:09.389392392Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Apr 24 23:37:09.392165 containerd[2128]: time="2026-04-24T23:37:09.389780868Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Apr 24 23:37:09.392165 containerd[2128]: time="2026-04-24T23:37:09.389845908Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Apr 24 23:37:09.392165 containerd[2128]: time="2026-04-24T23:37:09.390576912Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Apr 24 23:37:09.392165 containerd[2128]: time="2026-04-24T23:37:09.390649776Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." 
type=io.containerd.service.v1 Apr 24 23:37:09.393204 containerd[2128]: time="2026-04-24T23:37:09.390693372Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Apr 24 23:37:09.393275 containerd[2128]: time="2026-04-24T23:37:09.393217416Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Apr 24 23:37:09.393324 containerd[2128]: time="2026-04-24T23:37:09.393279528Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Apr 24 23:37:09.393372 containerd[2128]: time="2026-04-24T23:37:09.393340452Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Apr 24 23:37:09.393438 containerd[2128]: time="2026-04-24T23:37:09.393378060Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Apr 24 23:37:09.393487 containerd[2128]: time="2026-04-24T23:37:09.393431616Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Apr 24 23:37:09.393535 containerd[2128]: time="2026-04-24T23:37:09.393467568Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Apr 24 23:37:09.393621 containerd[2128]: time="2026-04-24T23:37:09.393562368Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Apr 24 23:37:09.393692 containerd[2128]: time="2026-04-24T23:37:09.393632256Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Apr 24 23:37:09.395682 containerd[2128]: time="2026-04-24T23:37:09.394841640Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." 
type=io.containerd.grpc.v1 Apr 24 23:37:09.395682 containerd[2128]: time="2026-04-24T23:37:09.394947516Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Apr 24 23:37:09.395682 containerd[2128]: time="2026-04-24T23:37:09.395023152Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Apr 24 23:37:09.395682 containerd[2128]: time="2026-04-24T23:37:09.395082540Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Apr 24 23:37:09.395682 containerd[2128]: time="2026-04-24T23:37:09.395116512Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Apr 24 23:37:09.395682 containerd[2128]: time="2026-04-24T23:37:09.395184240Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Apr 24 23:37:09.397237 containerd[2128]: time="2026-04-24T23:37:09.395218860Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Apr 24 23:37:09.397340 containerd[2128]: time="2026-04-24T23:37:09.397279824Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Apr 24 23:37:09.397389 containerd[2128]: time="2026-04-24T23:37:09.397321116Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Apr 24 23:37:09.397389 containerd[2128]: time="2026-04-24T23:37:09.397377120Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Apr 24 23:37:09.397503 containerd[2128]: time="2026-04-24T23:37:09.397411068Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Apr 24 23:37:09.397503 containerd[2128]: time="2026-04-24T23:37:09.397487856Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." 
type=io.containerd.transfer.v1 Apr 24 23:37:09.397618 containerd[2128]: time="2026-04-24T23:37:09.397576560Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Apr 24 23:37:09.397674 containerd[2128]: time="2026-04-24T23:37:09.397638552Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Apr 24 23:37:09.398221 containerd[2128]: time="2026-04-24T23:37:09.397670844Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Apr 24 23:37:09.403637 containerd[2128]: time="2026-04-24T23:37:09.403565136Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Apr 24 23:37:09.403876 containerd[2128]: time="2026-04-24T23:37:09.403842300Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Apr 24 23:37:09.404001 containerd[2128]: time="2026-04-24T23:37:09.403974612Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Apr 24 23:37:09.404203 containerd[2128]: time="2026-04-24T23:37:09.404069640Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Apr 24 23:37:09.404203 containerd[2128]: time="2026-04-24T23:37:09.404097816Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Apr 24 23:37:09.406179 containerd[2128]: time="2026-04-24T23:37:09.404324916Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Apr 24 23:37:09.406179 containerd[2128]: time="2026-04-24T23:37:09.404357256Z" level=info msg="NRI interface is disabled by configuration." 
Apr 24 23:37:09.406179 containerd[2128]: time="2026-04-24T23:37:09.405128916Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Apr 24 23:37:09.407374 containerd[2128]: time="2026-04-24T23:37:09.407235420Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} 
MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Apr 24 23:37:09.411188 containerd[2128]: time="2026-04-24T23:37:09.410212740Z" level=info msg="Connect containerd service" Apr 24 23:37:09.411818 containerd[2128]: time="2026-04-24T23:37:09.411777144Z" level=info msg="using legacy CRI server" Apr 24 23:37:09.414178 containerd[2128]: time="2026-04-24T23:37:09.411898824Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Apr 24 23:37:09.414178 containerd[2128]: time="2026-04-24T23:37:09.412063536Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Apr 24 23:37:09.416607 containerd[2128]: time="2026-04-24T23:37:09.416546208Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Apr 24 23:37:09.417873 containerd[2128]: time="2026-04-24T23:37:09.417814812Z" level=info msg="Start subscribing containerd event" Apr 24 23:37:09.422107 containerd[2128]: time="2026-04-24T23:37:09.421262617Z" level=info msg="Start recovering state" Apr 24 23:37:09.422107 containerd[2128]: 
time="2026-04-24T23:37:09.421640329Z" level=info msg="Start event monitor" Apr 24 23:37:09.422107 containerd[2128]: time="2026-04-24T23:37:09.421666393Z" level=info msg="Start snapshots syncer" Apr 24 23:37:09.422720 containerd[2128]: time="2026-04-24T23:37:09.422682037Z" level=info msg="Start cni network conf syncer for default" Apr 24 23:37:09.427632 containerd[2128]: time="2026-04-24T23:37:09.422649649Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Apr 24 23:37:09.427632 containerd[2128]: time="2026-04-24T23:37:09.426723325Z" level=info msg="Start streaming server" Apr 24 23:37:09.427632 containerd[2128]: time="2026-04-24T23:37:09.426949189Z" level=info msg=serving... address=/run/containerd/containerd.sock Apr 24 23:37:09.427632 containerd[2128]: time="2026-04-24T23:37:09.427204861Z" level=info msg="containerd successfully booted in 0.380484s" Apr 24 23:37:09.427406 systemd[1]: Started containerd.service - containerd container runtime. Apr 24 23:37:09.439545 amazon-ssm-agent[2157]: 2026-04-24 23:37:08 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0 Apr 24 23:37:09.454504 systemd-hostnamed[2152]: Hostname set to (transient) Apr 24 23:37:09.457225 systemd-resolved[2022]: System hostname changed to 'ip-172-31-21-128'. Apr 24 23:37:09.523749 sshd_keygen[2137]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Apr 24 23:37:09.538442 amazon-ssm-agent[2157]: 2026-04-24 23:37:08 INFO [amazon-ssm-agent] OS: linux, Arch: arm64 Apr 24 23:37:09.606901 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Apr 24 23:37:09.631630 systemd[1]: Starting issuegen.service - Generate /run/issue... Apr 24 23:37:09.640176 amazon-ssm-agent[2157]: 2026-04-24 23:37:08 INFO [amazon-ssm-agent] Starting Core Agent Apr 24 23:37:09.644627 systemd[1]: Started sshd@0-172.31.21.128:22-20.229.252.112:60110.service - OpenSSH per-connection server daemon (20.229.252.112:60110). 
Apr 24 23:37:09.667710 systemd[1]: issuegen.service: Deactivated successfully. Apr 24 23:37:09.668257 systemd[1]: Finished issuegen.service - Generate /run/issue. Apr 24 23:37:09.677340 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Apr 24 23:37:09.743172 amazon-ssm-agent[2157]: 2026-04-24 23:37:08 INFO [amazon-ssm-agent] registrar detected. Attempting registration Apr 24 23:37:09.755922 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Apr 24 23:37:09.773694 systemd[1]: Started getty@tty1.service - Getty on tty1. Apr 24 23:37:09.792571 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Apr 24 23:37:09.797410 systemd[1]: Reached target getty.target - Login Prompts. Apr 24 23:37:09.843306 amazon-ssm-agent[2157]: 2026-04-24 23:37:08 INFO [Registrar] Starting registrar module Apr 24 23:37:09.943517 amazon-ssm-agent[2157]: 2026-04-24 23:37:08 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration Apr 24 23:37:10.307612 amazon-ssm-agent[2157]: 2026-04-24 23:37:10 INFO [EC2Identity] EC2 registration was successful. Apr 24 23:37:10.324075 tar[2108]: linux-arm64/README.md Apr 24 23:37:10.340420 amazon-ssm-agent[2157]: 2026-04-24 23:37:10 INFO [CredentialRefresher] credentialRefresher has started Apr 24 23:37:10.340420 amazon-ssm-agent[2157]: 2026-04-24 23:37:10 INFO [CredentialRefresher] Starting credentials refresher loop Apr 24 23:37:10.340420 amazon-ssm-agent[2157]: 2026-04-24 23:37:10 INFO EC2RoleProvider Successfully connected with instance profile role credentials Apr 24 23:37:10.347994 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. 
Apr 24 23:37:10.408539 amazon-ssm-agent[2157]: 2026-04-24 23:37:10 INFO [CredentialRefresher] Next credential rotation will be in 30.491659652666666 minutes Apr 24 23:37:10.718114 sshd[2342]: Accepted publickey for core from 20.229.252.112 port 60110 ssh2: RSA SHA256:EpOBCscCvamodiF49drNiIRDMxdv0LtYbixE7WaoRrA Apr 24 23:37:10.718493 sshd[2342]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 24 23:37:10.735526 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Apr 24 23:37:10.747542 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Apr 24 23:37:10.756618 systemd-logind[2096]: New session 1 of user core. Apr 24 23:37:10.790529 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Apr 24 23:37:10.809057 systemd[1]: Starting user@500.service - User Manager for UID 500... Apr 24 23:37:10.825106 (systemd)[2362]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Apr 24 23:37:10.964450 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 24 23:37:10.971812 systemd[1]: Reached target multi-user.target - Multi-User System. Apr 24 23:37:10.989815 (kubelet)[2376]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 24 23:37:11.080709 systemd[2362]: Queued start job for default target default.target. Apr 24 23:37:11.081928 systemd[2362]: Created slice app.slice - User Application Slice. Apr 24 23:37:11.081976 systemd[2362]: Reached target paths.target - Paths. Apr 24 23:37:11.082008 systemd[2362]: Reached target timers.target - Timers. Apr 24 23:37:11.090482 systemd[2362]: Starting dbus.socket - D-Bus User Message Bus Socket... Apr 24 23:37:11.108213 systemd[2362]: Listening on dbus.socket - D-Bus User Message Bus Socket. Apr 24 23:37:11.108523 systemd[2362]: Reached target sockets.target - Sockets. 
Apr 24 23:37:11.108564 systemd[2362]: Reached target basic.target - Basic System. Apr 24 23:37:11.108664 systemd[2362]: Reached target default.target - Main User Target. Apr 24 23:37:11.108728 systemd[2362]: Startup finished in 271ms. Apr 24 23:37:11.109381 systemd[1]: Started user@500.service - User Manager for UID 500. Apr 24 23:37:11.118749 systemd[1]: Started session-1.scope - Session 1 of User core. Apr 24 23:37:11.125302 systemd[1]: Startup finished in 10.571s (kernel) + 9.970s (userspace) = 20.542s. Apr 24 23:37:11.372692 amazon-ssm-agent[2157]: 2026-04-24 23:37:11 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Apr 24 23:37:11.471652 amazon-ssm-agent[2157]: 2026-04-24 23:37:11 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2391) started Apr 24 23:37:11.577256 amazon-ssm-agent[2157]: 2026-04-24 23:37:11 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Apr 24 23:37:11.827106 systemd[1]: Started sshd@1-172.31.21.128:22-20.229.252.112:60126.service - OpenSSH per-connection server daemon (20.229.252.112:60126). Apr 24 23:37:12.077690 kubelet[2376]: E0424 23:37:12.077517 2376 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 24 23:37:12.083108 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 24 23:37:12.083736 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Apr 24 23:37:12.828183 sshd[2404]: Accepted publickey for core from 20.229.252.112 port 60126 ssh2: RSA SHA256:EpOBCscCvamodiF49drNiIRDMxdv0LtYbixE7WaoRrA Apr 24 23:37:12.829778 sshd[2404]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 24 23:37:12.836996 systemd-logind[2096]: New session 2 of user core. Apr 24 23:37:12.847070 systemd[1]: Started session-2.scope - Session 2 of User core. Apr 24 23:37:13.512559 sshd[2404]: pam_unix(sshd:session): session closed for user core Apr 24 23:37:13.519007 systemd[1]: sshd@1-172.31.21.128:22-20.229.252.112:60126.service: Deactivated successfully. Apr 24 23:37:13.525318 systemd[1]: session-2.scope: Deactivated successfully. Apr 24 23:37:13.527059 systemd-logind[2096]: Session 2 logged out. Waiting for processes to exit. Apr 24 23:37:13.529029 systemd-logind[2096]: Removed session 2. Apr 24 23:37:13.695268 systemd[1]: Started sshd@2-172.31.21.128:22-20.229.252.112:60138.service - OpenSSH per-connection server daemon (20.229.252.112:60138). Apr 24 23:37:14.713179 sshd[2414]: Accepted publickey for core from 20.229.252.112 port 60138 ssh2: RSA SHA256:EpOBCscCvamodiF49drNiIRDMxdv0LtYbixE7WaoRrA Apr 24 23:37:14.715228 sshd[2414]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 24 23:37:14.722640 systemd-logind[2096]: New session 3 of user core. Apr 24 23:37:14.732716 systemd[1]: Started session-3.scope - Session 3 of User core. Apr 24 23:37:15.094587 systemd-resolved[2022]: Clock change detected. Flushing caches. Apr 24 23:37:15.578969 sshd[2414]: pam_unix(sshd:session): session closed for user core Apr 24 23:37:15.584415 systemd[1]: sshd@2-172.31.21.128:22-20.229.252.112:60138.service: Deactivated successfully. Apr 24 23:37:15.591244 systemd-logind[2096]: Session 3 logged out. Waiting for processes to exit. Apr 24 23:37:15.592534 systemd[1]: session-3.scope: Deactivated successfully. Apr 24 23:37:15.594599 systemd-logind[2096]: Removed session 3. 
Apr 24 23:37:15.737773 systemd[1]: Started sshd@3-172.31.21.128:22-20.229.252.112:60146.service - OpenSSH per-connection server daemon (20.229.252.112:60146). Apr 24 23:37:16.706884 sshd[2422]: Accepted publickey for core from 20.229.252.112 port 60146 ssh2: RSA SHA256:EpOBCscCvamodiF49drNiIRDMxdv0LtYbixE7WaoRrA Apr 24 23:37:16.709516 sshd[2422]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 24 23:37:16.718234 systemd-logind[2096]: New session 4 of user core. Apr 24 23:37:16.723829 systemd[1]: Started session-4.scope - Session 4 of User core. Apr 24 23:37:17.374560 sshd[2422]: pam_unix(sshd:session): session closed for user core Apr 24 23:37:17.381774 systemd-logind[2096]: Session 4 logged out. Waiting for processes to exit. Apr 24 23:37:17.383366 systemd[1]: sshd@3-172.31.21.128:22-20.229.252.112:60146.service: Deactivated successfully. Apr 24 23:37:17.387993 systemd[1]: session-4.scope: Deactivated successfully. Apr 24 23:37:17.389670 systemd-logind[2096]: Removed session 4. Apr 24 23:37:17.548762 systemd[1]: Started sshd@4-172.31.21.128:22-20.229.252.112:35658.service - OpenSSH per-connection server daemon (20.229.252.112:35658). Apr 24 23:37:18.584061 sshd[2430]: Accepted publickey for core from 20.229.252.112 port 35658 ssh2: RSA SHA256:EpOBCscCvamodiF49drNiIRDMxdv0LtYbixE7WaoRrA Apr 24 23:37:18.585726 sshd[2430]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 24 23:37:18.593075 systemd-logind[2096]: New session 5 of user core. Apr 24 23:37:18.601748 systemd[1]: Started session-5.scope - Session 5 of User core. 
Apr 24 23:37:19.159506 sudo[2434]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Apr 24 23:37:19.160159 sudo[2434]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 24 23:37:19.176900 sudo[2434]: pam_unix(sudo:session): session closed for user root Apr 24 23:37:19.343713 sshd[2430]: pam_unix(sshd:session): session closed for user core Apr 24 23:37:19.352056 systemd[1]: sshd@4-172.31.21.128:22-20.229.252.112:35658.service: Deactivated successfully. Apr 24 23:37:19.357416 systemd-logind[2096]: Session 5 logged out. Waiting for processes to exit. Apr 24 23:37:19.359110 systemd[1]: session-5.scope: Deactivated successfully. Apr 24 23:37:19.361342 systemd-logind[2096]: Removed session 5. Apr 24 23:37:19.498817 systemd[1]: Started sshd@5-172.31.21.128:22-20.229.252.112:35668.service - OpenSSH per-connection server daemon (20.229.252.112:35668). Apr 24 23:37:20.471332 sshd[2439]: Accepted publickey for core from 20.229.252.112 port 35668 ssh2: RSA SHA256:EpOBCscCvamodiF49drNiIRDMxdv0LtYbixE7WaoRrA Apr 24 23:37:20.473622 sshd[2439]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 24 23:37:20.483179 systemd-logind[2096]: New session 6 of user core. Apr 24 23:37:20.489836 systemd[1]: Started session-6.scope - Session 6 of User core. 
Apr 24 23:37:20.982826 sudo[2444]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Apr 24 23:37:20.984031 sudo[2444]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 24 23:37:20.990630 sudo[2444]: pam_unix(sudo:session): session closed for user root Apr 24 23:37:21.000639 sudo[2443]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Apr 24 23:37:21.001251 sudo[2443]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 24 23:37:21.024751 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Apr 24 23:37:21.030465 auditctl[2447]: No rules Apr 24 23:37:21.031453 systemd[1]: audit-rules.service: Deactivated successfully. Apr 24 23:37:21.031923 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Apr 24 23:37:21.041254 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Apr 24 23:37:21.096892 augenrules[2466]: No rules Apr 24 23:37:21.100444 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Apr 24 23:37:21.104078 sudo[2443]: pam_unix(sudo:session): session closed for user root Apr 24 23:37:21.260610 sshd[2439]: pam_unix(sshd:session): session closed for user core Apr 24 23:37:21.266893 systemd[1]: sshd@5-172.31.21.128:22-20.229.252.112:35668.service: Deactivated successfully. Apr 24 23:37:21.273988 systemd-logind[2096]: Session 6 logged out. Waiting for processes to exit. Apr 24 23:37:21.274063 systemd[1]: session-6.scope: Deactivated successfully. Apr 24 23:37:21.277390 systemd-logind[2096]: Removed session 6. Apr 24 23:37:21.431725 systemd[1]: Started sshd@6-172.31.21.128:22-20.229.252.112:35678.service - OpenSSH per-connection server daemon (20.229.252.112:35678). 
Apr 24 23:37:22.419346 sshd[2475]: Accepted publickey for core from 20.229.252.112 port 35678 ssh2: RSA SHA256:EpOBCscCvamodiF49drNiIRDMxdv0LtYbixE7WaoRrA Apr 24 23:37:22.421094 sshd[2475]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 24 23:37:22.422899 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Apr 24 23:37:22.432798 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 24 23:37:22.438552 systemd-logind[2096]: New session 7 of user core. Apr 24 23:37:22.446792 systemd[1]: Started session-7.scope - Session 7 of User core. Apr 24 23:37:22.809629 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 24 23:37:22.824889 (kubelet)[2491]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 24 23:37:22.904622 kubelet[2491]: E0424 23:37:22.904532 2491 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 24 23:37:22.913626 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 24 23:37:22.914017 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 24 23:37:22.943249 sudo[2500]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Apr 24 23:37:22.943928 sudo[2500]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 24 23:37:23.536164 systemd[1]: Starting docker.service - Docker Application Container Engine... 
Apr 24 23:37:23.546981 (dockerd)[2515]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Apr 24 23:37:24.080655 dockerd[2515]: time="2026-04-24T23:37:24.080559776Z" level=info msg="Starting up" Apr 24 23:37:24.273997 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport1617829727-merged.mount: Deactivated successfully. Apr 24 23:37:24.390203 systemd[1]: var-lib-docker-metacopy\x2dcheck782399225-merged.mount: Deactivated successfully. Apr 24 23:37:24.410593 dockerd[2515]: time="2026-04-24T23:37:24.410237926Z" level=info msg="Loading containers: start." Apr 24 23:37:24.622318 kernel: Initializing XFRM netlink socket Apr 24 23:37:24.662926 (udev-worker)[2536]: Network interface NamePolicy= disabled on kernel command line. Apr 24 23:37:24.757931 systemd-networkd[1690]: docker0: Link UP Apr 24 23:37:24.786747 dockerd[2515]: time="2026-04-24T23:37:24.786665508Z" level=info msg="Loading containers: done." Apr 24 23:37:24.821533 dockerd[2515]: time="2026-04-24T23:37:24.821454840Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Apr 24 23:37:24.821739 dockerd[2515]: time="2026-04-24T23:37:24.821610288Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Apr 24 23:37:24.821834 dockerd[2515]: time="2026-04-24T23:37:24.821792124Z" level=info msg="Daemon has completed initialization" Apr 24 23:37:24.887240 dockerd[2515]: time="2026-04-24T23:37:24.886996392Z" level=info msg="API listen on /run/docker.sock" Apr 24 23:37:24.887535 systemd[1]: Started docker.service - Docker Application Container Engine. Apr 24 23:37:25.267975 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck218115959-merged.mount: Deactivated successfully. 
Apr 24 23:37:25.720014 containerd[2128]: time="2026-04-24T23:37:25.719960148Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.11\"" Apr 24 23:37:26.369893 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4073435341.mount: Deactivated successfully. Apr 24 23:37:27.884258 containerd[2128]: time="2026-04-24T23:37:27.884200227Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:37:27.887921 containerd[2128]: time="2026-04-24T23:37:27.887875599Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.11: active requests=0, bytes read=27008787" Apr 24 23:37:27.891392 containerd[2128]: time="2026-04-24T23:37:27.891347979Z" level=info msg="ImageCreate event name:\"sha256:51b83c5cb2f791f72696c040be904535bad3c81a6ffc19a55013ac150a24d9b0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:37:27.896949 containerd[2128]: time="2026-04-24T23:37:27.896875023Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:18e9f2b6e4d67c24941e14b2d41ec0aa6e5f628e39f2ef2163e176de85bbe39e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:37:27.899356 containerd[2128]: time="2026-04-24T23:37:27.899277711Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.11\" with image id \"sha256:51b83c5cb2f791f72696c040be904535bad3c81a6ffc19a55013ac150a24d9b0\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:18e9f2b6e4d67c24941e14b2d41ec0aa6e5f628e39f2ef2163e176de85bbe39e\", size \"27005386\" in 2.179256495s" Apr 24 23:37:27.899542 containerd[2128]: time="2026-04-24T23:37:27.899508159Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.11\" returns image reference \"sha256:51b83c5cb2f791f72696c040be904535bad3c81a6ffc19a55013ac150a24d9b0\"" Apr 24 23:37:27.900550 containerd[2128]: 
time="2026-04-24T23:37:27.900507039Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.11\"" Apr 24 23:37:29.335357 containerd[2128]: time="2026-04-24T23:37:29.335262146Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:37:29.338273 containerd[2128]: time="2026-04-24T23:37:29.338223062Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.11: active requests=0, bytes read=23297774" Apr 24 23:37:29.339554 containerd[2128]: time="2026-04-24T23:37:29.339487082Z" level=info msg="ImageCreate event name:\"sha256:df8bcecad66863646fb4016494163838761da38376bae5a7592e04041db8489a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:37:29.346349 containerd[2128]: time="2026-04-24T23:37:29.345732674Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:7579451c5b3c2715da4a263c5d80a3367a24fdc12e86fde6851674d567d1dfb2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:37:29.353965 containerd[2128]: time="2026-04-24T23:37:29.353903654Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.11\" with image id \"sha256:df8bcecad66863646fb4016494163838761da38376bae5a7592e04041db8489a\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.11\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:7579451c5b3c2715da4a263c5d80a3367a24fdc12e86fde6851674d567d1dfb2\", size \"24804413\" in 1.453179571s" Apr 24 23:37:29.354150 containerd[2128]: time="2026-04-24T23:37:29.354120782Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.11\" returns image reference \"sha256:df8bcecad66863646fb4016494163838761da38376bae5a7592e04041db8489a\"" Apr 24 23:37:29.355252 containerd[2128]: time="2026-04-24T23:37:29.355188422Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.11\"" Apr 24 
23:37:30.507177 containerd[2128]: time="2026-04-24T23:37:30.507089884Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:37:30.510194 containerd[2128]: time="2026-04-24T23:37:30.509705872Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.11: active requests=0, bytes read=18141358" Apr 24 23:37:30.512329 containerd[2128]: time="2026-04-24T23:37:30.512254396Z" level=info msg="ImageCreate event name:\"sha256:8c8e25fd00e5c108fb9ab5490c25bfaeb0231b1c59f749dab4f5300f1c49995b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:37:30.520325 containerd[2128]: time="2026-04-24T23:37:30.518615008Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:5506f0f94c4d9aeb071664893aabc12166bcb7f775008a6fff02d004e6091d28\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:37:30.521128 containerd[2128]: time="2026-04-24T23:37:30.521082316Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.11\" with image id \"sha256:8c8e25fd00e5c108fb9ab5490c25bfaeb0231b1c59f749dab4f5300f1c49995b\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:5506f0f94c4d9aeb071664893aabc12166bcb7f775008a6fff02d004e6091d28\", size \"19648015\" in 1.165829514s" Apr 24 23:37:30.521268 containerd[2128]: time="2026-04-24T23:37:30.521239732Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.11\" returns image reference \"sha256:8c8e25fd00e5c108fb9ab5490c25bfaeb0231b1c59f749dab4f5300f1c49995b\"" Apr 24 23:37:30.522575 containerd[2128]: time="2026-04-24T23:37:30.522520216Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.11\"" Apr 24 23:37:31.796428 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3608474617.mount: Deactivated successfully. 
Apr 24 23:37:32.440382 containerd[2128]: time="2026-04-24T23:37:32.440317710Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:37:32.447469 containerd[2128]: time="2026-04-24T23:37:32.447383946Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.11: active requests=0, bytes read=28040508" Apr 24 23:37:32.507481 containerd[2128]: time="2026-04-24T23:37:32.507420234Z" level=info msg="ImageCreate event name:\"sha256:7ce14d6fb1e5134a578d2aaa327fd701273e3d222b9b8d88054dd86b87a7dc36\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:37:32.574426 containerd[2128]: time="2026-04-24T23:37:32.573938298Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:8d18637b5c5f58a4ca0163d3cf184e53d4c522963c242860562be7cb25e9303e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:37:32.575919 containerd[2128]: time="2026-04-24T23:37:32.575400042Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.11\" with image id \"sha256:7ce14d6fb1e5134a578d2aaa327fd701273e3d222b9b8d88054dd86b87a7dc36\", repo tag \"registry.k8s.io/kube-proxy:v1.33.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:8d18637b5c5f58a4ca0163d3cf184e53d4c522963c242860562be7cb25e9303e\", size \"28039527\" in 2.052819862s" Apr 24 23:37:32.575919 containerd[2128]: time="2026-04-24T23:37:32.575460030Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.11\" returns image reference \"sha256:7ce14d6fb1e5134a578d2aaa327fd701273e3d222b9b8d88054dd86b87a7dc36\"" Apr 24 23:37:32.576462 containerd[2128]: time="2026-04-24T23:37:32.576415002Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Apr 24 23:37:33.097764 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Apr 24 23:37:33.111742 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Apr 24 23:37:33.150214 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount415663051.mount: Deactivated successfully.
Apr 24 23:37:33.517641 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 24 23:37:33.534520 (kubelet)[2751]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 24 23:37:33.654124 kubelet[2751]: E0424 23:37:33.652954 2751 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 24 23:37:33.673271 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 24 23:37:33.673758 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 24 23:37:34.424796 containerd[2128]: time="2026-04-24T23:37:34.424706084Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 24 23:37:34.427448 containerd[2128]: time="2026-04-24T23:37:34.427387892Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=19152117"
Apr 24 23:37:34.429522 containerd[2128]: time="2026-04-24T23:37:34.429437564Z" level=info msg="ImageCreate event name:\"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 24 23:37:34.436732 containerd[2128]: time="2026-04-24T23:37:34.436606940Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 24 23:37:34.439345 containerd[2128]: time="2026-04-24T23:37:34.439116392Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"19148915\" in 1.862533186s"
Apr 24 23:37:34.439345 containerd[2128]: time="2026-04-24T23:37:34.439173428Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\""
Apr 24 23:37:34.440373 containerd[2128]: time="2026-04-24T23:37:34.440308796Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Apr 24 23:37:34.924245 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4206267113.mount: Deactivated successfully.
Apr 24 23:37:34.931314 containerd[2128]: time="2026-04-24T23:37:34.931231558Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 24 23:37:34.932997 containerd[2128]: time="2026-04-24T23:37:34.932933398Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268703"
Apr 24 23:37:34.934699 containerd[2128]: time="2026-04-24T23:37:34.934087138Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 24 23:37:34.938323 containerd[2128]: time="2026-04-24T23:37:34.938204218Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 24 23:37:34.941502 containerd[2128]: time="2026-04-24T23:37:34.941436670Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 501.067142ms"
Apr 24 23:37:34.941663 containerd[2128]: time="2026-04-24T23:37:34.941497618Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\""
Apr 24 23:37:34.942324 containerd[2128]: time="2026-04-24T23:37:34.942138706Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\""
Apr 24 23:37:35.492600 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2488423659.mount: Deactivated successfully.
Apr 24 23:37:37.535049 containerd[2128]: time="2026-04-24T23:37:37.534700007Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.24-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 24 23:37:37.537746 containerd[2128]: time="2026-04-24T23:37:37.537670127Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.24-0: active requests=0, bytes read=21886366"
Apr 24 23:37:37.540493 containerd[2128]: time="2026-04-24T23:37:37.540433751Z" level=info msg="ImageCreate event name:\"sha256:1211402d28f5813ed906916bfcdd0a7404c2f9048ef5bb54387a6745bc410eca\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 24 23:37:37.547877 containerd[2128]: time="2026-04-24T23:37:37.547814639Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 24 23:37:37.558569 containerd[2128]: time="2026-04-24T23:37:37.557373503Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.24-0\" with image id \"sha256:1211402d28f5813ed906916bfcdd0a7404c2f9048ef5bb54387a6745bc410eca\", repo tag \"registry.k8s.io/etcd:3.5.24-0\", repo digest \"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\", size \"21882972\" in 2.615174617s"
Apr 24 23:37:37.558569 containerd[2128]: time="2026-04-24T23:37:37.557440823Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\" returns image reference \"sha256:1211402d28f5813ed906916bfcdd0a7404c2f9048ef5bb54387a6745bc410eca\""
Apr 24 23:37:39.656589 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Apr 24 23:37:43.894776 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Apr 24 23:37:43.906135 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 24 23:37:44.321549 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 24 23:37:44.341905 (kubelet)[2904]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 24 23:37:44.411758 kubelet[2904]: E0424 23:37:44.411688 2904 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 24 23:37:44.419621 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 24 23:37:44.420074 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 24 23:37:46.604163 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 24 23:37:46.612804 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 24 23:37:46.677675 systemd[1]: Reloading requested from client PID 2920 ('systemctl') (unit session-7.scope)...
Apr 24 23:37:46.677713 systemd[1]: Reloading...
Apr 24 23:37:46.865334 zram_generator::config[2960]: No configuration found.
Apr 24 23:37:47.173224 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 24 23:37:47.343934 systemd[1]: Reloading finished in 665 ms.
Apr 24 23:37:47.433041 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Apr 24 23:37:47.433942 systemd[1]: kubelet.service: Failed with result 'signal'.
Apr 24 23:37:47.434820 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 24 23:37:47.445116 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 24 23:37:47.745603 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 24 23:37:47.762996 (kubelet)[3035]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Apr 24 23:37:47.837433 kubelet[3035]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Apr 24 23:37:47.838007 kubelet[3035]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Apr 24 23:37:47.838091 kubelet[3035]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Apr 24 23:37:47.838332 kubelet[3035]: I0424 23:37:47.838265 3035 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Apr 24 23:37:48.465157 kubelet[3035]: I0424 23:37:48.465098 3035 server.go:530] "Kubelet version" kubeletVersion="v1.33.8"
Apr 24 23:37:48.466540 kubelet[3035]: I0424 23:37:48.466453 3035 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Apr 24 23:37:48.468314 kubelet[3035]: I0424 23:37:48.467320 3035 server.go:956] "Client rotation is on, will bootstrap in background"
Apr 24 23:37:48.514396 kubelet[3035]: I0424 23:37:48.514358 3035 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Apr 24 23:37:48.519436 kubelet[3035]: E0424 23:37:48.519350 3035 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://172.31.21.128:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.21.128:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Apr 24 23:37:48.530386 kubelet[3035]: E0424 23:37:48.530337 3035 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Apr 24 23:37:48.530599 kubelet[3035]: I0424 23:37:48.530578 3035 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Apr 24 23:37:48.536656 kubelet[3035]: I0424 23:37:48.536620 3035 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Apr 24 23:37:48.537628 kubelet[3035]: I0424 23:37:48.537580 3035 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Apr 24 23:37:48.537957 kubelet[3035]: I0424 23:37:48.537716 3035 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-21-128","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1}
Apr 24 23:37:48.538189 kubelet[3035]: I0424 23:37:48.538167 3035 topology_manager.go:138] "Creating topology manager with none policy"
Apr 24 23:37:48.538319 kubelet[3035]: I0424 23:37:48.538281 3035 container_manager_linux.go:303] "Creating device plugin manager"
Apr 24 23:37:48.538938 kubelet[3035]: I0424 23:37:48.538721 3035 state_mem.go:36] "Initialized new in-memory state store"
Apr 24 23:37:48.546181 kubelet[3035]: I0424 23:37:48.546148 3035 kubelet.go:480] "Attempting to sync node with API server"
Apr 24 23:37:48.546375 kubelet[3035]: I0424 23:37:48.546355 3035 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
Apr 24 23:37:48.546612 kubelet[3035]: I0424 23:37:48.546505 3035 kubelet.go:386] "Adding apiserver pod source"
Apr 24 23:37:48.548953 kubelet[3035]: I0424 23:37:48.548931 3035 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Apr 24 23:37:48.554337 kubelet[3035]: E0424 23:37:48.553424 3035 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.31.21.128:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-21-128&limit=500&resourceVersion=0\": dial tcp 172.31.21.128:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Apr 24 23:37:48.554337 kubelet[3035]: E0424 23:37:48.554239 3035 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.31.21.128:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.21.128:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Apr 24 23:37:48.554852 kubelet[3035]: I0424 23:37:48.554807 3035 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Apr 24 23:37:48.555972 kubelet[3035]: I0424 23:37:48.555912 3035 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Apr 24 23:37:48.556226 kubelet[3035]: W0424 23:37:48.556192 3035 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Apr 24 23:37:48.561444 kubelet[3035]: I0424 23:37:48.561362 3035 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Apr 24 23:37:48.561444 kubelet[3035]: I0424 23:37:48.561444 3035 server.go:1289] "Started kubelet"
Apr 24 23:37:48.566342 kubelet[3035]: I0424 23:37:48.566192 3035 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Apr 24 23:37:48.580335 kubelet[3035]: I0424 23:37:48.580277 3035 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Apr 24 23:37:48.580702 kubelet[3035]: I0424 23:37:48.580661 3035 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Apr 24 23:37:48.582108 kubelet[3035]: I0424 23:37:48.582032 3035 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Apr 24 23:37:48.589043 kubelet[3035]: I0424 23:37:48.589004 3035 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Apr 24 23:37:48.593628 kubelet[3035]: E0424 23:37:48.583732 3035 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Apr 24 23:37:48.593817 kubelet[3035]: I0424 23:37:48.585674 3035 volume_manager.go:297] "Starting Kubelet Volume Manager"
Apr 24 23:37:48.594421 kubelet[3035]: I0424 23:37:48.584470 3035 server.go:317] "Adding debug handlers to kubelet server"
Apr 24 23:37:48.595969 kubelet[3035]: E0424 23:37:48.585930 3035 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-21-128\" not found"
Apr 24 23:37:48.596150 kubelet[3035]: I0424 23:37:48.585709 3035 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Apr 24 23:37:48.596362 kubelet[3035]: I0424 23:37:48.596340 3035 reconciler.go:26] "Reconciler: start to sync state"
Apr 24 23:37:48.596750 kubelet[3035]: E0424 23:37:48.596707 3035 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.21.128:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-21-128?timeout=10s\": dial tcp 172.31.21.128:6443: connect: connection refused" interval="200ms"
Apr 24 23:37:48.599475 kubelet[3035]: E0424 23:37:48.597360 3035 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.21.128:6443/api/v1/namespaces/default/events\": dial tcp 172.31.21.128:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-21-128.18a96f44a33b3d8a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-21-128,UID:ip-172-31-21-128,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-21-128,},FirstTimestamp:2026-04-24 23:37:48.561399178 +0000 UTC m=+0.790502333,LastTimestamp:2026-04-24 23:37:48.561399178 +0000 UTC m=+0.790502333,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-21-128,}"
Apr 24 23:37:48.601515 kubelet[3035]: E0424 23:37:48.601036 3035 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.31.21.128:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.21.128:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Apr 24 23:37:48.602338 kubelet[3035]: I0424 23:37:48.602203 3035 factory.go:223] Registration of the containerd container factory successfully
Apr 24 23:37:48.602338 kubelet[3035]: I0424 23:37:48.602231 3035 factory.go:223] Registration of the systemd container factory successfully
Apr 24 23:37:48.604202 kubelet[3035]: I0424 23:37:48.602742 3035 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Apr 24 23:37:48.614980 kubelet[3035]: I0424 23:37:48.614900 3035 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
Apr 24 23:37:48.617064 kubelet[3035]: I0424 23:37:48.617002 3035 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
Apr 24 23:37:48.617064 kubelet[3035]: I0424 23:37:48.617065 3035 status_manager.go:230] "Starting to sync pod status with apiserver"
Apr 24 23:37:48.617259 kubelet[3035]: I0424 23:37:48.617109 3035 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Apr 24 23:37:48.617259 kubelet[3035]: I0424 23:37:48.617125 3035 kubelet.go:2436] "Starting kubelet main sync loop"
Apr 24 23:37:48.617259 kubelet[3035]: E0424 23:37:48.617193 3035 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Apr 24 23:37:48.643874 kubelet[3035]: E0424 23:37:48.643816 3035 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.31.21.128:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.21.128:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Apr 24 23:37:48.656630 kubelet[3035]: I0424 23:37:48.656599 3035 cpu_manager.go:221] "Starting CPU manager" policy="none"
Apr 24 23:37:48.656907 kubelet[3035]: I0424 23:37:48.656854 3035 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Apr 24 23:37:48.657083 kubelet[3035]: I0424 23:37:48.657065 3035 state_mem.go:36] "Initialized new in-memory state store"
Apr 24 23:37:48.659275 kubelet[3035]: I0424 23:37:48.659249 3035 policy_none.go:49] "None policy: Start"
Apr 24 23:37:48.659435 kubelet[3035]: I0424 23:37:48.659416 3035 memory_manager.go:186] "Starting memorymanager" policy="None"
Apr 24 23:37:48.659533 kubelet[3035]: I0424 23:37:48.659516 3035 state_mem.go:35] "Initializing new in-memory state store"
Apr 24 23:37:48.671945 kubelet[3035]: E0424 23:37:48.671863 3035 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Apr 24 23:37:48.674315 kubelet[3035]: I0424 23:37:48.672232 3035 eviction_manager.go:189] "Eviction manager: starting control loop"
Apr 24 23:37:48.674315 kubelet[3035]: I0424 23:37:48.672304 3035 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Apr 24 23:37:48.675365 kubelet[3035]: I0424 23:37:48.675308 3035 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Apr 24 23:37:48.678117 kubelet[3035]: E0424 23:37:48.678075 3035 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Apr 24 23:37:48.678376 kubelet[3035]: E0424 23:37:48.678140 3035 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-21-128\" not found"
Apr 24 23:37:48.734732 kubelet[3035]: E0424 23:37:48.734245 3035 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-21-128\" not found" node="ip-172-31-21-128"
Apr 24 23:37:48.738436 kubelet[3035]: E0424 23:37:48.737694 3035 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-21-128\" not found" node="ip-172-31-21-128"
Apr 24 23:37:48.747320 kubelet[3035]: E0424 23:37:48.744993 3035 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-21-128\" not found" node="ip-172-31-21-128"
Apr 24 23:37:48.775359 kubelet[3035]: I0424 23:37:48.775309 3035 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-21-128"
Apr 24 23:37:48.775867 kubelet[3035]: E0424 23:37:48.775805 3035 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.21.128:6443/api/v1/nodes\": dial tcp 172.31.21.128:6443: connect: connection refused" node="ip-172-31-21-128"
Apr 24 23:37:48.797456 kubelet[3035]: I0424 23:37:48.797419 3035 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3428cfbd1959e6908960005789e09488-kubeconfig\") pod \"kube-controller-manager-ip-172-31-21-128\" (UID: \"3428cfbd1959e6908960005789e09488\") " pod="kube-system/kube-controller-manager-ip-172-31-21-128"
Apr 24 23:37:48.797890 kubelet[3035]: E0424 23:37:48.797740 3035 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.21.128:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-21-128?timeout=10s\": dial tcp 172.31.21.128:6443: connect: connection refused" interval="400ms"
Apr 24 23:37:48.798001 kubelet[3035]: I0424 23:37:48.797856 3035 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/737851549fca76e1279dd5ab9df56cec-kubeconfig\") pod \"kube-scheduler-ip-172-31-21-128\" (UID: \"737851549fca76e1279dd5ab9df56cec\") " pod="kube-system/kube-scheduler-ip-172-31-21-128"
Apr 24 23:37:48.798188 kubelet[3035]: I0424 23:37:48.798151 3035 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/aa938e6a30a35fcac36fb1142a9f160e-ca-certs\") pod \"kube-apiserver-ip-172-31-21-128\" (UID: \"aa938e6a30a35fcac36fb1142a9f160e\") " pod="kube-system/kube-apiserver-ip-172-31-21-128"
Apr 24 23:37:48.798378 kubelet[3035]: I0424 23:37:48.798351 3035 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/aa938e6a30a35fcac36fb1142a9f160e-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-21-128\" (UID: \"aa938e6a30a35fcac36fb1142a9f160e\") " pod="kube-system/kube-apiserver-ip-172-31-21-128"
Apr 24 23:37:48.798529 kubelet[3035]: I0424 23:37:48.798506 3035 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/3428cfbd1959e6908960005789e09488-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-21-128\" (UID: \"3428cfbd1959e6908960005789e09488\") " pod="kube-system/kube-controller-manager-ip-172-31-21-128"
Apr 24 23:37:48.798709 kubelet[3035]: I0424 23:37:48.798672 3035 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3428cfbd1959e6908960005789e09488-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-21-128\" (UID: \"3428cfbd1959e6908960005789e09488\") " pod="kube-system/kube-controller-manager-ip-172-31-21-128"
Apr 24 23:37:48.798859 kubelet[3035]: I0424 23:37:48.798835 3035 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/aa938e6a30a35fcac36fb1142a9f160e-k8s-certs\") pod \"kube-apiserver-ip-172-31-21-128\" (UID: \"aa938e6a30a35fcac36fb1142a9f160e\") " pod="kube-system/kube-apiserver-ip-172-31-21-128"
Apr 24 23:37:48.799004 kubelet[3035]: I0424 23:37:48.798982 3035 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3428cfbd1959e6908960005789e09488-ca-certs\") pod \"kube-controller-manager-ip-172-31-21-128\" (UID: \"3428cfbd1959e6908960005789e09488\") " pod="kube-system/kube-controller-manager-ip-172-31-21-128"
Apr 24 23:37:48.799158 kubelet[3035]: I0424 23:37:48.799135 3035 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3428cfbd1959e6908960005789e09488-k8s-certs\") pod \"kube-controller-manager-ip-172-31-21-128\" (UID: \"3428cfbd1959e6908960005789e09488\") " pod="kube-system/kube-controller-manager-ip-172-31-21-128"
Apr 24 23:37:48.978500 kubelet[3035]: I0424 23:37:48.978454 3035 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-21-128"
Apr 24 23:37:48.979846 kubelet[3035]: E0424 23:37:48.979795 3035 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.21.128:6443/api/v1/nodes\": dial tcp 172.31.21.128:6443: connect: connection refused" node="ip-172-31-21-128"
Apr 24 23:37:49.036636 containerd[2128]: time="2026-04-24T23:37:49.036173132Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-21-128,Uid:aa938e6a30a35fcac36fb1142a9f160e,Namespace:kube-system,Attempt:0,}"
Apr 24 23:37:49.039695 containerd[2128]: time="2026-04-24T23:37:49.039632324Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-21-128,Uid:3428cfbd1959e6908960005789e09488,Namespace:kube-system,Attempt:0,}"
Apr 24 23:37:49.049128 containerd[2128]: time="2026-04-24T23:37:49.048656672Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-21-128,Uid:737851549fca76e1279dd5ab9df56cec,Namespace:kube-system,Attempt:0,}"
Apr 24 23:37:49.198899 kubelet[3035]: E0424 23:37:49.198847 3035 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.21.128:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-21-128?timeout=10s\": dial tcp 172.31.21.128:6443: connect: connection refused" interval="800ms"
Apr 24 23:37:49.382415 kubelet[3035]: I0424 23:37:49.381761 3035 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-21-128"
Apr 24 23:37:49.382415 kubelet[3035]: E0424 23:37:49.382209 3035 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.21.128:6443/api/v1/nodes\": dial tcp 172.31.21.128:6443: connect: connection refused" node="ip-172-31-21-128"
Apr 24 23:37:49.515017 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount193179850.mount: Deactivated successfully.
Apr 24 23:37:49.520043 containerd[2128]: time="2026-04-24T23:37:49.519513094Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Apr 24 23:37:49.523822 containerd[2128]: time="2026-04-24T23:37:49.523436243Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173"
Apr 24 23:37:49.524658 containerd[2128]: time="2026-04-24T23:37:49.524554703Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Apr 24 23:37:49.526484 containerd[2128]: time="2026-04-24T23:37:49.526425215Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Apr 24 23:37:49.528875 containerd[2128]: time="2026-04-24T23:37:49.528818195Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Apr 24 23:37:49.529577 containerd[2128]: time="2026-04-24T23:37:49.529354259Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Apr 24 23:37:49.530221 containerd[2128]: time="2026-04-24T23:37:49.530091707Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Apr 24 23:37:49.537894 containerd[2128]: time="2026-04-24T23:37:49.537812255Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Apr 24 23:37:49.541624 containerd[2128]: time="2026-04-24T23:37:49.541209371Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 501.454071ms"
Apr 24 23:37:49.544840 containerd[2128]: time="2026-04-24T23:37:49.544769147Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 496.005747ms"
Apr 24 23:37:49.546148 containerd[2128]: time="2026-04-24T23:37:49.546080615Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 509.798595ms"
Apr 24 23:37:49.573063 kubelet[3035]: E0424 23:37:49.571888 3035 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.31.21.128:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.21.128:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Apr 24 23:37:49.748628 containerd[2128]: time="2026-04-24T23:37:49.748445448Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 24 23:37:49.748628 containerd[2128]: time="2026-04-24T23:37:49.748551756Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 24 23:37:49.748934 containerd[2128]: time="2026-04-24T23:37:49.748604352Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 24 23:37:49.749260 containerd[2128]: time="2026-04-24T23:37:49.749184288Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 24 23:37:49.750359 containerd[2128]: time="2026-04-24T23:37:49.750061776Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 24 23:37:49.750359 containerd[2128]: time="2026-04-24T23:37:49.750161880Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 24 23:37:49.750359 containerd[2128]: time="2026-04-24T23:37:49.750199020Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 24 23:37:49.750900 containerd[2128]: time="2026-04-24T23:37:49.750778116Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 24 23:37:49.756675 containerd[2128]: time="2026-04-24T23:37:49.756522108Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 24 23:37:49.757059 containerd[2128]: time="2026-04-24T23:37:49.756643848Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 24 23:37:49.757059 containerd[2128]: time="2026-04-24T23:37:49.756914052Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..."
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:37:49.757933 containerd[2128]: time="2026-04-24T23:37:49.757567920Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:37:49.766063 kubelet[3035]: E0424 23:37:49.765899 3035 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.31.21.128:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.21.128:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 24 23:37:49.859888 kubelet[3035]: E0424 23:37:49.859807 3035 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.31.21.128:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.21.128:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 24 23:37:49.910738 containerd[2128]: time="2026-04-24T23:37:49.910625448Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-21-128,Uid:aa938e6a30a35fcac36fb1142a9f160e,Namespace:kube-system,Attempt:0,} returns sandbox id \"3865d5cf7305568ff3332c869227a0961185be9fda71bff7eda7fc7d49a8e6fa\"" Apr 24 23:37:49.922994 containerd[2128]: time="2026-04-24T23:37:49.922943568Z" level=info msg="CreateContainer within sandbox \"3865d5cf7305568ff3332c869227a0961185be9fda71bff7eda7fc7d49a8e6fa\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Apr 24 23:37:49.929412 containerd[2128]: time="2026-04-24T23:37:49.929343133Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-21-128,Uid:737851549fca76e1279dd5ab9df56cec,Namespace:kube-system,Attempt:0,} returns sandbox id \"3a5f821c09c7e338583fcd21a893146faada4c31f076d271cc961765e88b1fdf\"" Apr 24 
23:37:49.936962 kubelet[3035]: E0424 23:37:49.936868 3035 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.31.21.128:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-21-128&limit=500&resourceVersion=0\": dial tcp 172.31.21.128:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 24 23:37:49.939595 containerd[2128]: time="2026-04-24T23:37:49.939460489Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-21-128,Uid:3428cfbd1959e6908960005789e09488,Namespace:kube-system,Attempt:0,} returns sandbox id \"e8484fb998ade7652c3a986efaa22e71cb315d9707f768563979768bacecc27a\"" Apr 24 23:37:49.941023 containerd[2128]: time="2026-04-24T23:37:49.940914517Z" level=info msg="CreateContainer within sandbox \"3a5f821c09c7e338583fcd21a893146faada4c31f076d271cc961765e88b1fdf\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Apr 24 23:37:49.948872 containerd[2128]: time="2026-04-24T23:37:49.948818377Z" level=info msg="CreateContainer within sandbox \"e8484fb998ade7652c3a986efaa22e71cb315d9707f768563979768bacecc27a\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Apr 24 23:37:49.955527 containerd[2128]: time="2026-04-24T23:37:49.955057369Z" level=info msg="CreateContainer within sandbox \"3865d5cf7305568ff3332c869227a0961185be9fda71bff7eda7fc7d49a8e6fa\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"1b976279ecb0def99f2ad2dbd070da091bab7a461773707b365690bb9949a0bc\"" Apr 24 23:37:49.957027 containerd[2128]: time="2026-04-24T23:37:49.956978737Z" level=info msg="StartContainer for \"1b976279ecb0def99f2ad2dbd070da091bab7a461773707b365690bb9949a0bc\"" Apr 24 23:37:49.970654 containerd[2128]: time="2026-04-24T23:37:49.970598089Z" level=info msg="CreateContainer within sandbox \"3a5f821c09c7e338583fcd21a893146faada4c31f076d271cc961765e88b1fdf\" for 
&ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"ff1f5b823014dbd0d12cec3337b18d0916c50f97b61cf23be6c8fc764a913169\"" Apr 24 23:37:49.972819 containerd[2128]: time="2026-04-24T23:37:49.972749185Z" level=info msg="StartContainer for \"ff1f5b823014dbd0d12cec3337b18d0916c50f97b61cf23be6c8fc764a913169\"" Apr 24 23:37:49.973586 containerd[2128]: time="2026-04-24T23:37:49.973452661Z" level=info msg="CreateContainer within sandbox \"e8484fb998ade7652c3a986efaa22e71cb315d9707f768563979768bacecc27a\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"beeed2ab12d69f8f69b70b296415445c9dcffcdec4b67214483647fc29b3fed6\"" Apr 24 23:37:49.974338 containerd[2128]: time="2026-04-24T23:37:49.974226181Z" level=info msg="StartContainer for \"beeed2ab12d69f8f69b70b296415445c9dcffcdec4b67214483647fc29b3fed6\"" Apr 24 23:37:50.001553 kubelet[3035]: E0424 23:37:50.000512 3035 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.21.128:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-21-128?timeout=10s\": dial tcp 172.31.21.128:6443: connect: connection refused" interval="1.6s" Apr 24 23:37:50.155244 containerd[2128]: time="2026-04-24T23:37:50.153624274Z" level=info msg="StartContainer for \"1b976279ecb0def99f2ad2dbd070da091bab7a461773707b365690bb9949a0bc\" returns successfully" Apr 24 23:37:50.167247 containerd[2128]: time="2026-04-24T23:37:50.166001770Z" level=info msg="StartContainer for \"beeed2ab12d69f8f69b70b296415445c9dcffcdec4b67214483647fc29b3fed6\" returns successfully" Apr 24 23:37:50.194391 kubelet[3035]: I0424 23:37:50.193173 3035 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-21-128" Apr 24 23:37:50.194391 kubelet[3035]: E0424 23:37:50.193691 3035 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.21.128:6443/api/v1/nodes\": dial tcp 172.31.21.128:6443: connect: connection refused" 
node="ip-172-31-21-128" Apr 24 23:37:50.252063 containerd[2128]: time="2026-04-24T23:37:50.251534374Z" level=info msg="StartContainer for \"ff1f5b823014dbd0d12cec3337b18d0916c50f97b61cf23be6c8fc764a913169\" returns successfully" Apr 24 23:37:50.669412 kubelet[3035]: E0424 23:37:50.669354 3035 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-21-128\" not found" node="ip-172-31-21-128" Apr 24 23:37:50.679322 kubelet[3035]: E0424 23:37:50.678819 3035 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-21-128\" not found" node="ip-172-31-21-128" Apr 24 23:37:50.683075 kubelet[3035]: E0424 23:37:50.683038 3035 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-21-128\" not found" node="ip-172-31-21-128" Apr 24 23:37:51.687319 kubelet[3035]: E0424 23:37:51.684890 3035 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-21-128\" not found" node="ip-172-31-21-128" Apr 24 23:37:51.690961 kubelet[3035]: E0424 23:37:51.689120 3035 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-21-128\" not found" node="ip-172-31-21-128" Apr 24 23:37:51.798321 kubelet[3035]: I0424 23:37:51.797514 3035 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-21-128" Apr 24 23:37:52.686322 kubelet[3035]: E0424 23:37:52.683860 3035 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-21-128\" not found" node="ip-172-31-21-128" Apr 24 23:37:52.957446 kubelet[3035]: E0424 23:37:52.957250 3035 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-21-128\" not found" node="ip-172-31-21-128" Apr 24 23:37:52.962680 
kubelet[3035]: I0424 23:37:52.962558 3035 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-21-128" Apr 24 23:37:52.962844 kubelet[3035]: E0424 23:37:52.962791 3035 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ip-172-31-21-128\": node \"ip-172-31-21-128\" not found" Apr 24 23:37:52.986485 kubelet[3035]: I0424 23:37:52.986427 3035 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-21-128" Apr 24 23:37:53.032997 kubelet[3035]: E0424 23:37:53.032926 3035 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-21-128\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ip-172-31-21-128" Apr 24 23:37:53.032997 kubelet[3035]: I0424 23:37:53.032988 3035 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-21-128" Apr 24 23:37:53.043343 kubelet[3035]: E0424 23:37:53.043161 3035 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ip-172-31-21-128\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ip-172-31-21-128" Apr 24 23:37:53.043343 kubelet[3035]: I0424 23:37:53.043212 3035 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-21-128" Apr 24 23:37:53.049319 kubelet[3035]: E0424 23:37:53.046911 3035 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-21-128\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ip-172-31-21-128" Apr 24 23:37:53.275062 kubelet[3035]: I0424 23:37:53.274899 3035 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-21-128" Apr 24 23:37:53.280149 kubelet[3035]: E0424 23:37:53.280083 3035 kubelet.go:3311] "Failed 
creating a mirror pod" err="pods \"kube-controller-manager-ip-172-31-21-128\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ip-172-31-21-128" Apr 24 23:37:53.558034 kubelet[3035]: I0424 23:37:53.557258 3035 apiserver.go:52] "Watching apiserver" Apr 24 23:37:53.597083 kubelet[3035]: I0424 23:37:53.597027 3035 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Apr 24 23:37:53.766449 update_engine[2098]: I20260424 23:37:53.766359 2098 update_attempter.cc:509] Updating boot flags... Apr 24 23:37:53.914380 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 36 scanned by (udev-worker) (3334) Apr 24 23:37:54.411084 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 36 scanned by (udev-worker) (3334) Apr 24 23:37:55.248933 systemd[1]: Reloading requested from client PID 3504 ('systemctl') (unit session-7.scope)... Apr 24 23:37:55.249450 systemd[1]: Reloading... Apr 24 23:37:55.423340 zram_generator::config[3547]: No configuration found. Apr 24 23:37:55.660201 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 24 23:37:55.851493 systemd[1]: Reloading finished in 601 ms. Apr 24 23:37:55.914489 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Apr 24 23:37:55.931992 systemd[1]: kubelet.service: Deactivated successfully. Apr 24 23:37:55.932611 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 24 23:37:55.944158 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 24 23:37:56.276697 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Apr 24 23:37:56.291131 (kubelet)[3614]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 24 23:37:56.386941 kubelet[3614]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 24 23:37:56.386941 kubelet[3614]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Apr 24 23:37:56.386941 kubelet[3614]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 24 23:37:56.386941 kubelet[3614]: I0424 23:37:56.386136 3614 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 24 23:37:56.397714 kubelet[3614]: I0424 23:37:56.397633 3614 server.go:530] "Kubelet version" kubeletVersion="v1.33.8" Apr 24 23:37:56.397714 kubelet[3614]: I0424 23:37:56.397697 3614 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 24 23:37:56.398340 kubelet[3614]: I0424 23:37:56.398108 3614 server.go:956] "Client rotation is on, will bootstrap in background" Apr 24 23:37:56.400844 kubelet[3614]: I0424 23:37:56.400794 3614 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Apr 24 23:37:56.411352 kubelet[3614]: I0424 23:37:56.410252 3614 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 24 23:37:56.421349 kubelet[3614]: E0424 23:37:56.421238 3614 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = 
Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Apr 24 23:37:56.422005 kubelet[3614]: I0424 23:37:56.421971 3614 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Apr 24 23:37:56.427241 kubelet[3614]: I0424 23:37:56.427190 3614 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Apr 24 23:37:56.428150 kubelet[3614]: I0424 23:37:56.428082 3614 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 24 23:37:56.428433 kubelet[3614]: I0424 23:37:56.428135 3614 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-21-128","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUMana
gerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Apr 24 23:37:56.428595 kubelet[3614]: I0424 23:37:56.428437 3614 topology_manager.go:138] "Creating topology manager with none policy" Apr 24 23:37:56.428595 kubelet[3614]: I0424 23:37:56.428458 3614 container_manager_linux.go:303] "Creating device plugin manager" Apr 24 23:37:56.428595 kubelet[3614]: I0424 23:37:56.428545 3614 state_mem.go:36] "Initialized new in-memory state store" Apr 24 23:37:56.428814 kubelet[3614]: I0424 23:37:56.428788 3614 kubelet.go:480] "Attempting to sync node with API server" Apr 24 23:37:56.428889 kubelet[3614]: I0424 23:37:56.428828 3614 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 24 23:37:56.428889 kubelet[3614]: I0424 23:37:56.428879 3614 kubelet.go:386] "Adding apiserver pod source" Apr 24 23:37:56.428983 kubelet[3614]: I0424 23:37:56.428910 3614 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 24 23:37:56.436769 sudo[3628]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Apr 24 23:37:56.440185 kubelet[3614]: I0424 23:37:56.437804 3614 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Apr 24 23:37:56.440185 kubelet[3614]: I0424 23:37:56.439689 3614 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Apr 24 23:37:56.438726 sudo[3628]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Apr 24 23:37:56.447028 kubelet[3614]: I0424 23:37:56.446418 3614 watchdog_linux.go:99] "Systemd watchdog 
is not enabled" Apr 24 23:37:56.447028 kubelet[3614]: I0424 23:37:56.446476 3614 server.go:1289] "Started kubelet" Apr 24 23:37:56.473609 kubelet[3614]: I0424 23:37:56.473542 3614 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Apr 24 23:37:56.474172 kubelet[3614]: I0424 23:37:56.474145 3614 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 24 23:37:56.478330 kubelet[3614]: I0424 23:37:56.478067 3614 server.go:317] "Adding debug handlers to kubelet server" Apr 24 23:37:56.495106 kubelet[3614]: I0424 23:37:56.494792 3614 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 24 23:37:56.498384 kubelet[3614]: I0424 23:37:56.496818 3614 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 24 23:37:56.498384 kubelet[3614]: I0424 23:37:56.496981 3614 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Apr 24 23:37:56.503749 kubelet[3614]: I0424 23:37:56.503567 3614 volume_manager.go:297] "Starting Kubelet Volume Manager" Apr 24 23:37:56.510851 kubelet[3614]: I0424 23:37:56.510786 3614 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Apr 24 23:37:56.511267 kubelet[3614]: I0424 23:37:56.511247 3614 reconciler.go:26] "Reconciler: start to sync state" Apr 24 23:37:56.533722 kubelet[3614]: I0424 23:37:56.532690 3614 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 24 23:37:56.550343 kubelet[3614]: E0424 23:37:56.550278 3614 kubelet.go:1600] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 24 23:37:56.554855 kubelet[3614]: I0424 23:37:56.553076 3614 factory.go:223] Registration of the containerd container factory successfully Apr 24 23:37:56.554855 kubelet[3614]: I0424 23:37:56.554789 3614 factory.go:223] Registration of the systemd container factory successfully Apr 24 23:37:56.586631 kubelet[3614]: I0424 23:37:56.586403 3614 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Apr 24 23:37:56.589698 kubelet[3614]: I0424 23:37:56.589643 3614 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Apr 24 23:37:56.589698 kubelet[3614]: I0424 23:37:56.589704 3614 status_manager.go:230] "Starting to sync pod status with apiserver" Apr 24 23:37:56.589906 kubelet[3614]: I0424 23:37:56.589740 3614 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Apr 24 23:37:56.589906 kubelet[3614]: I0424 23:37:56.589755 3614 kubelet.go:2436] "Starting kubelet main sync loop" Apr 24 23:37:56.589906 kubelet[3614]: E0424 23:37:56.589818 3614 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 24 23:37:56.691042 kubelet[3614]: E0424 23:37:56.690531 3614 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Apr 24 23:37:56.735362 kubelet[3614]: I0424 23:37:56.734475 3614 cpu_manager.go:221] "Starting CPU manager" policy="none" Apr 24 23:37:56.735362 kubelet[3614]: I0424 23:37:56.734507 3614 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Apr 24 23:37:56.735362 kubelet[3614]: I0424 23:37:56.734543 3614 state_mem.go:36] "Initialized new in-memory state store" Apr 24 23:37:56.735362 kubelet[3614]: I0424 23:37:56.734766 3614 state_mem.go:88] "Updated default CPUSet" cpuSet="" Apr 24 
23:37:56.735362 kubelet[3614]: I0424 23:37:56.734786 3614 state_mem.go:96] "Updated CPUSet assignments" assignments={} Apr 24 23:37:56.735362 kubelet[3614]: I0424 23:37:56.734818 3614 policy_none.go:49] "None policy: Start" Apr 24 23:37:56.735362 kubelet[3614]: I0424 23:37:56.734835 3614 memory_manager.go:186] "Starting memorymanager" policy="None" Apr 24 23:37:56.735362 kubelet[3614]: I0424 23:37:56.734855 3614 state_mem.go:35] "Initializing new in-memory state store" Apr 24 23:37:56.735362 kubelet[3614]: I0424 23:37:56.735013 3614 state_mem.go:75] "Updated machine memory state" Apr 24 23:37:56.746519 kubelet[3614]: E0424 23:37:56.744700 3614 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Apr 24 23:37:56.746519 kubelet[3614]: I0424 23:37:56.744969 3614 eviction_manager.go:189] "Eviction manager: starting control loop" Apr 24 23:37:56.746519 kubelet[3614]: I0424 23:37:56.744988 3614 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 24 23:37:56.747268 kubelet[3614]: I0424 23:37:56.747230 3614 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 24 23:37:56.753561 kubelet[3614]: E0424 23:37:56.753510 3614 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Apr 24 23:37:56.860830 kubelet[3614]: I0424 23:37:56.860690 3614 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-21-128" Apr 24 23:37:56.874610 kubelet[3614]: I0424 23:37:56.874572 3614 kubelet_node_status.go:124] "Node was previously registered" node="ip-172-31-21-128" Apr 24 23:37:56.875702 kubelet[3614]: I0424 23:37:56.875669 3614 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-21-128" Apr 24 23:37:56.893501 kubelet[3614]: I0424 23:37:56.893203 3614 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-21-128" Apr 24 23:37:56.897620 kubelet[3614]: I0424 23:37:56.895344 3614 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-21-128" Apr 24 23:37:56.898423 kubelet[3614]: I0424 23:37:56.896246 3614 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-21-128" Apr 24 23:37:56.920108 kubelet[3614]: I0424 23:37:56.919849 3614 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/aa938e6a30a35fcac36fb1142a9f160e-k8s-certs\") pod \"kube-apiserver-ip-172-31-21-128\" (UID: \"aa938e6a30a35fcac36fb1142a9f160e\") " pod="kube-system/kube-apiserver-ip-172-31-21-128" Apr 24 23:37:56.920108 kubelet[3614]: I0424 23:37:56.919930 3614 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/aa938e6a30a35fcac36fb1142a9f160e-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-21-128\" (UID: \"aa938e6a30a35fcac36fb1142a9f160e\") " pod="kube-system/kube-apiserver-ip-172-31-21-128" Apr 24 23:37:56.920108 kubelet[3614]: I0424 23:37:56.919971 3614 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3428cfbd1959e6908960005789e09488-ca-certs\") pod \"kube-controller-manager-ip-172-31-21-128\" (UID: \"3428cfbd1959e6908960005789e09488\") " pod="kube-system/kube-controller-manager-ip-172-31-21-128" Apr 24 23:37:56.920108 kubelet[3614]: I0424 23:37:56.920009 3614 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3428cfbd1959e6908960005789e09488-k8s-certs\") pod \"kube-controller-manager-ip-172-31-21-128\" (UID: \"3428cfbd1959e6908960005789e09488\") " pod="kube-system/kube-controller-manager-ip-172-31-21-128" Apr 24 23:37:56.920108 kubelet[3614]: I0424 23:37:56.920046 3614 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3428cfbd1959e6908960005789e09488-kubeconfig\") pod \"kube-controller-manager-ip-172-31-21-128\" (UID: \"3428cfbd1959e6908960005789e09488\") " pod="kube-system/kube-controller-manager-ip-172-31-21-128" Apr 24 23:37:56.922584 kubelet[3614]: I0424 23:37:56.920082 3614 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3428cfbd1959e6908960005789e09488-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-21-128\" (UID: \"3428cfbd1959e6908960005789e09488\") " pod="kube-system/kube-controller-manager-ip-172-31-21-128" Apr 24 23:37:56.922584 kubelet[3614]: I0424 23:37:56.920120 3614 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/737851549fca76e1279dd5ab9df56cec-kubeconfig\") pod \"kube-scheduler-ip-172-31-21-128\" (UID: \"737851549fca76e1279dd5ab9df56cec\") " pod="kube-system/kube-scheduler-ip-172-31-21-128" Apr 24 23:37:56.922584 kubelet[3614]: I0424 23:37:56.920157 3614 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/aa938e6a30a35fcac36fb1142a9f160e-ca-certs\") pod \"kube-apiserver-ip-172-31-21-128\" (UID: \"aa938e6a30a35fcac36fb1142a9f160e\") " pod="kube-system/kube-apiserver-ip-172-31-21-128" Apr 24 23:37:56.922584 kubelet[3614]: I0424 23:37:56.920196 3614 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/3428cfbd1959e6908960005789e09488-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-21-128\" (UID: \"3428cfbd1959e6908960005789e09488\") " pod="kube-system/kube-controller-manager-ip-172-31-21-128" Apr 24 23:37:57.434248 kubelet[3614]: I0424 23:37:57.434183 3614 apiserver.go:52] "Watching apiserver" Apr 24 23:37:57.505609 sudo[3628]: pam_unix(sudo:session): session closed for user root Apr 24 23:37:57.512611 kubelet[3614]: I0424 23:37:57.511475 3614 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Apr 24 23:37:57.637758 kubelet[3614]: I0424 23:37:57.637703 3614 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-21-128" Apr 24 23:37:57.650322 kubelet[3614]: E0424 23:37:57.650205 3614 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-21-128\" already exists" pod="kube-system/kube-apiserver-ip-172-31-21-128" Apr 24 23:37:57.686357 kubelet[3614]: I0424 23:37:57.686126 3614 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-21-128" podStartSLOduration=1.6861082870000001 podStartE2EDuration="1.686108287s" podCreationTimestamp="2026-04-24 23:37:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-24 23:37:57.685706215 +0000 UTC m=+1.382052848" 
watchObservedRunningTime="2026-04-24 23:37:57.686108287 +0000 UTC m=+1.382454920" Apr 24 23:37:57.730824 kubelet[3614]: I0424 23:37:57.730491 3614 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-21-128" podStartSLOduration=1.730467751 podStartE2EDuration="1.730467751s" podCreationTimestamp="2026-04-24 23:37:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-24 23:37:57.706969327 +0000 UTC m=+1.403315972" watchObservedRunningTime="2026-04-24 23:37:57.730467751 +0000 UTC m=+1.426814396" Apr 24 23:37:57.749438 kubelet[3614]: I0424 23:37:57.747307 3614 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-21-128" podStartSLOduration=1.747267643 podStartE2EDuration="1.747267643s" podCreationTimestamp="2026-04-24 23:37:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-24 23:37:57.731084563 +0000 UTC m=+1.427431196" watchObservedRunningTime="2026-04-24 23:37:57.747267643 +0000 UTC m=+1.443614288" Apr 24 23:38:00.680429 sudo[2500]: pam_unix(sudo:session): session closed for user root Apr 24 23:38:00.840744 sshd[2475]: pam_unix(sshd:session): session closed for user core Apr 24 23:38:00.849780 systemd-logind[2096]: Session 7 logged out. Waiting for processes to exit. Apr 24 23:38:00.852462 systemd[1]: sshd@6-172.31.21.128:22-20.229.252.112:35678.service: Deactivated successfully. Apr 24 23:38:00.860750 systemd[1]: session-7.scope: Deactivated successfully. Apr 24 23:38:00.863776 systemd-logind[2096]: Removed session 7. 
Apr 24 23:38:01.727897 kubelet[3614]: I0424 23:38:01.727377 3614 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Apr 24 23:38:01.728567 kubelet[3614]: I0424 23:38:01.728282 3614 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Apr 24 23:38:01.728630 containerd[2128]: time="2026-04-24T23:38:01.727904771Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Apr 24 23:38:02.459248 kubelet[3614]: I0424 23:38:02.459187 3614 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/7169284b-7636-44ec-822f-6441435f2375-bpf-maps\") pod \"cilium-zdhg7\" (UID: \"7169284b-7636-44ec-822f-6441435f2375\") " pod="kube-system/cilium-zdhg7" Apr 24 23:38:02.459537 kubelet[3614]: I0424 23:38:02.459271 3614 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/7169284b-7636-44ec-822f-6441435f2375-hostproc\") pod \"cilium-zdhg7\" (UID: \"7169284b-7636-44ec-822f-6441435f2375\") " pod="kube-system/cilium-zdhg7" Apr 24 23:38:02.459537 kubelet[3614]: I0424 23:38:02.459333 3614 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7169284b-7636-44ec-822f-6441435f2375-lib-modules\") pod \"cilium-zdhg7\" (UID: \"7169284b-7636-44ec-822f-6441435f2375\") " pod="kube-system/cilium-zdhg7" Apr 24 23:38:02.459537 kubelet[3614]: I0424 23:38:02.459381 3614 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/7169284b-7636-44ec-822f-6441435f2375-clustermesh-secrets\") pod \"cilium-zdhg7\" (UID: \"7169284b-7636-44ec-822f-6441435f2375\") " pod="kube-system/cilium-zdhg7" Apr 24 
23:38:02.459537 kubelet[3614]: I0424 23:38:02.459419 3614 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7169284b-7636-44ec-822f-6441435f2375-cilium-config-path\") pod \"cilium-zdhg7\" (UID: \"7169284b-7636-44ec-822f-6441435f2375\") " pod="kube-system/cilium-zdhg7" Apr 24 23:38:02.459537 kubelet[3614]: I0424 23:38:02.459457 3614 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e109d8e6-7be4-45b9-8532-52f834d39534-xtables-lock\") pod \"kube-proxy-xrf2k\" (UID: \"e109d8e6-7be4-45b9-8532-52f834d39534\") " pod="kube-system/kube-proxy-xrf2k" Apr 24 23:38:02.459537 kubelet[3614]: I0424 23:38:02.459498 3614 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/7169284b-7636-44ec-822f-6441435f2375-cilium-cgroup\") pod \"cilium-zdhg7\" (UID: \"7169284b-7636-44ec-822f-6441435f2375\") " pod="kube-system/cilium-zdhg7" Apr 24 23:38:02.459838 kubelet[3614]: I0424 23:38:02.459531 3614 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7169284b-7636-44ec-822f-6441435f2375-etc-cni-netd\") pod \"cilium-zdhg7\" (UID: \"7169284b-7636-44ec-822f-6441435f2375\") " pod="kube-system/cilium-zdhg7" Apr 24 23:38:02.459838 kubelet[3614]: I0424 23:38:02.459568 3614 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7169284b-7636-44ec-822f-6441435f2375-xtables-lock\") pod \"cilium-zdhg7\" (UID: \"7169284b-7636-44ec-822f-6441435f2375\") " pod="kube-system/cilium-zdhg7" Apr 24 23:38:02.459838 kubelet[3614]: I0424 23:38:02.459606 3614 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/7169284b-7636-44ec-822f-6441435f2375-host-proc-sys-net\") pod \"cilium-zdhg7\" (UID: \"7169284b-7636-44ec-822f-6441435f2375\") " pod="kube-system/cilium-zdhg7" Apr 24 23:38:02.459838 kubelet[3614]: I0424 23:38:02.459640 3614 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/7169284b-7636-44ec-822f-6441435f2375-host-proc-sys-kernel\") pod \"cilium-zdhg7\" (UID: \"7169284b-7636-44ec-822f-6441435f2375\") " pod="kube-system/cilium-zdhg7" Apr 24 23:38:02.459838 kubelet[3614]: I0424 23:38:02.459689 3614 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n8b2c\" (UniqueName: \"kubernetes.io/projected/7169284b-7636-44ec-822f-6441435f2375-kube-api-access-n8b2c\") pod \"cilium-zdhg7\" (UID: \"7169284b-7636-44ec-822f-6441435f2375\") " pod="kube-system/cilium-zdhg7" Apr 24 23:38:02.462060 kubelet[3614]: I0424 23:38:02.459722 3614 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e109d8e6-7be4-45b9-8532-52f834d39534-lib-modules\") pod \"kube-proxy-xrf2k\" (UID: \"e109d8e6-7be4-45b9-8532-52f834d39534\") " pod="kube-system/kube-proxy-xrf2k" Apr 24 23:38:02.462060 kubelet[3614]: I0424 23:38:02.461352 3614 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/7169284b-7636-44ec-822f-6441435f2375-cilium-run\") pod \"cilium-zdhg7\" (UID: \"7169284b-7636-44ec-822f-6441435f2375\") " pod="kube-system/cilium-zdhg7" Apr 24 23:38:02.462060 kubelet[3614]: I0424 23:38:02.461436 3614 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: 
\"kubernetes.io/configmap/e109d8e6-7be4-45b9-8532-52f834d39534-kube-proxy\") pod \"kube-proxy-xrf2k\" (UID: \"e109d8e6-7be4-45b9-8532-52f834d39534\") " pod="kube-system/kube-proxy-xrf2k" Apr 24 23:38:02.462060 kubelet[3614]: I0424 23:38:02.461508 3614 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-csn47\" (UniqueName: \"kubernetes.io/projected/e109d8e6-7be4-45b9-8532-52f834d39534-kube-api-access-csn47\") pod \"kube-proxy-xrf2k\" (UID: \"e109d8e6-7be4-45b9-8532-52f834d39534\") " pod="kube-system/kube-proxy-xrf2k" Apr 24 23:38:02.462060 kubelet[3614]: I0424 23:38:02.461590 3614 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/7169284b-7636-44ec-822f-6441435f2375-cni-path\") pod \"cilium-zdhg7\" (UID: \"7169284b-7636-44ec-822f-6441435f2375\") " pod="kube-system/cilium-zdhg7" Apr 24 23:38:02.462060 kubelet[3614]: I0424 23:38:02.461637 3614 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/7169284b-7636-44ec-822f-6441435f2375-hubble-tls\") pod \"cilium-zdhg7\" (UID: \"7169284b-7636-44ec-822f-6441435f2375\") " pod="kube-system/cilium-zdhg7" Apr 24 23:38:02.607408 kubelet[3614]: E0424 23:38:02.607283 3614 projected.go:289] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Apr 24 23:38:02.607408 kubelet[3614]: E0424 23:38:02.607395 3614 projected.go:194] Error preparing data for projected volume kube-api-access-csn47 for pod kube-system/kube-proxy-xrf2k: configmap "kube-root-ca.crt" not found Apr 24 23:38:02.607605 kubelet[3614]: E0424 23:38:02.607516 3614 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e109d8e6-7be4-45b9-8532-52f834d39534-kube-api-access-csn47 podName:e109d8e6-7be4-45b9-8532-52f834d39534 nodeName:}" failed. 
No retries permitted until 2026-04-24 23:38:03.107481807 +0000 UTC m=+6.803828428 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-csn47" (UniqueName: "kubernetes.io/projected/e109d8e6-7be4-45b9-8532-52f834d39534-kube-api-access-csn47") pod "kube-proxy-xrf2k" (UID: "e109d8e6-7be4-45b9-8532-52f834d39534") : configmap "kube-root-ca.crt" not found Apr 24 23:38:02.629599 kubelet[3614]: E0424 23:38:02.629542 3614 projected.go:289] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Apr 24 23:38:02.629599 kubelet[3614]: E0424 23:38:02.629600 3614 projected.go:194] Error preparing data for projected volume kube-api-access-n8b2c for pod kube-system/cilium-zdhg7: configmap "kube-root-ca.crt" not found Apr 24 23:38:02.629831 kubelet[3614]: E0424 23:38:02.629701 3614 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7169284b-7636-44ec-822f-6441435f2375-kube-api-access-n8b2c podName:7169284b-7636-44ec-822f-6441435f2375 nodeName:}" failed. No retries permitted until 2026-04-24 23:38:03.129672508 +0000 UTC m=+6.826019141 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-n8b2c" (UniqueName: "kubernetes.io/projected/7169284b-7636-44ec-822f-6441435f2375-kube-api-access-n8b2c") pod "cilium-zdhg7" (UID: "7169284b-7636-44ec-822f-6441435f2375") : configmap "kube-root-ca.crt" not found Apr 24 23:38:02.969381 kubelet[3614]: I0424 23:38:02.969318 3614 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/005db5f8-31b0-41e8-8d4a-a2214f94ee2f-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-bxvwj\" (UID: \"005db5f8-31b0-41e8-8d4a-a2214f94ee2f\") " pod="kube-system/cilium-operator-6c4d7847fc-bxvwj" Apr 24 23:38:02.973044 kubelet[3614]: I0424 23:38:02.969388 3614 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mj6j9\" (UniqueName: \"kubernetes.io/projected/005db5f8-31b0-41e8-8d4a-a2214f94ee2f-kube-api-access-mj6j9\") pod \"cilium-operator-6c4d7847fc-bxvwj\" (UID: \"005db5f8-31b0-41e8-8d4a-a2214f94ee2f\") " pod="kube-system/cilium-operator-6c4d7847fc-bxvwj" Apr 24 23:38:03.267234 containerd[2128]: time="2026-04-24T23:38:03.267106835Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-bxvwj,Uid:005db5f8-31b0-41e8-8d4a-a2214f94ee2f,Namespace:kube-system,Attempt:0,}" Apr 24 23:38:03.314832 containerd[2128]: time="2026-04-24T23:38:03.314172263Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 24 23:38:03.314832 containerd[2128]: time="2026-04-24T23:38:03.314267099Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 24 23:38:03.315166 containerd[2128]: time="2026-04-24T23:38:03.314565347Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:38:03.315166 containerd[2128]: time="2026-04-24T23:38:03.314749223Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:38:03.341261 containerd[2128]: time="2026-04-24T23:38:03.340721675Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-xrf2k,Uid:e109d8e6-7be4-45b9-8532-52f834d39534,Namespace:kube-system,Attempt:0,}" Apr 24 23:38:03.364406 containerd[2128]: time="2026-04-24T23:38:03.363776003Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-zdhg7,Uid:7169284b-7636-44ec-822f-6441435f2375,Namespace:kube-system,Attempt:0,}" Apr 24 23:38:03.416224 containerd[2128]: time="2026-04-24T23:38:03.415759008Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 24 23:38:03.416224 containerd[2128]: time="2026-04-24T23:38:03.415890084Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 24 23:38:03.416224 containerd[2128]: time="2026-04-24T23:38:03.415944156Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:38:03.416224 containerd[2128]: time="2026-04-24T23:38:03.416136156Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:38:03.432237 containerd[2128]: time="2026-04-24T23:38:03.432118824Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-bxvwj,Uid:005db5f8-31b0-41e8-8d4a-a2214f94ee2f,Namespace:kube-system,Attempt:0,} returns sandbox id \"69f3b9ff93ee3af3c92471fecbb18366f437877e376d278f022790fde13a7a26\"" Apr 24 23:38:03.436416 containerd[2128]: time="2026-04-24T23:38:03.436132176Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Apr 24 23:38:03.448576 containerd[2128]: time="2026-04-24T23:38:03.448341816Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 24 23:38:03.448726 containerd[2128]: time="2026-04-24T23:38:03.448554180Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 24 23:38:03.448988 containerd[2128]: time="2026-04-24T23:38:03.448702824Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:38:03.450499 containerd[2128]: time="2026-04-24T23:38:03.450280044Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:38:03.532872 containerd[2128]: time="2026-04-24T23:38:03.532562064Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-xrf2k,Uid:e109d8e6-7be4-45b9-8532-52f834d39534,Namespace:kube-system,Attempt:0,} returns sandbox id \"a9f3af4e7c6bce7ec077f6133070e0bb8a0a4ca8efe61a0d61d005592d7a62ab\"" Apr 24 23:38:03.545628 containerd[2128]: time="2026-04-24T23:38:03.545579400Z" level=info msg="CreateContainer within sandbox \"a9f3af4e7c6bce7ec077f6133070e0bb8a0a4ca8efe61a0d61d005592d7a62ab\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Apr 24 23:38:03.550874 containerd[2128]: time="2026-04-24T23:38:03.550666644Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-zdhg7,Uid:7169284b-7636-44ec-822f-6441435f2375,Namespace:kube-system,Attempt:0,} returns sandbox id \"90b020bdb3fa553ad4be8ac87db323607ae61cb8004822eaf9bf788dd9be6ff9\"" Apr 24 23:38:03.592249 containerd[2128]: time="2026-04-24T23:38:03.588851808Z" level=info msg="CreateContainer within sandbox \"a9f3af4e7c6bce7ec077f6133070e0bb8a0a4ca8efe61a0d61d005592d7a62ab\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"c37aec133b4c6fda7ad2009f132c3b0326ce96999bdbb61d90ac72d28f7d2144\"" Apr 24 23:38:03.593677 containerd[2128]: time="2026-04-24T23:38:03.593614452Z" level=info msg="StartContainer for \"c37aec133b4c6fda7ad2009f132c3b0326ce96999bdbb61d90ac72d28f7d2144\"" Apr 24 23:38:03.660729 systemd[1]: run-containerd-runc-k8s.io-c37aec133b4c6fda7ad2009f132c3b0326ce96999bdbb61d90ac72d28f7d2144-runc.b5aIkv.mount: Deactivated successfully. 
Apr 24 23:38:03.727560 containerd[2128]: time="2026-04-24T23:38:03.727487725Z" level=info msg="StartContainer for \"c37aec133b4c6fda7ad2009f132c3b0326ce96999bdbb61d90ac72d28f7d2144\" returns successfully" Apr 24 23:38:04.708605 kubelet[3614]: I0424 23:38:04.708510 3614 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-xrf2k" podStartSLOduration=2.7084866979999997 podStartE2EDuration="2.708486698s" podCreationTimestamp="2026-04-24 23:38:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-24 23:38:04.693569894 +0000 UTC m=+8.389916527" watchObservedRunningTime="2026-04-24 23:38:04.708486698 +0000 UTC m=+8.404833331" Apr 24 23:38:04.747040 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3731525670.mount: Deactivated successfully. Apr 24 23:38:08.574709 containerd[2128]: time="2026-04-24T23:38:08.574620773Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:38:08.576961 containerd[2128]: time="2026-04-24T23:38:08.576911501Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306" Apr 24 23:38:08.580344 containerd[2128]: time="2026-04-24T23:38:08.579432317Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:38:08.582212 containerd[2128]: time="2026-04-24T23:38:08.582160349Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id 
\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 5.145924241s" Apr 24 23:38:08.582431 containerd[2128]: time="2026-04-24T23:38:08.582397493Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Apr 24 23:38:08.588684 containerd[2128]: time="2026-04-24T23:38:08.587026361Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Apr 24 23:38:08.596470 containerd[2128]: time="2026-04-24T23:38:08.596417717Z" level=info msg="CreateContainer within sandbox \"69f3b9ff93ee3af3c92471fecbb18366f437877e376d278f022790fde13a7a26\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Apr 24 23:38:08.624961 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount17355840.mount: Deactivated successfully. Apr 24 23:38:08.635758 containerd[2128]: time="2026-04-24T23:38:08.635696225Z" level=info msg="CreateContainer within sandbox \"69f3b9ff93ee3af3c92471fecbb18366f437877e376d278f022790fde13a7a26\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"9bc8d831b4e8b6eeb87eafca865e85733c583ca0604e544d741decb8f082bcd7\"" Apr 24 23:38:08.637981 containerd[2128]: time="2026-04-24T23:38:08.637900733Z" level=info msg="StartContainer for \"9bc8d831b4e8b6eeb87eafca865e85733c583ca0604e544d741decb8f082bcd7\"" Apr 24 23:38:08.688178 systemd[1]: run-containerd-runc-k8s.io-9bc8d831b4e8b6eeb87eafca865e85733c583ca0604e544d741decb8f082bcd7-runc.LHHS1T.mount: Deactivated successfully. 
Apr 24 23:38:08.759163 containerd[2128]: time="2026-04-24T23:38:08.759071058Z" level=info msg="StartContainer for \"9bc8d831b4e8b6eeb87eafca865e85733c583ca0604e544d741decb8f082bcd7\" returns successfully" Apr 24 23:38:15.031515 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4130186963.mount: Deactivated successfully. Apr 24 23:38:17.751030 containerd[2128]: time="2026-04-24T23:38:17.749414031Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:38:17.751914 containerd[2128]: time="2026-04-24T23:38:17.751826523Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710" Apr 24 23:38:17.753651 containerd[2128]: time="2026-04-24T23:38:17.753611427Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:38:17.758473 containerd[2128]: time="2026-04-24T23:38:17.758390859Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 9.16913581s" Apr 24 23:38:17.758638 containerd[2128]: time="2026-04-24T23:38:17.758478759Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Apr 24 23:38:17.769181 containerd[2128]: time="2026-04-24T23:38:17.769121535Z" level=info 
msg="CreateContainer within sandbox \"90b020bdb3fa553ad4be8ac87db323607ae61cb8004822eaf9bf788dd9be6ff9\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Apr 24 23:38:17.803586 containerd[2128]: time="2026-04-24T23:38:17.803531847Z" level=info msg="CreateContainer within sandbox \"90b020bdb3fa553ad4be8ac87db323607ae61cb8004822eaf9bf788dd9be6ff9\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"74fbc0c733582ee52c57958f387549ba5866e757a432c884a52072379e77c90a\"" Apr 24 23:38:17.804818 containerd[2128]: time="2026-04-24T23:38:17.804727947Z" level=info msg="StartContainer for \"74fbc0c733582ee52c57958f387549ba5866e757a432c884a52072379e77c90a\"" Apr 24 23:38:17.861265 systemd[1]: run-containerd-runc-k8s.io-74fbc0c733582ee52c57958f387549ba5866e757a432c884a52072379e77c90a-runc.xVBJwm.mount: Deactivated successfully. Apr 24 23:38:17.914681 containerd[2128]: time="2026-04-24T23:38:17.914517004Z" level=info msg="StartContainer for \"74fbc0c733582ee52c57958f387549ba5866e757a432c884a52072379e77c90a\" returns successfully" Apr 24 23:38:18.791957 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-74fbc0c733582ee52c57958f387549ba5866e757a432c884a52072379e77c90a-rootfs.mount: Deactivated successfully. 
Apr 24 23:38:18.805772 kubelet[3614]: I0424 23:38:18.805682 3614 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-bxvwj" podStartSLOduration=11.654626807 podStartE2EDuration="16.805662004s" podCreationTimestamp="2026-04-24 23:38:02 +0000 UTC" firstStartedPulling="2026-04-24 23:38:03.43508214 +0000 UTC m=+7.131428761" lastFinishedPulling="2026-04-24 23:38:08.586117253 +0000 UTC m=+12.282463958" observedRunningTime="2026-04-24 23:38:09.830825191 +0000 UTC m=+13.527171836" watchObservedRunningTime="2026-04-24 23:38:18.805662004 +0000 UTC m=+22.502008637" Apr 24 23:38:18.857326 containerd[2128]: time="2026-04-24T23:38:18.857225908Z" level=info msg="shim disconnected" id=74fbc0c733582ee52c57958f387549ba5866e757a432c884a52072379e77c90a namespace=k8s.io Apr 24 23:38:18.857326 containerd[2128]: time="2026-04-24T23:38:18.857317228Z" level=warning msg="cleaning up after shim disconnected" id=74fbc0c733582ee52c57958f387549ba5866e757a432c884a52072379e77c90a namespace=k8s.io Apr 24 23:38:18.858172 containerd[2128]: time="2026-04-24T23:38:18.857341900Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 24 23:38:19.784362 containerd[2128]: time="2026-04-24T23:38:19.784042613Z" level=info msg="CreateContainer within sandbox \"90b020bdb3fa553ad4be8ac87db323607ae61cb8004822eaf9bf788dd9be6ff9\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Apr 24 23:38:19.820065 containerd[2128]: time="2026-04-24T23:38:19.819167033Z" level=info msg="CreateContainer within sandbox \"90b020bdb3fa553ad4be8ac87db323607ae61cb8004822eaf9bf788dd9be6ff9\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"886915a092105852cf1b479873ffe398f25b0e37642bd513c00aa5ea4d2c0eda\"" Apr 24 23:38:19.822394 containerd[2128]: time="2026-04-24T23:38:19.821557517Z" level=info msg="StartContainer for \"886915a092105852cf1b479873ffe398f25b0e37642bd513c00aa5ea4d2c0eda\"" Apr 24 
23:38:19.879736 systemd[1]: run-containerd-runc-k8s.io-886915a092105852cf1b479873ffe398f25b0e37642bd513c00aa5ea4d2c0eda-runc.VVDRFC.mount: Deactivated successfully. Apr 24 23:38:19.933258 containerd[2128]: time="2026-04-24T23:38:19.933039846Z" level=info msg="StartContainer for \"886915a092105852cf1b479873ffe398f25b0e37642bd513c00aa5ea4d2c0eda\" returns successfully" Apr 24 23:38:19.962410 systemd[1]: systemd-sysctl.service: Deactivated successfully. Apr 24 23:38:19.963045 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Apr 24 23:38:19.963162 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Apr 24 23:38:19.973928 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Apr 24 23:38:20.023971 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Apr 24 23:38:20.029330 containerd[2128]: time="2026-04-24T23:38:20.027171386Z" level=info msg="shim disconnected" id=886915a092105852cf1b479873ffe398f25b0e37642bd513c00aa5ea4d2c0eda namespace=k8s.io Apr 24 23:38:20.029330 containerd[2128]: time="2026-04-24T23:38:20.027244490Z" level=warning msg="cleaning up after shim disconnected" id=886915a092105852cf1b479873ffe398f25b0e37642bd513c00aa5ea4d2c0eda namespace=k8s.io Apr 24 23:38:20.029330 containerd[2128]: time="2026-04-24T23:38:20.027265286Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 24 23:38:20.788800 containerd[2128]: time="2026-04-24T23:38:20.788731926Z" level=info msg="CreateContainer within sandbox \"90b020bdb3fa553ad4be8ac87db323607ae61cb8004822eaf9bf788dd9be6ff9\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Apr 24 23:38:20.807738 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-886915a092105852cf1b479873ffe398f25b0e37642bd513c00aa5ea4d2c0eda-rootfs.mount: Deactivated successfully. 
Apr 24 23:38:20.839149 containerd[2128]: time="2026-04-24T23:38:20.838955922Z" level=info msg="CreateContainer within sandbox \"90b020bdb3fa553ad4be8ac87db323607ae61cb8004822eaf9bf788dd9be6ff9\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"29a99647ddab79fa417b8917707b6f1d548596d152aa0bcb0868b784163f2803\"" Apr 24 23:38:20.840123 containerd[2128]: time="2026-04-24T23:38:20.839989650Z" level=info msg="StartContainer for \"29a99647ddab79fa417b8917707b6f1d548596d152aa0bcb0868b784163f2803\"" Apr 24 23:38:20.957439 containerd[2128]: time="2026-04-24T23:38:20.956607343Z" level=info msg="StartContainer for \"29a99647ddab79fa417b8917707b6f1d548596d152aa0bcb0868b784163f2803\" returns successfully" Apr 24 23:38:21.007875 containerd[2128]: time="2026-04-24T23:38:21.007771299Z" level=info msg="shim disconnected" id=29a99647ddab79fa417b8917707b6f1d548596d152aa0bcb0868b784163f2803 namespace=k8s.io Apr 24 23:38:21.007875 containerd[2128]: time="2026-04-24T23:38:21.007865979Z" level=warning msg="cleaning up after shim disconnected" id=29a99647ddab79fa417b8917707b6f1d548596d152aa0bcb0868b784163f2803 namespace=k8s.io Apr 24 23:38:21.008240 containerd[2128]: time="2026-04-24T23:38:21.007888899Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 24 23:38:21.796983 containerd[2128]: time="2026-04-24T23:38:21.796892359Z" level=info msg="CreateContainer within sandbox \"90b020bdb3fa553ad4be8ac87db323607ae61cb8004822eaf9bf788dd9be6ff9\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Apr 24 23:38:21.806520 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-29a99647ddab79fa417b8917707b6f1d548596d152aa0bcb0868b784163f2803-rootfs.mount: Deactivated successfully. 
Apr 24 23:38:21.833969 containerd[2128]: time="2026-04-24T23:38:21.833899207Z" level=info msg="CreateContainer within sandbox \"90b020bdb3fa553ad4be8ac87db323607ae61cb8004822eaf9bf788dd9be6ff9\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"ca3654fd3f4e16d9f0899b8c36d312458d6136726f39f417a63c7bab7e822ab3\"" Apr 24 23:38:21.837142 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount669837803.mount: Deactivated successfully. Apr 24 23:38:21.841044 containerd[2128]: time="2026-04-24T23:38:21.840220339Z" level=info msg="StartContainer for \"ca3654fd3f4e16d9f0899b8c36d312458d6136726f39f417a63c7bab7e822ab3\"" Apr 24 23:38:21.946532 containerd[2128]: time="2026-04-24T23:38:21.946460120Z" level=info msg="StartContainer for \"ca3654fd3f4e16d9f0899b8c36d312458d6136726f39f417a63c7bab7e822ab3\" returns successfully" Apr 24 23:38:21.988722 containerd[2128]: time="2026-04-24T23:38:21.988637036Z" level=info msg="shim disconnected" id=ca3654fd3f4e16d9f0899b8c36d312458d6136726f39f417a63c7bab7e822ab3 namespace=k8s.io Apr 24 23:38:21.988722 containerd[2128]: time="2026-04-24T23:38:21.988712000Z" level=warning msg="cleaning up after shim disconnected" id=ca3654fd3f4e16d9f0899b8c36d312458d6136726f39f417a63c7bab7e822ab3 namespace=k8s.io Apr 24 23:38:21.989758 containerd[2128]: time="2026-04-24T23:38:21.988734956Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 24 23:38:22.807076 containerd[2128]: time="2026-04-24T23:38:22.806994488Z" level=info msg="CreateContainer within sandbox \"90b020bdb3fa553ad4be8ac87db323607ae61cb8004822eaf9bf788dd9be6ff9\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Apr 24 23:38:22.807537 systemd[1]: run-containerd-runc-k8s.io-ca3654fd3f4e16d9f0899b8c36d312458d6136726f39f417a63c7bab7e822ab3-runc.ey1TRd.mount: Deactivated successfully. 
Apr 24 23:38:22.807867 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ca3654fd3f4e16d9f0899b8c36d312458d6136726f39f417a63c7bab7e822ab3-rootfs.mount: Deactivated successfully. Apr 24 23:38:22.851149 containerd[2128]: time="2026-04-24T23:38:22.851070320Z" level=info msg="CreateContainer within sandbox \"90b020bdb3fa553ad4be8ac87db323607ae61cb8004822eaf9bf788dd9be6ff9\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"623e732be6b7b857b1c4342789e0f1680e65c2218f78d5083d900529291a67d8\"" Apr 24 23:38:22.852752 containerd[2128]: time="2026-04-24T23:38:22.852570944Z" level=info msg="StartContainer for \"623e732be6b7b857b1c4342789e0f1680e65c2218f78d5083d900529291a67d8\"" Apr 24 23:38:22.974674 containerd[2128]: time="2026-04-24T23:38:22.974489145Z" level=info msg="StartContainer for \"623e732be6b7b857b1c4342789e0f1680e65c2218f78d5083d900529291a67d8\" returns successfully" Apr 24 23:38:23.147981 kubelet[3614]: I0424 23:38:23.146711 3614 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Apr 24 23:38:23.233628 kubelet[3614]: I0424 23:38:23.233569 3614 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x9lmm\" (UniqueName: \"kubernetes.io/projected/b95990a8-f203-4567-9d83-2665765e060f-kube-api-access-x9lmm\") pod \"coredns-674b8bbfcf-v9ddn\" (UID: \"b95990a8-f203-4567-9d83-2665765e060f\") " pod="kube-system/coredns-674b8bbfcf-v9ddn" Apr 24 23:38:23.234892 kubelet[3614]: I0424 23:38:23.233831 3614 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b95990a8-f203-4567-9d83-2665765e060f-config-volume\") pod \"coredns-674b8bbfcf-v9ddn\" (UID: \"b95990a8-f203-4567-9d83-2665765e060f\") " pod="kube-system/coredns-674b8bbfcf-v9ddn" Apr 24 23:38:23.335586 kubelet[3614]: I0424 23:38:23.335532 3614 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/71d95971-fd31-45e7-a37a-2abc402c113f-config-volume\") pod \"coredns-674b8bbfcf-tqj2n\" (UID: \"71d95971-fd31-45e7-a37a-2abc402c113f\") " pod="kube-system/coredns-674b8bbfcf-tqj2n" Apr 24 23:38:23.338881 kubelet[3614]: I0424 23:38:23.336363 3614 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rb55k\" (UniqueName: \"kubernetes.io/projected/71d95971-fd31-45e7-a37a-2abc402c113f-kube-api-access-rb55k\") pod \"coredns-674b8bbfcf-tqj2n\" (UID: \"71d95971-fd31-45e7-a37a-2abc402c113f\") " pod="kube-system/coredns-674b8bbfcf-tqj2n" Apr 24 23:38:23.525633 containerd[2128]: time="2026-04-24T23:38:23.525486031Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-v9ddn,Uid:b95990a8-f203-4567-9d83-2665765e060f,Namespace:kube-system,Attempt:0,}" Apr 24 23:38:23.537256 containerd[2128]: time="2026-04-24T23:38:23.536779831Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-tqj2n,Uid:71d95971-fd31-45e7-a37a-2abc402c113f,Namespace:kube-system,Attempt:0,}" Apr 24 23:38:23.835232 systemd[1]: run-containerd-runc-k8s.io-623e732be6b7b857b1c4342789e0f1680e65c2218f78d5083d900529291a67d8-runc.fMmSV5.mount: Deactivated successfully. Apr 24 23:38:25.915342 systemd-networkd[1690]: cilium_host: Link UP Apr 24 23:38:25.916765 (udev-worker)[4456]: Network interface NamePolicy= disabled on kernel command line. Apr 24 23:38:25.921343 systemd-networkd[1690]: cilium_net: Link UP Apr 24 23:38:25.921354 systemd-networkd[1690]: cilium_net: Gained carrier Apr 24 23:38:25.921712 systemd-networkd[1690]: cilium_host: Gained carrier Apr 24 23:38:25.922234 systemd-networkd[1690]: cilium_host: Gained IPv6LL Apr 24 23:38:25.923207 (udev-worker)[4421]: Network interface NamePolicy= disabled on kernel command line. 
Apr 24 23:38:26.088241 (udev-worker)[4476]: Network interface NamePolicy= disabled on kernel command line. Apr 24 23:38:26.098673 systemd-networkd[1690]: cilium_vxlan: Link UP Apr 24 23:38:26.098689 systemd-networkd[1690]: cilium_vxlan: Gained carrier Apr 24 23:38:26.112458 systemd-networkd[1690]: cilium_net: Gained IPv6LL Apr 24 23:38:26.670512 kernel: NET: Registered PF_ALG protocol family Apr 24 23:38:27.999681 systemd-networkd[1690]: cilium_vxlan: Gained IPv6LL Apr 24 23:38:28.040982 systemd-networkd[1690]: lxc_health: Link UP Apr 24 23:38:28.041563 systemd-networkd[1690]: lxc_health: Gained carrier Apr 24 23:38:28.650242 systemd-networkd[1690]: lxcf3092b25c7cf: Link UP Apr 24 23:38:28.661347 kernel: eth0: renamed from tmp10da7 Apr 24 23:38:28.682136 systemd-networkd[1690]: lxcf3092b25c7cf: Gained carrier Apr 24 23:38:28.728283 systemd-networkd[1690]: lxc61d952046b94: Link UP Apr 24 23:38:28.771394 kernel: eth0: renamed from tmp16cad Apr 24 23:38:28.776549 systemd-networkd[1690]: lxc61d952046b94: Gained carrier Apr 24 23:38:29.404220 kubelet[3614]: I0424 23:38:29.403020 3614 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-zdhg7" podStartSLOduration=13.196609366 podStartE2EDuration="27.402999769s" podCreationTimestamp="2026-04-24 23:38:02 +0000 UTC" firstStartedPulling="2026-04-24 23:38:03.553196568 +0000 UTC m=+7.249543201" lastFinishedPulling="2026-04-24 23:38:17.759586983 +0000 UTC m=+21.455933604" observedRunningTime="2026-04-24 23:38:23.866934153 +0000 UTC m=+27.563280954" watchObservedRunningTime="2026-04-24 23:38:29.402999769 +0000 UTC m=+33.099346402" Apr 24 23:38:29.407541 systemd-networkd[1690]: lxc_health: Gained IPv6LL Apr 24 23:38:30.111541 systemd-networkd[1690]: lxc61d952046b94: Gained IPv6LL Apr 24 23:38:30.559515 systemd-networkd[1690]: lxcf3092b25c7cf: Gained IPv6LL Apr 24 23:38:33.094613 ntpd[2083]: Listen normally on 6 cilium_host 192.168.0.45:123 Apr 24 23:38:33.096347 ntpd[2083]: 24 Apr 23:38:33 
ntpd[2083]: Listen normally on 6 cilium_host 192.168.0.45:123 Apr 24 23:38:33.096347 ntpd[2083]: 24 Apr 23:38:33 ntpd[2083]: Listen normally on 7 cilium_net [fe80::d409:6fff:fef6:1eb9%4]:123 Apr 24 23:38:33.096347 ntpd[2083]: 24 Apr 23:38:33 ntpd[2083]: Listen normally on 8 cilium_host [fe80::420:c7ff:fe6b:f35c%5]:123 Apr 24 23:38:33.096347 ntpd[2083]: 24 Apr 23:38:33 ntpd[2083]: Listen normally on 9 cilium_vxlan [fe80::c80:37ff:fe30:6116%6]:123 Apr 24 23:38:33.096347 ntpd[2083]: 24 Apr 23:38:33 ntpd[2083]: Listen normally on 10 lxc_health [fe80::e4fc:d3ff:fe6b:cc0a%8]:123 Apr 24 23:38:33.096347 ntpd[2083]: 24 Apr 23:38:33 ntpd[2083]: Listen normally on 11 lxcf3092b25c7cf [fe80::fcb8:b2ff:fe52:b730%10]:123 Apr 24 23:38:33.096347 ntpd[2083]: 24 Apr 23:38:33 ntpd[2083]: Listen normally on 12 lxc61d952046b94 [fe80::9446:d4ff:fe29:d180%12]:123 Apr 24 23:38:33.094755 ntpd[2083]: Listen normally on 7 cilium_net [fe80::d409:6fff:fef6:1eb9%4]:123 Apr 24 23:38:33.094840 ntpd[2083]: Listen normally on 8 cilium_host [fe80::420:c7ff:fe6b:f35c%5]:123 Apr 24 23:38:33.094911 ntpd[2083]: Listen normally on 9 cilium_vxlan [fe80::c80:37ff:fe30:6116%6]:123 Apr 24 23:38:33.094979 ntpd[2083]: Listen normally on 10 lxc_health [fe80::e4fc:d3ff:fe6b:cc0a%8]:123 Apr 24 23:38:33.095047 ntpd[2083]: Listen normally on 11 lxcf3092b25c7cf [fe80::fcb8:b2ff:fe52:b730%10]:123 Apr 24 23:38:33.095124 ntpd[2083]: Listen normally on 12 lxc61d952046b94 [fe80::9446:d4ff:fe29:d180%12]:123 Apr 24 23:38:37.017693 containerd[2128]: time="2026-04-24T23:38:37.017091594Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 24 23:38:37.020824 containerd[2128]: time="2026-04-24T23:38:37.019322838Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 24 23:38:37.025063 containerd[2128]: time="2026-04-24T23:38:37.021583674Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 24 23:38:37.025063 containerd[2128]: time="2026-04-24T23:38:37.021643362Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:38:37.025063 containerd[2128]: time="2026-04-24T23:38:37.022548282Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:38:37.035163 containerd[2128]: time="2026-04-24T23:38:37.033386286Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 24 23:38:37.035163 containerd[2128]: time="2026-04-24T23:38:37.033439062Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:38:37.035163 containerd[2128]: time="2026-04-24T23:38:37.033657126Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:38:37.239329 containerd[2128]: time="2026-04-24T23:38:37.237960140Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-tqj2n,Uid:71d95971-fd31-45e7-a37a-2abc402c113f,Namespace:kube-system,Attempt:0,} returns sandbox id \"16cadbc13fcc673d3dc40366d1bf2998b0831a8473f4524cf593de2584402ed4\"" Apr 24 23:38:37.263358 containerd[2128]: time="2026-04-24T23:38:37.261997580Z" level=info msg="CreateContainer within sandbox \"16cadbc13fcc673d3dc40366d1bf2998b0831a8473f4524cf593de2584402ed4\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 24 23:38:37.289795 containerd[2128]: time="2026-04-24T23:38:37.289650848Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-v9ddn,Uid:b95990a8-f203-4567-9d83-2665765e060f,Namespace:kube-system,Attempt:0,} returns sandbox id \"10da758bf5c53e45572b0b0daf993840e3748d5d6a10bbb73f2bd7842782bccb\"" Apr 24 23:38:37.309730 containerd[2128]: time="2026-04-24T23:38:37.307933328Z" level=info msg="CreateContainer within sandbox \"10da758bf5c53e45572b0b0daf993840e3748d5d6a10bbb73f2bd7842782bccb\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 24 23:38:37.309730 containerd[2128]: time="2026-04-24T23:38:37.309011804Z" level=info msg="CreateContainer within sandbox \"16cadbc13fcc673d3dc40366d1bf2998b0831a8473f4524cf593de2584402ed4\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"ddfef44a9916d13ccc69a56d31c662aa4f39ae1160d324f439c1ff4aab0d800a\"" Apr 24 23:38:37.312999 containerd[2128]: time="2026-04-24T23:38:37.312662804Z" level=info msg="StartContainer for \"ddfef44a9916d13ccc69a56d31c662aa4f39ae1160d324f439c1ff4aab0d800a\"" Apr 24 23:38:37.358320 containerd[2128]: time="2026-04-24T23:38:37.356686688Z" level=info msg="CreateContainer within sandbox \"10da758bf5c53e45572b0b0daf993840e3748d5d6a10bbb73f2bd7842782bccb\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container 
id \"7f44d4afdd44cbffa3ec19d084376869b9591537eb794a7f20faf480fcd86ae0\"" Apr 24 23:38:37.361919 containerd[2128]: time="2026-04-24T23:38:37.361781864Z" level=info msg="StartContainer for \"7f44d4afdd44cbffa3ec19d084376869b9591537eb794a7f20faf480fcd86ae0\"" Apr 24 23:38:37.487860 containerd[2128]: time="2026-04-24T23:38:37.487807545Z" level=info msg="StartContainer for \"ddfef44a9916d13ccc69a56d31c662aa4f39ae1160d324f439c1ff4aab0d800a\" returns successfully" Apr 24 23:38:37.517172 containerd[2128]: time="2026-04-24T23:38:37.516845925Z" level=info msg="StartContainer for \"7f44d4afdd44cbffa3ec19d084376869b9591537eb794a7f20faf480fcd86ae0\" returns successfully" Apr 24 23:38:37.915803 kubelet[3614]: I0424 23:38:37.915537 3614 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-tqj2n" podStartSLOduration=35.915513131 podStartE2EDuration="35.915513131s" podCreationTimestamp="2026-04-24 23:38:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-24 23:38:37.892992383 +0000 UTC m=+41.589339112" watchObservedRunningTime="2026-04-24 23:38:37.915513131 +0000 UTC m=+41.611859752" Apr 24 23:38:37.952968 kubelet[3614]: I0424 23:38:37.951566 3614 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-v9ddn" podStartSLOduration=35.951539831 podStartE2EDuration="35.951539831s" podCreationTimestamp="2026-04-24 23:38:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-24 23:38:37.919127099 +0000 UTC m=+41.615473720" watchObservedRunningTime="2026-04-24 23:38:37.951539831 +0000 UTC m=+41.647886512" Apr 24 23:38:47.521254 systemd[1]: Started sshd@7-172.31.21.128:22-20.229.252.112:41166.service - OpenSSH per-connection server daemon (20.229.252.112:41166). 
Apr 24 23:38:48.555332 sshd[4996]: Accepted publickey for core from 20.229.252.112 port 41166 ssh2: RSA SHA256:EpOBCscCvamodiF49drNiIRDMxdv0LtYbixE7WaoRrA Apr 24 23:38:48.557848 sshd[4996]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 24 23:38:48.566064 systemd-logind[2096]: New session 8 of user core. Apr 24 23:38:48.571854 systemd[1]: Started session-8.scope - Session 8 of User core. Apr 24 23:38:49.405278 sshd[4996]: pam_unix(sshd:session): session closed for user core Apr 24 23:38:49.413537 systemd-logind[2096]: Session 8 logged out. Waiting for processes to exit. Apr 24 23:38:49.414193 systemd[1]: sshd@7-172.31.21.128:22-20.229.252.112:41166.service: Deactivated successfully. Apr 24 23:38:49.421203 systemd[1]: session-8.scope: Deactivated successfully. Apr 24 23:38:49.423867 systemd-logind[2096]: Removed session 8. Apr 24 23:38:54.569764 systemd[1]: Started sshd@8-172.31.21.128:22-20.229.252.112:41182.service - OpenSSH per-connection server daemon (20.229.252.112:41182). Apr 24 23:38:55.567949 sshd[5011]: Accepted publickey for core from 20.229.252.112 port 41182 ssh2: RSA SHA256:EpOBCscCvamodiF49drNiIRDMxdv0LtYbixE7WaoRrA Apr 24 23:38:55.570983 sshd[5011]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 24 23:38:55.580124 systemd-logind[2096]: New session 9 of user core. Apr 24 23:38:55.588928 systemd[1]: Started session-9.scope - Session 9 of User core. Apr 24 23:38:56.361628 sshd[5011]: pam_unix(sshd:session): session closed for user core Apr 24 23:38:56.369032 systemd[1]: sshd@8-172.31.21.128:22-20.229.252.112:41182.service: Deactivated successfully. Apr 24 23:38:56.376085 systemd[1]: session-9.scope: Deactivated successfully. Apr 24 23:38:56.378640 systemd-logind[2096]: Session 9 logged out. Waiting for processes to exit. Apr 24 23:38:56.380598 systemd-logind[2096]: Removed session 9. 
Apr 24 23:39:01.538838 systemd[1]: Started sshd@9-172.31.21.128:22-20.229.252.112:39392.service - OpenSSH per-connection server daemon (20.229.252.112:39392). Apr 24 23:39:02.574959 sshd[5027]: Accepted publickey for core from 20.229.252.112 port 39392 ssh2: RSA SHA256:EpOBCscCvamodiF49drNiIRDMxdv0LtYbixE7WaoRrA Apr 24 23:39:02.577808 sshd[5027]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 24 23:39:02.585692 systemd-logind[2096]: New session 10 of user core. Apr 24 23:39:02.592078 systemd[1]: Started session-10.scope - Session 10 of User core. Apr 24 23:39:03.393729 sshd[5027]: pam_unix(sshd:session): session closed for user core Apr 24 23:39:03.399227 systemd-logind[2096]: Session 10 logged out. Waiting for processes to exit. Apr 24 23:39:03.400944 systemd[1]: sshd@9-172.31.21.128:22-20.229.252.112:39392.service: Deactivated successfully. Apr 24 23:39:03.408097 systemd[1]: session-10.scope: Deactivated successfully. Apr 24 23:39:03.410217 systemd-logind[2096]: Removed session 10. Apr 24 23:39:08.558779 systemd[1]: Started sshd@10-172.31.21.128:22-20.229.252.112:41420.service - OpenSSH per-connection server daemon (20.229.252.112:41420). Apr 24 23:39:09.551334 sshd[5045]: Accepted publickey for core from 20.229.252.112 port 41420 ssh2: RSA SHA256:EpOBCscCvamodiF49drNiIRDMxdv0LtYbixE7WaoRrA Apr 24 23:39:09.553240 sshd[5045]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 24 23:39:09.563599 systemd-logind[2096]: New session 11 of user core. Apr 24 23:39:09.569807 systemd[1]: Started session-11.scope - Session 11 of User core. Apr 24 23:39:10.343649 sshd[5045]: pam_unix(sshd:session): session closed for user core Apr 24 23:39:10.349158 systemd-logind[2096]: Session 11 logged out. Waiting for processes to exit. Apr 24 23:39:10.351094 systemd[1]: sshd@10-172.31.21.128:22-20.229.252.112:41420.service: Deactivated successfully. 
Apr 24 23:39:10.358387 systemd[1]: session-11.scope: Deactivated successfully. Apr 24 23:39:10.362222 systemd-logind[2096]: Removed session 11. Apr 24 23:39:10.523762 systemd[1]: Started sshd@11-172.31.21.128:22-20.229.252.112:41422.service - OpenSSH per-connection server daemon (20.229.252.112:41422). Apr 24 23:39:11.527527 sshd[5060]: Accepted publickey for core from 20.229.252.112 port 41422 ssh2: RSA SHA256:EpOBCscCvamodiF49drNiIRDMxdv0LtYbixE7WaoRrA Apr 24 23:39:11.530118 sshd[5060]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 24 23:39:11.537547 systemd-logind[2096]: New session 12 of user core. Apr 24 23:39:11.546903 systemd[1]: Started session-12.scope - Session 12 of User core. Apr 24 23:39:12.411065 sshd[5060]: pam_unix(sshd:session): session closed for user core Apr 24 23:39:12.418778 systemd[1]: sshd@11-172.31.21.128:22-20.229.252.112:41422.service: Deactivated successfully. Apr 24 23:39:12.426390 systemd[1]: session-12.scope: Deactivated successfully. Apr 24 23:39:12.428272 systemd-logind[2096]: Session 12 logged out. Waiting for processes to exit. Apr 24 23:39:12.430247 systemd-logind[2096]: Removed session 12. Apr 24 23:39:12.592876 systemd[1]: Started sshd@12-172.31.21.128:22-20.229.252.112:41436.service - OpenSSH per-connection server daemon (20.229.252.112:41436). Apr 24 23:39:13.616884 sshd[5071]: Accepted publickey for core from 20.229.252.112 port 41436 ssh2: RSA SHA256:EpOBCscCvamodiF49drNiIRDMxdv0LtYbixE7WaoRrA Apr 24 23:39:13.619495 sshd[5071]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 24 23:39:13.629465 systemd-logind[2096]: New session 13 of user core. Apr 24 23:39:13.632847 systemd[1]: Started session-13.scope - Session 13 of User core. Apr 24 23:39:14.429598 sshd[5071]: pam_unix(sshd:session): session closed for user core Apr 24 23:39:14.437777 systemd[1]: sshd@12-172.31.21.128:22-20.229.252.112:41436.service: Deactivated successfully. 
Apr 24 23:39:14.445411 systemd[1]: session-13.scope: Deactivated successfully. Apr 24 23:39:14.448982 systemd-logind[2096]: Session 13 logged out. Waiting for processes to exit. Apr 24 23:39:14.451168 systemd-logind[2096]: Removed session 13. Apr 24 23:39:19.604128 systemd[1]: Started sshd@13-172.31.21.128:22-20.229.252.112:60262.service - OpenSSH per-connection server daemon (20.229.252.112:60262). Apr 24 23:39:20.644346 sshd[5087]: Accepted publickey for core from 20.229.252.112 port 60262 ssh2: RSA SHA256:EpOBCscCvamodiF49drNiIRDMxdv0LtYbixE7WaoRrA Apr 24 23:39:20.646966 sshd[5087]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 24 23:39:20.655369 systemd-logind[2096]: New session 14 of user core. Apr 24 23:39:20.667901 systemd[1]: Started session-14.scope - Session 14 of User core. Apr 24 23:39:21.477589 sshd[5087]: pam_unix(sshd:session): session closed for user core Apr 24 23:39:21.485650 systemd-logind[2096]: Session 14 logged out. Waiting for processes to exit. Apr 24 23:39:21.486807 systemd[1]: sshd@13-172.31.21.128:22-20.229.252.112:60262.service: Deactivated successfully. Apr 24 23:39:21.492491 systemd[1]: session-14.scope: Deactivated successfully. Apr 24 23:39:21.494223 systemd-logind[2096]: Removed session 14. Apr 24 23:39:26.655791 systemd[1]: Started sshd@14-172.31.21.128:22-20.229.252.112:45580.service - OpenSSH per-connection server daemon (20.229.252.112:45580). Apr 24 23:39:27.713680 sshd[5100]: Accepted publickey for core from 20.229.252.112 port 45580 ssh2: RSA SHA256:EpOBCscCvamodiF49drNiIRDMxdv0LtYbixE7WaoRrA Apr 24 23:39:27.716750 sshd[5100]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 24 23:39:27.725527 systemd-logind[2096]: New session 15 of user core. Apr 24 23:39:27.731851 systemd[1]: Started session-15.scope - Session 15 of User core. 
Apr 24 23:39:28.547662 sshd[5100]: pam_unix(sshd:session): session closed for user core Apr 24 23:39:28.555477 systemd[1]: sshd@14-172.31.21.128:22-20.229.252.112:45580.service: Deactivated successfully. Apr 24 23:39:28.562391 systemd-logind[2096]: Session 15 logged out. Waiting for processes to exit. Apr 24 23:39:28.562898 systemd[1]: session-15.scope: Deactivated successfully. Apr 24 23:39:28.567486 systemd-logind[2096]: Removed session 15. Apr 24 23:39:28.708788 systemd[1]: Started sshd@15-172.31.21.128:22-20.229.252.112:45588.service - OpenSSH per-connection server daemon (20.229.252.112:45588). Apr 24 23:39:29.710102 sshd[5113]: Accepted publickey for core from 20.229.252.112 port 45588 ssh2: RSA SHA256:EpOBCscCvamodiF49drNiIRDMxdv0LtYbixE7WaoRrA Apr 24 23:39:29.712799 sshd[5113]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 24 23:39:29.722924 systemd-logind[2096]: New session 16 of user core. Apr 24 23:39:29.726950 systemd[1]: Started session-16.scope - Session 16 of User core. Apr 24 23:39:30.600710 sshd[5113]: pam_unix(sshd:session): session closed for user core Apr 24 23:39:30.608024 systemd[1]: sshd@15-172.31.21.128:22-20.229.252.112:45588.service: Deactivated successfully. Apr 24 23:39:30.613801 systemd-logind[2096]: Session 16 logged out. Waiting for processes to exit. Apr 24 23:39:30.615155 systemd[1]: session-16.scope: Deactivated successfully. Apr 24 23:39:30.618745 systemd-logind[2096]: Removed session 16. Apr 24 23:39:30.778752 systemd[1]: Started sshd@16-172.31.21.128:22-20.229.252.112:45600.service - OpenSSH per-connection server daemon (20.229.252.112:45600). Apr 24 23:39:31.809335 sshd[5124]: Accepted publickey for core from 20.229.252.112 port 45600 ssh2: RSA SHA256:EpOBCscCvamodiF49drNiIRDMxdv0LtYbixE7WaoRrA Apr 24 23:39:31.811602 sshd[5124]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 24 23:39:31.819426 systemd-logind[2096]: New session 17 of user core. 
Apr 24 23:39:31.830780 systemd[1]: Started session-17.scope - Session 17 of User core. Apr 24 23:39:33.341708 sshd[5124]: pam_unix(sshd:session): session closed for user core Apr 24 23:39:33.348337 systemd[1]: sshd@16-172.31.21.128:22-20.229.252.112:45600.service: Deactivated successfully. Apr 24 23:39:33.356668 systemd[1]: session-17.scope: Deactivated successfully. Apr 24 23:39:33.359791 systemd-logind[2096]: Session 17 logged out. Waiting for processes to exit. Apr 24 23:39:33.361812 systemd-logind[2096]: Removed session 17. Apr 24 23:39:33.517784 systemd[1]: Started sshd@17-172.31.21.128:22-20.229.252.112:45610.service - OpenSSH per-connection server daemon (20.229.252.112:45610). Apr 24 23:39:34.552343 sshd[5143]: Accepted publickey for core from 20.229.252.112 port 45610 ssh2: RSA SHA256:EpOBCscCvamodiF49drNiIRDMxdv0LtYbixE7WaoRrA Apr 24 23:39:34.556986 sshd[5143]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 24 23:39:34.565558 systemd-logind[2096]: New session 18 of user core. Apr 24 23:39:34.574986 systemd[1]: Started session-18.scope - Session 18 of User core. Apr 24 23:39:35.620739 sshd[5143]: pam_unix(sshd:session): session closed for user core Apr 24 23:39:35.630188 systemd[1]: sshd@17-172.31.21.128:22-20.229.252.112:45610.service: Deactivated successfully. Apr 24 23:39:35.631549 systemd-logind[2096]: Session 18 logged out. Waiting for processes to exit. Apr 24 23:39:35.637497 systemd[1]: session-18.scope: Deactivated successfully. Apr 24 23:39:35.639899 systemd-logind[2096]: Removed session 18. Apr 24 23:39:35.790828 systemd[1]: Started sshd@18-172.31.21.128:22-20.229.252.112:45620.service - OpenSSH per-connection server daemon (20.229.252.112:45620). 
Apr 24 23:39:36.808433 sshd[5156]: Accepted publickey for core from 20.229.252.112 port 45620 ssh2: RSA SHA256:EpOBCscCvamodiF49drNiIRDMxdv0LtYbixE7WaoRrA Apr 24 23:39:36.811149 sshd[5156]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 24 23:39:36.819176 systemd-logind[2096]: New session 19 of user core. Apr 24 23:39:36.824777 systemd[1]: Started session-19.scope - Session 19 of User core. Apr 24 23:39:37.637622 sshd[5156]: pam_unix(sshd:session): session closed for user core Apr 24 23:39:37.645197 systemd[1]: sshd@18-172.31.21.128:22-20.229.252.112:45620.service: Deactivated successfully. Apr 24 23:39:37.652713 systemd[1]: session-19.scope: Deactivated successfully. Apr 24 23:39:37.652960 systemd-logind[2096]: Session 19 logged out. Waiting for processes to exit. Apr 24 23:39:37.658263 systemd-logind[2096]: Removed session 19. Apr 24 23:39:42.802737 systemd[1]: Started sshd@19-172.31.21.128:22-20.229.252.112:48528.service - OpenSSH per-connection server daemon (20.229.252.112:48528). Apr 24 23:39:43.810855 sshd[5172]: Accepted publickey for core from 20.229.252.112 port 48528 ssh2: RSA SHA256:EpOBCscCvamodiF49drNiIRDMxdv0LtYbixE7WaoRrA Apr 24 23:39:43.814328 sshd[5172]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 24 23:39:43.823060 systemd-logind[2096]: New session 20 of user core. Apr 24 23:39:43.831809 systemd[1]: Started session-20.scope - Session 20 of User core. Apr 24 23:39:44.606394 sshd[5172]: pam_unix(sshd:session): session closed for user core Apr 24 23:39:44.613041 systemd[1]: sshd@19-172.31.21.128:22-20.229.252.112:48528.service: Deactivated successfully. Apr 24 23:39:44.619894 systemd[1]: session-20.scope: Deactivated successfully. Apr 24 23:39:44.622141 systemd-logind[2096]: Session 20 logged out. Waiting for processes to exit. Apr 24 23:39:44.624429 systemd-logind[2096]: Removed session 20. 
Apr 24 23:39:49.779769 systemd[1]: Started sshd@20-172.31.21.128:22-20.229.252.112:36260.service - OpenSSH per-connection server daemon (20.229.252.112:36260). Apr 24 23:39:50.809713 sshd[5186]: Accepted publickey for core from 20.229.252.112 port 36260 ssh2: RSA SHA256:EpOBCscCvamodiF49drNiIRDMxdv0LtYbixE7WaoRrA Apr 24 23:39:50.812219 sshd[5186]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 24 23:39:50.820763 systemd-logind[2096]: New session 21 of user core. Apr 24 23:39:50.826796 systemd[1]: Started session-21.scope - Session 21 of User core. Apr 24 23:39:51.619577 sshd[5186]: pam_unix(sshd:session): session closed for user core Apr 24 23:39:51.624348 systemd-logind[2096]: Session 21 logged out. Waiting for processes to exit. Apr 24 23:39:51.625563 systemd[1]: sshd@20-172.31.21.128:22-20.229.252.112:36260.service: Deactivated successfully. Apr 24 23:39:51.635369 systemd[1]: session-21.scope: Deactivated successfully. Apr 24 23:39:51.640071 systemd-logind[2096]: Removed session 21. Apr 24 23:39:51.793048 systemd[1]: Started sshd@21-172.31.21.128:22-20.229.252.112:36262.service - OpenSSH per-connection server daemon (20.229.252.112:36262). Apr 24 23:39:52.797335 sshd[5199]: Accepted publickey for core from 20.229.252.112 port 36262 ssh2: RSA SHA256:EpOBCscCvamodiF49drNiIRDMxdv0LtYbixE7WaoRrA Apr 24 23:39:52.799593 sshd[5199]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 24 23:39:52.807793 systemd-logind[2096]: New session 22 of user core. Apr 24 23:39:52.812826 systemd[1]: Started session-22.scope - Session 22 of User core. 
Apr 24 23:39:56.449655 containerd[2128]: time="2026-04-24T23:39:56.449080477Z" level=info msg="StopContainer for \"9bc8d831b4e8b6eeb87eafca865e85733c583ca0604e544d741decb8f082bcd7\" with timeout 30 (s)" Apr 24 23:39:56.460589 containerd[2128]: time="2026-04-24T23:39:56.460441465Z" level=info msg="Stop container \"9bc8d831b4e8b6eeb87eafca865e85733c583ca0604e544d741decb8f082bcd7\" with signal terminated" Apr 24 23:39:56.461138 systemd[1]: run-containerd-runc-k8s.io-623e732be6b7b857b1c4342789e0f1680e65c2218f78d5083d900529291a67d8-runc.xqk1rO.mount: Deactivated successfully. Apr 24 23:39:56.486717 containerd[2128]: time="2026-04-24T23:39:56.486582661Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Apr 24 23:39:56.511234 containerd[2128]: time="2026-04-24T23:39:56.511091281Z" level=info msg="StopContainer for \"623e732be6b7b857b1c4342789e0f1680e65c2218f78d5083d900529291a67d8\" with timeout 2 (s)" Apr 24 23:39:56.513806 containerd[2128]: time="2026-04-24T23:39:56.513657001Z" level=info msg="Stop container \"623e732be6b7b857b1c4342789e0f1680e65c2218f78d5083d900529291a67d8\" with signal terminated" Apr 24 23:39:56.532838 systemd-networkd[1690]: lxc_health: Link DOWN Apr 24 23:39:56.532863 systemd-networkd[1690]: lxc_health: Lost carrier Apr 24 23:39:56.574685 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9bc8d831b4e8b6eeb87eafca865e85733c583ca0604e544d741decb8f082bcd7-rootfs.mount: Deactivated successfully. 
Apr 24 23:39:56.600178 containerd[2128]: time="2026-04-24T23:39:56.599847062Z" level=info msg="shim disconnected" id=9bc8d831b4e8b6eeb87eafca865e85733c583ca0604e544d741decb8f082bcd7 namespace=k8s.io Apr 24 23:39:56.600178 containerd[2128]: time="2026-04-24T23:39:56.599934182Z" level=warning msg="cleaning up after shim disconnected" id=9bc8d831b4e8b6eeb87eafca865e85733c583ca0604e544d741decb8f082bcd7 namespace=k8s.io Apr 24 23:39:56.600178 containerd[2128]: time="2026-04-24T23:39:56.599958110Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 24 23:39:56.622908 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-623e732be6b7b857b1c4342789e0f1680e65c2218f78d5083d900529291a67d8-rootfs.mount: Deactivated successfully. Apr 24 23:39:56.638820 containerd[2128]: time="2026-04-24T23:39:56.638702942Z" level=info msg="shim disconnected" id=623e732be6b7b857b1c4342789e0f1680e65c2218f78d5083d900529291a67d8 namespace=k8s.io Apr 24 23:39:56.638820 containerd[2128]: time="2026-04-24T23:39:56.638801978Z" level=warning msg="cleaning up after shim disconnected" id=623e732be6b7b857b1c4342789e0f1680e65c2218f78d5083d900529291a67d8 namespace=k8s.io Apr 24 23:39:56.639162 containerd[2128]: time="2026-04-24T23:39:56.638824994Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 24 23:39:56.639905 containerd[2128]: time="2026-04-24T23:39:56.639601610Z" level=info msg="StopContainer for \"9bc8d831b4e8b6eeb87eafca865e85733c583ca0604e544d741decb8f082bcd7\" returns successfully" Apr 24 23:39:56.641328 containerd[2128]: time="2026-04-24T23:39:56.640862606Z" level=info msg="StopPodSandbox for \"69f3b9ff93ee3af3c92471fecbb18366f437877e376d278f022790fde13a7a26\"" Apr 24 23:39:56.641328 containerd[2128]: time="2026-04-24T23:39:56.640978094Z" level=info msg="Container to stop \"9bc8d831b4e8b6eeb87eafca865e85733c583ca0604e544d741decb8f082bcd7\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 24 23:39:56.649739 systemd[1]: 
run-containerd-io.containerd.grpc.v1.cri-sandboxes-69f3b9ff93ee3af3c92471fecbb18366f437877e376d278f022790fde13a7a26-shm.mount: Deactivated successfully. Apr 24 23:39:56.697449 containerd[2128]: time="2026-04-24T23:39:56.697371938Z" level=info msg="StopContainer for \"623e732be6b7b857b1c4342789e0f1680e65c2218f78d5083d900529291a67d8\" returns successfully" Apr 24 23:39:56.699519 containerd[2128]: time="2026-04-24T23:39:56.699118682Z" level=info msg="StopPodSandbox for \"90b020bdb3fa553ad4be8ac87db323607ae61cb8004822eaf9bf788dd9be6ff9\"" Apr 24 23:39:56.699519 containerd[2128]: time="2026-04-24T23:39:56.699202154Z" level=info msg="Container to stop \"74fbc0c733582ee52c57958f387549ba5866e757a432c884a52072379e77c90a\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 24 23:39:56.699519 containerd[2128]: time="2026-04-24T23:39:56.699247034Z" level=info msg="Container to stop \"29a99647ddab79fa417b8917707b6f1d548596d152aa0bcb0868b784163f2803\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 24 23:39:56.699519 containerd[2128]: time="2026-04-24T23:39:56.699271610Z" level=info msg="Container to stop \"ca3654fd3f4e16d9f0899b8c36d312458d6136726f39f417a63c7bab7e822ab3\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 24 23:39:56.699519 containerd[2128]: time="2026-04-24T23:39:56.699328586Z" level=info msg="Container to stop \"623e732be6b7b857b1c4342789e0f1680e65c2218f78d5083d900529291a67d8\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 24 23:39:56.699519 containerd[2128]: time="2026-04-24T23:39:56.699355250Z" level=info msg="Container to stop \"886915a092105852cf1b479873ffe398f25b0e37642bd513c00aa5ea4d2c0eda\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 24 23:39:56.734304 containerd[2128]: time="2026-04-24T23:39:56.733044086Z" level=info msg="shim disconnected" id=69f3b9ff93ee3af3c92471fecbb18366f437877e376d278f022790fde13a7a26 
namespace=k8s.io Apr 24 23:39:56.734304 containerd[2128]: time="2026-04-24T23:39:56.733619342Z" level=warning msg="cleaning up after shim disconnected" id=69f3b9ff93ee3af3c92471fecbb18366f437877e376d278f022790fde13a7a26 namespace=k8s.io Apr 24 23:39:56.734304 containerd[2128]: time="2026-04-24T23:39:56.733649582Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 24 23:39:56.775449 containerd[2128]: time="2026-04-24T23:39:56.774460467Z" level=warning msg="cleanup warnings time=\"2026-04-24T23:39:56Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Apr 24 23:39:56.777280 containerd[2128]: time="2026-04-24T23:39:56.777211863Z" level=info msg="TearDown network for sandbox \"69f3b9ff93ee3af3c92471fecbb18366f437877e376d278f022790fde13a7a26\" successfully" Apr 24 23:39:56.777280 containerd[2128]: time="2026-04-24T23:39:56.777265443Z" level=info msg="StopPodSandbox for \"69f3b9ff93ee3af3c92471fecbb18366f437877e376d278f022790fde13a7a26\" returns successfully" Apr 24 23:39:56.796148 kubelet[3614]: E0424 23:39:56.795375 3614 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 24 23:39:56.800059 containerd[2128]: time="2026-04-24T23:39:56.798155859Z" level=info msg="shim disconnected" id=90b020bdb3fa553ad4be8ac87db323607ae61cb8004822eaf9bf788dd9be6ff9 namespace=k8s.io Apr 24 23:39:56.800059 containerd[2128]: time="2026-04-24T23:39:56.798269475Z" level=warning msg="cleaning up after shim disconnected" id=90b020bdb3fa553ad4be8ac87db323607ae61cb8004822eaf9bf788dd9be6ff9 namespace=k8s.io Apr 24 23:39:56.800059 containerd[2128]: time="2026-04-24T23:39:56.798322563Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 24 23:39:56.828669 containerd[2128]: time="2026-04-24T23:39:56.828595839Z" level=info 
msg="TearDown network for sandbox \"90b020bdb3fa553ad4be8ac87db323607ae61cb8004822eaf9bf788dd9be6ff9\" successfully" Apr 24 23:39:56.828669 containerd[2128]: time="2026-04-24T23:39:56.828653439Z" level=info msg="StopPodSandbox for \"90b020bdb3fa553ad4be8ac87db323607ae61cb8004822eaf9bf788dd9be6ff9\" returns successfully" Apr 24 23:39:56.916473 kubelet[3614]: I0424 23:39:56.916424 3614 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/7169284b-7636-44ec-822f-6441435f2375-clustermesh-secrets\") pod \"7169284b-7636-44ec-822f-6441435f2375\" (UID: \"7169284b-7636-44ec-822f-6441435f2375\") " Apr 24 23:39:56.917329 kubelet[3614]: I0424 23:39:56.917223 3614 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/7169284b-7636-44ec-822f-6441435f2375-cilium-cgroup\") pod \"7169284b-7636-44ec-822f-6441435f2375\" (UID: \"7169284b-7636-44ec-822f-6441435f2375\") " Apr 24 23:39:56.918321 kubelet[3614]: I0424 23:39:56.917501 3614 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n8b2c\" (UniqueName: \"kubernetes.io/projected/7169284b-7636-44ec-822f-6441435f2375-kube-api-access-n8b2c\") pod \"7169284b-7636-44ec-822f-6441435f2375\" (UID: \"7169284b-7636-44ec-822f-6441435f2375\") " Apr 24 23:39:56.918321 kubelet[3614]: I0424 23:39:56.917554 3614 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/7169284b-7636-44ec-822f-6441435f2375-hostproc\") pod \"7169284b-7636-44ec-822f-6441435f2375\" (UID: \"7169284b-7636-44ec-822f-6441435f2375\") " Apr 24 23:39:56.918321 kubelet[3614]: I0424 23:39:56.917591 3614 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7169284b-7636-44ec-822f-6441435f2375-xtables-lock\") pod 
\"7169284b-7636-44ec-822f-6441435f2375\" (UID: \"7169284b-7636-44ec-822f-6441435f2375\") " Apr 24 23:39:56.918321 kubelet[3614]: I0424 23:39:56.917624 3614 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/7169284b-7636-44ec-822f-6441435f2375-host-proc-sys-net\") pod \"7169284b-7636-44ec-822f-6441435f2375\" (UID: \"7169284b-7636-44ec-822f-6441435f2375\") " Apr 24 23:39:56.918321 kubelet[3614]: I0424 23:39:56.917664 3614 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mj6j9\" (UniqueName: \"kubernetes.io/projected/005db5f8-31b0-41e8-8d4a-a2214f94ee2f-kube-api-access-mj6j9\") pod \"005db5f8-31b0-41e8-8d4a-a2214f94ee2f\" (UID: \"005db5f8-31b0-41e8-8d4a-a2214f94ee2f\") " Apr 24 23:39:56.918321 kubelet[3614]: I0424 23:39:56.917701 3614 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7169284b-7636-44ec-822f-6441435f2375-cilium-config-path\") pod \"7169284b-7636-44ec-822f-6441435f2375\" (UID: \"7169284b-7636-44ec-822f-6441435f2375\") " Apr 24 23:39:56.918699 kubelet[3614]: I0424 23:39:56.917732 3614 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/7169284b-7636-44ec-822f-6441435f2375-cni-path\") pod \"7169284b-7636-44ec-822f-6441435f2375\" (UID: \"7169284b-7636-44ec-822f-6441435f2375\") " Apr 24 23:39:56.918699 kubelet[3614]: I0424 23:39:56.917767 3614 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/005db5f8-31b0-41e8-8d4a-a2214f94ee2f-cilium-config-path\") pod \"005db5f8-31b0-41e8-8d4a-a2214f94ee2f\" (UID: \"005db5f8-31b0-41e8-8d4a-a2214f94ee2f\") " Apr 24 23:39:56.918699 kubelet[3614]: I0424 23:39:56.917801 3614 reconciler_common.go:162] "operationExecutor.UnmountVolume 
started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7169284b-7636-44ec-822f-6441435f2375-lib-modules\") pod \"7169284b-7636-44ec-822f-6441435f2375\" (UID: \"7169284b-7636-44ec-822f-6441435f2375\") " Apr 24 23:39:56.918699 kubelet[3614]: I0424 23:39:56.917835 3614 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/7169284b-7636-44ec-822f-6441435f2375-bpf-maps\") pod \"7169284b-7636-44ec-822f-6441435f2375\" (UID: \"7169284b-7636-44ec-822f-6441435f2375\") " Apr 24 23:39:56.918699 kubelet[3614]: I0424 23:39:56.917864 3614 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7169284b-7636-44ec-822f-6441435f2375-etc-cni-netd\") pod \"7169284b-7636-44ec-822f-6441435f2375\" (UID: \"7169284b-7636-44ec-822f-6441435f2375\") " Apr 24 23:39:56.918699 kubelet[3614]: I0424 23:39:56.917903 3614 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/7169284b-7636-44ec-822f-6441435f2375-host-proc-sys-kernel\") pod \"7169284b-7636-44ec-822f-6441435f2375\" (UID: \"7169284b-7636-44ec-822f-6441435f2375\") " Apr 24 23:39:56.919104 kubelet[3614]: I0424 23:39:56.917940 3614 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/7169284b-7636-44ec-822f-6441435f2375-cilium-run\") pod \"7169284b-7636-44ec-822f-6441435f2375\" (UID: \"7169284b-7636-44ec-822f-6441435f2375\") " Apr 24 23:39:56.919104 kubelet[3614]: I0424 23:39:56.917980 3614 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/7169284b-7636-44ec-822f-6441435f2375-hubble-tls\") pod \"7169284b-7636-44ec-822f-6441435f2375\" (UID: \"7169284b-7636-44ec-822f-6441435f2375\") " Apr 24 23:39:56.920246 kubelet[3614]: I0424 
23:39:56.920181 3614 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7169284b-7636-44ec-822f-6441435f2375-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "7169284b-7636-44ec-822f-6441435f2375" (UID: "7169284b-7636-44ec-822f-6441435f2375"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 24 23:39:56.921347 kubelet[3614]: I0424 23:39:56.920882 3614 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7169284b-7636-44ec-822f-6441435f2375-hostproc" (OuterVolumeSpecName: "hostproc") pod "7169284b-7636-44ec-822f-6441435f2375" (UID: "7169284b-7636-44ec-822f-6441435f2375"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 24 23:39:56.922430 kubelet[3614]: I0424 23:39:56.920969 3614 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7169284b-7636-44ec-822f-6441435f2375-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "7169284b-7636-44ec-822f-6441435f2375" (UID: "7169284b-7636-44ec-822f-6441435f2375"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 24 23:39:56.925560 kubelet[3614]: I0424 23:39:56.921058 3614 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7169284b-7636-44ec-822f-6441435f2375-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "7169284b-7636-44ec-822f-6441435f2375" (UID: "7169284b-7636-44ec-822f-6441435f2375"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 24 23:39:56.925560 kubelet[3614]: I0424 23:39:56.923590 3614 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7169284b-7636-44ec-822f-6441435f2375-cni-path" (OuterVolumeSpecName: "cni-path") pod "7169284b-7636-44ec-822f-6441435f2375" (UID: "7169284b-7636-44ec-822f-6441435f2375"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 24 23:39:56.927427 kubelet[3614]: I0424 23:39:56.927016 3614 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7169284b-7636-44ec-822f-6441435f2375-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "7169284b-7636-44ec-822f-6441435f2375" (UID: "7169284b-7636-44ec-822f-6441435f2375"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 24 23:39:56.930716 kubelet[3614]: I0424 23:39:56.928009 3614 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7169284b-7636-44ec-822f-6441435f2375-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "7169284b-7636-44ec-822f-6441435f2375" (UID: "7169284b-7636-44ec-822f-6441435f2375"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 24 23:39:56.932041 kubelet[3614]: I0424 23:39:56.930343 3614 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7169284b-7636-44ec-822f-6441435f2375-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "7169284b-7636-44ec-822f-6441435f2375" (UID: "7169284b-7636-44ec-822f-6441435f2375"). InnerVolumeSpecName "host-proc-sys-kernel". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 24 23:39:56.932574 kubelet[3614]: I0424 23:39:56.930435 3614 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7169284b-7636-44ec-822f-6441435f2375-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "7169284b-7636-44ec-822f-6441435f2375" (UID: "7169284b-7636-44ec-822f-6441435f2375"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 24 23:39:56.933273 kubelet[3614]: I0424 23:39:56.931971 3614 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7169284b-7636-44ec-822f-6441435f2375-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "7169284b-7636-44ec-822f-6441435f2375" (UID: "7169284b-7636-44ec-822f-6441435f2375"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Apr 24 23:39:56.933490 kubelet[3614]: I0424 23:39:56.933435 3614 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/005db5f8-31b0-41e8-8d4a-a2214f94ee2f-kube-api-access-mj6j9" (OuterVolumeSpecName: "kube-api-access-mj6j9") pod "005db5f8-31b0-41e8-8d4a-a2214f94ee2f" (UID: "005db5f8-31b0-41e8-8d4a-a2214f94ee2f"). InnerVolumeSpecName "kube-api-access-mj6j9". PluginName "kubernetes.io/projected", VolumeGIDValue "" Apr 24 23:39:56.933615 kubelet[3614]: I0424 23:39:56.933505 3614 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7169284b-7636-44ec-822f-6441435f2375-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "7169284b-7636-44ec-822f-6441435f2375" (UID: "7169284b-7636-44ec-822f-6441435f2375"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 24 23:39:56.936995 kubelet[3614]: I0424 23:39:56.936885 3614 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7169284b-7636-44ec-822f-6441435f2375-kube-api-access-n8b2c" (OuterVolumeSpecName: "kube-api-access-n8b2c") pod "7169284b-7636-44ec-822f-6441435f2375" (UID: "7169284b-7636-44ec-822f-6441435f2375"). InnerVolumeSpecName "kube-api-access-n8b2c". PluginName "kubernetes.io/projected", VolumeGIDValue "" Apr 24 23:39:56.937666 kubelet[3614]: I0424 23:39:56.937614 3614 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7169284b-7636-44ec-822f-6441435f2375-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "7169284b-7636-44ec-822f-6441435f2375" (UID: "7169284b-7636-44ec-822f-6441435f2375"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Apr 24 23:39:56.940153 kubelet[3614]: I0424 23:39:56.940111 3614 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7169284b-7636-44ec-822f-6441435f2375-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "7169284b-7636-44ec-822f-6441435f2375" (UID: "7169284b-7636-44ec-822f-6441435f2375"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 24 23:39:56.942076 kubelet[3614]: I0424 23:39:56.941996 3614 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/005db5f8-31b0-41e8-8d4a-a2214f94ee2f-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "005db5f8-31b0-41e8-8d4a-a2214f94ee2f" (UID: "005db5f8-31b0-41e8-8d4a-a2214f94ee2f"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 24 23:39:57.018968 kubelet[3614]: I0424 23:39:57.018461 3614 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/7169284b-7636-44ec-822f-6441435f2375-hubble-tls\") on node \"ip-172-31-21-128\" DevicePath \"\"" Apr 24 23:39:57.018968 kubelet[3614]: I0424 23:39:57.018503 3614 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/7169284b-7636-44ec-822f-6441435f2375-clustermesh-secrets\") on node \"ip-172-31-21-128\" DevicePath \"\"" Apr 24 23:39:57.018968 kubelet[3614]: I0424 23:39:57.018529 3614 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/7169284b-7636-44ec-822f-6441435f2375-cilium-cgroup\") on node \"ip-172-31-21-128\" DevicePath \"\"" Apr 24 23:39:57.018968 kubelet[3614]: I0424 23:39:57.018554 3614 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-n8b2c\" (UniqueName: \"kubernetes.io/projected/7169284b-7636-44ec-822f-6441435f2375-kube-api-access-n8b2c\") on node \"ip-172-31-21-128\" DevicePath \"\"" Apr 24 23:39:57.018968 kubelet[3614]: I0424 23:39:57.018575 3614 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/7169284b-7636-44ec-822f-6441435f2375-hostproc\") on node \"ip-172-31-21-128\" DevicePath \"\"" Apr 24 23:39:57.018968 kubelet[3614]: I0424 23:39:57.018596 3614 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7169284b-7636-44ec-822f-6441435f2375-xtables-lock\") on node \"ip-172-31-21-128\" DevicePath \"\"" Apr 24 23:39:57.018968 kubelet[3614]: I0424 23:39:57.018617 3614 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/7169284b-7636-44ec-822f-6441435f2375-host-proc-sys-net\") on node \"ip-172-31-21-128\" DevicePath 
\"\"" Apr 24 23:39:57.018968 kubelet[3614]: I0424 23:39:57.018666 3614 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-mj6j9\" (UniqueName: \"kubernetes.io/projected/005db5f8-31b0-41e8-8d4a-a2214f94ee2f-kube-api-access-mj6j9\") on node \"ip-172-31-21-128\" DevicePath \"\"" Apr 24 23:39:57.019511 kubelet[3614]: I0424 23:39:57.018687 3614 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7169284b-7636-44ec-822f-6441435f2375-cilium-config-path\") on node \"ip-172-31-21-128\" DevicePath \"\"" Apr 24 23:39:57.019511 kubelet[3614]: I0424 23:39:57.018708 3614 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/7169284b-7636-44ec-822f-6441435f2375-cni-path\") on node \"ip-172-31-21-128\" DevicePath \"\"" Apr 24 23:39:57.019511 kubelet[3614]: I0424 23:39:57.018729 3614 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/005db5f8-31b0-41e8-8d4a-a2214f94ee2f-cilium-config-path\") on node \"ip-172-31-21-128\" DevicePath \"\"" Apr 24 23:39:57.019511 kubelet[3614]: I0424 23:39:57.018750 3614 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7169284b-7636-44ec-822f-6441435f2375-lib-modules\") on node \"ip-172-31-21-128\" DevicePath \"\"" Apr 24 23:39:57.019511 kubelet[3614]: I0424 23:39:57.018771 3614 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/7169284b-7636-44ec-822f-6441435f2375-bpf-maps\") on node \"ip-172-31-21-128\" DevicePath \"\"" Apr 24 23:39:57.019511 kubelet[3614]: I0424 23:39:57.018792 3614 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7169284b-7636-44ec-822f-6441435f2375-etc-cni-netd\") on node \"ip-172-31-21-128\" DevicePath \"\"" Apr 24 23:39:57.019511 kubelet[3614]: I0424 
23:39:57.018813 3614 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/7169284b-7636-44ec-822f-6441435f2375-host-proc-sys-kernel\") on node \"ip-172-31-21-128\" DevicePath \"\"" Apr 24 23:39:57.019511 kubelet[3614]: I0424 23:39:57.018833 3614 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/7169284b-7636-44ec-822f-6441435f2375-cilium-run\") on node \"ip-172-31-21-128\" DevicePath \"\"" Apr 24 23:39:57.079820 kubelet[3614]: I0424 23:39:57.079779 3614 scope.go:117] "RemoveContainer" containerID="623e732be6b7b857b1c4342789e0f1680e65c2218f78d5083d900529291a67d8" Apr 24 23:39:57.089252 containerd[2128]: time="2026-04-24T23:39:57.088787964Z" level=info msg="RemoveContainer for \"623e732be6b7b857b1c4342789e0f1680e65c2218f78d5083d900529291a67d8\"" Apr 24 23:39:57.103319 containerd[2128]: time="2026-04-24T23:39:57.102498576Z" level=info msg="RemoveContainer for \"623e732be6b7b857b1c4342789e0f1680e65c2218f78d5083d900529291a67d8\" returns successfully" Apr 24 23:39:57.103521 kubelet[3614]: I0424 23:39:57.102922 3614 scope.go:117] "RemoveContainer" containerID="ca3654fd3f4e16d9f0899b8c36d312458d6136726f39f417a63c7bab7e822ab3" Apr 24 23:39:57.105113 containerd[2128]: time="2026-04-24T23:39:57.105050688Z" level=info msg="RemoveContainer for \"ca3654fd3f4e16d9f0899b8c36d312458d6136726f39f417a63c7bab7e822ab3\"" Apr 24 23:39:57.119089 containerd[2128]: time="2026-04-24T23:39:57.119021412Z" level=info msg="RemoveContainer for \"ca3654fd3f4e16d9f0899b8c36d312458d6136726f39f417a63c7bab7e822ab3\" returns successfully" Apr 24 23:39:57.121198 kubelet[3614]: I0424 23:39:57.121036 3614 scope.go:117] "RemoveContainer" containerID="29a99647ddab79fa417b8917707b6f1d548596d152aa0bcb0868b784163f2803" Apr 24 23:39:57.123834 containerd[2128]: time="2026-04-24T23:39:57.123772656Z" level=info msg="RemoveContainer for 
\"29a99647ddab79fa417b8917707b6f1d548596d152aa0bcb0868b784163f2803\"" Apr 24 23:39:57.130520 containerd[2128]: time="2026-04-24T23:39:57.130413612Z" level=info msg="RemoveContainer for \"29a99647ddab79fa417b8917707b6f1d548596d152aa0bcb0868b784163f2803\" returns successfully" Apr 24 23:39:57.131721 kubelet[3614]: I0424 23:39:57.131130 3614 scope.go:117] "RemoveContainer" containerID="886915a092105852cf1b479873ffe398f25b0e37642bd513c00aa5ea4d2c0eda" Apr 24 23:39:57.134371 containerd[2128]: time="2026-04-24T23:39:57.134323296Z" level=info msg="RemoveContainer for \"886915a092105852cf1b479873ffe398f25b0e37642bd513c00aa5ea4d2c0eda\"" Apr 24 23:39:57.141458 containerd[2128]: time="2026-04-24T23:39:57.141380868Z" level=info msg="RemoveContainer for \"886915a092105852cf1b479873ffe398f25b0e37642bd513c00aa5ea4d2c0eda\" returns successfully" Apr 24 23:39:57.142357 kubelet[3614]: I0424 23:39:57.141857 3614 scope.go:117] "RemoveContainer" containerID="74fbc0c733582ee52c57958f387549ba5866e757a432c884a52072379e77c90a" Apr 24 23:39:57.143636 containerd[2128]: time="2026-04-24T23:39:57.143591028Z" level=info msg="RemoveContainer for \"74fbc0c733582ee52c57958f387549ba5866e757a432c884a52072379e77c90a\"" Apr 24 23:39:57.150147 containerd[2128]: time="2026-04-24T23:39:57.149809020Z" level=info msg="RemoveContainer for \"74fbc0c733582ee52c57958f387549ba5866e757a432c884a52072379e77c90a\" returns successfully" Apr 24 23:39:57.150332 kubelet[3614]: I0424 23:39:57.150122 3614 scope.go:117] "RemoveContainer" containerID="623e732be6b7b857b1c4342789e0f1680e65c2218f78d5083d900529291a67d8" Apr 24 23:39:57.151177 containerd[2128]: time="2026-04-24T23:39:57.150728604Z" level=error msg="ContainerStatus for \"623e732be6b7b857b1c4342789e0f1680e65c2218f78d5083d900529291a67d8\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"623e732be6b7b857b1c4342789e0f1680e65c2218f78d5083d900529291a67d8\": not found" Apr 24 23:39:57.151337 kubelet[3614]: E0424 
23:39:57.150947 3614 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"623e732be6b7b857b1c4342789e0f1680e65c2218f78d5083d900529291a67d8\": not found" containerID="623e732be6b7b857b1c4342789e0f1680e65c2218f78d5083d900529291a67d8" Apr 24 23:39:57.151337 kubelet[3614]: I0424 23:39:57.150996 3614 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"623e732be6b7b857b1c4342789e0f1680e65c2218f78d5083d900529291a67d8"} err="failed to get container status \"623e732be6b7b857b1c4342789e0f1680e65c2218f78d5083d900529291a67d8\": rpc error: code = NotFound desc = an error occurred when try to find container \"623e732be6b7b857b1c4342789e0f1680e65c2218f78d5083d900529291a67d8\": not found" Apr 24 23:39:57.151337 kubelet[3614]: I0424 23:39:57.151057 3614 scope.go:117] "RemoveContainer" containerID="ca3654fd3f4e16d9f0899b8c36d312458d6136726f39f417a63c7bab7e822ab3" Apr 24 23:39:57.151537 containerd[2128]: time="2026-04-24T23:39:57.151438872Z" level=error msg="ContainerStatus for \"ca3654fd3f4e16d9f0899b8c36d312458d6136726f39f417a63c7bab7e822ab3\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ca3654fd3f4e16d9f0899b8c36d312458d6136726f39f417a63c7bab7e822ab3\": not found" Apr 24 23:39:57.151696 kubelet[3614]: E0424 23:39:57.151624 3614 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ca3654fd3f4e16d9f0899b8c36d312458d6136726f39f417a63c7bab7e822ab3\": not found" containerID="ca3654fd3f4e16d9f0899b8c36d312458d6136726f39f417a63c7bab7e822ab3" Apr 24 23:39:57.151778 kubelet[3614]: I0424 23:39:57.151689 3614 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ca3654fd3f4e16d9f0899b8c36d312458d6136726f39f417a63c7bab7e822ab3"} err="failed to get container status 
\"ca3654fd3f4e16d9f0899b8c36d312458d6136726f39f417a63c7bab7e822ab3\": rpc error: code = NotFound desc = an error occurred when try to find container \"ca3654fd3f4e16d9f0899b8c36d312458d6136726f39f417a63c7bab7e822ab3\": not found" Apr 24 23:39:57.151778 kubelet[3614]: I0424 23:39:57.151730 3614 scope.go:117] "RemoveContainer" containerID="29a99647ddab79fa417b8917707b6f1d548596d152aa0bcb0868b784163f2803" Apr 24 23:39:57.152337 containerd[2128]: time="2026-04-24T23:39:57.152021004Z" level=error msg="ContainerStatus for \"29a99647ddab79fa417b8917707b6f1d548596d152aa0bcb0868b784163f2803\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"29a99647ddab79fa417b8917707b6f1d548596d152aa0bcb0868b784163f2803\": not found" Apr 24 23:39:57.152453 kubelet[3614]: E0424 23:39:57.152371 3614 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"29a99647ddab79fa417b8917707b6f1d548596d152aa0bcb0868b784163f2803\": not found" containerID="29a99647ddab79fa417b8917707b6f1d548596d152aa0bcb0868b784163f2803" Apr 24 23:39:57.152453 kubelet[3614]: I0424 23:39:57.152408 3614 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"29a99647ddab79fa417b8917707b6f1d548596d152aa0bcb0868b784163f2803"} err="failed to get container status \"29a99647ddab79fa417b8917707b6f1d548596d152aa0bcb0868b784163f2803\": rpc error: code = NotFound desc = an error occurred when try to find container \"29a99647ddab79fa417b8917707b6f1d548596d152aa0bcb0868b784163f2803\": not found" Apr 24 23:39:57.152453 kubelet[3614]: I0424 23:39:57.152437 3614 scope.go:117] "RemoveContainer" containerID="886915a092105852cf1b479873ffe398f25b0e37642bd513c00aa5ea4d2c0eda" Apr 24 23:39:57.152929 containerd[2128]: time="2026-04-24T23:39:57.152732628Z" level=error msg="ContainerStatus for \"886915a092105852cf1b479873ffe398f25b0e37642bd513c00aa5ea4d2c0eda\" failed" 
error="rpc error: code = NotFound desc = an error occurred when try to find container \"886915a092105852cf1b479873ffe398f25b0e37642bd513c00aa5ea4d2c0eda\": not found" Apr 24 23:39:57.153039 kubelet[3614]: E0424 23:39:57.152915 3614 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"886915a092105852cf1b479873ffe398f25b0e37642bd513c00aa5ea4d2c0eda\": not found" containerID="886915a092105852cf1b479873ffe398f25b0e37642bd513c00aa5ea4d2c0eda" Apr 24 23:39:57.153039 kubelet[3614]: I0424 23:39:57.152954 3614 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"886915a092105852cf1b479873ffe398f25b0e37642bd513c00aa5ea4d2c0eda"} err="failed to get container status \"886915a092105852cf1b479873ffe398f25b0e37642bd513c00aa5ea4d2c0eda\": rpc error: code = NotFound desc = an error occurred when try to find container \"886915a092105852cf1b479873ffe398f25b0e37642bd513c00aa5ea4d2c0eda\": not found" Apr 24 23:39:57.153039 kubelet[3614]: I0424 23:39:57.152981 3614 scope.go:117] "RemoveContainer" containerID="74fbc0c733582ee52c57958f387549ba5866e757a432c884a52072379e77c90a" Apr 24 23:39:57.153901 containerd[2128]: time="2026-04-24T23:39:57.153506364Z" level=error msg="ContainerStatus for \"74fbc0c733582ee52c57958f387549ba5866e757a432c884a52072379e77c90a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"74fbc0c733582ee52c57958f387549ba5866e757a432c884a52072379e77c90a\": not found" Apr 24 23:39:57.154008 kubelet[3614]: E0424 23:39:57.153718 3614 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"74fbc0c733582ee52c57958f387549ba5866e757a432c884a52072379e77c90a\": not found" containerID="74fbc0c733582ee52c57958f387549ba5866e757a432c884a52072379e77c90a" Apr 24 23:39:57.154008 kubelet[3614]: I0424 23:39:57.153756 3614 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"74fbc0c733582ee52c57958f387549ba5866e757a432c884a52072379e77c90a"} err="failed to get container status \"74fbc0c733582ee52c57958f387549ba5866e757a432c884a52072379e77c90a\": rpc error: code = NotFound desc = an error occurred when try to find container \"74fbc0c733582ee52c57958f387549ba5866e757a432c884a52072379e77c90a\": not found" Apr 24 23:39:57.154008 kubelet[3614]: I0424 23:39:57.153783 3614 scope.go:117] "RemoveContainer" containerID="9bc8d831b4e8b6eeb87eafca865e85733c583ca0604e544d741decb8f082bcd7" Apr 24 23:39:57.155635 containerd[2128]: time="2026-04-24T23:39:57.155574444Z" level=info msg="RemoveContainer for \"9bc8d831b4e8b6eeb87eafca865e85733c583ca0604e544d741decb8f082bcd7\"" Apr 24 23:39:57.161461 containerd[2128]: time="2026-04-24T23:39:57.161390280Z" level=info msg="RemoveContainer for \"9bc8d831b4e8b6eeb87eafca865e85733c583ca0604e544d741decb8f082bcd7\" returns successfully" Apr 24 23:39:57.161877 kubelet[3614]: I0424 23:39:57.161795 3614 scope.go:117] "RemoveContainer" containerID="9bc8d831b4e8b6eeb87eafca865e85733c583ca0604e544d741decb8f082bcd7" Apr 24 23:39:57.162668 containerd[2128]: time="2026-04-24T23:39:57.162334752Z" level=error msg="ContainerStatus for \"9bc8d831b4e8b6eeb87eafca865e85733c583ca0604e544d741decb8f082bcd7\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"9bc8d831b4e8b6eeb87eafca865e85733c583ca0604e544d741decb8f082bcd7\": not found" Apr 24 23:39:57.162848 kubelet[3614]: E0424 23:39:57.162559 3614 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"9bc8d831b4e8b6eeb87eafca865e85733c583ca0604e544d741decb8f082bcd7\": not found" containerID="9bc8d831b4e8b6eeb87eafca865e85733c583ca0604e544d741decb8f082bcd7" Apr 24 23:39:57.162848 kubelet[3614]: I0424 23:39:57.162604 3614 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"containerd","ID":"9bc8d831b4e8b6eeb87eafca865e85733c583ca0604e544d741decb8f082bcd7"} err="failed to get container status \"9bc8d831b4e8b6eeb87eafca865e85733c583ca0604e544d741decb8f082bcd7\": rpc error: code = NotFound desc = an error occurred when try to find container \"9bc8d831b4e8b6eeb87eafca865e85733c583ca0604e544d741decb8f082bcd7\": not found" Apr 24 23:39:57.448738 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-90b020bdb3fa553ad4be8ac87db323607ae61cb8004822eaf9bf788dd9be6ff9-rootfs.mount: Deactivated successfully. Apr 24 23:39:57.449022 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-90b020bdb3fa553ad4be8ac87db323607ae61cb8004822eaf9bf788dd9be6ff9-shm.mount: Deactivated successfully. Apr 24 23:39:57.449268 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-69f3b9ff93ee3af3c92471fecbb18366f437877e376d278f022790fde13a7a26-rootfs.mount: Deactivated successfully. Apr 24 23:39:57.449524 systemd[1]: var-lib-kubelet-pods-7169284b\x2d7636\x2d44ec\x2d822f\x2d6441435f2375-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dn8b2c.mount: Deactivated successfully. Apr 24 23:39:57.449771 systemd[1]: var-lib-kubelet-pods-005db5f8\x2d31b0\x2d41e8\x2d8d4a\x2da2214f94ee2f-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dmj6j9.mount: Deactivated successfully. Apr 24 23:39:57.450006 systemd[1]: var-lib-kubelet-pods-7169284b\x2d7636\x2d44ec\x2d822f\x2d6441435f2375-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Apr 24 23:39:57.450252 systemd[1]: var-lib-kubelet-pods-7169284b\x2d7636\x2d44ec\x2d822f\x2d6441435f2375-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Apr 24 23:39:58.495009 sshd[5199]: pam_unix(sshd:session): session closed for user core Apr 24 23:39:58.500847 systemd-logind[2096]: Session 22 logged out. Waiting for processes to exit. 
Apr 24 23:39:58.504257 systemd[1]: sshd@21-172.31.21.128:22-20.229.252.112:36262.service: Deactivated successfully. Apr 24 23:39:58.510091 systemd[1]: session-22.scope: Deactivated successfully. Apr 24 23:39:58.512976 systemd-logind[2096]: Removed session 22. Apr 24 23:39:58.596711 kubelet[3614]: I0424 23:39:58.596642 3614 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="005db5f8-31b0-41e8-8d4a-a2214f94ee2f" path="/var/lib/kubelet/pods/005db5f8-31b0-41e8-8d4a-a2214f94ee2f/volumes" Apr 24 23:39:58.598261 kubelet[3614]: I0424 23:39:58.598187 3614 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7169284b-7636-44ec-822f-6441435f2375" path="/var/lib/kubelet/pods/7169284b-7636-44ec-822f-6441435f2375/volumes" Apr 24 23:39:58.655825 systemd[1]: Started sshd@22-172.31.21.128:22-20.229.252.112:52550.service - OpenSSH per-connection server daemon (20.229.252.112:52550). Apr 24 23:39:59.094593 ntpd[2083]: Deleting interface #10 lxc_health, fe80::e4fc:d3ff:fe6b:cc0a%8#123, interface stats: received=0, sent=0, dropped=0, active_time=86 secs Apr 24 23:39:59.095183 ntpd[2083]: 24 Apr 23:39:59 ntpd[2083]: Deleting interface #10 lxc_health, fe80::e4fc:d3ff:fe6b:cc0a%8#123, interface stats: received=0, sent=0, dropped=0, active_time=86 secs Apr 24 23:39:59.216792 kubelet[3614]: I0424 23:39:59.216583 3614 setters.go:618] "Node became not ready" node="ip-172-31-21-128" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-24T23:39:59Z","lastTransitionTime":"2026-04-24T23:39:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Apr 24 23:39:59.661403 sshd[5368]: Accepted publickey for core from 20.229.252.112 port 52550 ssh2: RSA SHA256:EpOBCscCvamodiF49drNiIRDMxdv0LtYbixE7WaoRrA Apr 24 23:39:59.663947 sshd[5368]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) 
Apr 24 23:39:59.671144 systemd-logind[2096]: New session 23 of user core.
Apr 24 23:39:59.681650 systemd[1]: Started session-23.scope - Session 23 of User core.
Apr 24 23:40:00.592884 kubelet[3614]: E0424 23:40:00.592705 3614 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-674b8bbfcf-tqj2n" podUID="71d95971-fd31-45e7-a37a-2abc402c113f"
Apr 24 23:40:01.797388 kubelet[3614]: E0424 23:40:01.797178 3614 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 24 23:40:02.252957 kubelet[3614]: I0424 23:40:02.252739 3614 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/6e17987a-6201-4867-a9c5-040da7e7c959-bpf-maps\") pod \"cilium-sktd6\" (UID: \"6e17987a-6201-4867-a9c5-040da7e7c959\") " pod="kube-system/cilium-sktd6"
Apr 24 23:40:02.252957 kubelet[3614]: I0424 23:40:02.252805 3614 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/6e17987a-6201-4867-a9c5-040da7e7c959-cni-path\") pod \"cilium-sktd6\" (UID: \"6e17987a-6201-4867-a9c5-040da7e7c959\") " pod="kube-system/cilium-sktd6"
Apr 24 23:40:02.252957 kubelet[3614]: I0424 23:40:02.252846 3614 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/6e17987a-6201-4867-a9c5-040da7e7c959-host-proc-sys-kernel\") pod \"cilium-sktd6\" (UID: \"6e17987a-6201-4867-a9c5-040da7e7c959\") " pod="kube-system/cilium-sktd6"
Apr 24 23:40:02.252957 kubelet[3614]: I0424 23:40:02.252881 3614 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/6e17987a-6201-4867-a9c5-040da7e7c959-hubble-tls\") pod \"cilium-sktd6\" (UID: \"6e17987a-6201-4867-a9c5-040da7e7c959\") " pod="kube-system/cilium-sktd6"
Apr 24 23:40:02.252957 kubelet[3614]: I0424 23:40:02.252924 3614 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6e17987a-6201-4867-a9c5-040da7e7c959-etc-cni-netd\") pod \"cilium-sktd6\" (UID: \"6e17987a-6201-4867-a9c5-040da7e7c959\") " pod="kube-system/cilium-sktd6"
Apr 24 23:40:02.253356 kubelet[3614]: I0424 23:40:02.252977 3614 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6e17987a-6201-4867-a9c5-040da7e7c959-xtables-lock\") pod \"cilium-sktd6\" (UID: \"6e17987a-6201-4867-a9c5-040da7e7c959\") " pod="kube-system/cilium-sktd6"
Apr 24 23:40:02.253356 kubelet[3614]: I0424 23:40:02.253037 3614 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6e17987a-6201-4867-a9c5-040da7e7c959-cilium-config-path\") pod \"cilium-sktd6\" (UID: \"6e17987a-6201-4867-a9c5-040da7e7c959\") " pod="kube-system/cilium-sktd6"
Apr 24 23:40:02.253356 kubelet[3614]: I0424 23:40:02.253085 3614 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/6e17987a-6201-4867-a9c5-040da7e7c959-cilium-cgroup\") pod \"cilium-sktd6\" (UID: \"6e17987a-6201-4867-a9c5-040da7e7c959\") " pod="kube-system/cilium-sktd6"
Apr 24 23:40:02.253356 kubelet[3614]: I0424 23:40:02.253139 3614 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/6e17987a-6201-4867-a9c5-040da7e7c959-host-proc-sys-net\") pod \"cilium-sktd6\" (UID: \"6e17987a-6201-4867-a9c5-040da7e7c959\") " pod="kube-system/cilium-sktd6"
Apr 24 23:40:02.253356 kubelet[3614]: I0424 23:40:02.253196 3614 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/6e17987a-6201-4867-a9c5-040da7e7c959-cilium-run\") pod \"cilium-sktd6\" (UID: \"6e17987a-6201-4867-a9c5-040da7e7c959\") " pod="kube-system/cilium-sktd6"
Apr 24 23:40:02.253356 kubelet[3614]: I0424 23:40:02.253231 3614 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6e17987a-6201-4867-a9c5-040da7e7c959-lib-modules\") pod \"cilium-sktd6\" (UID: \"6e17987a-6201-4867-a9c5-040da7e7c959\") " pod="kube-system/cilium-sktd6"
Apr 24 23:40:02.253664 kubelet[3614]: I0424 23:40:02.253265 3614 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/6e17987a-6201-4867-a9c5-040da7e7c959-clustermesh-secrets\") pod \"cilium-sktd6\" (UID: \"6e17987a-6201-4867-a9c5-040da7e7c959\") " pod="kube-system/cilium-sktd6"
Apr 24 23:40:02.253664 kubelet[3614]: I0424 23:40:02.253345 3614 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zcft5\" (UniqueName: \"kubernetes.io/projected/6e17987a-6201-4867-a9c5-040da7e7c959-kube-api-access-zcft5\") pod \"cilium-sktd6\" (UID: \"6e17987a-6201-4867-a9c5-040da7e7c959\") " pod="kube-system/cilium-sktd6"
Apr 24 23:40:02.253664 kubelet[3614]: I0424 23:40:02.253380 3614 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/6e17987a-6201-4867-a9c5-040da7e7c959-hostproc\") pod \"cilium-sktd6\" (UID: \"6e17987a-6201-4867-a9c5-040da7e7c959\") " pod="kube-system/cilium-sktd6"
Apr 24 23:40:02.253664 kubelet[3614]: I0424 23:40:02.253430 3614 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/6e17987a-6201-4867-a9c5-040da7e7c959-cilium-ipsec-secrets\") pod \"cilium-sktd6\" (UID: \"6e17987a-6201-4867-a9c5-040da7e7c959\") " pod="kube-system/cilium-sktd6"
Apr 24 23:40:02.287440 sshd[5368]: pam_unix(sshd:session): session closed for user core
Apr 24 23:40:02.295398 systemd-logind[2096]: Session 23 logged out. Waiting for processes to exit.
Apr 24 23:40:02.295967 systemd[1]: sshd@22-172.31.21.128:22-20.229.252.112:52550.service: Deactivated successfully.
Apr 24 23:40:02.305791 systemd[1]: session-23.scope: Deactivated successfully.
Apr 24 23:40:02.311116 systemd-logind[2096]: Removed session 23.
Apr 24 23:40:02.450091 containerd[2128]: time="2026-04-24T23:40:02.450040195Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-sktd6,Uid:6e17987a-6201-4867-a9c5-040da7e7c959,Namespace:kube-system,Attempt:0,}"
Apr 24 23:40:02.462780 systemd[1]: Started sshd@23-172.31.21.128:22-20.229.252.112:52566.service - OpenSSH per-connection server daemon (20.229.252.112:52566).
Apr 24 23:40:02.503396 containerd[2128]: time="2026-04-24T23:40:02.503115955Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 24 23:40:02.503396 containerd[2128]: time="2026-04-24T23:40:02.503233711Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 24 23:40:02.503756 containerd[2128]: time="2026-04-24T23:40:02.503659303Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 24 23:40:02.504989 containerd[2128]: time="2026-04-24T23:40:02.504781795Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 24 23:40:02.583355 containerd[2128]: time="2026-04-24T23:40:02.583234351Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-sktd6,Uid:6e17987a-6201-4867-a9c5-040da7e7c959,Namespace:kube-system,Attempt:0,} returns sandbox id \"fb252bdaaa797d6d079fec34eb03fed76195ddab42460ffa0f1c0f66a3c49ab5\""
Apr 24 23:40:02.596043 kubelet[3614]: E0424 23:40:02.595045 3614 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-674b8bbfcf-tqj2n" podUID="71d95971-fd31-45e7-a37a-2abc402c113f"
Apr 24 23:40:02.597796 containerd[2128]: time="2026-04-24T23:40:02.597476395Z" level=info msg="CreateContainer within sandbox \"fb252bdaaa797d6d079fec34eb03fed76195ddab42460ffa0f1c0f66a3c49ab5\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Apr 24 23:40:02.620410 containerd[2128]: time="2026-04-24T23:40:02.620346572Z" level=info msg="CreateContainer within sandbox \"fb252bdaaa797d6d079fec34eb03fed76195ddab42460ffa0f1c0f66a3c49ab5\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"50556c70bdefdfc31058edeb310bc4fa57bce94a7f7f3d3c7f96773bb4b3a516\""
Apr 24 23:40:02.622183 containerd[2128]: time="2026-04-24T23:40:02.622114076Z" level=info msg="StartContainer for \"50556c70bdefdfc31058edeb310bc4fa57bce94a7f7f3d3c7f96773bb4b3a516\""
Apr 24 23:40:02.712390 containerd[2128]: time="2026-04-24T23:40:02.712195628Z" level=info msg="StartContainer for \"50556c70bdefdfc31058edeb310bc4fa57bce94a7f7f3d3c7f96773bb4b3a516\" returns successfully"
Apr 24 23:40:02.786105 containerd[2128]: time="2026-04-24T23:40:02.785902988Z" level=info msg="shim disconnected" id=50556c70bdefdfc31058edeb310bc4fa57bce94a7f7f3d3c7f96773bb4b3a516 namespace=k8s.io
Apr 24 23:40:02.786105 containerd[2128]: time="2026-04-24T23:40:02.785980964Z" level=warning msg="cleaning up after shim disconnected" id=50556c70bdefdfc31058edeb310bc4fa57bce94a7f7f3d3c7f96773bb4b3a516 namespace=k8s.io
Apr 24 23:40:02.786105 containerd[2128]: time="2026-04-24T23:40:02.786003200Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 24 23:40:03.125515 containerd[2128]: time="2026-04-24T23:40:03.124307598Z" level=info msg="CreateContainer within sandbox \"fb252bdaaa797d6d079fec34eb03fed76195ddab42460ffa0f1c0f66a3c49ab5\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Apr 24 23:40:03.149727 containerd[2128]: time="2026-04-24T23:40:03.149395182Z" level=info msg="CreateContainer within sandbox \"fb252bdaaa797d6d079fec34eb03fed76195ddab42460ffa0f1c0f66a3c49ab5\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"4bf3cf743f5097eb2587aae94e9df94b3e947466c7386488eb61779f414bc727\""
Apr 24 23:40:03.152351 containerd[2128]: time="2026-04-24T23:40:03.151493754Z" level=info msg="StartContainer for \"4bf3cf743f5097eb2587aae94e9df94b3e947466c7386488eb61779f414bc727\""
Apr 24 23:40:03.269664 containerd[2128]: time="2026-04-24T23:40:03.269062075Z" level=info msg="StartContainer for \"4bf3cf743f5097eb2587aae94e9df94b3e947466c7386488eb61779f414bc727\" returns successfully"
Apr 24 23:40:03.325341 containerd[2128]: time="2026-04-24T23:40:03.325125343Z" level=info msg="shim disconnected" id=4bf3cf743f5097eb2587aae94e9df94b3e947466c7386488eb61779f414bc727 namespace=k8s.io
Apr 24 23:40:03.325341 containerd[2128]: time="2026-04-24T23:40:03.325202935Z" level=warning msg="cleaning up after shim disconnected" id=4bf3cf743f5097eb2587aae94e9df94b3e947466c7386488eb61779f414bc727 namespace=k8s.io
Apr 24 23:40:03.325341 containerd[2128]: time="2026-04-24T23:40:03.325222843Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 24 23:40:03.487693 sshd[5384]: Accepted publickey for core from 20.229.252.112 port 52566 ssh2: RSA SHA256:EpOBCscCvamodiF49drNiIRDMxdv0LtYbixE7WaoRrA
Apr 24 23:40:03.490396 sshd[5384]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 24 23:40:03.498122 systemd-logind[2096]: New session 24 of user core.
Apr 24 23:40:03.511079 systemd[1]: Started session-24.scope - Session 24 of User core.
Apr 24 23:40:04.130355 containerd[2128]: time="2026-04-24T23:40:04.130222519Z" level=info msg="CreateContainer within sandbox \"fb252bdaaa797d6d079fec34eb03fed76195ddab42460ffa0f1c0f66a3c49ab5\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Apr 24 23:40:04.178928 containerd[2128]: time="2026-04-24T23:40:04.178737727Z" level=info msg="CreateContainer within sandbox \"fb252bdaaa797d6d079fec34eb03fed76195ddab42460ffa0f1c0f66a3c49ab5\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"d20a1d9766c898e1f0d705ea4949676755c484851338827c54643f6c4a0fa560\""
Apr 24 23:40:04.182873 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount863436308.mount: Deactivated successfully.
Apr 24 23:40:04.184164 containerd[2128]: time="2026-04-24T23:40:04.183400411Z" level=info msg="StartContainer for \"d20a1d9766c898e1f0d705ea4949676755c484851338827c54643f6c4a0fa560\""
Apr 24 23:40:04.183183 sshd[5384]: pam_unix(sshd:session): session closed for user core
Apr 24 23:40:04.210928 systemd[1]: sshd@23-172.31.21.128:22-20.229.252.112:52566.service: Deactivated successfully.
Apr 24 23:40:04.232478 systemd[1]: session-24.scope: Deactivated successfully.
Apr 24 23:40:04.237961 systemd-logind[2096]: Session 24 logged out. Waiting for processes to exit.
Apr 24 23:40:04.247326 systemd-logind[2096]: Removed session 24.
Apr 24 23:40:04.369006 systemd[1]: Started sshd@24-172.31.21.128:22-20.229.252.112:52576.service - OpenSSH per-connection server daemon (20.229.252.112:52576).
Apr 24 23:40:04.435687 containerd[2128]: time="2026-04-24T23:40:04.435632709Z" level=info msg="StartContainer for \"d20a1d9766c898e1f0d705ea4949676755c484851338827c54643f6c4a0fa560\" returns successfully"
Apr 24 23:40:04.478028 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d20a1d9766c898e1f0d705ea4949676755c484851338827c54643f6c4a0fa560-rootfs.mount: Deactivated successfully.
Apr 24 23:40:04.488826 containerd[2128]: time="2026-04-24T23:40:04.488751885Z" level=info msg="shim disconnected" id=d20a1d9766c898e1f0d705ea4949676755c484851338827c54643f6c4a0fa560 namespace=k8s.io
Apr 24 23:40:04.489368 containerd[2128]: time="2026-04-24T23:40:04.489093045Z" level=warning msg="cleaning up after shim disconnected" id=d20a1d9766c898e1f0d705ea4949676755c484851338827c54643f6c4a0fa560 namespace=k8s.io
Apr 24 23:40:04.489368 containerd[2128]: time="2026-04-24T23:40:04.489121677Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 24 23:40:04.596174 kubelet[3614]: E0424 23:40:04.595832 3614 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-674b8bbfcf-tqj2n" podUID="71d95971-fd31-45e7-a37a-2abc402c113f"
Apr 24 23:40:05.137073 containerd[2128]: time="2026-04-24T23:40:05.137002472Z" level=info msg="CreateContainer within sandbox \"fb252bdaaa797d6d079fec34eb03fed76195ddab42460ffa0f1c0f66a3c49ab5\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Apr 24 23:40:05.170541 containerd[2128]: time="2026-04-24T23:40:05.170462432Z" level=info msg="CreateContainer within sandbox \"fb252bdaaa797d6d079fec34eb03fed76195ddab42460ffa0f1c0f66a3c49ab5\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"ae99afb3a702ac2bf1f45f28d8d6420ac22906750e01ff184365e2257503bdee\""
Apr 24 23:40:05.172748 containerd[2128]: time="2026-04-24T23:40:05.172669892Z" level=info msg="StartContainer for \"ae99afb3a702ac2bf1f45f28d8d6420ac22906750e01ff184365e2257503bdee\""
Apr 24 23:40:05.297574 containerd[2128]: time="2026-04-24T23:40:05.296735361Z" level=info msg="StartContainer for \"ae99afb3a702ac2bf1f45f28d8d6420ac22906750e01ff184365e2257503bdee\" returns successfully"
Apr 24 23:40:05.347013 containerd[2128]: time="2026-04-24T23:40:05.346933629Z" level=info msg="shim disconnected" id=ae99afb3a702ac2bf1f45f28d8d6420ac22906750e01ff184365e2257503bdee namespace=k8s.io
Apr 24 23:40:05.347013 containerd[2128]: time="2026-04-24T23:40:05.347009277Z" level=warning msg="cleaning up after shim disconnected" id=ae99afb3a702ac2bf1f45f28d8d6420ac22906750e01ff184365e2257503bdee namespace=k8s.io
Apr 24 23:40:05.347563 containerd[2128]: time="2026-04-24T23:40:05.347032029Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 24 23:40:05.418815 sshd[5580]: Accepted publickey for core from 20.229.252.112 port 52576 ssh2: RSA SHA256:EpOBCscCvamodiF49drNiIRDMxdv0LtYbixE7WaoRrA
Apr 24 23:40:05.422114 sshd[5580]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 24 23:40:05.432854 systemd-logind[2096]: New session 25 of user core.
Apr 24 23:40:05.441917 systemd[1]: Started session-25.scope - Session 25 of User core.
Apr 24 23:40:05.477123 systemd[1]: run-containerd-runc-k8s.io-ae99afb3a702ac2bf1f45f28d8d6420ac22906750e01ff184365e2257503bdee-runc.hSwCZA.mount: Deactivated successfully.
Apr 24 23:40:05.477723 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ae99afb3a702ac2bf1f45f28d8d6420ac22906750e01ff184365e2257503bdee-rootfs.mount: Deactivated successfully.
Apr 24 23:40:06.145967 containerd[2128]: time="2026-04-24T23:40:06.145786329Z" level=info msg="CreateContainer within sandbox \"fb252bdaaa797d6d079fec34eb03fed76195ddab42460ffa0f1c0f66a3c49ab5\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Apr 24 23:40:06.176454 containerd[2128]: time="2026-04-24T23:40:06.176379237Z" level=info msg="CreateContainer within sandbox \"fb252bdaaa797d6d079fec34eb03fed76195ddab42460ffa0f1c0f66a3c49ab5\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"60d916c6e54edca2120f5479f342dc77d19ab13aa492e5f9de85115eed5a0794\""
Apr 24 23:40:06.178087 containerd[2128]: time="2026-04-24T23:40:06.177461169Z" level=info msg="StartContainer for \"60d916c6e54edca2120f5479f342dc77d19ab13aa492e5f9de85115eed5a0794\""
Apr 24 23:40:06.297687 containerd[2128]: time="2026-04-24T23:40:06.297431170Z" level=info msg="StartContainer for \"60d916c6e54edca2120f5479f342dc77d19ab13aa492e5f9de85115eed5a0794\" returns successfully"
Apr 24 23:40:06.480572 systemd[1]: run-containerd-runc-k8s.io-60d916c6e54edca2120f5479f342dc77d19ab13aa492e5f9de85115eed5a0794-runc.Qh1Skw.mount: Deactivated successfully.
Apr 24 23:40:06.594141 kubelet[3614]: E0424 23:40:06.593338 3614 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-674b8bbfcf-tqj2n" podUID="71d95971-fd31-45e7-a37a-2abc402c113f"
Apr 24 23:40:06.755265 update_engine[2098]: I20260424 23:40:06.753386 2098 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs
Apr 24 23:40:06.755265 update_engine[2098]: I20260424 23:40:06.753450 2098 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs
Apr 24 23:40:06.755265 update_engine[2098]: I20260424 23:40:06.753871 2098 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs
Apr 24 23:40:06.759318 update_engine[2098]: I20260424 23:40:06.758618 2098 omaha_request_params.cc:62] Current group set to lts
Apr 24 23:40:06.759318 update_engine[2098]: I20260424 23:40:06.759145 2098 update_attempter.cc:499] Already updated boot flags. Skipping.
Apr 24 23:40:06.759318 update_engine[2098]: I20260424 23:40:06.759244 2098 update_attempter.cc:643] Scheduling an action processor start.
Apr 24 23:40:06.760920 update_engine[2098]: I20260424 23:40:06.759851 2098 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Apr 24 23:40:06.760920 update_engine[2098]: I20260424 23:40:06.760010 2098 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs
Apr 24 23:40:06.760920 update_engine[2098]: I20260424 23:40:06.760128 2098 omaha_request_action.cc:271] Posting an Omaha request to disabled
Apr 24 23:40:06.760920 update_engine[2098]: I20260424 23:40:06.760149 2098 omaha_request_action.cc:272] Request:
Apr 24 23:40:06.760920 update_engine[2098]:
Apr 24 23:40:06.760920 update_engine[2098]:
Apr 24 23:40:06.760920 update_engine[2098]:
Apr 24 23:40:06.760920 update_engine[2098]:
Apr 24 23:40:06.760920 update_engine[2098]:
Apr 24 23:40:06.760920 update_engine[2098]:
Apr 24 23:40:06.760920 update_engine[2098]:
Apr 24 23:40:06.760920 update_engine[2098]:
Apr 24 23:40:06.760920 update_engine[2098]: I20260424 23:40:06.760166 2098 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Apr 24 23:40:06.762954 locksmithd[2156]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0
Apr 24 23:40:06.772667 update_engine[2098]: I20260424 23:40:06.771811 2098 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Apr 24 23:40:06.774146 update_engine[2098]: I20260424 23:40:06.774032 2098 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Apr 24 23:40:06.801344 update_engine[2098]: E20260424 23:40:06.800880 2098 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Apr 24 23:40:06.801344 update_engine[2098]: I20260424 23:40:06.801007 2098 libcurl_http_fetcher.cc:283] No HTTP response, retry 1
Apr 24 23:40:07.061917 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce))
Apr 24 23:40:07.176538 kubelet[3614]: I0424 23:40:07.176205 3614 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-sktd6" podStartSLOduration=5.176144842 podStartE2EDuration="5.176144842s" podCreationTimestamp="2026-04-24 23:40:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-24 23:40:07.176104738 +0000 UTC m=+130.872451479" watchObservedRunningTime="2026-04-24 23:40:07.176144842 +0000 UTC m=+130.872491475"
Apr 24 23:40:11.329497 systemd-networkd[1690]: lxc_health: Link UP
Apr 24 23:40:11.335511 systemd-networkd[1690]: lxc_health: Gained carrier
Apr 24 23:40:11.338727 (udev-worker)[6237]: Network interface NamePolicy= disabled on kernel command line.
Apr 24 23:40:12.770406 systemd-networkd[1690]: lxc_health: Gained IPv6LL
Apr 24 23:40:15.094680 ntpd[2083]: Listen normally on 13 lxc_health [fe80::b83f:45ff:fe0a:fdba%14]:123
Apr 24 23:40:15.096870 ntpd[2083]: 24 Apr 23:40:15 ntpd[2083]: Listen normally on 13 lxc_health [fe80::b83f:45ff:fe0a:fdba%14]:123
Apr 24 23:40:15.297841 kubelet[3614]: E0424 23:40:15.297231 3614 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:56236->127.0.0.1:43903: write tcp 127.0.0.1:56236->127.0.0.1:43903: write: broken pipe
Apr 24 23:40:16.756351 update_engine[2098]: I20260424 23:40:16.756006 2098 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Apr 24 23:40:16.757530 update_engine[2098]: I20260424 23:40:16.757121 2098 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Apr 24 23:40:16.757530 update_engine[2098]: I20260424 23:40:16.757463 2098 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Apr 24 23:40:16.758863 update_engine[2098]: E20260424 23:40:16.758718 2098 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Apr 24 23:40:16.758863 update_engine[2098]: I20260424 23:40:16.758819 2098 libcurl_http_fetcher.cc:283] No HTTP response, retry 2
Apr 24 23:40:17.733662 sshd[5580]: pam_unix(sshd:session): session closed for user core
Apr 24 23:40:17.744536 systemd-logind[2096]: Session 25 logged out. Waiting for processes to exit.
Apr 24 23:40:17.752386 systemd[1]: sshd@24-172.31.21.128:22-20.229.252.112:52576.service: Deactivated successfully.
Apr 24 23:40:17.761656 systemd[1]: session-25.scope: Deactivated successfully.
Apr 24 23:40:17.766077 systemd-logind[2096]: Removed session 25.