Apr 13 19:23:55.271112 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083]
Apr 13 19:23:55.271209 kernel: Linux version 6.6.127-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Mon Apr 13 18:04:44 -00 2026
Apr 13 19:23:55.271238 kernel: KASLR disabled due to lack of seed
Apr 13 19:23:55.271255 kernel: efi: EFI v2.7 by EDK II
Apr 13 19:23:55.271272 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7b001a98 MEMRESERVE=0x7852ee18
Apr 13 19:23:55.271288 kernel: ACPI: Early table checksum verification disabled
Apr 13 19:23:55.271306 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON)
Apr 13 19:23:55.271322 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001 01000013)
Apr 13 19:23:55.271339 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001)
Apr 13 19:23:55.271354 kernel: ACPI: DSDT 0x0000000078640000 0013D2 (v02 AMAZON AMZNDSDT 00000001 AMZN 00000001)
Apr 13 19:23:55.271375 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Apr 13 19:23:55.271391 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001)
Apr 13 19:23:55.271407 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001)
Apr 13 19:23:55.271424 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001)
Apr 13 19:23:55.271443 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Apr 13 19:23:55.271464 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001)
Apr 13 19:23:55.271482 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001)
Apr 13 19:23:55.271498 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200
Apr 13 19:23:55.271515 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200')
Apr 13 19:23:55.271532 kernel: printk: bootconsole [uart0] enabled
Apr 13 19:23:55.271549 kernel: NUMA: Failed to initialise from firmware
Apr 13 19:23:55.271566 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff]
Apr 13 19:23:55.271582 kernel: NUMA: NODE_DATA [mem 0x4b583f800-0x4b5844fff]
Apr 13 19:23:55.271599 kernel: Zone ranges:
Apr 13 19:23:55.271616 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff]
Apr 13 19:23:55.271632 kernel: DMA32 empty
Apr 13 19:23:55.271653 kernel: Normal [mem 0x0000000100000000-0x00000004b5ffffff]
Apr 13 19:23:55.271671 kernel: Movable zone start for each node
Apr 13 19:23:55.271687 kernel: Early memory node ranges
Apr 13 19:23:55.271704 kernel: node 0: [mem 0x0000000040000000-0x000000007862ffff]
Apr 13 19:23:55.271721 kernel: node 0: [mem 0x0000000078630000-0x000000007863ffff]
Apr 13 19:23:55.271738 kernel: node 0: [mem 0x0000000078640000-0x00000000786effff]
Apr 13 19:23:55.271755 kernel: node 0: [mem 0x00000000786f0000-0x000000007872ffff]
Apr 13 19:23:55.271772 kernel: node 0: [mem 0x0000000078730000-0x000000007bbfffff]
Apr 13 19:23:55.271788 kernel: node 0: [mem 0x000000007bc00000-0x000000007bfdffff]
Apr 13 19:23:55.271805 kernel: node 0: [mem 0x000000007bfe0000-0x000000007fffffff]
Apr 13 19:23:55.271822 kernel: node 0: [mem 0x0000000400000000-0x00000004b5ffffff]
Apr 13 19:23:55.271838 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000004b5ffffff]
Apr 13 19:23:55.271860 kernel: On node 0, zone Normal: 8192 pages in unavailable ranges
Apr 13 19:23:55.271877 kernel: psci: probing for conduit method from ACPI.
Apr 13 19:23:55.271901 kernel: psci: PSCIv1.0 detected in firmware.
Apr 13 19:23:55.271919 kernel: psci: Using standard PSCI v0.2 function IDs
Apr 13 19:23:55.271937 kernel: psci: Trusted OS migration not required
Apr 13 19:23:55.271958 kernel: psci: SMC Calling Convention v1.1
Apr 13 19:23:55.271976 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000001)
Apr 13 19:23:55.271994 kernel: percpu: Embedded 30 pages/cpu s85736 r8192 d28952 u122880
Apr 13 19:23:55.272012 kernel: pcpu-alloc: s85736 r8192 d28952 u122880 alloc=30*4096
Apr 13 19:23:55.272029 kernel: pcpu-alloc: [0] 0 [0] 1
Apr 13 19:23:55.272047 kernel: Detected PIPT I-cache on CPU0
Apr 13 19:23:55.272065 kernel: CPU features: detected: GIC system register CPU interface
Apr 13 19:23:55.272083 kernel: CPU features: detected: Spectre-v2
Apr 13 19:23:55.272100 kernel: CPU features: detected: Spectre-v3a
Apr 13 19:23:55.272118 kernel: CPU features: detected: Spectre-BHB
Apr 13 19:23:55.272137 kernel: CPU features: detected: ARM erratum 1742098
Apr 13 19:23:55.272181 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923
Apr 13 19:23:55.274239 kernel: alternatives: applying boot alternatives
Apr 13 19:23:55.274272 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=06a955818c1cb85215c4fc3bbca340081bcaba3fb92fe20a32668615ff23854b
Apr 13 19:23:55.274292 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Apr 13 19:23:55.274310 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Apr 13 19:23:55.274329 kernel: Fallback order for Node 0: 0
Apr 13 19:23:55.274346 kernel: Built 1 zonelists, mobility grouping on. Total pages: 991872
Apr 13 19:23:55.274364 kernel: Policy zone: Normal
Apr 13 19:23:55.274382 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Apr 13 19:23:55.274400 kernel: software IO TLB: area num 2.
Apr 13 19:23:55.274418 kernel: software IO TLB: mapped [mem 0x000000007c000000-0x0000000080000000] (64MB)
Apr 13 19:23:55.274446 kernel: Memory: 3820096K/4030464K available (10304K kernel code, 2180K rwdata, 8116K rodata, 39424K init, 897K bss, 210368K reserved, 0K cma-reserved)
Apr 13 19:23:55.274465 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Apr 13 19:23:55.274483 kernel: rcu: Preemptible hierarchical RCU implementation.
Apr 13 19:23:55.274502 kernel: rcu: RCU event tracing is enabled.
Apr 13 19:23:55.274521 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Apr 13 19:23:55.274539 kernel: Trampoline variant of Tasks RCU enabled.
Apr 13 19:23:55.274558 kernel: Tracing variant of Tasks RCU enabled.
Apr 13 19:23:55.274576 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Apr 13 19:23:55.274595 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Apr 13 19:23:55.274613 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Apr 13 19:23:55.274632 kernel: GICv3: 96 SPIs implemented
Apr 13 19:23:55.274654 kernel: GICv3: 0 Extended SPIs implemented
Apr 13 19:23:55.274672 kernel: Root IRQ handler: gic_handle_irq
Apr 13 19:23:55.274690 kernel: GICv3: GICv3 features: 16 PPIs
Apr 13 19:23:55.274707 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000
Apr 13 19:23:55.274725 kernel: ITS [mem 0x10080000-0x1009ffff]
Apr 13 19:23:55.274743 kernel: ITS@0x0000000010080000: allocated 8192 Devices @4000b0000 (indirect, esz 8, psz 64K, shr 1)
Apr 13 19:23:55.274763 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @4000c0000 (flat, esz 8, psz 64K, shr 1)
Apr 13 19:23:55.274781 kernel: GICv3: using LPI property table @0x00000004000d0000
Apr 13 19:23:55.274800 kernel: ITS: Using hypervisor restricted LPI range [128]
Apr 13 19:23:55.274818 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000004000e0000
Apr 13 19:23:55.274947 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Apr 13 19:23:55.275702 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt).
Apr 13 19:23:55.276432 kernel: clocksource: arch_sys_counter: mask: 0x1ffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns
Apr 13 19:23:55.276790 kernel: sched_clock: 57 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns
Apr 13 19:23:55.277136 kernel: Console: colour dummy device 80x25
Apr 13 19:23:55.277243 kernel: printk: console [tty1] enabled
Apr 13 19:23:55.277264 kernel: ACPI: Core revision 20230628
Apr 13 19:23:55.277283 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 166.66 BogoMIPS (lpj=83333)
Apr 13 19:23:55.277302 kernel: pid_max: default: 32768 minimum: 301
Apr 13 19:23:55.277321 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Apr 13 19:23:55.277339 kernel: landlock: Up and running.
Apr 13 19:23:55.277363 kernel: SELinux: Initializing.
Apr 13 19:23:55.277382 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 13 19:23:55.277401 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 13 19:23:55.277419 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 13 19:23:55.277438 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 13 19:23:55.277456 kernel: rcu: Hierarchical SRCU implementation.
Apr 13 19:23:55.277475 kernel: rcu: Max phase no-delay instances is 400.
Apr 13 19:23:55.277493 kernel: Platform MSI: ITS@0x10080000 domain created
Apr 13 19:23:55.277511 kernel: PCI/MSI: ITS@0x10080000 domain created
Apr 13 19:23:55.277533 kernel: Remapping and enabling EFI services.
Apr 13 19:23:55.277551 kernel: smp: Bringing up secondary CPUs ...
Apr 13 19:23:55.277570 kernel: Detected PIPT I-cache on CPU1
Apr 13 19:23:55.277588 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000
Apr 13 19:23:55.277607 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000004000f0000
Apr 13 19:23:55.277625 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083]
Apr 13 19:23:55.277643 kernel: smp: Brought up 1 node, 2 CPUs
Apr 13 19:23:55.277661 kernel: SMP: Total of 2 processors activated.
Apr 13 19:23:55.277679 kernel: CPU features: detected: 32-bit EL0 Support
Apr 13 19:23:55.277703 kernel: CPU features: detected: 32-bit EL1 Support
Apr 13 19:23:55.277722 kernel: CPU features: detected: CRC32 instructions
Apr 13 19:23:55.277740 kernel: CPU: All CPU(s) started at EL1
Apr 13 19:23:55.277769 kernel: alternatives: applying system-wide alternatives
Apr 13 19:23:55.277791 kernel: devtmpfs: initialized
Apr 13 19:23:55.277810 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Apr 13 19:23:55.277829 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Apr 13 19:23:55.277847 kernel: pinctrl core: initialized pinctrl subsystem
Apr 13 19:23:55.277866 kernel: SMBIOS 3.0.0 present.
Apr 13 19:23:55.277889 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018
Apr 13 19:23:55.277908 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Apr 13 19:23:55.277927 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Apr 13 19:23:55.277946 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Apr 13 19:23:55.277965 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Apr 13 19:23:55.277984 kernel: audit: initializing netlink subsys (disabled)
Apr 13 19:23:55.278003 kernel: audit: type=2000 audit(0.287:1): state=initialized audit_enabled=0 res=1
Apr 13 19:23:55.278022 kernel: thermal_sys: Registered thermal governor 'step_wise'
Apr 13 19:23:55.278046 kernel: cpuidle: using governor menu
Apr 13 19:23:55.278064 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Apr 13 19:23:55.278083 kernel: ASID allocator initialised with 65536 entries
Apr 13 19:23:55.278102 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Apr 13 19:23:55.278121 kernel: Serial: AMBA PL011 UART driver
Apr 13 19:23:55.278139 kernel: Modules: 17488 pages in range for non-PLT usage
Apr 13 19:23:55.281069 kernel: Modules: 509008 pages in range for PLT usage
Apr 13 19:23:55.281113 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Apr 13 19:23:55.281135 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Apr 13 19:23:55.281185 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Apr 13 19:23:55.281207 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Apr 13 19:23:55.281226 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Apr 13 19:23:55.281245 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Apr 13 19:23:55.281264 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Apr 13 19:23:55.281283 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Apr 13 19:23:55.281302 kernel: ACPI: Added _OSI(Module Device)
Apr 13 19:23:55.281321 kernel: ACPI: Added _OSI(Processor Device)
Apr 13 19:23:55.281339 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Apr 13 19:23:55.281365 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Apr 13 19:23:55.281384 kernel: ACPI: Interpreter enabled
Apr 13 19:23:55.281402 kernel: ACPI: Using GIC for interrupt routing
Apr 13 19:23:55.281421 kernel: ACPI: MCFG table detected, 1 entries
Apr 13 19:23:55.281441 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00])
Apr 13 19:23:55.281878 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Apr 13 19:23:55.282768 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Apr 13 19:23:55.283054 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Apr 13 19:23:55.283347 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x200fffff] reserved by PNP0C02:00
Apr 13 19:23:55.283613 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x200fffff] for [bus 00]
Apr 13 19:23:55.283641 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io 0x0000-0xffff window]
Apr 13 19:23:55.283661 kernel: acpiphp: Slot [1] registered
Apr 13 19:23:55.283680 kernel: acpiphp: Slot [2] registered
Apr 13 19:23:55.283699 kernel: acpiphp: Slot [3] registered
Apr 13 19:23:55.283717 kernel: acpiphp: Slot [4] registered
Apr 13 19:23:55.283736 kernel: acpiphp: Slot [5] registered
Apr 13 19:23:55.283761 kernel: acpiphp: Slot [6] registered
Apr 13 19:23:55.283781 kernel: acpiphp: Slot [7] registered
Apr 13 19:23:55.283800 kernel: acpiphp: Slot [8] registered
Apr 13 19:23:55.283818 kernel: acpiphp: Slot [9] registered
Apr 13 19:23:55.283836 kernel: acpiphp: Slot [10] registered
Apr 13 19:23:55.283855 kernel: acpiphp: Slot [11] registered
Apr 13 19:23:55.283895 kernel: acpiphp: Slot [12] registered
Apr 13 19:23:55.283920 kernel: acpiphp: Slot [13] registered
Apr 13 19:23:55.283939 kernel: acpiphp: Slot [14] registered
Apr 13 19:23:55.283957 kernel: acpiphp: Slot [15] registered
Apr 13 19:23:55.283981 kernel: acpiphp: Slot [16] registered
Apr 13 19:23:55.284000 kernel: acpiphp: Slot [17] registered
Apr 13 19:23:55.284018 kernel: acpiphp: Slot [18] registered
Apr 13 19:23:55.284037 kernel: acpiphp: Slot [19] registered
Apr 13 19:23:55.284055 kernel: acpiphp: Slot [20] registered
Apr 13 19:23:55.284074 kernel: acpiphp: Slot [21] registered
Apr 13 19:23:55.284092 kernel: acpiphp: Slot [22] registered
Apr 13 19:23:55.284111 kernel: acpiphp: Slot [23] registered
Apr 13 19:23:55.284129 kernel: acpiphp: Slot [24] registered
Apr 13 19:23:55.284152 kernel: acpiphp: Slot [25] registered
Apr 13 19:23:55.284203 kernel: acpiphp: Slot [26] registered
Apr 13 19:23:55.284222 kernel: acpiphp: Slot [27] registered
Apr 13 19:23:55.284241 kernel: acpiphp: Slot [28] registered
Apr 13 19:23:55.284286 kernel: acpiphp: Slot [29] registered
Apr 13 19:23:55.284305 kernel: acpiphp: Slot [30] registered
Apr 13 19:23:55.284324 kernel: acpiphp: Slot [31] registered
Apr 13 19:23:55.284342 kernel: PCI host bridge to bus 0000:00
Apr 13 19:23:55.284585 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window]
Apr 13 19:23:55.284831 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Apr 13 19:23:55.285073 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window]
Apr 13 19:23:55.285666 kernel: pci_bus 0000:00: root bus resource [bus 00]
Apr 13 19:23:55.285940 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000
Apr 13 19:23:55.286216 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003
Apr 13 19:23:55.286458 kernel: pci 0000:00:01.0: reg 0x10: [mem 0x80118000-0x80118fff]
Apr 13 19:23:55.286707 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
Apr 13 19:23:55.289314 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80114000-0x80117fff]
Apr 13 19:23:55.289568 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold
Apr 13 19:23:55.289814 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
Apr 13 19:23:55.290033 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80110000-0x80113fff]
Apr 13 19:23:55.291337 kernel: pci 0000:00:05.0: reg 0x18: [mem 0x80000000-0x800fffff pref]
Apr 13 19:23:55.291564 kernel: pci 0000:00:05.0: reg 0x20: [mem 0x80100000-0x8010ffff]
Apr 13 19:23:55.291783 kernel: pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold
Apr 13 19:23:55.291975 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window]
Apr 13 19:23:55.294626 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Apr 13 19:23:55.294860 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window]
Apr 13 19:23:55.294888 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Apr 13 19:23:55.294909 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Apr 13 19:23:55.294928 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Apr 13 19:23:55.294947 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Apr 13 19:23:55.294977 kernel: iommu: Default domain type: Translated
Apr 13 19:23:55.294996 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Apr 13 19:23:55.295015 kernel: efivars: Registered efivars operations
Apr 13 19:23:55.295034 kernel: vgaarb: loaded
Apr 13 19:23:55.295053 kernel: clocksource: Switched to clocksource arch_sys_counter
Apr 13 19:23:55.295072 kernel: VFS: Disk quotas dquot_6.6.0
Apr 13 19:23:55.295091 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Apr 13 19:23:55.295110 kernel: pnp: PnP ACPI init
Apr 13 19:23:55.295517 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved
Apr 13 19:23:55.295556 kernel: pnp: PnP ACPI: found 1 devices
Apr 13 19:23:55.295575 kernel: NET: Registered PF_INET protocol family
Apr 13 19:23:55.295594 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Apr 13 19:23:55.295614 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Apr 13 19:23:55.295633 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Apr 13 19:23:55.295652 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Apr 13 19:23:55.295671 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Apr 13 19:23:55.295690 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Apr 13 19:23:55.295713 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Apr 13 19:23:55.295733 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Apr 13 19:23:55.295752 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Apr 13 19:23:55.295770 kernel: PCI: CLS 0 bytes, default 64
Apr 13 19:23:55.295789 kernel: kvm [1]: HYP mode not available
Apr 13 19:23:55.295807 kernel: Initialise system trusted keyrings
Apr 13 19:23:55.295826 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Apr 13 19:23:55.295845 kernel: Key type asymmetric registered
Apr 13 19:23:55.295864 kernel: Asymmetric key parser 'x509' registered
Apr 13 19:23:55.295887 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Apr 13 19:23:55.295907 kernel: io scheduler mq-deadline registered
Apr 13 19:23:55.295925 kernel: io scheduler kyber registered
Apr 13 19:23:55.295944 kernel: io scheduler bfq registered
Apr 13 19:23:55.296247 kernel: pl061_gpio ARMH0061:00: PL061 GPIO chip registered
Apr 13 19:23:55.296277 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Apr 13 19:23:55.296296 kernel: ACPI: button: Power Button [PWRB]
Apr 13 19:23:55.296316 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input1
Apr 13 19:23:55.296335 kernel: ACPI: button: Sleep Button [SLPB]
Apr 13 19:23:55.296360 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Apr 13 19:23:55.296380 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37
Apr 13 19:23:55.296605 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012)
Apr 13 19:23:55.296634 kernel: printk: console [ttyS0] disabled
Apr 13 19:23:55.296654 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A
Apr 13 19:23:55.296673 kernel: printk: console [ttyS0] enabled
Apr 13 19:23:55.296692 kernel: printk: bootconsole [uart0] disabled
Apr 13 19:23:55.296711 kernel: thunder_xcv, ver 1.0
Apr 13 19:23:55.296730 kernel: thunder_bgx, ver 1.0
Apr 13 19:23:55.296756 kernel: nicpf, ver 1.0
Apr 13 19:23:55.296775 kernel: nicvf, ver 1.0
Apr 13 19:23:55.296997 kernel: rtc-efi rtc-efi.0: registered as rtc0
Apr 13 19:23:55.298365 kernel: rtc-efi rtc-efi.0: setting system clock to 2026-04-13T19:23:54 UTC (1776108234)
Apr 13 19:23:55.298412 kernel: hid: raw HID events driver (C) Jiri Kosina
Apr 13 19:23:55.298432 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 counters available
Apr 13 19:23:55.298452 kernel: watchdog: Delayed init of the lockup detector failed: -19
Apr 13 19:23:55.298471 kernel: watchdog: Hard watchdog permanently disabled
Apr 13 19:23:55.298502 kernel: NET: Registered PF_INET6 protocol family
Apr 13 19:23:55.298521 kernel: Segment Routing with IPv6
Apr 13 19:23:55.298540 kernel: In-situ OAM (IOAM) with IPv6
Apr 13 19:23:55.298559 kernel: NET: Registered PF_PACKET protocol family
Apr 13 19:23:55.298578 kernel: Key type dns_resolver registered
Apr 13 19:23:55.298597 kernel: registered taskstats version 1
Apr 13 19:23:55.298615 kernel: Loading compiled-in X.509 certificates
Apr 13 19:23:55.298635 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.127-flatcar: 51f707dd0fb1eacaaa32bdbd733952de038a5bd7'
Apr 13 19:23:55.298653 kernel: Key type .fscrypt registered
Apr 13 19:23:55.298677 kernel: Key type fscrypt-provisioning registered
Apr 13 19:23:55.298696 kernel: ima: No TPM chip found, activating TPM-bypass!
Apr 13 19:23:55.298715 kernel: ima: Allocated hash algorithm: sha1
Apr 13 19:23:55.298734 kernel: ima: No architecture policies found
Apr 13 19:23:55.298755 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Apr 13 19:23:55.298774 kernel: clk: Disabling unused clocks
Apr 13 19:23:55.298792 kernel: Freeing unused kernel memory: 39424K
Apr 13 19:23:55.298813 kernel: Run /init as init process
Apr 13 19:23:55.298854 kernel: with arguments:
Apr 13 19:23:55.298884 kernel: /init
Apr 13 19:23:55.298904 kernel: with environment:
Apr 13 19:23:55.298922 kernel: HOME=/
Apr 13 19:23:55.298941 kernel: TERM=linux
Apr 13 19:23:55.298965 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Apr 13 19:23:55.298989 systemd[1]: Detected virtualization amazon.
Apr 13 19:23:55.299011 systemd[1]: Detected architecture arm64.
Apr 13 19:23:55.299031 systemd[1]: Running in initrd.
Apr 13 19:23:55.299059 systemd[1]: No hostname configured, using default hostname.
Apr 13 19:23:55.299079 systemd[1]: Hostname set to .
Apr 13 19:23:55.299101 systemd[1]: Initializing machine ID from VM UUID.
Apr 13 19:23:55.299121 systemd[1]: Queued start job for default target initrd.target.
Apr 13 19:23:55.299142 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 13 19:23:55.300247 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 13 19:23:55.300296 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Apr 13 19:23:55.300318 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 13 19:23:55.300351 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Apr 13 19:23:55.300373 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Apr 13 19:23:55.300397 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Apr 13 19:23:55.300418 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Apr 13 19:23:55.300439 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 13 19:23:55.300460 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Apr 13 19:23:55.300485 systemd[1]: Reached target paths.target - Path Units.
Apr 13 19:23:55.300506 systemd[1]: Reached target slices.target - Slice Units.
Apr 13 19:23:55.300526 systemd[1]: Reached target swap.target - Swaps.
Apr 13 19:23:55.300547 systemd[1]: Reached target timers.target - Timer Units.
Apr 13 19:23:55.300567 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Apr 13 19:23:55.300588 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 13 19:23:55.300611 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Apr 13 19:23:55.300633 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Apr 13 19:23:55.300655 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 13 19:23:55.300682 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 13 19:23:55.300703 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 13 19:23:55.300724 systemd[1]: Reached target sockets.target - Socket Units.
Apr 13 19:23:55.300744 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Apr 13 19:23:55.300764 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 13 19:23:55.300785 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Apr 13 19:23:55.300805 systemd[1]: Starting systemd-fsck-usr.service...
Apr 13 19:23:55.300826 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 13 19:23:55.300846 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 13 19:23:55.300872 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 13 19:23:55.300893 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Apr 13 19:23:55.300913 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 13 19:23:55.300934 systemd[1]: Finished systemd-fsck-usr.service.
Apr 13 19:23:55.300956 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Apr 13 19:23:55.300982 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 13 19:23:55.301052 systemd-journald[251]: Collecting audit messages is disabled.
Apr 13 19:23:55.301098 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 13 19:23:55.301126 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 13 19:23:55.301147 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Apr 13 19:23:55.301778 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Apr 13 19:23:55.301803 kernel: Bridge firewalling registered
Apr 13 19:23:55.301824 systemd-journald[251]: Journal started
Apr 13 19:23:55.301863 systemd-journald[251]: Runtime Journal (/run/log/journal/ec230bd86ae191089872f911b67d0dc3) is 8.0M, max 75.3M, 67.3M free.
Apr 13 19:23:55.253236 systemd-modules-load[253]: Inserted module 'overlay'
Apr 13 19:23:55.296435 systemd-modules-load[253]: Inserted module 'br_netfilter'
Apr 13 19:23:55.311460 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Apr 13 19:23:55.316249 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 13 19:23:55.327587 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 13 19:23:55.339362 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Apr 13 19:23:55.349399 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 13 19:23:55.370275 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 13 19:23:55.379712 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 13 19:23:55.392614 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Apr 13 19:23:55.402591 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 13 19:23:55.417559 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Apr 13 19:23:55.437064 dracut-cmdline[287]: dracut-dracut-053
Apr 13 19:23:55.445017 dracut-cmdline[287]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=06a955818c1cb85215c4fc3bbca340081bcaba3fb92fe20a32668615ff23854b
Apr 13 19:23:55.494558 systemd-resolved[291]: Positive Trust Anchors:
Apr 13 19:23:55.496003 systemd-resolved[291]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Apr 13 19:23:55.496071 systemd-resolved[291]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Apr 13 19:23:55.638181 kernel: SCSI subsystem initialized
Apr 13 19:23:55.644195 kernel: Loading iSCSI transport class v2.0-870.
Apr 13 19:23:55.657204 kernel: iscsi: registered transport (tcp)
Apr 13 19:23:55.679192 kernel: iscsi: registered transport (qla4xxx)
Apr 13 19:23:55.679265 kernel: QLogic iSCSI HBA Driver
Apr 13 19:23:55.748199 kernel: random: crng init done
Apr 13 19:23:55.748772 systemd-resolved[291]: Defaulting to hostname 'linux'.
Apr 13 19:23:55.755106 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Apr 13 19:23:55.760918 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Apr 13 19:23:55.783232 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Apr 13 19:23:55.794580 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Apr 13 19:23:55.831719 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Apr 13 19:23:55.831809 kernel: device-mapper: uevent: version 1.0.3
Apr 13 19:23:55.831838 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Apr 13 19:23:55.900206 kernel: raid6: neonx8 gen() 6735 MB/s
Apr 13 19:23:55.918193 kernel: raid6: neonx4 gen() 6585 MB/s
Apr 13 19:23:55.935196 kernel: raid6: neonx2 gen() 5482 MB/s
Apr 13 19:23:55.952193 kernel: raid6: neonx1 gen() 3963 MB/s
Apr 13 19:23:55.969193 kernel: raid6: int64x8 gen() 3812 MB/s
Apr 13 19:23:55.986195 kernel: raid6: int64x4 gen() 3694 MB/s
Apr 13 19:23:56.003195 kernel: raid6: int64x2 gen() 3609 MB/s
Apr 13 19:23:56.021277 kernel: raid6: int64x1 gen() 2762 MB/s
Apr 13 19:23:56.021315 kernel: raid6: using algorithm neonx8 gen() 6735 MB/s
Apr 13 19:23:56.040227 kernel: raid6: .... xor() 4866 MB/s, rmw enabled
Apr 13 19:23:56.040279 kernel: raid6: using neon recovery algorithm
Apr 13 19:23:56.048196 kernel: xor: measuring software checksum speed
Apr 13 19:23:56.050538 kernel: 8regs : 10246 MB/sec
Apr 13 19:23:56.050572 kernel: 32regs : 11590 MB/sec
Apr 13 19:23:56.051864 kernel: arm64_neon : 9273 MB/sec
Apr 13 19:23:56.051896 kernel: xor: using function: 32regs (11590 MB/sec)
Apr 13 19:23:56.137212 kernel: Btrfs loaded, zoned=no, fsverity=no
Apr 13 19:23:56.156397 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Apr 13 19:23:56.168612 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 13 19:23:56.204581 systemd-udevd[471]: Using default interface naming scheme 'v255'.
Apr 13 19:23:56.214676 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 13 19:23:56.234470 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Apr 13 19:23:56.266290 dracut-pre-trigger[479]: rd.md=0: removing MD RAID activation
Apr 13 19:23:56.325224 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 13 19:23:56.335519 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Apr 13 19:23:56.462934 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 13 19:23:56.483749 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Apr 13 19:23:56.519284 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Apr 13 19:23:56.524520 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 13 19:23:56.530252 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 13 19:23:56.533290 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Apr 13 19:23:56.544441 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Apr 13 19:23:56.589506 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Apr 13 19:23:56.668052 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Apr 13 19:23:56.668120 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012)
Apr 13 19:23:56.667669 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Apr 13 19:23:56.683647 kernel: ena 0000:00:05.0: ENA device version: 0.10
Apr 13 19:23:56.685417 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
Apr 13 19:23:56.667901 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 13 19:23:56.674400 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 13 19:23:56.679740 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 13 19:23:56.680047 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 13 19:23:56.701695 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80110000, mac addr 06:ee:a7:14:92:f9
Apr 13 19:23:56.686572 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Apr 13 19:23:56.708648 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 13 19:23:56.720396 (udev-worker)[527]: Network interface NamePolicy= disabled on kernel command line.
Apr 13 19:23:56.735757 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35
Apr 13 19:23:56.735823 kernel: nvme nvme0: pci function 0000:00:04.0
Apr 13 19:23:56.750187 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Apr 13 19:23:56.755496 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 13 19:23:56.765598 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Apr 13 19:23:56.765635 kernel: GPT:9289727 != 33554431
Apr 13 19:23:56.765660 kernel: GPT:Alternate GPT header not at the end of the disk.
Apr 13 19:23:56.765684 kernel: GPT:9289727 != 33554431
Apr 13 19:23:56.765707 kernel: GPT: Use GNU Parted to correct GPT errors.
Apr 13 19:23:56.766892 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Apr 13 19:23:56.778501 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 13 19:23:56.841399 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 13 19:23:56.862530 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/nvme0n1p6 scanned by (udev-worker) (541)
Apr 13 19:23:56.937183 kernel: BTRFS: device fsid ed38fcff-9752-482a-82dd-c0f0fcf94cdd devid 1 transid 33 /dev/nvme0n1p3 scanned by (udev-worker) (520)
Apr 13 19:23:56.964501 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM.
Apr 13 19:23:57.000147 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT.
Apr 13 19:23:57.030897 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Apr 13 19:23:57.045599 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A.
Apr 13 19:23:57.052570 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A.
Apr 13 19:23:57.067425 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Apr 13 19:23:57.082642 disk-uuid[663]: Primary Header is updated.
Apr 13 19:23:57.082642 disk-uuid[663]: Secondary Entries is updated.
Apr 13 19:23:57.082642 disk-uuid[663]: Secondary Header is updated.
Apr 13 19:23:57.097331 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Apr 13 19:23:57.106222 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Apr 13 19:23:57.113250 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Apr 13 19:23:58.120941 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Apr 13 19:23:58.121020 disk-uuid[664]: The operation has completed successfully.
Apr 13 19:23:58.304316 systemd[1]: disk-uuid.service: Deactivated successfully.
Apr 13 19:23:58.304501 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Apr 13 19:23:58.359443 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Apr 13 19:23:58.369790 sh[1007]: Success
Apr 13 19:23:58.395286 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Apr 13 19:23:58.499738 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Apr 13 19:23:58.509330 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Apr 13 19:23:58.523375 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Apr 13 19:23:58.545436 kernel: BTRFS info (device dm-0): first mount of filesystem ed38fcff-9752-482a-82dd-c0f0fcf94cdd
Apr 13 19:23:58.545500 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Apr 13 19:23:58.545527 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Apr 13 19:23:58.548861 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Apr 13 19:23:58.548898 kernel: BTRFS info (device dm-0): using free space tree
Apr 13 19:23:58.672203 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Apr 13 19:23:58.674653 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Apr 13 19:23:58.675208 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Apr 13 19:23:58.691564 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Apr 13 19:23:58.698468 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Apr 13 19:23:58.729457 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 82e51161-2104-45f8-9ecc-3d62852b78d3
Apr 13 19:23:58.729535 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Apr 13 19:23:58.730924 kernel: BTRFS info (device nvme0n1p6): using free space tree
Apr 13 19:23:58.742777 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Apr 13 19:23:58.763758 systemd[1]: mnt-oem.mount: Deactivated successfully.
Apr 13 19:23:58.766416 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 82e51161-2104-45f8-9ecc-3d62852b78d3
Apr 13 19:23:58.780133 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Apr 13 19:23:58.793624 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Apr 13 19:23:58.884472 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 13 19:23:58.900491 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Apr 13 19:23:58.959095 systemd-networkd[1199]: lo: Link UP
Apr 13 19:23:58.959117 systemd-networkd[1199]: lo: Gained carrier
Apr 13 19:23:58.964998 systemd-networkd[1199]: Enumeration completed
Apr 13 19:23:58.966047 systemd-networkd[1199]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 13 19:23:58.966054 systemd-networkd[1199]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 13 19:23:58.969948 systemd[1]: Started systemd-networkd.service - Network Configuration.
Apr 13 19:23:58.970903 systemd-networkd[1199]: eth0: Link UP
Apr 13 19:23:58.970912 systemd-networkd[1199]: eth0: Gained carrier
Apr 13 19:23:58.970931 systemd-networkd[1199]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 13 19:23:58.974677 systemd[1]: Reached target network.target - Network.
Apr 13 19:23:59.014249 systemd-networkd[1199]: eth0: DHCPv4 address 172.31.17.121/20, gateway 172.31.16.1 acquired from 172.31.16.1
Apr 13 19:23:59.302088 ignition[1126]: Ignition 2.19.0
Apr 13 19:23:59.302635 ignition[1126]: Stage: fetch-offline
Apr 13 19:23:59.304570 ignition[1126]: no configs at "/usr/lib/ignition/base.d"
Apr 13 19:23:59.304598 ignition[1126]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Apr 13 19:23:59.305470 ignition[1126]: Ignition finished successfully
Apr 13 19:23:59.314709 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 13 19:23:59.326475 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Apr 13 19:23:59.353849 ignition[1209]: Ignition 2.19.0
Apr 13 19:23:59.353869 ignition[1209]: Stage: fetch
Apr 13 19:23:59.355330 ignition[1209]: no configs at "/usr/lib/ignition/base.d"
Apr 13 19:23:59.355701 ignition[1209]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Apr 13 19:23:59.355878 ignition[1209]: PUT http://169.254.169.254/latest/api/token: attempt #1
Apr 13 19:23:59.374070 ignition[1209]: PUT result: OK
Apr 13 19:23:59.377396 ignition[1209]: parsed url from cmdline: ""
Apr 13 19:23:59.377412 ignition[1209]: no config URL provided
Apr 13 19:23:59.377429 ignition[1209]: reading system config file "/usr/lib/ignition/user.ign"
Apr 13 19:23:59.377454 ignition[1209]: no config at "/usr/lib/ignition/user.ign"
Apr 13 19:23:59.377485 ignition[1209]: PUT http://169.254.169.254/latest/api/token: attempt #1
Apr 13 19:23:59.381933 ignition[1209]: PUT result: OK
Apr 13 19:23:59.382028 ignition[1209]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1
Apr 13 19:23:59.387033 ignition[1209]: GET result: OK
Apr 13 19:23:59.399772 unknown[1209]: fetched base config from "system"
Apr 13 19:23:59.387247 ignition[1209]: parsing config with SHA512: 06b15f65aa7988f55507d8bf7352b099fde20b2a3f37a25d037c5e4b4c20e7767fab4085760a65d58004821574e76551bfe001b13a76eab7b452c702e47ffa3d
Apr 13 19:23:59.399810 unknown[1209]: fetched base config from "system"
Apr 13 19:23:59.401150 ignition[1209]: fetch: fetch complete
Apr 13 19:23:59.399827 unknown[1209]: fetched user config from "aws"
Apr 13 19:23:59.401922 ignition[1209]: fetch: fetch passed
Apr 13 19:23:59.407509 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Apr 13 19:23:59.402034 ignition[1209]: Ignition finished successfully
Apr 13 19:23:59.429619 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Apr 13 19:23:59.466333 ignition[1215]: Ignition 2.19.0
Apr 13 19:23:59.466876 ignition[1215]: Stage: kargs
Apr 13 19:23:59.467639 ignition[1215]: no configs at "/usr/lib/ignition/base.d"
Apr 13 19:23:59.467668 ignition[1215]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Apr 13 19:23:59.467824 ignition[1215]: PUT http://169.254.169.254/latest/api/token: attempt #1
Apr 13 19:23:59.474407 ignition[1215]: PUT result: OK
Apr 13 19:23:59.481684 ignition[1215]: kargs: kargs passed
Apr 13 19:23:59.481832 ignition[1215]: Ignition finished successfully
Apr 13 19:23:59.488026 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Apr 13 19:23:59.503575 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Apr 13 19:23:59.528492 ignition[1222]: Ignition 2.19.0
Apr 13 19:23:59.528513 ignition[1222]: Stage: disks
Apr 13 19:23:59.529091 ignition[1222]: no configs at "/usr/lib/ignition/base.d"
Apr 13 19:23:59.529115 ignition[1222]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Apr 13 19:23:59.529313 ignition[1222]: PUT http://169.254.169.254/latest/api/token: attempt #1
Apr 13 19:23:59.539478 ignition[1222]: PUT result: OK
Apr 13 19:23:59.544706 ignition[1222]: disks: disks passed
Apr 13 19:23:59.544809 ignition[1222]: Ignition finished successfully
Apr 13 19:23:59.548006 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Apr 13 19:23:59.553672 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Apr 13 19:23:59.556675 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Apr 13 19:23:59.561859 systemd[1]: Reached target local-fs.target - Local File Systems.
Apr 13 19:23:59.566712 systemd[1]: Reached target sysinit.target - System Initialization.
Apr 13 19:23:59.571288 systemd[1]: Reached target basic.target - Basic System.
Apr 13 19:23:59.584553 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Apr 13 19:23:59.624441 systemd-fsck[1230]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Apr 13 19:23:59.630225 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Apr 13 19:23:59.644568 systemd[1]: Mounting sysroot.mount - /sysroot...
Apr 13 19:23:59.729206 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 775210d8-8fbf-4f17-be2d-56007930061c r/w with ordered data mode. Quota mode: none.
Apr 13 19:23:59.731102 systemd[1]: Mounted sysroot.mount - /sysroot.
Apr 13 19:23:59.737756 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Apr 13 19:23:59.751352 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Apr 13 19:23:59.756376 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Apr 13 19:23:59.764631 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Apr 13 19:23:59.764732 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Apr 13 19:23:59.764786 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 13 19:23:59.786205 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/nvme0n1p6 scanned by mount (1249)
Apr 13 19:23:59.793782 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 82e51161-2104-45f8-9ecc-3d62852b78d3
Apr 13 19:23:59.793866 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Apr 13 19:23:59.795527 kernel: BTRFS info (device nvme0n1p6): using free space tree
Apr 13 19:23:59.796060 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Apr 13 19:23:59.807195 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Apr 13 19:23:59.807916 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Apr 13 19:23:59.820585 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Apr 13 19:24:00.076523 systemd-networkd[1199]: eth0: Gained IPv6LL
Apr 13 19:24:00.209589 initrd-setup-root[1273]: cut: /sysroot/etc/passwd: No such file or directory
Apr 13 19:24:00.218537 initrd-setup-root[1280]: cut: /sysroot/etc/group: No such file or directory
Apr 13 19:24:00.227121 initrd-setup-root[1287]: cut: /sysroot/etc/shadow: No such file or directory
Apr 13 19:24:00.235430 initrd-setup-root[1294]: cut: /sysroot/etc/gshadow: No such file or directory
Apr 13 19:24:00.558231 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Apr 13 19:24:00.574372 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Apr 13 19:24:00.584526 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Apr 13 19:24:00.600300 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Apr 13 19:24:00.604432 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 82e51161-2104-45f8-9ecc-3d62852b78d3
Apr 13 19:24:00.653088 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Apr 13 19:24:00.656655 ignition[1363]: INFO : Ignition 2.19.0
Apr 13 19:24:00.656655 ignition[1363]: INFO : Stage: mount
Apr 13 19:24:00.656655 ignition[1363]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 13 19:24:00.656655 ignition[1363]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Apr 13 19:24:00.656655 ignition[1363]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Apr 13 19:24:00.671256 ignition[1363]: INFO : PUT result: OK
Apr 13 19:24:00.677100 ignition[1363]: INFO : mount: mount passed
Apr 13 19:24:00.681515 ignition[1363]: INFO : Ignition finished successfully
Apr 13 19:24:00.681199 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Apr 13 19:24:00.693437 systemd[1]: Starting ignition-files.service - Ignition (files)...
Apr 13 19:24:00.741357 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Apr 13 19:24:00.764216 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 scanned by mount (1374)
Apr 13 19:24:00.768572 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 82e51161-2104-45f8-9ecc-3d62852b78d3
Apr 13 19:24:00.768648 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Apr 13 19:24:00.768675 kernel: BTRFS info (device nvme0n1p6): using free space tree
Apr 13 19:24:00.775204 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Apr 13 19:24:00.778469 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Apr 13 19:24:00.818293 ignition[1391]: INFO : Ignition 2.19.0
Apr 13 19:24:00.820460 ignition[1391]: INFO : Stage: files
Apr 13 19:24:00.820460 ignition[1391]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 13 19:24:00.820460 ignition[1391]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Apr 13 19:24:00.820460 ignition[1391]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Apr 13 19:24:00.831546 ignition[1391]: INFO : PUT result: OK
Apr 13 19:24:00.837942 ignition[1391]: DEBUG : files: compiled without relabeling support, skipping
Apr 13 19:24:00.841668 ignition[1391]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Apr 13 19:24:00.841668 ignition[1391]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Apr 13 19:24:00.887017 ignition[1391]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Apr 13 19:24:00.891327 ignition[1391]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Apr 13 19:24:00.895703 unknown[1391]: wrote ssh authorized keys file for user: core
Apr 13 19:24:00.898523 ignition[1391]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Apr 13 19:24:00.904214 ignition[1391]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
Apr 13 19:24:00.904214 ignition[1391]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
Apr 13 19:24:00.904214 ignition[1391]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Apr 13 19:24:00.917310 ignition[1391]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1
Apr 13 19:24:01.013526 ignition[1391]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Apr 13 19:24:01.174268 ignition[1391]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Apr 13 19:24:01.174268 ignition[1391]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Apr 13 19:24:01.174268 ignition[1391]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1
Apr 13 19:24:01.417997 ignition[1391]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK
Apr 13 19:24:01.563446 ignition[1391]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Apr 13 19:24:01.563446 ignition[1391]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/install.sh"
Apr 13 19:24:01.571481 ignition[1391]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/install.sh"
Apr 13 19:24:01.571481 ignition[1391]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nginx.yaml"
Apr 13 19:24:01.571481 ignition[1391]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nginx.yaml"
Apr 13 19:24:01.571481 ignition[1391]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 13 19:24:01.571481 ignition[1391]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 13 19:24:01.571481 ignition[1391]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 13 19:24:01.571481 ignition[1391]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 13 19:24:01.571481 ignition[1391]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/flatcar/update.conf"
Apr 13 19:24:01.571481 ignition[1391]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Apr 13 19:24:01.571481 ignition[1391]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-arm64.raw"
Apr 13 19:24:01.571481 ignition[1391]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-arm64.raw"
Apr 13 19:24:01.571481 ignition[1391]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-arm64.raw"
Apr 13 19:24:01.571481 ignition[1391]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.8-arm64.raw: attempt #1
Apr 13 19:24:02.050185 ignition[1391]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET result: OK
Apr 13 19:24:02.451299 ignition[1391]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-arm64.raw"
Apr 13 19:24:02.456243 ignition[1391]: INFO : files: op(d): [started] processing unit "containerd.service"
Apr 13 19:24:02.456243 ignition[1391]: INFO : files: op(d): op(e): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Apr 13 19:24:02.456243 ignition[1391]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Apr 13 19:24:02.456243 ignition[1391]: INFO : files: op(d): [finished] processing unit "containerd.service"
Apr 13 19:24:02.456243 ignition[1391]: INFO : files: op(f): [started] processing unit "prepare-helm.service"
Apr 13 19:24:02.456243 ignition[1391]: INFO : files: op(f): op(10): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 13 19:24:02.456243 ignition[1391]: INFO : files: op(f): op(10): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 13 19:24:02.456243 ignition[1391]: INFO : files: op(f): [finished] processing unit "prepare-helm.service"
Apr 13 19:24:02.456243 ignition[1391]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Apr 13 19:24:02.456243 ignition[1391]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Apr 13 19:24:02.456243 ignition[1391]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Apr 13 19:24:02.456243 ignition[1391]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Apr 13 19:24:02.456243 ignition[1391]: INFO : files: files passed
Apr 13 19:24:02.456243 ignition[1391]: INFO : Ignition finished successfully
Apr 13 19:24:02.465840 systemd[1]: Finished ignition-files.service - Ignition (files).
Apr 13 19:24:02.504362 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Apr 13 19:24:02.551026 initrd-setup-root-after-ignition[1419]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 13 19:24:02.551026 initrd-setup-root-after-ignition[1419]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Apr 13 19:24:02.508421 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Apr 13 19:24:02.574382 initrd-setup-root-after-ignition[1423]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 13 19:24:02.533382 systemd[1]: ignition-quench.service: Deactivated successfully.
Apr 13 19:24:02.533575 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Apr 13 19:24:02.552237 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 13 19:24:02.553321 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Apr 13 19:24:02.580574 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Apr 13 19:24:02.644418 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Apr 13 19:24:02.644848 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Apr 13 19:24:02.652822 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Apr 13 19:24:02.655403 systemd[1]: Reached target initrd.target - Initrd Default Target.
Apr 13 19:24:02.657795 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Apr 13 19:24:02.671488 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Apr 13 19:24:02.702755 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 13 19:24:02.717485 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Apr 13 19:24:02.743531 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Apr 13 19:24:02.748634 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 13 19:24:02.751917 systemd[1]: Stopped target timers.target - Timer Units.
Apr 13 19:24:02.753888 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Apr 13 19:24:02.754128 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 13 19:24:02.765426 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Apr 13 19:24:02.770039 systemd[1]: Stopped target basic.target - Basic System.
Apr 13 19:24:02.773120 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Apr 13 19:24:02.779845 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 13 19:24:02.782630 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Apr 13 19:24:02.785807 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Apr 13 19:24:02.794970 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 13 19:24:02.798025 systemd[1]: Stopped target sysinit.target - System Initialization.
Apr 13 19:24:02.800719 systemd[1]: Stopped target local-fs.target - Local File Systems.
Apr 13 19:24:02.804131 systemd[1]: Stopped target swap.target - Swaps.
Apr 13 19:24:02.809128 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Apr 13 19:24:02.809415 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Apr 13 19:24:02.822386 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Apr 13 19:24:02.825882 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 13 19:24:02.833624 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Apr 13 19:24:02.836427 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 13 19:24:02.839835 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Apr 13 19:24:02.840409 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Apr 13 19:24:02.845306 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Apr 13 19:24:02.845992 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 13 19:24:02.850654 systemd[1]: ignition-files.service: Deactivated successfully.
Apr 13 19:24:02.850997 systemd[1]: Stopped ignition-files.service - Ignition (files).
Apr 13 19:24:02.875593 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Apr 13 19:24:02.883686 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Apr 13 19:24:02.895522 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Apr 13 19:24:02.895844 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 13 19:24:02.904710 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Apr 13 19:24:02.904957 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 13 19:24:02.916548 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Apr 13 19:24:02.920295 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Apr 13 19:24:02.938062 ignition[1444]: INFO : Ignition 2.19.0 Apr 13 19:24:02.938062 ignition[1444]: INFO : Stage: umount Apr 13 19:24:02.942281 ignition[1444]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 13 19:24:02.942281 ignition[1444]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Apr 13 19:24:02.942281 ignition[1444]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Apr 13 19:24:02.950969 ignition[1444]: INFO : PUT result: OK Apr 13 19:24:02.958243 ignition[1444]: INFO : umount: umount passed Apr 13 19:24:02.958243 ignition[1444]: INFO : Ignition finished successfully Apr 13 19:24:02.962347 systemd[1]: ignition-mount.service: Deactivated successfully. Apr 13 19:24:02.965541 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Apr 13 19:24:02.976391 systemd[1]: sysroot-boot.mount: Deactivated successfully. Apr 13 19:24:02.977143 systemd[1]: ignition-disks.service: Deactivated successfully. Apr 13 19:24:02.977255 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Apr 13 19:24:02.979688 systemd[1]: ignition-kargs.service: Deactivated successfully. Apr 13 19:24:02.979793 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Apr 13 19:24:02.983413 systemd[1]: ignition-fetch.service: Deactivated successfully. Apr 13 19:24:02.983520 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Apr 13 19:24:02.991520 systemd[1]: Stopped target network.target - Network. Apr 13 19:24:02.993914 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Apr 13 19:24:02.994032 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Apr 13 19:24:03.014026 systemd[1]: Stopped target paths.target - Path Units. Apr 13 19:24:03.015990 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Apr 13 19:24:03.021181 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. 
Apr 13 19:24:03.024133 systemd[1]: Stopped target slices.target - Slice Units. Apr 13 19:24:03.026245 systemd[1]: Stopped target sockets.target - Socket Units. Apr 13 19:24:03.028475 systemd[1]: iscsid.socket: Deactivated successfully. Apr 13 19:24:03.028553 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Apr 13 19:24:03.030823 systemd[1]: iscsiuio.socket: Deactivated successfully. Apr 13 19:24:03.030900 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Apr 13 19:24:03.036667 systemd[1]: ignition-setup.service: Deactivated successfully. Apr 13 19:24:03.036767 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Apr 13 19:24:03.040530 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Apr 13 19:24:03.040626 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Apr 13 19:24:03.043926 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Apr 13 19:24:03.047696 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Apr 13 19:24:03.052375 systemd[1]: sysroot-boot.service: Deactivated successfully. Apr 13 19:24:03.052556 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Apr 13 19:24:03.056222 systemd[1]: initrd-setup-root.service: Deactivated successfully. Apr 13 19:24:03.056394 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Apr 13 19:24:03.060069 systemd-networkd[1199]: eth0: DHCPv6 lease lost Apr 13 19:24:03.086347 systemd[1]: systemd-networkd.service: Deactivated successfully. Apr 13 19:24:03.086827 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Apr 13 19:24:03.095088 systemd[1]: systemd-resolved.service: Deactivated successfully. Apr 13 19:24:03.095362 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Apr 13 19:24:03.104543 systemd[1]: systemd-networkd.socket: Deactivated successfully. Apr 13 19:24:03.104662 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. 
Apr 13 19:24:03.120476 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Apr 13 19:24:03.121064 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Apr 13 19:24:03.125696 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Apr 13 19:24:03.130171 systemd[1]: systemd-sysctl.service: Deactivated successfully. Apr 13 19:24:03.130288 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Apr 13 19:24:03.135597 systemd[1]: systemd-modules-load.service: Deactivated successfully. Apr 13 19:24:03.135691 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Apr 13 19:24:03.137419 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Apr 13 19:24:03.137525 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 13 19:24:03.145596 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 13 19:24:03.186949 systemd[1]: systemd-udevd.service: Deactivated successfully. Apr 13 19:24:03.189446 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 13 19:24:03.194511 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Apr 13 19:24:03.194650 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Apr 13 19:24:03.198919 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Apr 13 19:24:03.199817 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Apr 13 19:24:03.205639 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Apr 13 19:24:03.205835 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Apr 13 19:24:03.212523 systemd[1]: dracut-cmdline.service: Deactivated successfully. Apr 13 19:24:03.212623 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Apr 13 19:24:03.219216 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. 
Apr 13 19:24:03.219320 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 13 19:24:03.234510 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Apr 13 19:24:03.242710 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Apr 13 19:24:03.242853 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Apr 13 19:24:03.245660 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Apr 13 19:24:03.245745 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Apr 13 19:24:03.248704 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Apr 13 19:24:03.248786 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Apr 13 19:24:03.257000 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 13 19:24:03.257101 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 13 19:24:03.263584 systemd[1]: network-cleanup.service: Deactivated successfully. Apr 13 19:24:03.263817 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Apr 13 19:24:03.299663 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Apr 13 19:24:03.300104 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Apr 13 19:24:03.304991 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Apr 13 19:24:03.322705 systemd[1]: Starting initrd-switch-root.service - Switch Root... Apr 13 19:24:03.354741 systemd[1]: Switching root. Apr 13 19:24:03.392208 systemd-journald[251]: Journal stopped Apr 13 19:24:05.195341 systemd-journald[251]: Received SIGTERM from PID 1 (systemd). 
Apr 13 19:24:05.195481 kernel: SELinux: policy capability network_peer_controls=1 Apr 13 19:24:05.195528 kernel: SELinux: policy capability open_perms=1 Apr 13 19:24:05.195567 kernel: SELinux: policy capability extended_socket_class=1 Apr 13 19:24:05.195599 kernel: SELinux: policy capability always_check_network=0 Apr 13 19:24:05.195641 kernel: SELinux: policy capability cgroup_seclabel=1 Apr 13 19:24:05.195671 kernel: SELinux: policy capability nnp_nosuid_transition=1 Apr 13 19:24:05.195701 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Apr 13 19:24:05.195732 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Apr 13 19:24:05.195764 kernel: audit: type=1403 audit(1776108243.702:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Apr 13 19:24:05.195798 systemd[1]: Successfully loaded SELinux policy in 52.081ms. Apr 13 19:24:05.195855 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 23.257ms. Apr 13 19:24:05.195892 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Apr 13 19:24:05.195923 systemd[1]: Detected virtualization amazon. Apr 13 19:24:05.195957 systemd[1]: Detected architecture arm64. Apr 13 19:24:05.195987 systemd[1]: Detected first boot. Apr 13 19:24:05.196020 systemd[1]: Initializing machine ID from VM UUID. Apr 13 19:24:05.196053 zram_generator::config[1507]: No configuration found. Apr 13 19:24:05.196090 systemd[1]: Populated /etc with preset unit settings. Apr 13 19:24:05.196124 systemd[1]: Queued start job for default target multi-user.target. Apr 13 19:24:05.196185 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. 
Apr 13 19:24:05.196225 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Apr 13 19:24:05.196257 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Apr 13 19:24:05.196288 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Apr 13 19:24:05.196317 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Apr 13 19:24:05.196349 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Apr 13 19:24:05.196384 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Apr 13 19:24:05.196417 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Apr 13 19:24:05.196452 systemd[1]: Created slice user.slice - User and Session Slice. Apr 13 19:24:05.196485 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Apr 13 19:24:05.196516 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Apr 13 19:24:05.196548 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Apr 13 19:24:05.196577 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Apr 13 19:24:05.196612 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Apr 13 19:24:05.196644 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Apr 13 19:24:05.196676 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Apr 13 19:24:05.196708 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Apr 13 19:24:05.196741 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Apr 13 19:24:05.196771 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. 
Apr 13 19:24:05.196803 systemd[1]: Reached target remote-fs.target - Remote File Systems. Apr 13 19:24:05.196834 systemd[1]: Reached target slices.target - Slice Units. Apr 13 19:24:05.196866 systemd[1]: Reached target swap.target - Swaps. Apr 13 19:24:05.196896 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Apr 13 19:24:05.196925 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Apr 13 19:24:05.196955 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Apr 13 19:24:05.196991 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Apr 13 19:24:05.197020 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Apr 13 19:24:05.197054 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Apr 13 19:24:05.197087 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Apr 13 19:24:05.197117 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Apr 13 19:24:05.197149 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Apr 13 19:24:05.197201 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Apr 13 19:24:05.197236 systemd[1]: Mounting media.mount - External Media Directory... Apr 13 19:24:05.197269 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Apr 13 19:24:05.197306 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Apr 13 19:24:05.197338 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Apr 13 19:24:05.197371 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Apr 13 19:24:05.197404 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 13 19:24:05.197434 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... 
Apr 13 19:24:05.197464 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Apr 13 19:24:05.197505 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Apr 13 19:24:05.197537 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Apr 13 19:24:05.197569 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Apr 13 19:24:05.197604 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Apr 13 19:24:05.197636 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Apr 13 19:24:05.197666 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Apr 13 19:24:05.197696 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Apr 13 19:24:05.197730 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.) Apr 13 19:24:05.197763 kernel: ACPI: bus type drm_connector registered Apr 13 19:24:05.197793 systemd[1]: Starting systemd-journald.service - Journal Service... Apr 13 19:24:05.197822 kernel: fuse: init (API version 7.39) Apr 13 19:24:05.197851 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Apr 13 19:24:05.197885 kernel: loop: module loaded Apr 13 19:24:05.197916 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Apr 13 19:24:05.197945 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Apr 13 19:24:05.197977 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Apr 13 19:24:05.198067 systemd-journald[1606]: Collecting audit messages is disabled. Apr 13 19:24:05.198129 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. 
Apr 13 19:24:05.198195 systemd-journald[1606]: Journal started Apr 13 19:24:05.198256 systemd-journald[1606]: Runtime Journal (/run/log/journal/ec230bd86ae191089872f911b67d0dc3) is 8.0M, max 75.3M, 67.3M free. Apr 13 19:24:05.219400 systemd[1]: Started systemd-journald.service - Journal Service. Apr 13 19:24:05.226814 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Apr 13 19:24:05.233729 systemd[1]: Mounted media.mount - External Media Directory. Apr 13 19:24:05.238677 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Apr 13 19:24:05.241563 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Apr 13 19:24:05.245810 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Apr 13 19:24:05.257075 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Apr 13 19:24:05.262337 systemd[1]: modprobe@configfs.service: Deactivated successfully. Apr 13 19:24:05.262723 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Apr 13 19:24:05.266084 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Apr 13 19:24:05.266473 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Apr 13 19:24:05.271002 systemd[1]: modprobe@drm.service: Deactivated successfully. Apr 13 19:24:05.271436 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Apr 13 19:24:05.276896 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Apr 13 19:24:05.277335 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Apr 13 19:24:05.280900 systemd[1]: modprobe@fuse.service: Deactivated successfully. Apr 13 19:24:05.282473 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Apr 13 19:24:05.287789 systemd[1]: modprobe@loop.service: Deactivated successfully. Apr 13 19:24:05.289532 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. 
Apr 13 19:24:05.294097 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Apr 13 19:24:05.300049 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Apr 13 19:24:05.310618 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Apr 13 19:24:05.347830 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Apr 13 19:24:05.351857 systemd[1]: Reached target network-pre.target - Preparation for Network. Apr 13 19:24:05.361383 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Apr 13 19:24:05.375326 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Apr 13 19:24:05.379110 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Apr 13 19:24:05.390423 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Apr 13 19:24:05.403551 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Apr 13 19:24:05.413311 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Apr 13 19:24:05.434503 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Apr 13 19:24:05.438946 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Apr 13 19:24:05.448787 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Apr 13 19:24:05.456499 systemd-journald[1606]: Time spent on flushing to /var/log/journal/ec230bd86ae191089872f911b67d0dc3 is 97.312ms for 890 entries. Apr 13 19:24:05.456499 systemd-journald[1606]: System Journal (/var/log/journal/ec230bd86ae191089872f911b67d0dc3) is 8.0M, max 195.6M, 187.6M free. 
Apr 13 19:24:05.568881 systemd-journald[1606]: Received client request to flush runtime journal. Apr 13 19:24:05.467384 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Apr 13 19:24:05.483450 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Apr 13 19:24:05.490710 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Apr 13 19:24:05.510976 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Apr 13 19:24:05.531521 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Apr 13 19:24:05.534976 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Apr 13 19:24:05.539727 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Apr 13 19:24:05.582833 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Apr 13 19:24:05.607667 udevadm[1663]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Apr 13 19:24:05.619187 systemd-tmpfiles[1657]: ACLs are not supported, ignoring. Apr 13 19:24:05.619228 systemd-tmpfiles[1657]: ACLs are not supported, ignoring. Apr 13 19:24:05.627991 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Apr 13 19:24:05.635404 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Apr 13 19:24:05.652475 systemd[1]: Starting systemd-sysusers.service - Create System Users... Apr 13 19:24:05.719959 systemd[1]: Finished systemd-sysusers.service - Create System Users. Apr 13 19:24:05.734606 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Apr 13 19:24:05.780803 systemd-tmpfiles[1679]: ACLs are not supported, ignoring. Apr 13 19:24:05.780846 systemd-tmpfiles[1679]: ACLs are not supported, ignoring. 
Apr 13 19:24:05.789788 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Apr 13 19:24:06.398676 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Apr 13 19:24:06.411697 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 13 19:24:06.477626 systemd-udevd[1685]: Using default interface naming scheme 'v255'. Apr 13 19:24:06.511871 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 13 19:24:06.524477 systemd[1]: Starting systemd-networkd.service - Network Configuration... Apr 13 19:24:06.563635 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Apr 13 19:24:06.684634 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0. Apr 13 19:24:06.692078 (udev-worker)[1692]: Network interface NamePolicy= disabled on kernel command line. Apr 13 19:24:06.713809 systemd[1]: Started systemd-userdbd.service - User Database Manager. Apr 13 19:24:06.861792 systemd-networkd[1689]: lo: Link UP Apr 13 19:24:06.862383 systemd-networkd[1689]: lo: Gained carrier Apr 13 19:24:06.865431 systemd-networkd[1689]: Enumeration completed Apr 13 19:24:06.865801 systemd[1]: Started systemd-networkd.service - Network Configuration. Apr 13 19:24:06.871905 systemd-networkd[1689]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 13 19:24:06.872144 systemd-networkd[1689]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Apr 13 19:24:06.875447 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... 
Apr 13 19:24:06.879972 systemd-networkd[1689]: eth0: Link UP Apr 13 19:24:06.880607 systemd-networkd[1689]: eth0: Gained carrier Apr 13 19:24:06.880779 systemd-networkd[1689]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 13 19:24:06.901475 systemd-networkd[1689]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 13 19:24:06.909334 systemd-networkd[1689]: eth0: DHCPv4 address 172.31.17.121/20, gateway 172.31.16.1 acquired from 172.31.16.1 Apr 13 19:24:06.977190 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 33 scanned by (udev-worker) (1690) Apr 13 19:24:06.988748 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 13 19:24:07.184480 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 13 19:24:07.218071 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Apr 13 19:24:07.235364 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Apr 13 19:24:07.249508 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Apr 13 19:24:07.268221 lvm[1814]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Apr 13 19:24:07.304946 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Apr 13 19:24:07.313068 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Apr 13 19:24:07.327439 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Apr 13 19:24:07.335073 lvm[1817]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Apr 13 19:24:07.378004 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. 
Apr 13 19:24:07.384106 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Apr 13 19:24:07.387149 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Apr 13 19:24:07.387307 systemd[1]: Reached target local-fs.target - Local File Systems. Apr 13 19:24:07.389724 systemd[1]: Reached target machines.target - Containers. Apr 13 19:24:07.394249 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Apr 13 19:24:07.404479 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Apr 13 19:24:07.420508 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Apr 13 19:24:07.423341 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 13 19:24:07.432418 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Apr 13 19:24:07.443451 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Apr 13 19:24:07.454474 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Apr 13 19:24:07.459901 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Apr 13 19:24:07.496740 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Apr 13 19:24:07.512207 kernel: loop0: detected capacity change from 0 to 114432 Apr 13 19:24:07.526734 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Apr 13 19:24:07.529237 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. 
Apr 13 19:24:07.554797 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Apr 13 19:24:07.577279 kernel: loop1: detected capacity change from 0 to 209336 Apr 13 19:24:07.704688 kernel: loop2: detected capacity change from 0 to 114328 Apr 13 19:24:07.743233 kernel: loop3: detected capacity change from 0 to 52536 Apr 13 19:24:07.785212 kernel: loop4: detected capacity change from 0 to 114432 Apr 13 19:24:07.808271 kernel: loop5: detected capacity change from 0 to 209336 Apr 13 19:24:07.838235 kernel: loop6: detected capacity change from 0 to 114328 Apr 13 19:24:07.854237 kernel: loop7: detected capacity change from 0 to 52536 Apr 13 19:24:07.878760 (sd-merge)[1838]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'. Apr 13 19:24:07.879879 (sd-merge)[1838]: Merged extensions into '/usr'. Apr 13 19:24:07.890466 systemd[1]: Reloading requested from client PID 1825 ('systemd-sysext') (unit systemd-sysext.service)... Apr 13 19:24:07.890500 systemd[1]: Reloading... Apr 13 19:24:08.012777 systemd-networkd[1689]: eth0: Gained IPv6LL Apr 13 19:24:08.052301 zram_generator::config[1866]: No configuration found. Apr 13 19:24:08.106905 ldconfig[1821]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Apr 13 19:24:08.333183 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 13 19:24:08.486908 systemd[1]: Reloading finished in 595 ms. Apr 13 19:24:08.515733 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Apr 13 19:24:08.519455 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Apr 13 19:24:08.523149 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Apr 13 19:24:08.548557 systemd[1]: Starting ensure-sysext.service... 
Apr 13 19:24:08.552673 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Apr 13 19:24:08.568206 systemd[1]: Reloading requested from client PID 1927 ('systemctl') (unit ensure-sysext.service)... Apr 13 19:24:08.568409 systemd[1]: Reloading... Apr 13 19:24:08.615071 systemd-tmpfiles[1928]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Apr 13 19:24:08.616942 systemd-tmpfiles[1928]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Apr 13 19:24:08.619048 systemd-tmpfiles[1928]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Apr 13 19:24:08.622009 systemd-tmpfiles[1928]: ACLs are not supported, ignoring. Apr 13 19:24:08.622361 systemd-tmpfiles[1928]: ACLs are not supported, ignoring. Apr 13 19:24:08.632047 systemd-tmpfiles[1928]: Detected autofs mount point /boot during canonicalization of boot. Apr 13 19:24:08.633398 systemd-tmpfiles[1928]: Skipping /boot Apr 13 19:24:08.676548 systemd-tmpfiles[1928]: Detected autofs mount point /boot during canonicalization of boot. Apr 13 19:24:08.676753 systemd-tmpfiles[1928]: Skipping /boot Apr 13 19:24:08.683631 zram_generator::config[1952]: No configuration found. Apr 13 19:24:08.976231 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 13 19:24:09.130704 systemd[1]: Reloading finished in 561 ms. Apr 13 19:24:09.165469 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 13 19:24:09.185474 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Apr 13 19:24:09.204248 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... 
Apr 13 19:24:09.213595 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Apr 13 19:24:09.228471 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Apr 13 19:24:09.247210 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Apr 13 19:24:09.271550 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 13 19:24:09.284142 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Apr 13 19:24:09.296897 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Apr 13 19:24:09.317746 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Apr 13 19:24:09.323219 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 13 19:24:09.338677 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Apr 13 19:24:09.339073 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Apr 13 19:24:09.352461 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Apr 13 19:24:09.360371 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Apr 13 19:24:09.360760 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Apr 13 19:24:09.368698 augenrules[2041]: No rules Apr 13 19:24:09.370670 systemd[1]: modprobe@loop.service: Deactivated successfully. Apr 13 19:24:09.375290 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Apr 13 19:24:09.380359 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Apr 13 19:24:09.418945 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Apr 13 19:24:09.436028 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. 
Apr 13 19:24:09.446784 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Apr 13 19:24:09.460889 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Apr 13 19:24:09.469284 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Apr 13 19:24:09.477805 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Apr 13 19:24:09.482567 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 13 19:24:09.483010 systemd[1]: Reached target time-set.target - System Time Set. Apr 13 19:24:09.508112 systemd[1]: Starting systemd-update-done.service - Update is Completed... Apr 13 19:24:09.518655 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Apr 13 19:24:09.527432 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Apr 13 19:24:09.527787 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Apr 13 19:24:09.533290 systemd[1]: modprobe@drm.service: Deactivated successfully. Apr 13 19:24:09.533644 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Apr 13 19:24:09.541782 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Apr 13 19:24:09.547535 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Apr 13 19:24:09.554757 systemd[1]: modprobe@loop.service: Deactivated successfully. Apr 13 19:24:09.556670 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Apr 13 19:24:09.581960 systemd[1]: Finished ensure-sysext.service. Apr 13 19:24:09.591538 systemd[1]: Finished systemd-update-done.service - Update is Completed. Apr 13 19:24:09.602881 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). 
Apr 13 19:24:09.603024 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Apr 13 19:24:09.603068 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Apr 13 19:24:09.644757 systemd-resolved[2020]: Positive Trust Anchors: Apr 13 19:24:09.644798 systemd-resolved[2020]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Apr 13 19:24:09.644867 systemd-resolved[2020]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Apr 13 19:24:09.658699 systemd-resolved[2020]: Defaulting to hostname 'linux'. Apr 13 19:24:09.662346 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Apr 13 19:24:09.665220 systemd[1]: Reached target network.target - Network. Apr 13 19:24:09.667344 systemd[1]: Reached target network-online.target - Network is Online. Apr 13 19:24:09.669945 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Apr 13 19:24:09.672592 systemd[1]: Reached target sysinit.target - System Initialization. Apr 13 19:24:09.675324 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Apr 13 19:24:09.678354 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. 
Apr 13 19:24:09.681409 systemd[1]: Started logrotate.timer - Daily rotation of log files. Apr 13 19:24:09.684032 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Apr 13 19:24:09.686784 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Apr 13 19:24:09.689645 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Apr 13 19:24:09.689695 systemd[1]: Reached target paths.target - Path Units. Apr 13 19:24:09.691744 systemd[1]: Reached target timers.target - Timer Units. Apr 13 19:24:09.695529 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Apr 13 19:24:09.701187 systemd[1]: Starting docker.socket - Docker Socket for the API... Apr 13 19:24:09.706259 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Apr 13 19:24:09.711242 systemd[1]: Listening on docker.socket - Docker Socket for the API. Apr 13 19:24:09.713762 systemd[1]: Reached target sockets.target - Socket Units. Apr 13 19:24:09.715987 systemd[1]: Reached target basic.target - Basic System. Apr 13 19:24:09.718415 systemd[1]: System is tainted: cgroupsv1 Apr 13 19:24:09.718486 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Apr 13 19:24:09.718531 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Apr 13 19:24:09.722394 systemd[1]: Starting containerd.service - containerd container runtime... Apr 13 19:24:09.742601 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Apr 13 19:24:09.748793 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Apr 13 19:24:09.753917 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Apr 13 19:24:09.769525 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... 
Apr 13 19:24:09.773420 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Apr 13 19:24:09.791734 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 13 19:24:09.804463 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Apr 13 19:24:09.824479 systemd[1]: Started ntpd.service - Network Time Service. Apr 13 19:24:09.841814 jq[2083]: false Apr 13 19:24:09.848466 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Apr 13 19:24:09.867364 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Apr 13 19:24:09.888450 systemd[1]: Starting setup-oem.service - Setup OEM... Apr 13 19:24:09.909197 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Apr 13 19:24:09.923476 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Apr 13 19:24:09.938575 extend-filesystems[2085]: Found loop4 Apr 13 19:24:09.938575 extend-filesystems[2085]: Found loop5 Apr 13 19:24:09.938575 extend-filesystems[2085]: Found loop6 Apr 13 19:24:09.938575 extend-filesystems[2085]: Found loop7 Apr 13 19:24:09.938575 extend-filesystems[2085]: Found nvme0n1 Apr 13 19:24:09.938575 extend-filesystems[2085]: Found nvme0n1p1 Apr 13 19:24:09.938575 extend-filesystems[2085]: Found nvme0n1p2 Apr 13 19:24:09.938575 extend-filesystems[2085]: Found nvme0n1p3 Apr 13 19:24:09.938575 extend-filesystems[2085]: Found usr Apr 13 19:24:09.938575 extend-filesystems[2085]: Found nvme0n1p4 Apr 13 19:24:09.969375 extend-filesystems[2085]: Found nvme0n1p6 Apr 13 19:24:09.969375 extend-filesystems[2085]: Found nvme0n1p7 Apr 13 19:24:09.969375 extend-filesystems[2085]: Found nvme0n1p9 Apr 13 19:24:09.969375 extend-filesystems[2085]: Checking size of /dev/nvme0n1p9 Apr 13 19:24:09.965081 systemd[1]: Starting systemd-logind.service - User Login Management... 
Apr 13 19:24:09.993468 ntpd[2091]: 13 Apr 19:24:09 ntpd[2091]: ntpd 4.2.8p17@1.4004-o Mon Apr 13 17:37:19 UTC 2026 (1): Starting Apr 13 19:24:09.993468 ntpd[2091]: 13 Apr 19:24:09 ntpd[2091]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Apr 13 19:24:09.993468 ntpd[2091]: 13 Apr 19:24:09 ntpd[2091]: ---------------------------------------------------- Apr 13 19:24:09.993468 ntpd[2091]: 13 Apr 19:24:09 ntpd[2091]: ntp-4 is maintained by Network Time Foundation, Apr 13 19:24:09.993468 ntpd[2091]: 13 Apr 19:24:09 ntpd[2091]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Apr 13 19:24:09.993468 ntpd[2091]: 13 Apr 19:24:09 ntpd[2091]: corporation. Support and training for ntp-4 are Apr 13 19:24:09.993468 ntpd[2091]: 13 Apr 19:24:09 ntpd[2091]: available at https://www.nwtime.org/support Apr 13 19:24:09.993468 ntpd[2091]: 13 Apr 19:24:09 ntpd[2091]: ---------------------------------------------------- Apr 13 19:24:09.990585 ntpd[2091]: ntpd 4.2.8p17@1.4004-o Mon Apr 13 17:37:19 UTC 2026 (1): Starting Apr 13 19:24:09.988714 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Apr 13 19:24:09.990632 ntpd[2091]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Apr 13 19:24:09.990652 ntpd[2091]: ---------------------------------------------------- Apr 13 19:24:09.990673 ntpd[2091]: ntp-4 is maintained by Network Time Foundation, Apr 13 19:24:09.990691 ntpd[2091]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Apr 13 19:24:09.990710 ntpd[2091]: corporation. 
Support and training for ntp-4 are Apr 13 19:24:09.990730 ntpd[2091]: available at https://www.nwtime.org/support Apr 13 19:24:09.990772 ntpd[2091]: ---------------------------------------------------- Apr 13 19:24:10.001603 ntpd[2091]: proto: precision = 0.096 usec (-23) Apr 13 19:24:10.005378 ntpd[2091]: 13 Apr 19:24:10 ntpd[2091]: proto: precision = 0.096 usec (-23) Apr 13 19:24:10.005378 ntpd[2091]: 13 Apr 19:24:10 ntpd[2091]: basedate set to 2026-04-01 Apr 13 19:24:10.005378 ntpd[2091]: 13 Apr 19:24:10 ntpd[2091]: gps base set to 2026-04-05 (week 2413) Apr 13 19:24:10.003090 ntpd[2091]: basedate set to 2026-04-01 Apr 13 19:24:10.003120 ntpd[2091]: gps base set to 2026-04-05 (week 2413) Apr 13 19:24:10.006319 ntpd[2091]: Listen and drop on 0 v6wildcard [::]:123 Apr 13 19:24:10.008255 ntpd[2091]: 13 Apr 19:24:10 ntpd[2091]: Listen and drop on 0 v6wildcard [::]:123 Apr 13 19:24:10.008440 ntpd[2091]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Apr 13 19:24:10.008580 ntpd[2091]: 13 Apr 19:24:10 ntpd[2091]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Apr 13 19:24:10.008916 ntpd[2091]: Listen normally on 2 lo 127.0.0.1:123 Apr 13 19:24:10.009029 ntpd[2091]: 13 Apr 19:24:10 ntpd[2091]: Listen normally on 2 lo 127.0.0.1:123 Apr 13 19:24:10.009234 ntpd[2091]: Listen normally on 3 eth0 172.31.17.121:123 Apr 13 19:24:10.009405 ntpd[2091]: 13 Apr 19:24:10 ntpd[2091]: Listen normally on 3 eth0 172.31.17.121:123 Apr 13 19:24:10.009565 ntpd[2091]: Listen normally on 4 lo [::1]:123 Apr 13 19:24:10.009672 ntpd[2091]: 13 Apr 19:24:10 ntpd[2091]: Listen normally on 4 lo [::1]:123 Apr 13 19:24:10.009835 ntpd[2091]: Listen normally on 5 eth0 [fe80::4ee:a7ff:fe14:92f9%2]:123 Apr 13 19:24:10.009958 ntpd[2091]: 13 Apr 19:24:10 ntpd[2091]: Listen normally on 5 eth0 [fe80::4ee:a7ff:fe14:92f9%2]:123 Apr 13 19:24:10.010084 ntpd[2091]: Listening on routing socket on fd #22 for interface updates Apr 13 19:24:10.012257 ntpd[2091]: 13 Apr 19:24:10 ntpd[2091]: Listening on routing socket on fd 
#22 for interface updates Apr 13 19:24:10.015412 systemd[1]: Starting update-engine.service - Update Engine... Apr 13 19:24:10.026374 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Apr 13 19:24:10.031007 ntpd[2091]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Apr 13 19:24:10.032860 ntpd[2091]: 13 Apr 19:24:10 ntpd[2091]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Apr 13 19:24:10.033014 ntpd[2091]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Apr 13 19:24:10.033191 ntpd[2091]: 13 Apr 19:24:10 ntpd[2091]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Apr 13 19:24:10.041884 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Apr 13 19:24:10.045843 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Apr 13 19:24:10.047121 dbus-daemon[2082]: [system] SELinux support is enabled Apr 13 19:24:10.085494 systemd[1]: Started dbus.service - D-Bus System Message Bus. Apr 13 19:24:10.099085 update_engine[2110]: I20260413 19:24:10.098639 2110 main.cc:92] Flatcar Update Engine starting Apr 13 19:24:10.124123 update_engine[2110]: I20260413 19:24:10.115659 2110 update_check_scheduler.cc:74] Next update check in 7m59s Apr 13 19:24:10.105612 dbus-daemon[2082]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1689 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Apr 13 19:24:10.169719 extend-filesystems[2085]: Resized partition /dev/nvme0n1p9 Apr 13 19:24:10.178143 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Apr 13 19:24:10.179646 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
Apr 13 19:24:10.213343 extend-filesystems[2131]: resize2fs 1.47.1 (20-May-2024) Apr 13 19:24:10.204402 systemd[1]: motdgen.service: Deactivated successfully. Apr 13 19:24:10.235039 jq[2114]: true Apr 13 19:24:10.204935 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Apr 13 19:24:10.240203 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 3587067 blocks Apr 13 19:24:10.242974 (ntainerd)[2137]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Apr 13 19:24:10.250653 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Apr 13 19:24:10.250750 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Apr 13 19:24:10.255103 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Apr 13 19:24:10.255142 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Apr 13 19:24:10.278075 systemd[1]: Started update-engine.service - Update Engine. Apr 13 19:24:10.280945 dbus-daemon[2082]: [system] Successfully activated service 'org.freedesktop.systemd1' Apr 13 19:24:10.307725 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Apr 13 19:24:10.315361 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Apr 13 19:24:10.331858 coreos-metadata[2081]: Apr 13 19:24:10.330 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Apr 13 19:24:10.330898 systemd[1]: Started locksmithd.service - Cluster reboot manager. 
Apr 13 19:24:10.334637 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Apr 13 19:24:10.353951 coreos-metadata[2081]: Apr 13 19:24:10.345 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Apr 13 19:24:10.353951 coreos-metadata[2081]: Apr 13 19:24:10.346 INFO Fetch successful Apr 13 19:24:10.353951 coreos-metadata[2081]: Apr 13 19:24:10.346 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Apr 13 19:24:10.353951 coreos-metadata[2081]: Apr 13 19:24:10.351 INFO Fetch successful Apr 13 19:24:10.353951 coreos-metadata[2081]: Apr 13 19:24:10.351 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Apr 13 19:24:10.354331 tar[2118]: linux-arm64/LICENSE Apr 13 19:24:10.365848 tar[2118]: linux-arm64/helm Apr 13 19:24:10.370454 coreos-metadata[2081]: Apr 13 19:24:10.370 INFO Fetch successful Apr 13 19:24:10.370454 coreos-metadata[2081]: Apr 13 19:24:10.370 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Apr 13 19:24:10.378661 coreos-metadata[2081]: Apr 13 19:24:10.373 INFO Fetch successful Apr 13 19:24:10.378661 coreos-metadata[2081]: Apr 13 19:24:10.374 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Apr 13 19:24:10.378661 coreos-metadata[2081]: Apr 13 19:24:10.378 INFO Fetch failed with 404: resource not found Apr 13 19:24:10.378661 coreos-metadata[2081]: Apr 13 19:24:10.378 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Apr 13 19:24:10.378661 coreos-metadata[2081]: Apr 13 19:24:10.378 INFO Fetch successful Apr 13 19:24:10.378661 coreos-metadata[2081]: Apr 13 19:24:10.378 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Apr 13 19:24:10.385406 coreos-metadata[2081]: Apr 13 19:24:10.384 INFO Fetch successful Apr 13 19:24:10.385406 coreos-metadata[2081]: Apr 13 19:24:10.384 INFO Fetching 
http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Apr 13 19:24:10.388412 jq[2150]: true Apr 13 19:24:10.405055 coreos-metadata[2081]: Apr 13 19:24:10.391 INFO Fetch successful Apr 13 19:24:10.405055 coreos-metadata[2081]: Apr 13 19:24:10.391 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Apr 13 19:24:10.405490 coreos-metadata[2081]: Apr 13 19:24:10.405 INFO Fetch successful Apr 13 19:24:10.405490 coreos-metadata[2081]: Apr 13 19:24:10.405 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Apr 13 19:24:10.406029 systemd[1]: Finished setup-oem.service - Setup OEM. Apr 13 19:24:10.422992 coreos-metadata[2081]: Apr 13 19:24:10.416 INFO Fetch successful Apr 13 19:24:10.423711 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. Apr 13 19:24:10.486759 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 3587067 Apr 13 19:24:10.526019 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Apr 13 19:24:10.528945 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Apr 13 19:24:10.532789 extend-filesystems[2131]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Apr 13 19:24:10.532789 extend-filesystems[2131]: old_desc_blocks = 1, new_desc_blocks = 2 Apr 13 19:24:10.532789 extend-filesystems[2131]: The filesystem on /dev/nvme0n1p9 is now 3587067 (4k) blocks long. Apr 13 19:24:10.548097 extend-filesystems[2085]: Resized filesystem in /dev/nvme0n1p9 Apr 13 19:24:10.536791 systemd[1]: extend-filesystems.service: Deactivated successfully. Apr 13 19:24:10.537349 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. 
Apr 13 19:24:10.609347 systemd-logind[2102]: Watching system buttons on /dev/input/event0 (Power Button) Apr 13 19:24:10.610333 systemd-logind[2102]: Watching system buttons on /dev/input/event1 (Sleep Button) Apr 13 19:24:10.612827 systemd-logind[2102]: New seat seat0. Apr 13 19:24:10.615047 systemd[1]: Started systemd-logind.service - User Login Management. Apr 13 19:24:10.761833 bash[2209]: Updated "/home/core/.ssh/authorized_keys" Apr 13 19:24:10.759034 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Apr 13 19:24:10.839259 amazon-ssm-agent[2163]: Initializing new seelog logger Apr 13 19:24:10.839259 amazon-ssm-agent[2163]: New Seelog Logger Creation Complete Apr 13 19:24:10.839259 amazon-ssm-agent[2163]: 2026/04/13 19:24:10 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Apr 13 19:24:10.839259 amazon-ssm-agent[2163]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Apr 13 19:24:10.839259 amazon-ssm-agent[2163]: 2026/04/13 19:24:10 processing appconfig overrides Apr 13 19:24:10.839259 amazon-ssm-agent[2163]: 2026/04/13 19:24:10 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Apr 13 19:24:10.839259 amazon-ssm-agent[2163]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Apr 13 19:24:10.839259 amazon-ssm-agent[2163]: 2026/04/13 19:24:10 processing appconfig overrides Apr 13 19:24:10.894254 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 33 scanned by (udev-worker) (2189) Apr 13 19:24:10.891262 systemd[1]: Starting sshkeys.service... Apr 13 19:24:10.894395 amazon-ssm-agent[2163]: 2026/04/13 19:24:10 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Apr 13 19:24:10.894395 amazon-ssm-agent[2163]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. 
Apr 13 19:24:10.894395 amazon-ssm-agent[2163]: 2026/04/13 19:24:10 processing appconfig overrides Apr 13 19:24:10.894395 amazon-ssm-agent[2163]: 2026-04-13 19:24:10 INFO Proxy environment variables: Apr 13 19:24:10.894395 amazon-ssm-agent[2163]: 2026/04/13 19:24:10 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Apr 13 19:24:10.894395 amazon-ssm-agent[2163]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Apr 13 19:24:10.894395 amazon-ssm-agent[2163]: 2026/04/13 19:24:10 processing appconfig overrides Apr 13 19:24:10.934451 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Apr 13 19:24:10.944031 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Apr 13 19:24:10.954845 amazon-ssm-agent[2163]: 2026-04-13 19:24:10 INFO https_proxy: Apr 13 19:24:11.015552 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Apr 13 19:24:11.064205 amazon-ssm-agent[2163]: 2026-04-13 19:24:10 INFO http_proxy: Apr 13 19:24:11.100385 containerd[2137]: time="2026-04-13T19:24:11.099906694Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Apr 13 19:24:11.146516 locksmithd[2155]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Apr 13 19:24:11.161194 amazon-ssm-agent[2163]: 2026-04-13 19:24:10 INFO no_proxy: Apr 13 19:24:11.202013 containerd[2137]: time="2026-04-13T19:24:11.199978558Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Apr 13 19:24:11.213704 containerd[2137]: time="2026-04-13T19:24:11.212572007Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.127-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Apr 13 19:24:11.213704 containerd[2137]: time="2026-04-13T19:24:11.212696159Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Apr 13 19:24:11.213704 containerd[2137]: time="2026-04-13T19:24:11.212756627Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Apr 13 19:24:11.213704 containerd[2137]: time="2026-04-13T19:24:11.213086243Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Apr 13 19:24:11.213704 containerd[2137]: time="2026-04-13T19:24:11.213120095Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Apr 13 19:24:11.216658 containerd[2137]: time="2026-04-13T19:24:11.216582659Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Apr 13 19:24:11.216658 containerd[2137]: time="2026-04-13T19:24:11.216648371Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Apr 13 19:24:11.217136 containerd[2137]: time="2026-04-13T19:24:11.217084295Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Apr 13 19:24:11.222318 containerd[2137]: time="2026-04-13T19:24:11.217131191Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." 
type=io.containerd.snapshotter.v1 Apr 13 19:24:11.222318 containerd[2137]: time="2026-04-13T19:24:11.221887271Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Apr 13 19:24:11.222318 containerd[2137]: time="2026-04-13T19:24:11.221934263Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Apr 13 19:24:11.222318 containerd[2137]: time="2026-04-13T19:24:11.222236087Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Apr 13 19:24:11.222742 containerd[2137]: time="2026-04-13T19:24:11.222675383Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Apr 13 19:24:11.225814 containerd[2137]: time="2026-04-13T19:24:11.225740579Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Apr 13 19:24:11.225814 containerd[2137]: time="2026-04-13T19:24:11.225805667Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Apr 13 19:24:11.226302 containerd[2137]: time="2026-04-13T19:24:11.226029959Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Apr 13 19:24:11.233249 containerd[2137]: time="2026-04-13T19:24:11.226147451Z" level=info msg="metadata content store policy set" policy=shared Apr 13 19:24:11.244954 containerd[2137]: time="2026-04-13T19:24:11.244882679Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Apr 13 19:24:11.245134 containerd[2137]: time="2026-04-13T19:24:11.245084351Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." 
type=io.containerd.differ.v1 Apr 13 19:24:11.245235 containerd[2137]: time="2026-04-13T19:24:11.245144411Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Apr 13 19:24:11.245290 containerd[2137]: time="2026-04-13T19:24:11.245245979Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Apr 13 19:24:11.245339 containerd[2137]: time="2026-04-13T19:24:11.245281811Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Apr 13 19:24:11.245595 containerd[2137]: time="2026-04-13T19:24:11.245551967Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Apr 13 19:24:11.246206 containerd[2137]: time="2026-04-13T19:24:11.246080759Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Apr 13 19:24:11.246400 containerd[2137]: time="2026-04-13T19:24:11.246354779Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Apr 13 19:24:11.246458 containerd[2137]: time="2026-04-13T19:24:11.246404087Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Apr 13 19:24:11.246458 containerd[2137]: time="2026-04-13T19:24:11.246441947Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Apr 13 19:24:11.246544 containerd[2137]: time="2026-04-13T19:24:11.246474143Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Apr 13 19:24:11.246544 containerd[2137]: time="2026-04-13T19:24:11.246504335Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." 
type=io.containerd.service.v1 Apr 13 19:24:11.246629 containerd[2137]: time="2026-04-13T19:24:11.246536303Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Apr 13 19:24:11.246629 containerd[2137]: time="2026-04-13T19:24:11.246569435Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Apr 13 19:24:11.246629 containerd[2137]: time="2026-04-13T19:24:11.246605603Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Apr 13 19:24:11.246794 containerd[2137]: time="2026-04-13T19:24:11.246637631Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Apr 13 19:24:11.246794 containerd[2137]: time="2026-04-13T19:24:11.246669035Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Apr 13 19:24:11.246794 containerd[2137]: time="2026-04-13T19:24:11.246697331Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Apr 13 19:24:11.246794 containerd[2137]: time="2026-04-13T19:24:11.246755819Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Apr 13 19:24:11.246963 containerd[2137]: time="2026-04-13T19:24:11.246791507Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Apr 13 19:24:11.246963 containerd[2137]: time="2026-04-13T19:24:11.246821483Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Apr 13 19:24:11.246963 containerd[2137]: time="2026-04-13T19:24:11.246853295Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." 
type=io.containerd.grpc.v1 Apr 13 19:24:11.246963 containerd[2137]: time="2026-04-13T19:24:11.246882623Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Apr 13 19:24:11.246963 containerd[2137]: time="2026-04-13T19:24:11.246913283Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Apr 13 19:24:11.246963 containerd[2137]: time="2026-04-13T19:24:11.246941147Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Apr 13 19:24:11.247249 containerd[2137]: time="2026-04-13T19:24:11.246971735Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Apr 13 19:24:11.247249 containerd[2137]: time="2026-04-13T19:24:11.247001339Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Apr 13 19:24:11.247249 containerd[2137]: time="2026-04-13T19:24:11.247037351Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Apr 13 19:24:11.247249 containerd[2137]: time="2026-04-13T19:24:11.247065671Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Apr 13 19:24:11.247249 containerd[2137]: time="2026-04-13T19:24:11.247102631Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Apr 13 19:24:11.247249 containerd[2137]: time="2026-04-13T19:24:11.247132487Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Apr 13 19:24:11.255535 containerd[2137]: time="2026-04-13T19:24:11.250301387Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Apr 13 19:24:11.255535 containerd[2137]: time="2026-04-13T19:24:11.250396847Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." 
type=io.containerd.grpc.v1 Apr 13 19:24:11.255535 containerd[2137]: time="2026-04-13T19:24:11.250431911Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Apr 13 19:24:11.255535 containerd[2137]: time="2026-04-13T19:24:11.250460663Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Apr 13 19:24:11.255535 containerd[2137]: time="2026-04-13T19:24:11.250764779Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Apr 13 19:24:11.255535 containerd[2137]: time="2026-04-13T19:24:11.250805711Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Apr 13 19:24:11.255535 containerd[2137]: time="2026-04-13T19:24:11.250832435Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Apr 13 19:24:11.255535 containerd[2137]: time="2026-04-13T19:24:11.250861511Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Apr 13 19:24:11.255535 containerd[2137]: time="2026-04-13T19:24:11.250889663Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Apr 13 19:24:11.255535 containerd[2137]: time="2026-04-13T19:24:11.250920839Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Apr 13 19:24:11.255535 containerd[2137]: time="2026-04-13T19:24:11.250944491Z" level=info msg="NRI interface is disabled by configuration." Apr 13 19:24:11.255535 containerd[2137]: time="2026-04-13T19:24:11.250969511Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Apr 13 19:24:11.256140 amazon-ssm-agent[2163]: 2026-04-13 19:24:10 INFO Checking if agent identity type OnPrem can be assumed Apr 13 19:24:11.256236 containerd[2137]: time="2026-04-13T19:24:11.251708843Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} 
MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Apr 13 19:24:11.256236 containerd[2137]: time="2026-04-13T19:24:11.251843351Z" level=info msg="Connect containerd service" Apr 13 19:24:11.256236 containerd[2137]: time="2026-04-13T19:24:11.252066815Z" level=info msg="using legacy CRI server" Apr 13 19:24:11.256236 containerd[2137]: time="2026-04-13T19:24:11.252091727Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Apr 13 19:24:11.256236 containerd[2137]: time="2026-04-13T19:24:11.255514991Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Apr 13 19:24:11.266039 containerd[2137]: time="2026-04-13T19:24:11.258905675Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Apr 13 19:24:11.266039 containerd[2137]: time="2026-04-13T19:24:11.259823639Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Apr 13 19:24:11.266039 containerd[2137]: time="2026-04-13T19:24:11.259927199Z" level=info msg=serving... 
address=/run/containerd/containerd.sock Apr 13 19:24:11.266039 containerd[2137]: time="2026-04-13T19:24:11.264193619Z" level=info msg="Start subscribing containerd event" Apr 13 19:24:11.266039 containerd[2137]: time="2026-04-13T19:24:11.264325631Z" level=info msg="Start recovering state" Apr 13 19:24:11.266039 containerd[2137]: time="2026-04-13T19:24:11.264502559Z" level=info msg="Start event monitor" Apr 13 19:24:11.266039 containerd[2137]: time="2026-04-13T19:24:11.265209191Z" level=info msg="Start snapshots syncer" Apr 13 19:24:11.266039 containerd[2137]: time="2026-04-13T19:24:11.265239383Z" level=info msg="Start cni network conf syncer for default" Apr 13 19:24:11.266039 containerd[2137]: time="2026-04-13T19:24:11.265648871Z" level=info msg="Start streaming server" Apr 13 19:24:11.266238 systemd[1]: Started containerd.service - containerd container runtime. Apr 13 19:24:11.278293 containerd[2137]: time="2026-04-13T19:24:11.277229423Z" level=info msg="containerd successfully booted in 0.180125s" Apr 13 19:24:11.359267 amazon-ssm-agent[2163]: 2026-04-13 19:24:10 INFO Checking if agent identity type EC2 can be assumed Apr 13 19:24:11.455973 amazon-ssm-agent[2163]: 2026-04-13 19:24:11 INFO Agent will take identity from EC2 Apr 13 19:24:11.496789 coreos-metadata[2250]: Apr 13 19:24:11.496 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Apr 13 19:24:11.510189 coreos-metadata[2250]: Apr 13 19:24:11.502 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Apr 13 19:24:11.510189 coreos-metadata[2250]: Apr 13 19:24:11.509 INFO Fetch successful Apr 13 19:24:11.510189 coreos-metadata[2250]: Apr 13 19:24:11.509 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Apr 13 19:24:11.515130 coreos-metadata[2250]: Apr 13 19:24:11.515 INFO Fetch successful Apr 13 19:24:11.542788 unknown[2250]: wrote ssh authorized keys file for user: core Apr 13 19:24:11.555954 amazon-ssm-agent[2163]: 
2026-04-13 19:24:11 INFO [amazon-ssm-agent] using named pipe channel for IPC Apr 13 19:24:11.616623 dbus-daemon[2082]: [system] Successfully activated service 'org.freedesktop.hostname1' Apr 13 19:24:11.617948 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Apr 13 19:24:11.625212 dbus-daemon[2082]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.7' (uid=0 pid=2153 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Apr 13 19:24:11.627513 update-ssh-keys[2326]: Updated "/home/core/.ssh/authorized_keys" Apr 13 19:24:11.635267 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Apr 13 19:24:11.657859 systemd[1]: Starting polkit.service - Authorization Manager... Apr 13 19:24:11.665485 systemd[1]: Finished sshkeys.service. Apr 13 19:24:11.670320 amazon-ssm-agent[2163]: 2026-04-13 19:24:11 INFO [amazon-ssm-agent] using named pipe channel for IPC Apr 13 19:24:11.695546 polkitd[2330]: Started polkitd version 121 Apr 13 19:24:11.707108 polkitd[2330]: Loading rules from directory /etc/polkit-1/rules.d Apr 13 19:24:11.707396 polkitd[2330]: Loading rules from directory /usr/share/polkit-1/rules.d Apr 13 19:24:11.724347 polkitd[2330]: Finished loading, compiling and executing 2 rules Apr 13 19:24:11.729414 dbus-daemon[2082]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Apr 13 19:24:11.729720 systemd[1]: Started polkit.service - Authorization Manager. Apr 13 19:24:11.734968 polkitd[2330]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Apr 13 19:24:11.771178 amazon-ssm-agent[2163]: 2026-04-13 19:24:11 INFO [amazon-ssm-agent] using named pipe channel for IPC Apr 13 19:24:11.775886 systemd-hostnamed[2153]: Hostname set to (transient) Apr 13 19:24:11.775890 systemd-resolved[2020]: System hostname changed to 'ip-172-31-17-121'. 
Apr 13 19:24:11.874244 amazon-ssm-agent[2163]: 2026-04-13 19:24:11 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0 Apr 13 19:24:11.972528 amazon-ssm-agent[2163]: 2026-04-13 19:24:11 INFO [amazon-ssm-agent] OS: linux, Arch: arm64 Apr 13 19:24:12.074508 amazon-ssm-agent[2163]: 2026-04-13 19:24:11 INFO [amazon-ssm-agent] Starting Core Agent Apr 13 19:24:12.178354 amazon-ssm-agent[2163]: 2026-04-13 19:24:11 INFO [amazon-ssm-agent] registrar detected. Attempting registration Apr 13 19:24:12.280267 amazon-ssm-agent[2163]: 2026-04-13 19:24:11 INFO [Registrar] Starting registrar module Apr 13 19:24:12.381591 amazon-ssm-agent[2163]: 2026-04-13 19:24:11 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration Apr 13 19:24:12.411184 sshd_keygen[2149]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Apr 13 19:24:12.425081 amazon-ssm-agent[2163]: 2026-04-13 19:24:12 INFO [EC2Identity] EC2 registration was successful. Apr 13 19:24:12.426602 amazon-ssm-agent[2163]: 2026-04-13 19:24:12 INFO [CredentialRefresher] credentialRefresher has started Apr 13 19:24:12.426602 amazon-ssm-agent[2163]: 2026-04-13 19:24:12 INFO [CredentialRefresher] Starting credentials refresher loop Apr 13 19:24:12.426602 amazon-ssm-agent[2163]: 2026-04-13 19:24:12 INFO EC2RoleProvider Successfully connected with instance profile role credentials Apr 13 19:24:12.473137 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Apr 13 19:24:12.483381 amazon-ssm-agent[2163]: 2026-04-13 19:24:12 INFO [CredentialRefresher] Next credential rotation will be in 32.4749705126 minutes Apr 13 19:24:12.492742 systemd[1]: Starting issuegen.service - Generate /run/issue... Apr 13 19:24:12.503910 systemd[1]: Started sshd@0-172.31.17.121:22-4.175.71.9:36916.service - OpenSSH per-connection server daemon (4.175.71.9:36916). Apr 13 19:24:12.531765 systemd[1]: issuegen.service: Deactivated successfully. 
Apr 13 19:24:12.532350 systemd[1]: Finished issuegen.service - Generate /run/issue. Apr 13 19:24:12.552932 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Apr 13 19:24:12.602915 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Apr 13 19:24:12.616507 systemd[1]: Started getty@tty1.service - Getty on tty1. Apr 13 19:24:12.630927 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Apr 13 19:24:12.636118 systemd[1]: Reached target getty.target - Login Prompts. Apr 13 19:24:12.650900 tar[2118]: linux-arm64/README.md Apr 13 19:24:12.681961 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Apr 13 19:24:13.454642 amazon-ssm-agent[2163]: 2026-04-13 19:24:13 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Apr 13 19:24:13.540806 sshd[2353]: Accepted publickey for core from 4.175.71.9 port 36916 ssh2: RSA SHA256:XRjnN6xbGuPFo5x1ktmnQqeOBg+Z6w3BeaL3tCcJXCo Apr 13 19:24:13.548797 sshd[2353]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 19:24:13.557241 amazon-ssm-agent[2163]: 2026-04-13 19:24:13 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2370) started Apr 13 19:24:13.582275 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Apr 13 19:24:13.599266 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Apr 13 19:24:13.619478 systemd-logind[2102]: New session 1 of user core. Apr 13 19:24:13.653505 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Apr 13 19:24:13.672938 amazon-ssm-agent[2163]: 2026-04-13 19:24:13 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Apr 13 19:24:13.695752 systemd[1]: Starting user@500.service - User Manager for UID 500... 
Apr 13 19:24:13.709699 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 13 19:24:13.715095 systemd[1]: Reached target multi-user.target - Multi-User System. Apr 13 19:24:13.725907 (kubelet)[2389]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 13 19:24:13.726411 (systemd)[2386]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Apr 13 19:24:13.967391 systemd[2386]: Queued start job for default target default.target. Apr 13 19:24:13.968659 systemd[2386]: Created slice app.slice - User Application Slice. Apr 13 19:24:13.969341 systemd[2386]: Reached target paths.target - Paths. Apr 13 19:24:13.969374 systemd[2386]: Reached target timers.target - Timers. Apr 13 19:24:13.980491 systemd[2386]: Starting dbus.socket - D-Bus User Message Bus Socket... Apr 13 19:24:13.997485 systemd[2386]: Listening on dbus.socket - D-Bus User Message Bus Socket. Apr 13 19:24:13.997623 systemd[2386]: Reached target sockets.target - Sockets. Apr 13 19:24:13.997657 systemd[2386]: Reached target basic.target - Basic System. Apr 13 19:24:13.997760 systemd[2386]: Reached target default.target - Main User Target. Apr 13 19:24:13.997820 systemd[2386]: Startup finished in 256ms. Apr 13 19:24:13.998297 systemd[1]: Started user@500.service - User Manager for UID 500. Apr 13 19:24:14.009770 systemd[1]: Started session-1.scope - Session 1 of User core. Apr 13 19:24:14.013734 systemd[1]: Startup finished in 10.068s (kernel) + 10.363s (userspace) = 20.432s. Apr 13 19:24:14.731886 systemd[1]: Started sshd@1-172.31.17.121:22-4.175.71.9:36920.service - OpenSSH per-connection server daemon (4.175.71.9:36920). 
Apr 13 19:24:15.037456 kubelet[2389]: E0413 19:24:15.037300 2389 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 13 19:24:15.044467 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 13 19:24:15.045623 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 13 19:24:15.734764 sshd[2413]: Accepted publickey for core from 4.175.71.9 port 36920 ssh2: RSA SHA256:XRjnN6xbGuPFo5x1ktmnQqeOBg+Z6w3BeaL3tCcJXCo Apr 13 19:24:15.737957 sshd[2413]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 19:24:15.746205 systemd-logind[2102]: New session 2 of user core. Apr 13 19:24:15.758671 systemd[1]: Started session-2.scope - Session 2 of User core. Apr 13 19:24:16.431513 sshd[2413]: pam_unix(sshd:session): session closed for user core Apr 13 19:24:16.440569 systemd-logind[2102]: Session 2 logged out. Waiting for processes to exit. Apr 13 19:24:16.441930 systemd[1]: sshd@1-172.31.17.121:22-4.175.71.9:36920.service: Deactivated successfully. Apr 13 19:24:16.448110 systemd[1]: session-2.scope: Deactivated successfully. Apr 13 19:24:16.450499 systemd-logind[2102]: Removed session 2. Apr 13 19:24:16.599705 systemd[1]: Started sshd@2-172.31.17.121:22-4.175.71.9:35376.service - OpenSSH per-connection server daemon (4.175.71.9:35376). Apr 13 19:24:17.630202 sshd[2423]: Accepted publickey for core from 4.175.71.9 port 35376 ssh2: RSA SHA256:XRjnN6xbGuPFo5x1ktmnQqeOBg+Z6w3BeaL3tCcJXCo Apr 13 19:24:17.632142 sshd[2423]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 19:24:17.639591 systemd-logind[2102]: New session 3 of user core. 
Apr 13 19:24:17.652653 systemd[1]: Started session-3.scope - Session 3 of User core. Apr 13 19:24:18.325501 sshd[2423]: pam_unix(sshd:session): session closed for user core Apr 13 19:24:18.333002 systemd-logind[2102]: Session 3 logged out. Waiting for processes to exit. Apr 13 19:24:18.333851 systemd[1]: sshd@2-172.31.17.121:22-4.175.71.9:35376.service: Deactivated successfully. Apr 13 19:24:18.338595 systemd[1]: session-3.scope: Deactivated successfully. Apr 13 19:24:18.340435 systemd-logind[2102]: Removed session 3. Apr 13 19:24:18.507637 systemd[1]: Started sshd@3-172.31.17.121:22-4.175.71.9:35382.service - OpenSSH per-connection server daemon (4.175.71.9:35382). Apr 13 19:24:19.508287 sshd[2432]: Accepted publickey for core from 4.175.71.9 port 35382 ssh2: RSA SHA256:XRjnN6xbGuPFo5x1ktmnQqeOBg+Z6w3BeaL3tCcJXCo Apr 13 19:24:19.510880 sshd[2432]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 19:24:19.519582 systemd-logind[2102]: New session 4 of user core. Apr 13 19:24:19.526682 systemd[1]: Started session-4.scope - Session 4 of User core. Apr 13 19:24:20.203298 sshd[2432]: pam_unix(sshd:session): session closed for user core Apr 13 19:24:20.211773 systemd[1]: sshd@3-172.31.17.121:22-4.175.71.9:35382.service: Deactivated successfully. Apr 13 19:24:20.217112 systemd-logind[2102]: Session 4 logged out. Waiting for processes to exit. Apr 13 19:24:20.217830 systemd[1]: session-4.scope: Deactivated successfully. Apr 13 19:24:20.220902 systemd-logind[2102]: Removed session 4. Apr 13 19:24:20.358639 systemd[1]: Started sshd@4-172.31.17.121:22-4.175.71.9:35390.service - OpenSSH per-connection server daemon (4.175.71.9:35390). 
Apr 13 19:24:21.321094 sshd[2440]: Accepted publickey for core from 4.175.71.9 port 35390 ssh2: RSA SHA256:XRjnN6xbGuPFo5x1ktmnQqeOBg+Z6w3BeaL3tCcJXCo Apr 13 19:24:21.322802 sshd[2440]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 19:24:21.330253 systemd-logind[2102]: New session 5 of user core. Apr 13 19:24:21.343789 systemd[1]: Started session-5.scope - Session 5 of User core. Apr 13 19:24:21.845409 sudo[2444]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Apr 13 19:24:21.846075 sudo[2444]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 13 19:24:21.862419 sudo[2444]: pam_unix(sudo:session): session closed for user root Apr 13 19:24:22.017650 sshd[2440]: pam_unix(sshd:session): session closed for user core Apr 13 19:24:22.024788 systemd[1]: sshd@4-172.31.17.121:22-4.175.71.9:35390.service: Deactivated successfully. Apr 13 19:24:22.026744 systemd-logind[2102]: Session 5 logged out. Waiting for processes to exit. Apr 13 19:24:22.033224 systemd[1]: session-5.scope: Deactivated successfully. Apr 13 19:24:22.034974 systemd-logind[2102]: Removed session 5. Apr 13 19:24:22.181635 systemd[1]: Started sshd@5-172.31.17.121:22-4.175.71.9:35392.service - OpenSSH per-connection server daemon (4.175.71.9:35392). Apr 13 19:24:23.141204 sshd[2449]: Accepted publickey for core from 4.175.71.9 port 35392 ssh2: RSA SHA256:XRjnN6xbGuPFo5x1ktmnQqeOBg+Z6w3BeaL3tCcJXCo Apr 13 19:24:23.143205 sshd[2449]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 19:24:23.152247 systemd-logind[2102]: New session 6 of user core. Apr 13 19:24:23.161667 systemd[1]: Started session-6.scope - Session 6 of User core. 
Apr 13 19:24:23.651942 sudo[2454]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Apr 13 19:24:23.653320 sudo[2454]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 13 19:24:23.660322 sudo[2454]: pam_unix(sudo:session): session closed for user root Apr 13 19:24:23.670865 sudo[2453]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Apr 13 19:24:23.671562 sudo[2453]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 13 19:24:23.695094 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Apr 13 19:24:23.708118 auditctl[2457]: No rules Apr 13 19:24:23.709134 systemd[1]: audit-rules.service: Deactivated successfully. Apr 13 19:24:23.709664 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Apr 13 19:24:23.724740 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Apr 13 19:24:23.766457 augenrules[2476]: No rules Apr 13 19:24:23.769083 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Apr 13 19:24:23.775141 sudo[2453]: pam_unix(sudo:session): session closed for user root Apr 13 19:24:23.930488 sshd[2449]: pam_unix(sshd:session): session closed for user core Apr 13 19:24:23.937583 systemd[1]: sshd@5-172.31.17.121:22-4.175.71.9:35392.service: Deactivated successfully. Apr 13 19:24:23.944619 systemd[1]: session-6.scope: Deactivated successfully. Apr 13 19:24:23.946246 systemd-logind[2102]: Session 6 logged out. Waiting for processes to exit. Apr 13 19:24:23.948808 systemd-logind[2102]: Removed session 6. Apr 13 19:24:24.096599 systemd[1]: Started sshd@6-172.31.17.121:22-4.175.71.9:35398.service - OpenSSH per-connection server daemon (4.175.71.9:35398). 
Apr 13 19:24:25.050766 sshd[2485]: Accepted publickey for core from 4.175.71.9 port 35398 ssh2: RSA SHA256:XRjnN6xbGuPFo5x1ktmnQqeOBg+Z6w3BeaL3tCcJXCo Apr 13 19:24:25.053332 sshd[2485]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 19:24:25.054637 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Apr 13 19:24:25.065520 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 13 19:24:25.072567 systemd-logind[2102]: New session 7 of user core. Apr 13 19:24:25.078704 systemd[1]: Started session-7.scope - Session 7 of User core. Apr 13 19:24:25.428519 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 13 19:24:25.445841 (kubelet)[2501]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 13 19:24:25.517488 kubelet[2501]: E0413 19:24:25.517402 2501 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 13 19:24:25.526557 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 13 19:24:25.526945 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 13 19:24:25.560739 sudo[2509]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Apr 13 19:24:25.561996 sudo[2509]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 13 19:24:26.060926 (dockerd)[2524]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Apr 13 19:24:26.061247 systemd[1]: Starting docker.service - Docker Application Container Engine... 
Apr 13 19:24:26.474640 dockerd[2524]: time="2026-04-13T19:24:26.473467535Z" level=info msg="Starting up" Apr 13 19:24:26.606801 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport1385249960-merged.mount: Deactivated successfully. Apr 13 19:24:26.709813 dockerd[2524]: time="2026-04-13T19:24:26.709042397Z" level=info msg="Loading containers: start." Apr 13 19:24:26.885193 kernel: Initializing XFRM netlink socket Apr 13 19:24:26.917769 (udev-worker)[2545]: Network interface NamePolicy= disabled on kernel command line. Apr 13 19:24:27.006778 systemd-networkd[1689]: docker0: Link UP Apr 13 19:24:27.039635 dockerd[2524]: time="2026-04-13T19:24:27.039584301Z" level=info msg="Loading containers: done." Apr 13 19:24:27.065959 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1769217527-merged.mount: Deactivated successfully. Apr 13 19:24:27.073089 dockerd[2524]: time="2026-04-13T19:24:27.072340634Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Apr 13 19:24:27.073089 dockerd[2524]: time="2026-04-13T19:24:27.072538823Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Apr 13 19:24:27.073089 dockerd[2524]: time="2026-04-13T19:24:27.072722823Z" level=info msg="Daemon has completed initialization" Apr 13 19:24:27.139865 dockerd[2524]: time="2026-04-13T19:24:27.138461314Z" level=info msg="API listen on /run/docker.sock" Apr 13 19:24:27.138871 systemd[1]: Started docker.service - Docker Application Container Engine. Apr 13 19:24:27.948875 containerd[2137]: time="2026-04-13T19:24:27.948820322Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.10\"" Apr 13 19:24:28.636671 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3856274400.mount: Deactivated successfully. 
Apr 13 19:24:30.155903 containerd[2137]: time="2026-04-13T19:24:30.155819837Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:24:30.158119 containerd[2137]: time="2026-04-13T19:24:30.158063014Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.10: active requests=0, bytes read=27283683" Apr 13 19:24:30.161207 containerd[2137]: time="2026-04-13T19:24:30.160343218Z" level=info msg="ImageCreate event name:\"sha256:1edd049f11c0655b7dbb2b22afe15b8f3118f2780a0997762857ad3baee29c03\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:24:30.166753 containerd[2137]: time="2026-04-13T19:24:30.166697699Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:bbff81e41af4bfca88a1d05a066a48e12e2689c534d073a8c688e3ad6c8701e3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:24:30.169111 containerd[2137]: time="2026-04-13T19:24:30.169048631Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.10\" with image id \"sha256:1edd049f11c0655b7dbb2b22afe15b8f3118f2780a0997762857ad3baee29c03\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.10\", repo digest \"registry.k8s.io/kube-apiserver@sha256:bbff81e41af4bfca88a1d05a066a48e12e2689c534d073a8c688e3ad6c8701e3\", size \"27280282\" in 2.218766217s" Apr 13 19:24:30.169264 containerd[2137]: time="2026-04-13T19:24:30.169111923Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.10\" returns image reference \"sha256:1edd049f11c0655b7dbb2b22afe15b8f3118f2780a0997762857ad3baee29c03\"" Apr 13 19:24:30.170417 containerd[2137]: time="2026-04-13T19:24:30.170356924Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.10\"" Apr 13 19:24:31.562282 containerd[2137]: time="2026-04-13T19:24:31.562213324Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.10\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:24:31.563986 containerd[2137]: time="2026-04-13T19:24:31.563788856Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.10: active requests=0, bytes read=23551902" Apr 13 19:24:31.565290 containerd[2137]: time="2026-04-13T19:24:31.565235213Z" level=info msg="ImageCreate event name:\"sha256:f331204a7439939f31f8e98461868cd4acd177a47c806dfc1dfe17f7725b18c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:24:31.573215 containerd[2137]: time="2026-04-13T19:24:31.572193927Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:b0880d6ee19f2b9148d3d37008c5ee9fc73976e8edad4d0709f11d32ab3ee709\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:24:31.575351 containerd[2137]: time="2026-04-13T19:24:31.574506071Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.10\" with image id \"sha256:f331204a7439939f31f8e98461868cd4acd177a47c806dfc1dfe17f7725b18c2\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.10\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:b0880d6ee19f2b9148d3d37008c5ee9fc73976e8edad4d0709f11d32ab3ee709\", size \"25029924\" in 1.404082724s" Apr 13 19:24:31.575351 containerd[2137]: time="2026-04-13T19:24:31.574570503Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.10\" returns image reference \"sha256:f331204a7439939f31f8e98461868cd4acd177a47c806dfc1dfe17f7725b18c2\"" Apr 13 19:24:31.575850 containerd[2137]: time="2026-04-13T19:24:31.575803174Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.10\"" Apr 13 19:24:32.778135 containerd[2137]: time="2026-04-13T19:24:32.778053456Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:24:32.781386 containerd[2137]: 
time="2026-04-13T19:24:32.781326834Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.10: active requests=0, bytes read=18301233" Apr 13 19:24:32.782346 containerd[2137]: time="2026-04-13T19:24:32.782273964Z" level=info msg="ImageCreate event name:\"sha256:1dd8e26d7fcd4140e29ed9d408e8237c60ec560237440a99d64ccca50a7b10de\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:24:32.788346 containerd[2137]: time="2026-04-13T19:24:32.788294892Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:dc1a1aec3bb0ed126b1adff795935124f719969356b24a159fc1a2a0883b89bc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:24:32.790757 containerd[2137]: time="2026-04-13T19:24:32.790692409Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.10\" with image id \"sha256:1dd8e26d7fcd4140e29ed9d408e8237c60ec560237440a99d64ccca50a7b10de\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.10\", repo digest \"registry.k8s.io/kube-scheduler@sha256:dc1a1aec3bb0ed126b1adff795935124f719969356b24a159fc1a2a0883b89bc\", size \"19779273\" in 1.214828666s" Apr 13 19:24:32.791059 containerd[2137]: time="2026-04-13T19:24:32.790916241Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.10\" returns image reference \"sha256:1dd8e26d7fcd4140e29ed9d408e8237c60ec560237440a99d64ccca50a7b10de\"" Apr 13 19:24:32.793144 containerd[2137]: time="2026-04-13T19:24:32.793081614Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.10\"" Apr 13 19:24:34.126665 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3834699710.mount: Deactivated successfully. 
Apr 13 19:24:34.756824 containerd[2137]: time="2026-04-13T19:24:34.756755276Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:24:34.761925 containerd[2137]: time="2026-04-13T19:24:34.761857188Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.10: active requests=0, bytes read=28148953" Apr 13 19:24:34.770136 containerd[2137]: time="2026-04-13T19:24:34.770052172Z" level=info msg="ImageCreate event name:\"sha256:b1cf8dea216dd607b54b086906dc4c9d7b7272b82a517da6eab7e474a5286963\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:24:34.775294 containerd[2137]: time="2026-04-13T19:24:34.775088177Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:e8151e38ef22f032dba686cc1bba5a3e525dedbe2d549fa44e653fe79426e261\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:24:34.777227 containerd[2137]: time="2026-04-13T19:24:34.776424812Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.10\" with image id \"sha256:b1cf8dea216dd607b54b086906dc4c9d7b7272b82a517da6eab7e474a5286963\", repo tag \"registry.k8s.io/kube-proxy:v1.33.10\", repo digest \"registry.k8s.io/kube-proxy@sha256:e8151e38ef22f032dba686cc1bba5a3e525dedbe2d549fa44e653fe79426e261\", size \"28147972\" in 1.983215582s" Apr 13 19:24:34.777227 containerd[2137]: time="2026-04-13T19:24:34.776484170Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.10\" returns image reference \"sha256:b1cf8dea216dd607b54b086906dc4c9d7b7272b82a517da6eab7e474a5286963\"" Apr 13 19:24:34.777712 containerd[2137]: time="2026-04-13T19:24:34.777639397Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Apr 13 19:24:35.481018 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1819722492.mount: Deactivated successfully. 
Apr 13 19:24:35.777375 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Apr 13 19:24:35.788367 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 13 19:24:36.213714 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 13 19:24:36.229588 (kubelet)[2785]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 13 19:24:36.320060 kubelet[2785]: E0413 19:24:36.319990 2785 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 13 19:24:36.327590 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 13 19:24:36.328046 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Apr 13 19:24:36.925995 containerd[2137]: time="2026-04-13T19:24:36.925925439Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:24:36.928205 containerd[2137]: time="2026-04-13T19:24:36.928126338Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=19152117" Apr 13 19:24:36.929329 containerd[2137]: time="2026-04-13T19:24:36.928485630Z" level=info msg="ImageCreate event name:\"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:24:36.935201 containerd[2137]: time="2026-04-13T19:24:36.934978810Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:24:36.939197 containerd[2137]: time="2026-04-13T19:24:36.937658065Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"19148915\" in 2.159958123s" Apr 13 19:24:36.939197 containerd[2137]: time="2026-04-13T19:24:36.937719619Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\"" Apr 13 19:24:36.939197 containerd[2137]: time="2026-04-13T19:24:36.938491876Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Apr 13 19:24:37.483859 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3265747412.mount: Deactivated successfully. 
Apr 13 19:24:37.498248 containerd[2137]: time="2026-04-13T19:24:37.496898947Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:24:37.499693 containerd[2137]: time="2026-04-13T19:24:37.499327636Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268703" Apr 13 19:24:37.502457 containerd[2137]: time="2026-04-13T19:24:37.501806436Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:24:37.508326 containerd[2137]: time="2026-04-13T19:24:37.508264149Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:24:37.509988 containerd[2137]: time="2026-04-13T19:24:37.509921888Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 571.382204ms" Apr 13 19:24:37.510180 containerd[2137]: time="2026-04-13T19:24:37.509984233Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Apr 13 19:24:37.510875 containerd[2137]: time="2026-04-13T19:24:37.510684646Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\"" Apr 13 19:24:38.125479 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3908450236.mount: Deactivated successfully. 
Apr 13 19:24:40.264189 containerd[2137]: time="2026-04-13T19:24:40.262035487Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.24-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:24:40.265020 containerd[2137]: time="2026-04-13T19:24:40.264960752Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.24-0: active requests=0, bytes read=21885780" Apr 13 19:24:40.267519 containerd[2137]: time="2026-04-13T19:24:40.267459750Z" level=info msg="ImageCreate event name:\"sha256:1211402d28f5813ed906916bfcdd0a7404c2f9048ef5bb54387a6745bc410eca\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:24:40.275122 containerd[2137]: time="2026-04-13T19:24:40.275042680Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:24:40.277734 containerd[2137]: time="2026-04-13T19:24:40.277662842Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.24-0\" with image id \"sha256:1211402d28f5813ed906916bfcdd0a7404c2f9048ef5bb54387a6745bc410eca\", repo tag \"registry.k8s.io/etcd:3.5.24-0\", repo digest \"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\", size \"21882972\" in 2.766902225s" Apr 13 19:24:40.280223 containerd[2137]: time="2026-04-13T19:24:40.277923963Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\" returns image reference \"sha256:1211402d28f5813ed906916bfcdd0a7404c2f9048ef5bb54387a6745bc410eca\"" Apr 13 19:24:41.813335 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Apr 13 19:24:46.477111 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Apr 13 19:24:46.485635 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 13 19:24:46.862466 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Apr 13 19:24:46.875899 (kubelet)[2913]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 13 19:24:46.955645 kubelet[2913]: E0413 19:24:46.952852 2913 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 13 19:24:46.957733 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 13 19:24:46.958235 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 13 19:24:47.710711 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 13 19:24:47.726607 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 13 19:24:47.790575 systemd[1]: Reloading requested from client PID 2929 ('systemctl') (unit session-7.scope)... Apr 13 19:24:47.790609 systemd[1]: Reloading... Apr 13 19:24:48.002226 zram_generator::config[2969]: No configuration found. Apr 13 19:24:48.309447 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 13 19:24:48.483961 systemd[1]: Reloading finished in 692 ms. Apr 13 19:24:48.548835 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Apr 13 19:24:48.549042 systemd[1]: kubelet.service: Failed with result 'signal'. Apr 13 19:24:48.549669 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 13 19:24:48.558812 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 13 19:24:48.892477 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Apr 13 19:24:48.907864 (kubelet)[3041]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 13 19:24:48.972231 kubelet[3041]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 13 19:24:48.972231 kubelet[3041]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Apr 13 19:24:48.972231 kubelet[3041]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 13 19:24:48.972815 kubelet[3041]: I0413 19:24:48.972299 3041 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 13 19:24:50.413799 kubelet[3041]: I0413 19:24:50.413710 3041 server.go:530] "Kubelet version" kubeletVersion="v1.33.8" Apr 13 19:24:50.413799 kubelet[3041]: I0413 19:24:50.413763 3041 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 13 19:24:50.414598 kubelet[3041]: I0413 19:24:50.414256 3041 server.go:956] "Client rotation is on, will bootstrap in background" Apr 13 19:24:50.465964 kubelet[3041]: I0413 19:24:50.465455 3041 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 13 19:24:50.465964 kubelet[3041]: E0413 19:24:50.465818 3041 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://172.31.17.121:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 
172.31.17.121:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Apr 13 19:24:50.476626 kubelet[3041]: E0413 19:24:50.476580 3041 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Apr 13 19:24:50.476800 kubelet[3041]: I0413 19:24:50.476779 3041 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Apr 13 19:24:50.483850 kubelet[3041]: I0413 19:24:50.483815 3041 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Apr 13 19:24:50.486552 kubelet[3041]: I0413 19:24:50.486495 3041 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 13 19:24:50.486938 kubelet[3041]: I0413 19:24:50.486677 3041 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"ip-172-31-17-121","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Apr 13 19:24:50.487787 kubelet[3041]: I0413 19:24:50.487182 3041 topology_manager.go:138] "Creating topology manager with none policy" Apr 13 19:24:50.487787 kubelet[3041]: I0413 19:24:50.487208 3041 container_manager_linux.go:303] "Creating device plugin manager" Apr 13 19:24:50.487787 kubelet[3041]: I0413 19:24:50.487566 3041 state_mem.go:36] "Initialized new in-memory state store" Apr 13 19:24:50.493767 kubelet[3041]: I0413 19:24:50.493720 3041 kubelet.go:480] "Attempting to sync node 
with API server" Apr 13 19:24:50.493943 kubelet[3041]: I0413 19:24:50.493923 3041 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 13 19:24:50.494072 kubelet[3041]: I0413 19:24:50.494052 3041 kubelet.go:386] "Adding apiserver pod source" Apr 13 19:24:50.496638 kubelet[3041]: I0413 19:24:50.496612 3041 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 13 19:24:50.498214 kubelet[3041]: E0413 19:24:50.498088 3041 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.31.17.121:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-17-121&limit=500&resourceVersion=0\": dial tcp 172.31.17.121:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 13 19:24:50.500412 kubelet[3041]: E0413 19:24:50.500238 3041 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.31.17.121:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.17.121:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 13 19:24:50.503192 kubelet[3041]: I0413 19:24:50.501705 3041 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Apr 13 19:24:50.503192 kubelet[3041]: I0413 19:24:50.502855 3041 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Apr 13 19:24:50.503192 kubelet[3041]: W0413 19:24:50.503104 3041 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Apr 13 19:24:50.507955 kubelet[3041]: I0413 19:24:50.507924 3041 watchdog_linux.go:99] "Systemd watchdog is not enabled" Apr 13 19:24:50.508132 kubelet[3041]: I0413 19:24:50.508115 3041 server.go:1289] "Started kubelet" Apr 13 19:24:50.513119 kubelet[3041]: I0413 19:24:50.513053 3041 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Apr 13 19:24:50.514923 kubelet[3041]: I0413 19:24:50.514737 3041 server.go:317] "Adding debug handlers to kubelet server" Apr 13 19:24:50.516652 kubelet[3041]: I0413 19:24:50.516569 3041 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 13 19:24:50.518270 kubelet[3041]: I0413 19:24:50.517404 3041 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 13 19:24:50.521476 kubelet[3041]: E0413 19:24:50.519359 3041 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.17.121:6443/api/v1/namespaces/default/events\": dial tcp 172.31.17.121:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-17-121.18a601147a5c317f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-17-121,UID:ip-172-31-17-121,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-17-121,},FirstTimestamp:2026-04-13 19:24:50.508075391 +0000 UTC m=+1.593362942,LastTimestamp:2026-04-13 19:24:50.508075391 +0000 UTC m=+1.593362942,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-17-121,}" Apr 13 19:24:50.525869 kubelet[3041]: I0413 19:24:50.525820 3041 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 13 19:24:50.529076 kubelet[3041]: I0413 19:24:50.528998 3041 dynamic_serving_content.go:135] "Starting controller" 
name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Apr 13 19:24:50.534833 kubelet[3041]: I0413 19:24:50.534793 3041 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Apr 13 19:24:50.535327 kubelet[3041]: E0413 19:24:50.535292 3041 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-17-121\" not found" Apr 13 19:24:50.537207 kubelet[3041]: I0413 19:24:50.536383 3041 volume_manager.go:297] "Starting Kubelet Volume Manager" Apr 13 19:24:50.537207 kubelet[3041]: I0413 19:24:50.536628 3041 reconciler.go:26] "Reconciler: start to sync state" Apr 13 19:24:50.538553 kubelet[3041]: E0413 19:24:50.538507 3041 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.31.17.121:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.17.121:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 13 19:24:50.539349 kubelet[3041]: E0413 19:24:50.539298 3041 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.17.121:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-17-121?timeout=10s\": dial tcp 172.31.17.121:6443: connect: connection refused" interval="200ms" Apr 13 19:24:50.540561 kubelet[3041]: I0413 19:24:50.540526 3041 factory.go:223] Registration of the systemd container factory successfully Apr 13 19:24:50.540861 kubelet[3041]: I0413 19:24:50.540831 3041 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 13 19:24:50.543686 kubelet[3041]: I0413 19:24:50.543651 3041 factory.go:223] Registration of the containerd container factory successfully Apr 13 19:24:50.577540 kubelet[3041]: E0413 19:24:50.577488 3041 
kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 13 19:24:50.579217 kubelet[3041]: I0413 19:24:50.579109 3041 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Apr 13 19:24:50.581512 kubelet[3041]: I0413 19:24:50.581456 3041 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Apr 13 19:24:50.581512 kubelet[3041]: I0413 19:24:50.581513 3041 status_manager.go:230] "Starting to sync pod status with apiserver" Apr 13 19:24:50.581749 kubelet[3041]: I0413 19:24:50.581544 3041 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Apr 13 19:24:50.581749 kubelet[3041]: I0413 19:24:50.581559 3041 kubelet.go:2436] "Starting kubelet main sync loop" Apr 13 19:24:50.581749 kubelet[3041]: E0413 19:24:50.581623 3041 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 13 19:24:50.587931 kubelet[3041]: E0413 19:24:50.587855 3041 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.31.17.121:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.17.121:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 13 19:24:50.594451 kubelet[3041]: I0413 19:24:50.594398 3041 cpu_manager.go:221] "Starting CPU manager" policy="none" Apr 13 19:24:50.594675 kubelet[3041]: I0413 19:24:50.594636 3041 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Apr 13 19:24:50.594840 kubelet[3041]: I0413 19:24:50.594821 3041 state_mem.go:36] "Initialized new in-memory state store" Apr 13 19:24:50.603274 kubelet[3041]: I0413 19:24:50.601502 3041 policy_none.go:49] "None policy: Start" Apr 13 19:24:50.603274 
kubelet[3041]: I0413 19:24:50.601557 3041 memory_manager.go:186] "Starting memorymanager" policy="None" Apr 13 19:24:50.603274 kubelet[3041]: I0413 19:24:50.601581 3041 state_mem.go:35] "Initializing new in-memory state store" Apr 13 19:24:50.613801 kubelet[3041]: E0413 19:24:50.613758 3041 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Apr 13 19:24:50.614285 kubelet[3041]: I0413 19:24:50.614260 3041 eviction_manager.go:189] "Eviction manager: starting control loop" Apr 13 19:24:50.614469 kubelet[3041]: I0413 19:24:50.614418 3041 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 13 19:24:50.618947 kubelet[3041]: I0413 19:24:50.618918 3041 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 13 19:24:50.620750 kubelet[3041]: E0413 19:24:50.620717 3041 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Apr 13 19:24:50.620994 kubelet[3041]: E0413 19:24:50.620955 3041 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-17-121\" not found" Apr 13 19:24:50.694146 kubelet[3041]: E0413 19:24:50.694029 3041 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-17-121\" not found" node="ip-172-31-17-121" Apr 13 19:24:50.706492 kubelet[3041]: E0413 19:24:50.706438 3041 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-17-121\" not found" node="ip-172-31-17-121" Apr 13 19:24:50.711715 kubelet[3041]: E0413 19:24:50.711654 3041 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-17-121\" not found" node="ip-172-31-17-121" Apr 13 19:24:50.716485 kubelet[3041]: I0413 19:24:50.716426 3041 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-17-121" Apr 13 19:24:50.717138 kubelet[3041]: E0413 19:24:50.717080 3041 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.17.121:6443/api/v1/nodes\": dial tcp 172.31.17.121:6443: connect: connection refused" node="ip-172-31-17-121" Apr 13 19:24:50.738866 kubelet[3041]: I0413 19:24:50.738810 3041 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8c9632bf95b7627d8ca1f3299bf70989-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-17-121\" (UID: \"8c9632bf95b7627d8ca1f3299bf70989\") " pod="kube-system/kube-controller-manager-ip-172-31-17-121" Apr 13 19:24:50.740368 kubelet[3041]: I0413 19:24:50.740306 3041 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/111e3e4975509b86758a1059d2485591-kubeconfig\") pod \"kube-scheduler-ip-172-31-17-121\" (UID: \"111e3e4975509b86758a1059d2485591\") " pod="kube-system/kube-scheduler-ip-172-31-17-121" Apr 13 19:24:50.740520 kubelet[3041]: I0413 19:24:50.740386 3041 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/78b7f21d43a8123cec59fd2a4d7f4a6f-ca-certs\") pod \"kube-apiserver-ip-172-31-17-121\" (UID: \"78b7f21d43a8123cec59fd2a4d7f4a6f\") " pod="kube-system/kube-apiserver-ip-172-31-17-121" Apr 13 19:24:50.740520 kubelet[3041]: I0413 19:24:50.740460 3041 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/78b7f21d43a8123cec59fd2a4d7f4a6f-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-17-121\" (UID: \"78b7f21d43a8123cec59fd2a4d7f4a6f\") " pod="kube-system/kube-apiserver-ip-172-31-17-121" Apr 13 19:24:50.740520 kubelet[3041]: I0413 19:24:50.740514 3041 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8c9632bf95b7627d8ca1f3299bf70989-k8s-certs\") pod \"kube-controller-manager-ip-172-31-17-121\" (UID: \"8c9632bf95b7627d8ca1f3299bf70989\") " pod="kube-system/kube-controller-manager-ip-172-31-17-121" Apr 13 19:24:50.740684 kubelet[3041]: I0413 19:24:50.740564 3041 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8c9632bf95b7627d8ca1f3299bf70989-kubeconfig\") pod \"kube-controller-manager-ip-172-31-17-121\" (UID: \"8c9632bf95b7627d8ca1f3299bf70989\") " pod="kube-system/kube-controller-manager-ip-172-31-17-121" Apr 13 19:24:50.740684 kubelet[3041]: I0413 19:24:50.740610 3041 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/78b7f21d43a8123cec59fd2a4d7f4a6f-k8s-certs\") pod \"kube-apiserver-ip-172-31-17-121\" (UID: \"78b7f21d43a8123cec59fd2a4d7f4a6f\") " pod="kube-system/kube-apiserver-ip-172-31-17-121" Apr 13 19:24:50.740684 kubelet[3041]: I0413 19:24:50.740658 3041 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8c9632bf95b7627d8ca1f3299bf70989-ca-certs\") pod \"kube-controller-manager-ip-172-31-17-121\" (UID: \"8c9632bf95b7627d8ca1f3299bf70989\") " pod="kube-system/kube-controller-manager-ip-172-31-17-121" Apr 13 19:24:50.740841 kubelet[3041]: I0413 19:24:50.740696 3041 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/8c9632bf95b7627d8ca1f3299bf70989-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-17-121\" (UID: \"8c9632bf95b7627d8ca1f3299bf70989\") " pod="kube-system/kube-controller-manager-ip-172-31-17-121" Apr 13 19:24:50.741896 kubelet[3041]: E0413 19:24:50.741350 3041 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.17.121:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-17-121?timeout=10s\": dial tcp 172.31.17.121:6443: connect: connection refused" interval="400ms" Apr 13 19:24:50.897839 kubelet[3041]: E0413 19:24:50.897662 3041 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.17.121:6443/api/v1/namespaces/default/events\": dial tcp 172.31.17.121:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-17-121.18a601147a5c317f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-17-121,UID:ip-172-31-17-121,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting 
kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-17-121,},FirstTimestamp:2026-04-13 19:24:50.508075391 +0000 UTC m=+1.593362942,LastTimestamp:2026-04-13 19:24:50.508075391 +0000 UTC m=+1.593362942,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-17-121,}" Apr 13 19:24:50.919900 kubelet[3041]: I0413 19:24:50.919848 3041 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-17-121" Apr 13 19:24:50.920384 kubelet[3041]: E0413 19:24:50.920339 3041 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.17.121:6443/api/v1/nodes\": dial tcp 172.31.17.121:6443: connect: connection refused" node="ip-172-31-17-121" Apr 13 19:24:50.996737 containerd[2137]: time="2026-04-13T19:24:50.996675990Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-17-121,Uid:78b7f21d43a8123cec59fd2a4d7f4a6f,Namespace:kube-system,Attempt:0,}" Apr 13 19:24:51.008185 containerd[2137]: time="2026-04-13T19:24:51.007778068Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-17-121,Uid:8c9632bf95b7627d8ca1f3299bf70989,Namespace:kube-system,Attempt:0,}" Apr 13 19:24:51.013902 containerd[2137]: time="2026-04-13T19:24:51.013532777Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-17-121,Uid:111e3e4975509b86758a1059d2485591,Namespace:kube-system,Attempt:0,}" Apr 13 19:24:51.142316 kubelet[3041]: E0413 19:24:51.142256 3041 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.17.121:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-17-121?timeout=10s\": dial tcp 172.31.17.121:6443: connect: connection refused" interval="800ms" Apr 13 19:24:51.324018 kubelet[3041]: I0413 19:24:51.323458 3041 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-17-121" 
Apr 13 19:24:51.324416 kubelet[3041]: E0413 19:24:51.324367 3041 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.17.121:6443/api/v1/nodes\": dial tcp 172.31.17.121:6443: connect: connection refused" node="ip-172-31-17-121" Apr 13 19:24:51.423609 kubelet[3041]: E0413 19:24:51.423555 3041 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.31.17.121:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.17.121:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 13 19:24:51.472628 kubelet[3041]: E0413 19:24:51.472557 3041 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.31.17.121:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.17.121:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 13 19:24:51.599117 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2651494785.mount: Deactivated successfully. 
Apr 13 19:24:51.614223 containerd[2137]: time="2026-04-13T19:24:51.613779993Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 13 19:24:51.616054 containerd[2137]: time="2026-04-13T19:24:51.615982188Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 13 19:24:51.618215 containerd[2137]: time="2026-04-13T19:24:51.617867536Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173" Apr 13 19:24:51.619939 containerd[2137]: time="2026-04-13T19:24:51.619889509Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 13 19:24:51.622092 containerd[2137]: time="2026-04-13T19:24:51.622022858Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 13 19:24:51.633737 containerd[2137]: time="2026-04-13T19:24:51.633652072Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 13 19:24:51.635206 containerd[2137]: time="2026-04-13T19:24:51.633881278Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 13 19:24:51.642438 containerd[2137]: time="2026-04-13T19:24:51.642353659Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 13 19:24:51.647331 
containerd[2137]: time="2026-04-13T19:24:51.647259865Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 633.620857ms" Apr 13 19:24:51.654392 containerd[2137]: time="2026-04-13T19:24:51.654311306Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 657.514369ms" Apr 13 19:24:51.656279 containerd[2137]: time="2026-04-13T19:24:51.656215737Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 648.330958ms" Apr 13 19:24:51.703186 kubelet[3041]: E0413 19:24:51.702759 3041 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.31.17.121:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-17-121&limit=500&resourceVersion=0\": dial tcp 172.31.17.121:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 13 19:24:51.854319 containerd[2137]: time="2026-04-13T19:24:51.853568720Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 19:24:51.854319 containerd[2137]: time="2026-04-13T19:24:51.853691190Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 19:24:51.854319 containerd[2137]: time="2026-04-13T19:24:51.853727052Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 19:24:51.855408 containerd[2137]: time="2026-04-13T19:24:51.854975508Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 19:24:51.856437 containerd[2137]: time="2026-04-13T19:24:51.855579346Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 19:24:51.856437 containerd[2137]: time="2026-04-13T19:24:51.855653577Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 19:24:51.857857 containerd[2137]: time="2026-04-13T19:24:51.857671735Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 19:24:51.859299 containerd[2137]: time="2026-04-13T19:24:51.859071855Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 19:24:51.859594 containerd[2137]: time="2026-04-13T19:24:51.859484952Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 19:24:51.860250 containerd[2137]: time="2026-04-13T19:24:51.859565036Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 19:24:51.860250 containerd[2137]: time="2026-04-13T19:24:51.859948916Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 19:24:51.861051 containerd[2137]: time="2026-04-13T19:24:51.860795860Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 19:24:51.945780 kubelet[3041]: E0413 19:24:51.945352 3041 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.17.121:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-17-121?timeout=10s\": dial tcp 172.31.17.121:6443: connect: connection refused" interval="1.6s" Apr 13 19:24:52.014537 containerd[2137]: time="2026-04-13T19:24:52.014422485Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-17-121,Uid:8c9632bf95b7627d8ca1f3299bf70989,Namespace:kube-system,Attempt:0,} returns sandbox id \"5523737d95dfe404f93e8048db53832de0d8b9e524f486dc05f21ec9be465a73\"" Apr 13 19:24:52.029070 containerd[2137]: time="2026-04-13T19:24:52.027964611Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-17-121,Uid:111e3e4975509b86758a1059d2485591,Namespace:kube-system,Attempt:0,} returns sandbox id \"986c55709bae36afe794d4c44f4c6dc0c59d0664fd27e899510dc45e386b2f38\"" Apr 13 19:24:52.029070 containerd[2137]: time="2026-04-13T19:24:52.028908490Z" level=info msg="CreateContainer within sandbox \"5523737d95dfe404f93e8048db53832de0d8b9e524f486dc05f21ec9be465a73\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Apr 13 19:24:52.038642 containerd[2137]: time="2026-04-13T19:24:52.038588968Z" level=info msg="CreateContainer within sandbox \"986c55709bae36afe794d4c44f4c6dc0c59d0664fd27e899510dc45e386b2f38\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Apr 13 19:24:52.049496 containerd[2137]: time="2026-04-13T19:24:52.049448215Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-apiserver-ip-172-31-17-121,Uid:78b7f21d43a8123cec59fd2a4d7f4a6f,Namespace:kube-system,Attempt:0,} returns sandbox id \"1c95dab6195058b0104fb9a56bfd5646d6e8613ea1e88f1fcc86404cd7d2fc8b\"" Apr 13 19:24:52.061678 containerd[2137]: time="2026-04-13T19:24:52.061625208Z" level=info msg="CreateContainer within sandbox \"1c95dab6195058b0104fb9a56bfd5646d6e8613ea1e88f1fcc86404cd7d2fc8b\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Apr 13 19:24:52.083019 containerd[2137]: time="2026-04-13T19:24:52.082692633Z" level=info msg="CreateContainer within sandbox \"5523737d95dfe404f93e8048db53832de0d8b9e524f486dc05f21ec9be465a73\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"e8ec3cccbf70982c77828a6a6a7b7919e9030441ed00e1d10c7eeca3c0794db8\"" Apr 13 19:24:52.084232 containerd[2137]: time="2026-04-13T19:24:52.083860369Z" level=info msg="StartContainer for \"e8ec3cccbf70982c77828a6a6a7b7919e9030441ed00e1d10c7eeca3c0794db8\"" Apr 13 19:24:52.089704 containerd[2137]: time="2026-04-13T19:24:52.089403228Z" level=info msg="CreateContainer within sandbox \"986c55709bae36afe794d4c44f4c6dc0c59d0664fd27e899510dc45e386b2f38\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"29527e7f7552a7f3dc0b521e8cb6762b33160baff36a591b19634ff9650dfaa2\"" Apr 13 19:24:52.091211 containerd[2137]: time="2026-04-13T19:24:52.090191750Z" level=info msg="StartContainer for \"29527e7f7552a7f3dc0b521e8cb6762b33160baff36a591b19634ff9650dfaa2\"" Apr 13 19:24:52.109958 containerd[2137]: time="2026-04-13T19:24:52.109794935Z" level=info msg="CreateContainer within sandbox \"1c95dab6195058b0104fb9a56bfd5646d6e8613ea1e88f1fcc86404cd7d2fc8b\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"8f18ff196cfcd6a0e2ec8f2445c163d208b7f0149ebb51f0cd6d7b3a3da4d4d7\"" Apr 13 19:24:52.113479 containerd[2137]: time="2026-04-13T19:24:52.113428062Z" level=info msg="StartContainer for 
\"8f18ff196cfcd6a0e2ec8f2445c163d208b7f0149ebb51f0cd6d7b3a3da4d4d7\"" Apr 13 19:24:52.127629 kubelet[3041]: I0413 19:24:52.127593 3041 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-17-121" Apr 13 19:24:52.128916 kubelet[3041]: E0413 19:24:52.128855 3041 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.31.17.121:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.17.121:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 13 19:24:52.129512 kubelet[3041]: E0413 19:24:52.129144 3041 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.17.121:6443/api/v1/nodes\": dial tcp 172.31.17.121:6443: connect: connection refused" node="ip-172-31-17-121" Apr 13 19:24:52.328661 containerd[2137]: time="2026-04-13T19:24:52.328404992Z" level=info msg="StartContainer for \"8f18ff196cfcd6a0e2ec8f2445c163d208b7f0149ebb51f0cd6d7b3a3da4d4d7\" returns successfully" Apr 13 19:24:52.330919 containerd[2137]: time="2026-04-13T19:24:52.330754317Z" level=info msg="StartContainer for \"e8ec3cccbf70982c77828a6a6a7b7919e9030441ed00e1d10c7eeca3c0794db8\" returns successfully" Apr 13 19:24:52.330919 containerd[2137]: time="2026-04-13T19:24:52.330877291Z" level=info msg="StartContainer for \"29527e7f7552a7f3dc0b521e8cb6762b33160baff36a591b19634ff9650dfaa2\" returns successfully" Apr 13 19:24:52.482589 kubelet[3041]: E0413 19:24:52.481679 3041 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://172.31.17.121:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.17.121:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Apr 13 19:24:52.611264 kubelet[3041]: E0413 19:24:52.610867 3041 
kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-17-121\" not found" node="ip-172-31-17-121" Apr 13 19:24:52.627886 kubelet[3041]: E0413 19:24:52.627829 3041 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-17-121\" not found" node="ip-172-31-17-121" Apr 13 19:24:52.633710 kubelet[3041]: E0413 19:24:52.633659 3041 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-17-121\" not found" node="ip-172-31-17-121" Apr 13 19:24:53.639211 kubelet[3041]: E0413 19:24:53.639125 3041 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-17-121\" not found" node="ip-172-31-17-121" Apr 13 19:24:53.641736 kubelet[3041]: E0413 19:24:53.641688 3041 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-17-121\" not found" node="ip-172-31-17-121" Apr 13 19:24:53.733012 kubelet[3041]: I0413 19:24:53.732912 3041 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-17-121" Apr 13 19:24:55.598189 update_engine[2110]: I20260413 19:24:55.595198 2110 update_attempter.cc:509] Updating boot flags... 
Apr 13 19:24:55.803204 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 33 scanned by (udev-worker) (3327) Apr 13 19:24:56.498181 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 33 scanned by (udev-worker) (3327) Apr 13 19:24:57.392318 kubelet[3041]: E0413 19:24:57.392251 3041 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-17-121\" not found" node="ip-172-31-17-121" Apr 13 19:24:57.507199 kubelet[3041]: I0413 19:24:57.506383 3041 apiserver.go:52] "Watching apiserver" Apr 13 19:24:57.535754 kubelet[3041]: I0413 19:24:57.535692 3041 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Apr 13 19:24:57.592556 kubelet[3041]: I0413 19:24:57.592448 3041 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-17-121" Apr 13 19:24:57.592556 kubelet[3041]: E0413 19:24:57.592511 3041 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ip-172-31-17-121\": node \"ip-172-31-17-121\" not found" Apr 13 19:24:57.639850 kubelet[3041]: I0413 19:24:57.639783 3041 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-17-121" Apr 13 19:24:57.670841 kubelet[3041]: E0413 19:24:57.670672 3041 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-17-121\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ip-172-31-17-121" Apr 13 19:24:57.670841 kubelet[3041]: I0413 19:24:57.670729 3041 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-17-121" Apr 13 19:24:57.685564 kubelet[3041]: E0413 19:24:57.685487 3041 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ip-172-31-17-121\" is forbidden: no PriorityClass with name system-node-critical was found" 
pod="kube-system/kube-controller-manager-ip-172-31-17-121" Apr 13 19:24:57.685564 kubelet[3041]: I0413 19:24:57.685538 3041 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-17-121" Apr 13 19:24:57.701325 kubelet[3041]: E0413 19:24:57.701261 3041 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-17-121\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ip-172-31-17-121" Apr 13 19:24:58.970770 kubelet[3041]: I0413 19:24:58.969366 3041 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-17-121" Apr 13 19:24:59.912084 systemd[1]: Reloading requested from client PID 3499 ('systemctl') (unit session-7.scope)... Apr 13 19:24:59.912117 systemd[1]: Reloading... Apr 13 19:25:00.065376 zram_generator::config[3539]: No configuration found. Apr 13 19:25:00.334477 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 13 19:25:00.528225 systemd[1]: Reloading finished in 615 ms. Apr 13 19:25:00.594727 kubelet[3041]: I0413 19:25:00.594548 3041 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 13 19:25:00.596016 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Apr 13 19:25:00.621050 systemd[1]: kubelet.service: Deactivated successfully. Apr 13 19:25:00.622312 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 13 19:25:00.633941 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 13 19:25:01.048574 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Apr 13 19:25:01.068605 (kubelet)[3609]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 13 19:25:01.192650 kubelet[3609]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 13 19:25:01.192650 kubelet[3609]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Apr 13 19:25:01.192650 kubelet[3609]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 13 19:25:01.193321 kubelet[3609]: I0413 19:25:01.192756 3609 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 13 19:25:01.209439 kubelet[3609]: I0413 19:25:01.209366 3609 server.go:530] "Kubelet version" kubeletVersion="v1.33.8" Apr 13 19:25:01.209439 kubelet[3609]: I0413 19:25:01.209418 3609 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 13 19:25:01.209907 kubelet[3609]: I0413 19:25:01.209867 3609 server.go:956] "Client rotation is on, will bootstrap in background" Apr 13 19:25:01.212431 kubelet[3609]: I0413 19:25:01.212369 3609 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Apr 13 19:25:01.214987 sudo[3622]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Apr 13 19:25:01.216494 sudo[3622]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Apr 13 19:25:01.217946 kubelet[3609]: I0413 19:25:01.217470 3609 
dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 13 19:25:01.227406 kubelet[3609]: E0413 19:25:01.227358 3609 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Apr 13 19:25:01.228595 kubelet[3609]: I0413 19:25:01.227585 3609 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Apr 13 19:25:01.243245 kubelet[3609]: I0413 19:25:01.242611 3609 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Apr 13 19:25:01.244463 kubelet[3609]: I0413 19:25:01.244028 3609 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 13 19:25:01.244463 kubelet[3609]: I0413 19:25:01.244097 3609 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"ip-172-31-17-121","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Apr 13 19:25:01.244463 kubelet[3609]: I0413 19:25:01.244427 3609 topology_manager.go:138] "Creating topology manager with none policy" Apr 13 19:25:01.244463 kubelet[3609]: I0413 19:25:01.244447 3609 container_manager_linux.go:303] "Creating device plugin manager" Apr 13 19:25:01.244810 kubelet[3609]: I0413 19:25:01.244592 3609 state_mem.go:36] "Initialized new in-memory state store" Apr 13 19:25:01.245735 kubelet[3609]: I0413 19:25:01.244873 3609 kubelet.go:480] "Attempting to sync node 
with API server" Apr 13 19:25:01.245735 kubelet[3609]: I0413 19:25:01.244950 3609 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 13 19:25:01.245735 kubelet[3609]: I0413 19:25:01.245117 3609 kubelet.go:386] "Adding apiserver pod source" Apr 13 19:25:01.245735 kubelet[3609]: I0413 19:25:01.245287 3609 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 13 19:25:01.253297 kubelet[3609]: I0413 19:25:01.253243 3609 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Apr 13 19:25:01.255972 kubelet[3609]: I0413 19:25:01.255239 3609 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Apr 13 19:25:01.265779 kubelet[3609]: I0413 19:25:01.265219 3609 watchdog_linux.go:99] "Systemd watchdog is not enabled" Apr 13 19:25:01.265779 kubelet[3609]: I0413 19:25:01.265318 3609 server.go:1289] "Started kubelet" Apr 13 19:25:01.277265 kubelet[3609]: I0413 19:25:01.277218 3609 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 13 19:25:01.284787 kubelet[3609]: I0413 19:25:01.284713 3609 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Apr 13 19:25:01.287917 kubelet[3609]: I0413 19:25:01.287091 3609 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 13 19:25:01.287917 kubelet[3609]: I0413 19:25:01.287649 3609 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 13 19:25:01.296294 kubelet[3609]: I0413 19:25:01.296225 3609 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Apr 13 19:25:01.304321 kubelet[3609]: I0413 19:25:01.303019 3609 volume_manager.go:297] "Starting Kubelet Volume Manager" Apr 13 19:25:01.304321 
kubelet[3609]: E0413 19:25:01.303387 3609 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-17-121\" not found" Apr 13 19:25:01.304891 kubelet[3609]: I0413 19:25:01.304847 3609 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Apr 13 19:25:01.305113 kubelet[3609]: I0413 19:25:01.305078 3609 reconciler.go:26] "Reconciler: start to sync state" Apr 13 19:25:01.342710 kubelet[3609]: I0413 19:25:01.342646 3609 server.go:317] "Adding debug handlers to kubelet server" Apr 13 19:25:01.349789 kubelet[3609]: I0413 19:25:01.349717 3609 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 13 19:25:01.367809 kubelet[3609]: I0413 19:25:01.366180 3609 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Apr 13 19:25:01.373894 kubelet[3609]: I0413 19:25:01.373847 3609 factory.go:223] Registration of the containerd container factory successfully Apr 13 19:25:01.373894 kubelet[3609]: I0413 19:25:01.373882 3609 factory.go:223] Registration of the systemd container factory successfully Apr 13 19:25:01.413918 kubelet[3609]: I0413 19:25:01.413876 3609 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Apr 13 19:25:01.414650 kubelet[3609]: I0413 19:25:01.414098 3609 status_manager.go:230] "Starting to sync pod status with apiserver" Apr 13 19:25:01.414650 kubelet[3609]: I0413 19:25:01.414144 3609 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Apr 13 19:25:01.414650 kubelet[3609]: I0413 19:25:01.414215 3609 kubelet.go:2436] "Starting kubelet main sync loop" Apr 13 19:25:01.414650 kubelet[3609]: E0413 19:25:01.414294 3609 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 13 19:25:01.515591 kubelet[3609]: E0413 19:25:01.515554 3609 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Apr 13 19:25:01.554819 kubelet[3609]: I0413 19:25:01.554709 3609 cpu_manager.go:221] "Starting CPU manager" policy="none" Apr 13 19:25:01.556371 kubelet[3609]: I0413 19:25:01.554954 3609 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Apr 13 19:25:01.556371 kubelet[3609]: I0413 19:25:01.554993 3609 state_mem.go:36] "Initialized new in-memory state store" Apr 13 19:25:01.556371 kubelet[3609]: I0413 19:25:01.555253 3609 state_mem.go:88] "Updated default CPUSet" cpuSet="" Apr 13 19:25:01.556371 kubelet[3609]: I0413 19:25:01.555274 3609 state_mem.go:96] "Updated CPUSet assignments" assignments={} Apr 13 19:25:01.556371 kubelet[3609]: I0413 19:25:01.555307 3609 policy_none.go:49] "None policy: Start" Apr 13 19:25:01.556371 kubelet[3609]: I0413 19:25:01.555325 3609 memory_manager.go:186] "Starting memorymanager" policy="None" Apr 13 19:25:01.556371 kubelet[3609]: I0413 19:25:01.555345 3609 state_mem.go:35] "Initializing new in-memory state store" Apr 13 19:25:01.556371 kubelet[3609]: I0413 19:25:01.555510 3609 state_mem.go:75] "Updated machine memory state" Apr 13 19:25:01.561236 kubelet[3609]: E0413 19:25:01.561196 3609 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Apr 13 19:25:01.561636 kubelet[3609]: I0413 19:25:01.561614 3609 eviction_manager.go:189] "Eviction manager: starting control loop" Apr 13 19:25:01.562218 kubelet[3609]: I0413 19:25:01.561749 3609 
container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 13 19:25:01.564591 kubelet[3609]: I0413 19:25:01.564562 3609 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 13 19:25:01.571228 kubelet[3609]: E0413 19:25:01.571056 3609 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Apr 13 19:25:01.681273 kubelet[3609]: I0413 19:25:01.681225 3609 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-17-121" Apr 13 19:25:01.698848 kubelet[3609]: I0413 19:25:01.698793 3609 kubelet_node_status.go:124] "Node was previously registered" node="ip-172-31-17-121" Apr 13 19:25:01.698975 kubelet[3609]: I0413 19:25:01.698916 3609 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-17-121" Apr 13 19:25:01.717576 kubelet[3609]: I0413 19:25:01.717460 3609 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-17-121" Apr 13 19:25:01.720328 kubelet[3609]: I0413 19:25:01.719821 3609 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-17-121" Apr 13 19:25:01.721897 kubelet[3609]: I0413 19:25:01.721818 3609 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-17-121" Apr 13 19:25:01.744431 kubelet[3609]: E0413 19:25:01.744367 3609 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ip-172-31-17-121\" already exists" pod="kube-system/kube-controller-manager-ip-172-31-17-121" Apr 13 19:25:01.812462 kubelet[3609]: I0413 19:25:01.812316 3609 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/78b7f21d43a8123cec59fd2a4d7f4a6f-k8s-certs\") pod \"kube-apiserver-ip-172-31-17-121\" (UID: 
\"78b7f21d43a8123cec59fd2a4d7f4a6f\") " pod="kube-system/kube-apiserver-ip-172-31-17-121" Apr 13 19:25:01.812462 kubelet[3609]: I0413 19:25:01.812386 3609 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/78b7f21d43a8123cec59fd2a4d7f4a6f-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-17-121\" (UID: \"78b7f21d43a8123cec59fd2a4d7f4a6f\") " pod="kube-system/kube-apiserver-ip-172-31-17-121" Apr 13 19:25:01.812462 kubelet[3609]: I0413 19:25:01.812432 3609 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8c9632bf95b7627d8ca1f3299bf70989-ca-certs\") pod \"kube-controller-manager-ip-172-31-17-121\" (UID: \"8c9632bf95b7627d8ca1f3299bf70989\") " pod="kube-system/kube-controller-manager-ip-172-31-17-121" Apr 13 19:25:01.812675 kubelet[3609]: I0413 19:25:01.812469 3609 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/8c9632bf95b7627d8ca1f3299bf70989-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-17-121\" (UID: \"8c9632bf95b7627d8ca1f3299bf70989\") " pod="kube-system/kube-controller-manager-ip-172-31-17-121" Apr 13 19:25:01.812675 kubelet[3609]: I0413 19:25:01.812505 3609 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8c9632bf95b7627d8ca1f3299bf70989-k8s-certs\") pod \"kube-controller-manager-ip-172-31-17-121\" (UID: \"8c9632bf95b7627d8ca1f3299bf70989\") " pod="kube-system/kube-controller-manager-ip-172-31-17-121" Apr 13 19:25:01.812675 kubelet[3609]: I0413 19:25:01.812557 3609 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/8c9632bf95b7627d8ca1f3299bf70989-kubeconfig\") pod \"kube-controller-manager-ip-172-31-17-121\" (UID: \"8c9632bf95b7627d8ca1f3299bf70989\") " pod="kube-system/kube-controller-manager-ip-172-31-17-121" Apr 13 19:25:01.812675 kubelet[3609]: I0413 19:25:01.812602 3609 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/111e3e4975509b86758a1059d2485591-kubeconfig\") pod \"kube-scheduler-ip-172-31-17-121\" (UID: \"111e3e4975509b86758a1059d2485591\") " pod="kube-system/kube-scheduler-ip-172-31-17-121" Apr 13 19:25:01.812675 kubelet[3609]: I0413 19:25:01.812638 3609 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/78b7f21d43a8123cec59fd2a4d7f4a6f-ca-certs\") pod \"kube-apiserver-ip-172-31-17-121\" (UID: \"78b7f21d43a8123cec59fd2a4d7f4a6f\") " pod="kube-system/kube-apiserver-ip-172-31-17-121" Apr 13 19:25:01.814097 kubelet[3609]: I0413 19:25:01.812678 3609 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8c9632bf95b7627d8ca1f3299bf70989-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-17-121\" (UID: \"8c9632bf95b7627d8ca1f3299bf70989\") " pod="kube-system/kube-controller-manager-ip-172-31-17-121" Apr 13 19:25:02.245472 sudo[3622]: pam_unix(sudo:session): session closed for user root Apr 13 19:25:02.250524 kubelet[3609]: I0413 19:25:02.250375 3609 apiserver.go:52] "Watching apiserver" Apr 13 19:25:02.306020 kubelet[3609]: I0413 19:25:02.305937 3609 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Apr 13 19:25:02.499678 kubelet[3609]: I0413 19:25:02.499502 3609 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-17-121" Apr 13 
19:25:02.516620 kubelet[3609]: E0413 19:25:02.516548 3609 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-17-121\" already exists" pod="kube-system/kube-apiserver-ip-172-31-17-121" Apr 13 19:25:02.541311 kubelet[3609]: I0413 19:25:02.541203 3609 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-17-121" podStartSLOduration=1.5411803160000002 podStartE2EDuration="1.541180316s" podCreationTimestamp="2026-04-13 19:25:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-13 19:25:02.519505539 +0000 UTC m=+1.438032317" watchObservedRunningTime="2026-04-13 19:25:02.541180316 +0000 UTC m=+1.459707106" Apr 13 19:25:02.586197 kubelet[3609]: I0413 19:25:02.584659 3609 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-17-121" podStartSLOduration=1.584635886 podStartE2EDuration="1.584635886s" podCreationTimestamp="2026-04-13 19:25:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-13 19:25:02.565303669 +0000 UTC m=+1.483830447" watchObservedRunningTime="2026-04-13 19:25:02.584635886 +0000 UTC m=+1.503162664" Apr 13 19:25:05.527301 kubelet[3609]: I0413 19:25:05.527239 3609 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Apr 13 19:25:05.533545 kubelet[3609]: I0413 19:25:05.528100 3609 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Apr 13 19:25:05.533626 containerd[2137]: time="2026-04-13T19:25:05.527754007Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Apr 13 19:25:05.571364 sudo[2509]: pam_unix(sudo:session): session closed for user root Apr 13 19:25:05.726512 sshd[2485]: pam_unix(sshd:session): session closed for user core Apr 13 19:25:05.733930 systemd-logind[2102]: Session 7 logged out. Waiting for processes to exit. Apr 13 19:25:05.735284 systemd[1]: sshd@6-172.31.17.121:22-4.175.71.9:35398.service: Deactivated successfully. Apr 13 19:25:05.745382 systemd[1]: session-7.scope: Deactivated successfully. Apr 13 19:25:05.748702 systemd-logind[2102]: Removed session 7. Apr 13 19:25:06.748747 kubelet[3609]: I0413 19:25:06.746422 3609 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/53f259e2-41c9-4fd1-9704-8e6e2fdebb37-bpf-maps\") pod \"cilium-8n86d\" (UID: \"53f259e2-41c9-4fd1-9704-8e6e2fdebb37\") " pod="kube-system/cilium-8n86d" Apr 13 19:25:06.748747 kubelet[3609]: I0413 19:25:06.747269 3609 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/53f259e2-41c9-4fd1-9704-8e6e2fdebb37-lib-modules\") pod \"cilium-8n86d\" (UID: \"53f259e2-41c9-4fd1-9704-8e6e2fdebb37\") " pod="kube-system/cilium-8n86d" Apr 13 19:25:06.748747 kubelet[3609]: I0413 19:25:06.747321 3609 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/53f259e2-41c9-4fd1-9704-8e6e2fdebb37-xtables-lock\") pod \"cilium-8n86d\" (UID: \"53f259e2-41c9-4fd1-9704-8e6e2fdebb37\") " pod="kube-system/cilium-8n86d" Apr 13 19:25:06.748747 kubelet[3609]: I0413 19:25:06.747372 3609 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/53f259e2-41c9-4fd1-9704-8e6e2fdebb37-host-proc-sys-net\") pod \"cilium-8n86d\" (UID: \"53f259e2-41c9-4fd1-9704-8e6e2fdebb37\") " 
pod="kube-system/cilium-8n86d" Apr 13 19:25:06.748747 kubelet[3609]: I0413 19:25:06.747419 3609 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/53f259e2-41c9-4fd1-9704-8e6e2fdebb37-host-proc-sys-kernel\") pod \"cilium-8n86d\" (UID: \"53f259e2-41c9-4fd1-9704-8e6e2fdebb37\") " pod="kube-system/cilium-8n86d" Apr 13 19:25:06.748747 kubelet[3609]: I0413 19:25:06.747462 3609 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/53f259e2-41c9-4fd1-9704-8e6e2fdebb37-hubble-tls\") pod \"cilium-8n86d\" (UID: \"53f259e2-41c9-4fd1-9704-8e6e2fdebb37\") " pod="kube-system/cilium-8n86d" Apr 13 19:25:06.750286 kubelet[3609]: I0413 19:25:06.747509 3609 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3265ec37-c3ba-407f-a53c-ee08140897c3-xtables-lock\") pod \"kube-proxy-7bnv8\" (UID: \"3265ec37-c3ba-407f-a53c-ee08140897c3\") " pod="kube-system/kube-proxy-7bnv8" Apr 13 19:25:06.750286 kubelet[3609]: I0413 19:25:06.747545 3609 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/53f259e2-41c9-4fd1-9704-8e6e2fdebb37-cilium-cgroup\") pod \"cilium-8n86d\" (UID: \"53f259e2-41c9-4fd1-9704-8e6e2fdebb37\") " pod="kube-system/cilium-8n86d" Apr 13 19:25:06.750286 kubelet[3609]: I0413 19:25:06.747595 3609 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/53f259e2-41c9-4fd1-9704-8e6e2fdebb37-etc-cni-netd\") pod \"cilium-8n86d\" (UID: \"53f259e2-41c9-4fd1-9704-8e6e2fdebb37\") " pod="kube-system/cilium-8n86d" Apr 13 19:25:06.750286 kubelet[3609]: I0413 19:25:06.747634 3609 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gtnjc\" (UniqueName: \"kubernetes.io/projected/53f259e2-41c9-4fd1-9704-8e6e2fdebb37-kube-api-access-gtnjc\") pod \"cilium-8n86d\" (UID: \"53f259e2-41c9-4fd1-9704-8e6e2fdebb37\") " pod="kube-system/cilium-8n86d" Apr 13 19:25:06.753028 kubelet[3609]: I0413 19:25:06.747670 3609 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3265ec37-c3ba-407f-a53c-ee08140897c3-lib-modules\") pod \"kube-proxy-7bnv8\" (UID: \"3265ec37-c3ba-407f-a53c-ee08140897c3\") " pod="kube-system/kube-proxy-7bnv8" Apr 13 19:25:06.753028 kubelet[3609]: I0413 19:25:06.753015 3609 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/53f259e2-41c9-4fd1-9704-8e6e2fdebb37-hostproc\") pod \"cilium-8n86d\" (UID: \"53f259e2-41c9-4fd1-9704-8e6e2fdebb37\") " pod="kube-system/cilium-8n86d" Apr 13 19:25:06.753028 kubelet[3609]: I0413 19:25:06.753062 3609 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/53f259e2-41c9-4fd1-9704-8e6e2fdebb37-clustermesh-secrets\") pod \"cilium-8n86d\" (UID: \"53f259e2-41c9-4fd1-9704-8e6e2fdebb37\") " pod="kube-system/cilium-8n86d" Apr 13 19:25:06.754857 kubelet[3609]: I0413 19:25:06.753102 3609 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/53f259e2-41c9-4fd1-9704-8e6e2fdebb37-cilium-config-path\") pod \"cilium-8n86d\" (UID: \"53f259e2-41c9-4fd1-9704-8e6e2fdebb37\") " pod="kube-system/cilium-8n86d" Apr 13 19:25:06.754857 kubelet[3609]: I0413 19:25:06.753148 3609 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: 
\"kubernetes.io/configmap/3265ec37-c3ba-407f-a53c-ee08140897c3-kube-proxy\") pod \"kube-proxy-7bnv8\" (UID: \"3265ec37-c3ba-407f-a53c-ee08140897c3\") " pod="kube-system/kube-proxy-7bnv8" Apr 13 19:25:06.754857 kubelet[3609]: I0413 19:25:06.754303 3609 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7b7zs\" (UniqueName: \"kubernetes.io/projected/3265ec37-c3ba-407f-a53c-ee08140897c3-kube-api-access-7b7zs\") pod \"kube-proxy-7bnv8\" (UID: \"3265ec37-c3ba-407f-a53c-ee08140897c3\") " pod="kube-system/kube-proxy-7bnv8" Apr 13 19:25:06.754857 kubelet[3609]: I0413 19:25:06.754351 3609 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/53f259e2-41c9-4fd1-9704-8e6e2fdebb37-cilium-run\") pod \"cilium-8n86d\" (UID: \"53f259e2-41c9-4fd1-9704-8e6e2fdebb37\") " pod="kube-system/cilium-8n86d" Apr 13 19:25:06.754857 kubelet[3609]: I0413 19:25:06.754389 3609 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/53f259e2-41c9-4fd1-9704-8e6e2fdebb37-cni-path\") pod \"cilium-8n86d\" (UID: \"53f259e2-41c9-4fd1-9704-8e6e2fdebb37\") " pod="kube-system/cilium-8n86d" Apr 13 19:25:06.856193 kubelet[3609]: I0413 19:25:06.855134 3609 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-49zk5\" (UniqueName: \"kubernetes.io/projected/f07f704c-94aa-4657-bd9e-7b7059b7ffad-kube-api-access-49zk5\") pod \"cilium-operator-6c4d7847fc-scb9x\" (UID: \"f07f704c-94aa-4657-bd9e-7b7059b7ffad\") " pod="kube-system/cilium-operator-6c4d7847fc-scb9x" Apr 13 19:25:06.856193 kubelet[3609]: I0413 19:25:06.855307 3609 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: 
\"kubernetes.io/configmap/f07f704c-94aa-4657-bd9e-7b7059b7ffad-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-scb9x\" (UID: \"f07f704c-94aa-4657-bd9e-7b7059b7ffad\") " pod="kube-system/cilium-operator-6c4d7847fc-scb9x" Apr 13 19:25:06.967591 containerd[2137]: time="2026-04-13T19:25:06.967527949Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-7bnv8,Uid:3265ec37-c3ba-407f-a53c-ee08140897c3,Namespace:kube-system,Attempt:0,}" Apr 13 19:25:06.986500 containerd[2137]: time="2026-04-13T19:25:06.986422672Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-8n86d,Uid:53f259e2-41c9-4fd1-9704-8e6e2fdebb37,Namespace:kube-system,Attempt:0,}" Apr 13 19:25:07.034137 containerd[2137]: time="2026-04-13T19:25:07.033708802Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 19:25:07.035801 containerd[2137]: time="2026-04-13T19:25:07.035706091Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 19:25:07.036205 containerd[2137]: time="2026-04-13T19:25:07.035827866Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 19:25:07.036205 containerd[2137]: time="2026-04-13T19:25:07.036027950Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 19:25:07.052074 containerd[2137]: time="2026-04-13T19:25:07.051819286Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 19:25:07.053104 containerd[2137]: time="2026-04-13T19:25:07.051957769Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 19:25:07.053706 containerd[2137]: time="2026-04-13T19:25:07.053081822Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 19:25:07.054042 containerd[2137]: time="2026-04-13T19:25:07.053887891Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 19:25:07.080701 containerd[2137]: time="2026-04-13T19:25:07.080626044Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-scb9x,Uid:f07f704c-94aa-4657-bd9e-7b7059b7ffad,Namespace:kube-system,Attempt:0,}" Apr 13 19:25:07.154132 containerd[2137]: time="2026-04-13T19:25:07.154063459Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-7bnv8,Uid:3265ec37-c3ba-407f-a53c-ee08140897c3,Namespace:kube-system,Attempt:0,} returns sandbox id \"fb572611fd4e578b52cd3e06ed1a41333f0aff90d641ecdadde3ae489d3e24d0\"" Apr 13 19:25:07.173170 containerd[2137]: time="2026-04-13T19:25:07.172973571Z" level=info msg="CreateContainer within sandbox \"fb572611fd4e578b52cd3e06ed1a41333f0aff90d641ecdadde3ae489d3e24d0\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Apr 13 19:25:07.176208 containerd[2137]: time="2026-04-13T19:25:07.173927202Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 19:25:07.176471 containerd[2137]: time="2026-04-13T19:25:07.176129900Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 19:25:07.176471 containerd[2137]: time="2026-04-13T19:25:07.176188874Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 19:25:07.176471 containerd[2137]: time="2026-04-13T19:25:07.176369384Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 19:25:07.178361 containerd[2137]: time="2026-04-13T19:25:07.178297168Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-8n86d,Uid:53f259e2-41c9-4fd1-9704-8e6e2fdebb37,Namespace:kube-system,Attempt:0,} returns sandbox id \"84a814b36b987aea1a82962663b053189d4e944ed9a665ca5d523a2ab7e12e25\"" Apr 13 19:25:07.183278 containerd[2137]: time="2026-04-13T19:25:07.183031032Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Apr 13 19:25:07.224722 containerd[2137]: time="2026-04-13T19:25:07.224633972Z" level=info msg="CreateContainer within sandbox \"fb572611fd4e578b52cd3e06ed1a41333f0aff90d641ecdadde3ae489d3e24d0\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"ad1abcd48779ce07023f809121ff2edc9ddb25ba102874eecb5c1746fa334a78\"" Apr 13 19:25:07.229733 containerd[2137]: time="2026-04-13T19:25:07.229674043Z" level=info msg="StartContainer for \"ad1abcd48779ce07023f809121ff2edc9ddb25ba102874eecb5c1746fa334a78\"" Apr 13 19:25:07.284658 containerd[2137]: time="2026-04-13T19:25:07.284505405Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-scb9x,Uid:f07f704c-94aa-4657-bd9e-7b7059b7ffad,Namespace:kube-system,Attempt:0,} returns sandbox id \"824339baeb1e5029dfd6f1765e1059a2bf0b63f020e1f8f8cfc4c5b2e0133e5c\"" Apr 13 19:25:07.384092 containerd[2137]: time="2026-04-13T19:25:07.384020379Z" level=info msg="StartContainer for \"ad1abcd48779ce07023f809121ff2edc9ddb25ba102874eecb5c1746fa334a78\" returns successfully" Apr 13 19:25:07.537702 kubelet[3609]: I0413 19:25:07.537429 3609 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="kube-system/kube-proxy-7bnv8" podStartSLOduration=1.537406824 podStartE2EDuration="1.537406824s" podCreationTimestamp="2026-04-13 19:25:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-13 19:25:07.535953823 +0000 UTC m=+6.454480613" watchObservedRunningTime="2026-04-13 19:25:07.537406824 +0000 UTC m=+6.455933602" Apr 13 19:25:12.253274 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2136848198.mount: Deactivated successfully. Apr 13 19:25:14.891481 containerd[2137]: time="2026-04-13T19:25:14.891401493Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:25:14.893655 containerd[2137]: time="2026-04-13T19:25:14.893529901Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710" Apr 13 19:25:14.897133 containerd[2137]: time="2026-04-13T19:25:14.896509594Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:25:14.899833 containerd[2137]: time="2026-04-13T19:25:14.899777767Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 7.716666543s" Apr 13 19:25:14.900006 containerd[2137]: time="2026-04-13T19:25:14.899974625Z" level=info msg="PullImage 
\"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Apr 13 19:25:14.904886 containerd[2137]: time="2026-04-13T19:25:14.904813628Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Apr 13 19:25:14.914506 containerd[2137]: time="2026-04-13T19:25:14.913388726Z" level=info msg="CreateContainer within sandbox \"84a814b36b987aea1a82962663b053189d4e944ed9a665ca5d523a2ab7e12e25\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Apr 13 19:25:14.944754 containerd[2137]: time="2026-04-13T19:25:14.944700479Z" level=info msg="CreateContainer within sandbox \"84a814b36b987aea1a82962663b053189d4e944ed9a665ca5d523a2ab7e12e25\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"4810fec1c42f6ddee5b7367190d2719dfc2fa95ed27d8ae74c38465db0c5a5bb\"" Apr 13 19:25:14.946732 containerd[2137]: time="2026-04-13T19:25:14.946626499Z" level=info msg="StartContainer for \"4810fec1c42f6ddee5b7367190d2719dfc2fa95ed27d8ae74c38465db0c5a5bb\"" Apr 13 19:25:15.051573 containerd[2137]: time="2026-04-13T19:25:15.051497922Z" level=info msg="StartContainer for \"4810fec1c42f6ddee5b7367190d2719dfc2fa95ed27d8ae74c38465db0c5a5bb\" returns successfully" Apr 13 19:25:15.938422 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4810fec1c42f6ddee5b7367190d2719dfc2fa95ed27d8ae74c38465db0c5a5bb-rootfs.mount: Deactivated successfully. 
Apr 13 19:25:16.299779 containerd[2137]: time="2026-04-13T19:25:16.299421799Z" level=info msg="shim disconnected" id=4810fec1c42f6ddee5b7367190d2719dfc2fa95ed27d8ae74c38465db0c5a5bb namespace=k8s.io Apr 13 19:25:16.299779 containerd[2137]: time="2026-04-13T19:25:16.299501715Z" level=warning msg="cleaning up after shim disconnected" id=4810fec1c42f6ddee5b7367190d2719dfc2fa95ed27d8ae74c38465db0c5a5bb namespace=k8s.io Apr 13 19:25:16.299779 containerd[2137]: time="2026-04-13T19:25:16.299522285Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 13 19:25:16.572393 containerd[2137]: time="2026-04-13T19:25:16.572107285Z" level=info msg="CreateContainer within sandbox \"84a814b36b987aea1a82962663b053189d4e944ed9a665ca5d523a2ab7e12e25\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Apr 13 19:25:16.647594 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1320200219.mount: Deactivated successfully. Apr 13 19:25:16.668202 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2536957226.mount: Deactivated successfully. Apr 13 19:25:16.687264 containerd[2137]: time="2026-04-13T19:25:16.687199347Z" level=info msg="CreateContainer within sandbox \"84a814b36b987aea1a82962663b053189d4e944ed9a665ca5d523a2ab7e12e25\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"acf2ef2621f3ff8a571befb386311a0be119872b4ec4801d80aa658f27cafed2\"" Apr 13 19:25:16.690231 containerd[2137]: time="2026-04-13T19:25:16.688405883Z" level=info msg="StartContainer for \"acf2ef2621f3ff8a571befb386311a0be119872b4ec4801d80aa658f27cafed2\"" Apr 13 19:25:16.829255 containerd[2137]: time="2026-04-13T19:25:16.829042046Z" level=info msg="StartContainer for \"acf2ef2621f3ff8a571befb386311a0be119872b4ec4801d80aa658f27cafed2\" returns successfully" Apr 13 19:25:16.857149 systemd[1]: systemd-sysctl.service: Deactivated successfully. Apr 13 19:25:16.858105 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. 
Apr 13 19:25:16.858687 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Apr 13 19:25:16.871106 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Apr 13 19:25:16.914795 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Apr 13 19:25:16.958360 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-acf2ef2621f3ff8a571befb386311a0be119872b4ec4801d80aa658f27cafed2-rootfs.mount: Deactivated successfully. Apr 13 19:25:16.967916 containerd[2137]: time="2026-04-13T19:25:16.967645214Z" level=info msg="shim disconnected" id=acf2ef2621f3ff8a571befb386311a0be119872b4ec4801d80aa658f27cafed2 namespace=k8s.io Apr 13 19:25:16.967916 containerd[2137]: time="2026-04-13T19:25:16.967739115Z" level=warning msg="cleaning up after shim disconnected" id=acf2ef2621f3ff8a571befb386311a0be119872b4ec4801d80aa658f27cafed2 namespace=k8s.io Apr 13 19:25:16.967916 containerd[2137]: time="2026-04-13T19:25:16.967762192Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 13 19:25:17.013021 containerd[2137]: time="2026-04-13T19:25:17.011148544Z" level=warning msg="cleanup warnings time=\"2026-04-13T19:25:17Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Apr 13 19:25:17.530072 containerd[2137]: time="2026-04-13T19:25:17.530014095Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:25:17.532507 containerd[2137]: time="2026-04-13T19:25:17.532448505Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306" Apr 13 19:25:17.534806 containerd[2137]: time="2026-04-13T19:25:17.534732487Z" level=info msg="ImageCreate 
event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:25:17.539310 containerd[2137]: time="2026-04-13T19:25:17.539236449Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 2.632969245s" Apr 13 19:25:17.539310 containerd[2137]: time="2026-04-13T19:25:17.539303232Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Apr 13 19:25:17.547579 containerd[2137]: time="2026-04-13T19:25:17.547513941Z" level=info msg="CreateContainer within sandbox \"824339baeb1e5029dfd6f1765e1059a2bf0b63f020e1f8f8cfc4c5b2e0133e5c\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Apr 13 19:25:17.578533 containerd[2137]: time="2026-04-13T19:25:17.578454647Z" level=info msg="CreateContainer within sandbox \"824339baeb1e5029dfd6f1765e1059a2bf0b63f020e1f8f8cfc4c5b2e0133e5c\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"c8e1acfc83ff7add122fac2206782526c6c09be2a8a0e5ceda754e9dc12295aa\"" Apr 13 19:25:17.588277 containerd[2137]: time="2026-04-13T19:25:17.585435335Z" level=info msg="StartContainer for \"c8e1acfc83ff7add122fac2206782526c6c09be2a8a0e5ceda754e9dc12295aa\"" Apr 13 19:25:17.607969 containerd[2137]: time="2026-04-13T19:25:17.607900433Z" level=info msg="CreateContainer within sandbox \"84a814b36b987aea1a82962663b053189d4e944ed9a665ca5d523a2ab7e12e25\" for container 
&ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Apr 13 19:25:17.661515 containerd[2137]: time="2026-04-13T19:25:17.661443004Z" level=info msg="CreateContainer within sandbox \"84a814b36b987aea1a82962663b053189d4e944ed9a665ca5d523a2ab7e12e25\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"95a6204e9fa4839e5d2d4779e729f58cc5d0d4ff63cf14b18be66460359466dc\"" Apr 13 19:25:17.663481 containerd[2137]: time="2026-04-13T19:25:17.663409600Z" level=info msg="StartContainer for \"95a6204e9fa4839e5d2d4779e729f58cc5d0d4ff63cf14b18be66460359466dc\"" Apr 13 19:25:17.746615 containerd[2137]: time="2026-04-13T19:25:17.746516830Z" level=info msg="StartContainer for \"c8e1acfc83ff7add122fac2206782526c6c09be2a8a0e5ceda754e9dc12295aa\" returns successfully" Apr 13 19:25:17.815475 containerd[2137]: time="2026-04-13T19:25:17.813522619Z" level=info msg="StartContainer for \"95a6204e9fa4839e5d2d4779e729f58cc5d0d4ff63cf14b18be66460359466dc\" returns successfully" Apr 13 19:25:17.959649 containerd[2137]: time="2026-04-13T19:25:17.959347050Z" level=info msg="shim disconnected" id=95a6204e9fa4839e5d2d4779e729f58cc5d0d4ff63cf14b18be66460359466dc namespace=k8s.io Apr 13 19:25:17.959649 containerd[2137]: time="2026-04-13T19:25:17.959510301Z" level=warning msg="cleaning up after shim disconnected" id=95a6204e9fa4839e5d2d4779e729f58cc5d0d4ff63cf14b18be66460359466dc namespace=k8s.io Apr 13 19:25:17.959649 containerd[2137]: time="2026-04-13T19:25:17.959531170Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 13 19:25:18.609473 containerd[2137]: time="2026-04-13T19:25:18.609392935Z" level=info msg="CreateContainer within sandbox \"84a814b36b987aea1a82962663b053189d4e944ed9a665ca5d523a2ab7e12e25\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Apr 13 19:25:18.672192 containerd[2137]: time="2026-04-13T19:25:18.670549477Z" level=info msg="CreateContainer within sandbox \"84a814b36b987aea1a82962663b053189d4e944ed9a665ca5d523a2ab7e12e25\" 
for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"5ec805881eb2b3b3d264fc855cbe0ef938c167c614a9bdb1a7bc39efae58ad0a\"" Apr 13 19:25:18.674497 containerd[2137]: time="2026-04-13T19:25:18.674430256Z" level=info msg="StartContainer for \"5ec805881eb2b3b3d264fc855cbe0ef938c167c614a9bdb1a7bc39efae58ad0a\"" Apr 13 19:25:18.917732 kubelet[3609]: I0413 19:25:18.917262 3609 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-scb9x" podStartSLOduration=2.664095853 podStartE2EDuration="12.917239683s" podCreationTimestamp="2026-04-13 19:25:06 +0000 UTC" firstStartedPulling="2026-04-13 19:25:07.287392637 +0000 UTC m=+6.205919391" lastFinishedPulling="2026-04-13 19:25:17.540536455 +0000 UTC m=+16.459063221" observedRunningTime="2026-04-13 19:25:18.797467143 +0000 UTC m=+17.715993933" watchObservedRunningTime="2026-04-13 19:25:18.917239683 +0000 UTC m=+17.835766473" Apr 13 19:25:18.985192 containerd[2137]: time="2026-04-13T19:25:18.983642989Z" level=info msg="StartContainer for \"5ec805881eb2b3b3d264fc855cbe0ef938c167c614a9bdb1a7bc39efae58ad0a\" returns successfully" Apr 13 19:25:19.079869 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5ec805881eb2b3b3d264fc855cbe0ef938c167c614a9bdb1a7bc39efae58ad0a-rootfs.mount: Deactivated successfully. 
Apr 13 19:25:19.083566 containerd[2137]: time="2026-04-13T19:25:19.083479162Z" level=info msg="shim disconnected" id=5ec805881eb2b3b3d264fc855cbe0ef938c167c614a9bdb1a7bc39efae58ad0a namespace=k8s.io Apr 13 19:25:19.083566 containerd[2137]: time="2026-04-13T19:25:19.083556776Z" level=warning msg="cleaning up after shim disconnected" id=5ec805881eb2b3b3d264fc855cbe0ef938c167c614a9bdb1a7bc39efae58ad0a namespace=k8s.io Apr 13 19:25:19.086253 containerd[2137]: time="2026-04-13T19:25:19.083592914Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 13 19:25:19.617608 containerd[2137]: time="2026-04-13T19:25:19.617528290Z" level=info msg="CreateContainer within sandbox \"84a814b36b987aea1a82962663b053189d4e944ed9a665ca5d523a2ab7e12e25\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Apr 13 19:25:19.678979 containerd[2137]: time="2026-04-13T19:25:19.678902224Z" level=info msg="CreateContainer within sandbox \"84a814b36b987aea1a82962663b053189d4e944ed9a665ca5d523a2ab7e12e25\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"1a1fd094cef0df29ddb9e4230ed13e61dcf291d90675d094a49997ad65ea4f76\"" Apr 13 19:25:19.684208 containerd[2137]: time="2026-04-13T19:25:19.683417965Z" level=info msg="StartContainer for \"1a1fd094cef0df29ddb9e4230ed13e61dcf291d90675d094a49997ad65ea4f76\"" Apr 13 19:25:19.825628 containerd[2137]: time="2026-04-13T19:25:19.825554325Z" level=info msg="StartContainer for \"1a1fd094cef0df29ddb9e4230ed13e61dcf291d90675d094a49997ad65ea4f76\" returns successfully" Apr 13 19:25:19.979577 kubelet[3609]: I0413 19:25:19.976492 3609 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Apr 13 19:25:20.157895 kubelet[3609]: I0413 19:25:20.157703 3609 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9j5tw\" (UniqueName: \"kubernetes.io/projected/b24138e8-e862-4177-880d-00a2fc3c4a43-kube-api-access-9j5tw\") pod 
\"coredns-674b8bbfcf-tmp8b\" (UID: \"b24138e8-e862-4177-880d-00a2fc3c4a43\") " pod="kube-system/coredns-674b8bbfcf-tmp8b" Apr 13 19:25:20.160301 kubelet[3609]: I0413 19:25:20.158058 3609 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bf27f760-66e2-489c-b16e-3efb5b6a747d-config-volume\") pod \"coredns-674b8bbfcf-8dhwv\" (UID: \"bf27f760-66e2-489c-b16e-3efb5b6a747d\") " pod="kube-system/coredns-674b8bbfcf-8dhwv" Apr 13 19:25:20.160301 kubelet[3609]: I0413 19:25:20.158405 3609 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b24138e8-e862-4177-880d-00a2fc3c4a43-config-volume\") pod \"coredns-674b8bbfcf-tmp8b\" (UID: \"b24138e8-e862-4177-880d-00a2fc3c4a43\") " pod="kube-system/coredns-674b8bbfcf-tmp8b" Apr 13 19:25:20.160301 kubelet[3609]: I0413 19:25:20.158583 3609 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pwnrk\" (UniqueName: \"kubernetes.io/projected/bf27f760-66e2-489c-b16e-3efb5b6a747d-kube-api-access-pwnrk\") pod \"coredns-674b8bbfcf-8dhwv\" (UID: \"bf27f760-66e2-489c-b16e-3efb5b6a747d\") " pod="kube-system/coredns-674b8bbfcf-8dhwv" Apr 13 19:25:20.367965 containerd[2137]: time="2026-04-13T19:25:20.367488261Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-8dhwv,Uid:bf27f760-66e2-489c-b16e-3efb5b6a747d,Namespace:kube-system,Attempt:0,}" Apr 13 19:25:20.376002 containerd[2137]: time="2026-04-13T19:25:20.375462651Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-tmp8b,Uid:b24138e8-e862-4177-880d-00a2fc3c4a43,Namespace:kube-system,Attempt:0,}" Apr 13 19:25:20.676427 kubelet[3609]: I0413 19:25:20.675885 3609 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-8n86d" 
podStartSLOduration=6.955766289 podStartE2EDuration="14.675862937s" podCreationTimestamp="2026-04-13 19:25:06 +0000 UTC" firstStartedPulling="2026-04-13 19:25:07.181866138 +0000 UTC m=+6.100392904" lastFinishedPulling="2026-04-13 19:25:14.901962786 +0000 UTC m=+13.820489552" observedRunningTime="2026-04-13 19:25:20.674077894 +0000 UTC m=+19.592604672" watchObservedRunningTime="2026-04-13 19:25:20.675862937 +0000 UTC m=+19.594389703" Apr 13 19:25:22.945846 (udev-worker)[4410]: Network interface NamePolicy= disabled on kernel command line. Apr 13 19:25:22.952019 systemd-networkd[1689]: cilium_host: Link UP Apr 13 19:25:22.952658 systemd-networkd[1689]: cilium_net: Link UP Apr 13 19:25:22.952667 systemd-networkd[1689]: cilium_net: Gained carrier Apr 13 19:25:22.957648 systemd-networkd[1689]: cilium_host: Gained carrier Apr 13 19:25:22.963689 (udev-worker)[4447]: Network interface NamePolicy= disabled on kernel command line. Apr 13 19:25:23.148558 systemd-networkd[1689]: cilium_net: Gained IPv6LL Apr 13 19:25:23.152897 systemd-networkd[1689]: cilium_vxlan: Link UP Apr 13 19:25:23.152908 systemd-networkd[1689]: cilium_vxlan: Gained carrier Apr 13 19:25:23.733204 kernel: NET: Registered PF_ALG protocol family Apr 13 19:25:23.852363 systemd-networkd[1689]: cilium_host: Gained IPv6LL Apr 13 19:25:24.556408 systemd-networkd[1689]: cilium_vxlan: Gained IPv6LL Apr 13 19:25:25.070008 systemd-networkd[1689]: lxc_health: Link UP Apr 13 19:25:25.076471 (udev-worker)[4452]: Network interface NamePolicy= disabled on kernel command line. 
Apr 13 19:25:25.076681 systemd-networkd[1689]: lxc_health: Gained carrier Apr 13 19:25:25.546883 systemd-networkd[1689]: lxc64a249acb657: Link UP Apr 13 19:25:25.554228 kernel: eth0: renamed from tmp44d1e Apr 13 19:25:25.560429 systemd-networkd[1689]: lxc64a249acb657: Gained carrier Apr 13 19:25:25.609091 systemd-networkd[1689]: lxc32c70decdbd1: Link UP Apr 13 19:25:25.637234 kernel: eth0: renamed from tmp01ba2 Apr 13 19:25:25.656056 systemd-networkd[1689]: lxc32c70decdbd1: Gained carrier Apr 13 19:25:26.412425 systemd-networkd[1689]: lxc_health: Gained IPv6LL Apr 13 19:25:27.116838 systemd-networkd[1689]: lxc64a249acb657: Gained IPv6LL Apr 13 19:25:27.564513 systemd-networkd[1689]: lxc32c70decdbd1: Gained IPv6LL Apr 13 19:25:29.997563 ntpd[2091]: Listen normally on 6 cilium_host 192.168.0.160:123 Apr 13 19:25:29.997691 ntpd[2091]: Listen normally on 7 cilium_net [fe80::b48d:ecff:fedf:18af%4]:123 Apr 13 19:25:29.997796 ntpd[2091]: Listen normally on 8 cilium_host [fe80::5013:f5ff:fe73:c6ef%5]:123 Apr 13 19:25:29.997869 ntpd[2091]: Listen normally on 9 cilium_vxlan [fe80::78a4:1dff:fe4d:104b%6]:123 Apr 13 19:25:29.997939 ntpd[2091]: Listen normally on 10 lxc_health [fe80::68a9:d9ff:fee1:f8ad%8]:123 Apr 13 19:25:29.998008 ntpd[2091]: Listen normally on 11 lxc64a249acb657 [fe80::c00a:efff:fea1:2145%10]:123 Apr 13 19:25:29.998074 ntpd[2091]: Listen normally on 12 lxc32c70decdbd1 [fe80::3824:afff:feda:f137%12]:123 Apr 13 19:25:34.160291 containerd[2137]: time="2026-04-13T19:25:34.159305099Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 19:25:34.160291 containerd[2137]: time="2026-04-13T19:25:34.159401783Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 19:25:34.160291 containerd[2137]: time="2026-04-13T19:25:34.159438029Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 19:25:34.160291 containerd[2137]: time="2026-04-13T19:25:34.159633723Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 19:25:34.216655 systemd[1]: run-containerd-runc-k8s.io-44d1edc6ee4c3d478b8d36e824105b8fb0a87bf46215c26179b70ee7b1cfc1c7-runc.iRPL3z.mount: Deactivated successfully. Apr 13 19:25:34.265544 containerd[2137]: time="2026-04-13T19:25:34.259428469Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 19:25:34.265544 containerd[2137]: time="2026-04-13T19:25:34.259542316Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 19:25:34.265544 containerd[2137]: time="2026-04-13T19:25:34.259599048Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..."
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 19:25:34.265544 containerd[2137]: time="2026-04-13T19:25:34.259948673Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 19:25:34.378700 containerd[2137]: time="2026-04-13T19:25:34.378605675Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-8dhwv,Uid:bf27f760-66e2-489c-b16e-3efb5b6a747d,Namespace:kube-system,Attempt:0,} returns sandbox id \"44d1edc6ee4c3d478b8d36e824105b8fb0a87bf46215c26179b70ee7b1cfc1c7\"" Apr 13 19:25:34.399022 containerd[2137]: time="2026-04-13T19:25:34.398258336Z" level=info msg="CreateContainer within sandbox \"44d1edc6ee4c3d478b8d36e824105b8fb0a87bf46215c26179b70ee7b1cfc1c7\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 13 19:25:34.428288 containerd[2137]: time="2026-04-13T19:25:34.428121681Z" level=info msg="CreateContainer within sandbox \"44d1edc6ee4c3d478b8d36e824105b8fb0a87bf46215c26179b70ee7b1cfc1c7\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"4392cc6d82ae32858ebf33d3a7d0bc5c73abeb264f39e9209fdf8637a2f641d3\"" Apr 13 19:25:34.434411 containerd[2137]: time="2026-04-13T19:25:34.433517314Z" level=info msg="StartContainer for \"4392cc6d82ae32858ebf33d3a7d0bc5c73abeb264f39e9209fdf8637a2f641d3\"" Apr 13 19:25:34.506185 containerd[2137]: time="2026-04-13T19:25:34.504481278Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-tmp8b,Uid:b24138e8-e862-4177-880d-00a2fc3c4a43,Namespace:kube-system,Attempt:0,} returns sandbox id \"01ba2c5e96e09bcfa89ca0693d62ab278e3eb9954154cc6336063499f9e7e8e4\"" Apr 13 19:25:34.532976 containerd[2137]: time="2026-04-13T19:25:34.532895400Z" level=info msg="CreateContainer within sandbox \"01ba2c5e96e09bcfa89ca0693d62ab278e3eb9954154cc6336063499f9e7e8e4\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 13 19:25:34.577829 containerd[2137]: 
time="2026-04-13T19:25:34.577772582Z" level=info msg="CreateContainer within sandbox \"01ba2c5e96e09bcfa89ca0693d62ab278e3eb9954154cc6336063499f9e7e8e4\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"2d4de87366552652e72c7bf80af92762c83009510a6ec69a32540f9aef663193\"" Apr 13 19:25:34.580363 containerd[2137]: time="2026-04-13T19:25:34.580304144Z" level=info msg="StartContainer for \"2d4de87366552652e72c7bf80af92762c83009510a6ec69a32540f9aef663193\"" Apr 13 19:25:34.632601 containerd[2137]: time="2026-04-13T19:25:34.632515057Z" level=info msg="StartContainer for \"4392cc6d82ae32858ebf33d3a7d0bc5c73abeb264f39e9209fdf8637a2f641d3\" returns successfully" Apr 13 19:25:34.743364 containerd[2137]: time="2026-04-13T19:25:34.743253249Z" level=info msg="StartContainer for \"2d4de87366552652e72c7bf80af92762c83009510a6ec69a32540f9aef663193\" returns successfully" Apr 13 19:25:35.723054 kubelet[3609]: I0413 19:25:35.722517 3609 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-tmp8b" podStartSLOduration=29.722488258 podStartE2EDuration="29.722488258s" podCreationTimestamp="2026-04-13 19:25:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-13 19:25:35.718693008 +0000 UTC m=+34.637219798" watchObservedRunningTime="2026-04-13 19:25:35.722488258 +0000 UTC m=+34.641015036" Apr 13 19:25:35.727276 kubelet[3609]: I0413 19:25:35.726393 3609 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-8dhwv" podStartSLOduration=29.725762068 podStartE2EDuration="29.725762068s" podCreationTimestamp="2026-04-13 19:25:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-13 19:25:34.736580411 +0000 UTC m=+33.655107189" watchObservedRunningTime="2026-04-13 19:25:35.725762068 +0000 UTC 
m=+34.644288918" Apr 13 19:25:50.396640 systemd[1]: Started sshd@7-172.31.17.121:22-4.175.71.9:54488.service - OpenSSH per-connection server daemon (4.175.71.9:54488). Apr 13 19:25:51.376847 sshd[4985]: Accepted publickey for core from 4.175.71.9 port 54488 ssh2: RSA SHA256:XRjnN6xbGuPFo5x1ktmnQqeOBg+Z6w3BeaL3tCcJXCo Apr 13 19:25:51.379616 sshd[4985]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 19:25:51.388415 systemd-logind[2102]: New session 8 of user core. Apr 13 19:25:51.397675 systemd[1]: Started session-8.scope - Session 8 of User core. Apr 13 19:25:52.190522 sshd[4985]: pam_unix(sshd:session): session closed for user core Apr 13 19:25:52.197921 systemd[1]: sshd@7-172.31.17.121:22-4.175.71.9:54488.service: Deactivated successfully. Apr 13 19:25:52.205822 systemd[1]: session-8.scope: Deactivated successfully. Apr 13 19:25:52.208468 systemd-logind[2102]: Session 8 logged out. Waiting for processes to exit. Apr 13 19:25:52.211009 systemd-logind[2102]: Removed session 8. Apr 13 19:25:57.360856 systemd[1]: Started sshd@8-172.31.17.121:22-4.175.71.9:51192.service - OpenSSH per-connection server daemon (4.175.71.9:51192). Apr 13 19:25:58.362737 sshd[5001]: Accepted publickey for core from 4.175.71.9 port 51192 ssh2: RSA SHA256:XRjnN6xbGuPFo5x1ktmnQqeOBg+Z6w3BeaL3tCcJXCo Apr 13 19:25:58.365973 sshd[5001]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 19:25:58.374725 systemd-logind[2102]: New session 9 of user core. Apr 13 19:25:58.388686 systemd[1]: Started session-9.scope - Session 9 of User core. Apr 13 19:25:59.173498 sshd[5001]: pam_unix(sshd:session): session closed for user core Apr 13 19:25:59.181800 systemd[1]: sshd@8-172.31.17.121:22-4.175.71.9:51192.service: Deactivated successfully. Apr 13 19:25:59.187120 systemd[1]: session-9.scope: Deactivated successfully. Apr 13 19:25:59.188033 systemd-logind[2102]: Session 9 logged out. Waiting for processes to exit. 
Apr 13 19:25:59.193292 systemd-logind[2102]: Removed session 9. Apr 13 19:26:04.341701 systemd[1]: Started sshd@9-172.31.17.121:22-4.175.71.9:51206.service - OpenSSH per-connection server daemon (4.175.71.9:51206). Apr 13 19:26:05.310974 sshd[5018]: Accepted publickey for core from 4.175.71.9 port 51206 ssh2: RSA SHA256:XRjnN6xbGuPFo5x1ktmnQqeOBg+Z6w3BeaL3tCcJXCo Apr 13 19:26:05.314016 sshd[5018]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 19:26:05.322920 systemd-logind[2102]: New session 10 of user core. Apr 13 19:26:05.329908 systemd[1]: Started session-10.scope - Session 10 of User core. Apr 13 19:26:06.204505 sshd[5018]: pam_unix(sshd:session): session closed for user core Apr 13 19:26:06.213960 systemd[1]: sshd@9-172.31.17.121:22-4.175.71.9:51206.service: Deactivated successfully. Apr 13 19:26:06.220649 systemd[1]: session-10.scope: Deactivated successfully. Apr 13 19:26:06.222219 systemd-logind[2102]: Session 10 logged out. Waiting for processes to exit. Apr 13 19:26:06.224301 systemd-logind[2102]: Removed session 10. Apr 13 19:26:11.365728 systemd[1]: Started sshd@10-172.31.17.121:22-4.175.71.9:53328.service - OpenSSH per-connection server daemon (4.175.71.9:53328). Apr 13 19:26:12.326211 sshd[5034]: Accepted publickey for core from 4.175.71.9 port 53328 ssh2: RSA SHA256:XRjnN6xbGuPFo5x1ktmnQqeOBg+Z6w3BeaL3tCcJXCo Apr 13 19:26:12.328429 sshd[5034]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 19:26:12.335765 systemd-logind[2102]: New session 11 of user core. Apr 13 19:26:12.344829 systemd[1]: Started session-11.scope - Session 11 of User core. Apr 13 19:26:13.105402 sshd[5034]: pam_unix(sshd:session): session closed for user core Apr 13 19:26:13.112724 systemd[1]: sshd@10-172.31.17.121:22-4.175.71.9:53328.service: Deactivated successfully. Apr 13 19:26:13.119588 systemd[1]: session-11.scope: Deactivated successfully. 
Apr 13 19:26:13.121326 systemd-logind[2102]: Session 11 logged out. Waiting for processes to exit. Apr 13 19:26:13.124601 systemd-logind[2102]: Removed session 11. Apr 13 19:26:13.280666 systemd[1]: Started sshd@11-172.31.17.121:22-4.175.71.9:53342.service - OpenSSH per-connection server daemon (4.175.71.9:53342). Apr 13 19:26:14.279750 sshd[5049]: Accepted publickey for core from 4.175.71.9 port 53342 ssh2: RSA SHA256:XRjnN6xbGuPFo5x1ktmnQqeOBg+Z6w3BeaL3tCcJXCo Apr 13 19:26:14.281704 sshd[5049]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 19:26:14.289769 systemd-logind[2102]: New session 12 of user core. Apr 13 19:26:14.298778 systemd[1]: Started session-12.scope - Session 12 of User core. Apr 13 19:26:15.163097 sshd[5049]: pam_unix(sshd:session): session closed for user core Apr 13 19:26:15.169974 systemd[1]: sshd@11-172.31.17.121:22-4.175.71.9:53342.service: Deactivated successfully. Apr 13 19:26:15.176547 systemd-logind[2102]: Session 12 logged out. Waiting for processes to exit. Apr 13 19:26:15.177726 systemd[1]: session-12.scope: Deactivated successfully. Apr 13 19:26:15.181853 systemd-logind[2102]: Removed session 12. Apr 13 19:26:15.339317 systemd[1]: Started sshd@12-172.31.17.121:22-4.175.71.9:53354.service - OpenSSH per-connection server daemon (4.175.71.9:53354). Apr 13 19:26:16.357563 sshd[5061]: Accepted publickey for core from 4.175.71.9 port 53354 ssh2: RSA SHA256:XRjnN6xbGuPFo5x1ktmnQqeOBg+Z6w3BeaL3tCcJXCo Apr 13 19:26:16.360358 sshd[5061]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 19:26:16.369013 systemd-logind[2102]: New session 13 of user core. Apr 13 19:26:16.374814 systemd[1]: Started session-13.scope - Session 13 of User core. Apr 13 19:26:17.174507 sshd[5061]: pam_unix(sshd:session): session closed for user core Apr 13 19:26:17.182318 systemd[1]: sshd@12-172.31.17.121:22-4.175.71.9:53354.service: Deactivated successfully. 
Apr 13 19:26:17.189006 systemd[1]: session-13.scope: Deactivated successfully. Apr 13 19:26:17.190877 systemd-logind[2102]: Session 13 logged out. Waiting for processes to exit. Apr 13 19:26:17.193195 systemd-logind[2102]: Removed session 13. Apr 13 19:26:22.341678 systemd[1]: Started sshd@13-172.31.17.121:22-4.175.71.9:60474.service - OpenSSH per-connection server daemon (4.175.71.9:60474). Apr 13 19:26:23.353236 sshd[5077]: Accepted publickey for core from 4.175.71.9 port 60474 ssh2: RSA SHA256:XRjnN6xbGuPFo5x1ktmnQqeOBg+Z6w3BeaL3tCcJXCo Apr 13 19:26:23.355991 sshd[5077]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 19:26:23.364544 systemd-logind[2102]: New session 14 of user core. Apr 13 19:26:23.369691 systemd[1]: Started session-14.scope - Session 14 of User core. Apr 13 19:26:24.165874 sshd[5077]: pam_unix(sshd:session): session closed for user core Apr 13 19:26:24.173595 systemd[1]: sshd@13-172.31.17.121:22-4.175.71.9:60474.service: Deactivated successfully. Apr 13 19:26:24.181555 systemd[1]: session-14.scope: Deactivated successfully. Apr 13 19:26:24.183604 systemd-logind[2102]: Session 14 logged out. Waiting for processes to exit. Apr 13 19:26:24.187218 systemd-logind[2102]: Removed session 14. Apr 13 19:26:29.321856 systemd[1]: Started sshd@14-172.31.17.121:22-4.175.71.9:46486.service - OpenSSH per-connection server daemon (4.175.71.9:46486). Apr 13 19:26:30.285223 sshd[5091]: Accepted publickey for core from 4.175.71.9 port 46486 ssh2: RSA SHA256:XRjnN6xbGuPFo5x1ktmnQqeOBg+Z6w3BeaL3tCcJXCo Apr 13 19:26:30.288344 sshd[5091]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 19:26:30.298659 systemd-logind[2102]: New session 15 of user core. Apr 13 19:26:30.307935 systemd[1]: Started session-15.scope - Session 15 of User core. 
Apr 13 19:26:31.070083 sshd[5091]: pam_unix(sshd:session): session closed for user core Apr 13 19:26:31.076792 systemd[1]: sshd@14-172.31.17.121:22-4.175.71.9:46486.service: Deactivated successfully. Apr 13 19:26:31.077149 systemd-logind[2102]: Session 15 logged out. Waiting for processes to exit. Apr 13 19:26:31.085482 systemd[1]: session-15.scope: Deactivated successfully. Apr 13 19:26:31.088642 systemd-logind[2102]: Removed session 15. Apr 13 19:26:31.249703 systemd[1]: Started sshd@15-172.31.17.121:22-4.175.71.9:46492.service - OpenSSH per-connection server daemon (4.175.71.9:46492). Apr 13 19:26:32.281753 sshd[5105]: Accepted publickey for core from 4.175.71.9 port 46492 ssh2: RSA SHA256:XRjnN6xbGuPFo5x1ktmnQqeOBg+Z6w3BeaL3tCcJXCo Apr 13 19:26:32.284613 sshd[5105]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 19:26:32.294409 systemd-logind[2102]: New session 16 of user core. Apr 13 19:26:32.299398 systemd[1]: Started session-16.scope - Session 16 of User core. Apr 13 19:26:33.198578 sshd[5105]: pam_unix(sshd:session): session closed for user core Apr 13 19:26:33.207451 systemd[1]: sshd@15-172.31.17.121:22-4.175.71.9:46492.service: Deactivated successfully. Apr 13 19:26:33.212815 systemd-logind[2102]: Session 16 logged out. Waiting for processes to exit. Apr 13 19:26:33.213836 systemd[1]: session-16.scope: Deactivated successfully. Apr 13 19:26:33.218526 systemd-logind[2102]: Removed session 16. Apr 13 19:26:33.374637 systemd[1]: Started sshd@16-172.31.17.121:22-4.175.71.9:46500.service - OpenSSH per-connection server daemon (4.175.71.9:46500). Apr 13 19:26:34.421734 sshd[5116]: Accepted publickey for core from 4.175.71.9 port 46500 ssh2: RSA SHA256:XRjnN6xbGuPFo5x1ktmnQqeOBg+Z6w3BeaL3tCcJXCo Apr 13 19:26:34.424459 sshd[5116]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 19:26:34.432891 systemd-logind[2102]: New session 17 of user core. 
Apr 13 19:26:34.440766 systemd[1]: Started session-17.scope - Session 17 of User core. Apr 13 19:26:36.136058 sshd[5116]: pam_unix(sshd:session): session closed for user core Apr 13 19:26:36.145351 systemd[1]: sshd@16-172.31.17.121:22-4.175.71.9:46500.service: Deactivated successfully. Apr 13 19:26:36.151135 systemd[1]: session-17.scope: Deactivated successfully. Apr 13 19:26:36.153206 systemd-logind[2102]: Session 17 logged out. Waiting for processes to exit. Apr 13 19:26:36.156819 systemd-logind[2102]: Removed session 17. Apr 13 19:26:36.314812 systemd[1]: Started sshd@17-172.31.17.121:22-4.175.71.9:44588.service - OpenSSH per-connection server daemon (4.175.71.9:44588). Apr 13 19:26:37.346205 sshd[5135]: Accepted publickey for core from 4.175.71.9 port 44588 ssh2: RSA SHA256:XRjnN6xbGuPFo5x1ktmnQqeOBg+Z6w3BeaL3tCcJXCo Apr 13 19:26:37.348115 sshd[5135]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 19:26:37.356037 systemd-logind[2102]: New session 18 of user core. Apr 13 19:26:37.363801 systemd[1]: Started session-18.scope - Session 18 of User core. Apr 13 19:26:38.419443 sshd[5135]: pam_unix(sshd:session): session closed for user core Apr 13 19:26:38.426819 systemd[1]: sshd@17-172.31.17.121:22-4.175.71.9:44588.service: Deactivated successfully. Apr 13 19:26:38.432457 systemd-logind[2102]: Session 18 logged out. Waiting for processes to exit. Apr 13 19:26:38.433773 systemd[1]: session-18.scope: Deactivated successfully. Apr 13 19:26:38.436844 systemd-logind[2102]: Removed session 18. Apr 13 19:26:38.580764 systemd[1]: Started sshd@18-172.31.17.121:22-4.175.71.9:44598.service - OpenSSH per-connection server daemon (4.175.71.9:44598). 
Apr 13 19:26:39.571033 sshd[5149]: Accepted publickey for core from 4.175.71.9 port 44598 ssh2: RSA SHA256:XRjnN6xbGuPFo5x1ktmnQqeOBg+Z6w3BeaL3tCcJXCo Apr 13 19:26:39.573691 sshd[5149]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 19:26:39.582735 systemd-logind[2102]: New session 19 of user core. Apr 13 19:26:39.594700 systemd[1]: Started session-19.scope - Session 19 of User core. Apr 13 19:26:40.362121 sshd[5149]: pam_unix(sshd:session): session closed for user core Apr 13 19:26:40.368437 systemd-logind[2102]: Session 19 logged out. Waiting for processes to exit. Apr 13 19:26:40.369108 systemd[1]: sshd@18-172.31.17.121:22-4.175.71.9:44598.service: Deactivated successfully. Apr 13 19:26:40.377801 systemd[1]: session-19.scope: Deactivated successfully. Apr 13 19:26:40.382011 systemd-logind[2102]: Removed session 19. Apr 13 19:26:45.538680 systemd[1]: Started sshd@19-172.31.17.121:22-4.175.71.9:58920.service - OpenSSH per-connection server daemon (4.175.71.9:58920). Apr 13 19:26:46.544201 sshd[5165]: Accepted publickey for core from 4.175.71.9 port 58920 ssh2: RSA SHA256:XRjnN6xbGuPFo5x1ktmnQqeOBg+Z6w3BeaL3tCcJXCo Apr 13 19:26:46.546343 sshd[5165]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 19:26:46.553720 systemd-logind[2102]: New session 20 of user core. Apr 13 19:26:46.569791 systemd[1]: Started session-20.scope - Session 20 of User core. Apr 13 19:26:47.352189 sshd[5165]: pam_unix(sshd:session): session closed for user core Apr 13 19:26:47.358384 systemd-logind[2102]: Session 20 logged out. Waiting for processes to exit. Apr 13 19:26:47.360003 systemd[1]: sshd@19-172.31.17.121:22-4.175.71.9:58920.service: Deactivated successfully. Apr 13 19:26:47.369867 systemd[1]: session-20.scope: Deactivated successfully. Apr 13 19:26:47.372021 systemd-logind[2102]: Removed session 20. 
Apr 13 19:26:52.512245 systemd[1]: Started sshd@20-172.31.17.121:22-4.175.71.9:58928.service - OpenSSH per-connection server daemon (4.175.71.9:58928). Apr 13 19:26:53.514208 sshd[5179]: Accepted publickey for core from 4.175.71.9 port 58928 ssh2: RSA SHA256:XRjnN6xbGuPFo5x1ktmnQqeOBg+Z6w3BeaL3tCcJXCo Apr 13 19:26:53.516123 sshd[5179]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 19:26:53.527466 systemd-logind[2102]: New session 21 of user core. Apr 13 19:26:53.537912 systemd[1]: Started session-21.scope - Session 21 of User core. Apr 13 19:26:54.312066 sshd[5179]: pam_unix(sshd:session): session closed for user core Apr 13 19:26:54.321127 systemd[1]: sshd@20-172.31.17.121:22-4.175.71.9:58928.service: Deactivated successfully. Apr 13 19:26:54.322004 systemd-logind[2102]: Session 21 logged out. Waiting for processes to exit. Apr 13 19:26:54.330662 systemd[1]: session-21.scope: Deactivated successfully. Apr 13 19:26:54.333658 systemd-logind[2102]: Removed session 21. Apr 13 19:26:54.491736 systemd[1]: Started sshd@21-172.31.17.121:22-4.175.71.9:58944.service - OpenSSH per-connection server daemon (4.175.71.9:58944). Apr 13 19:26:55.533940 sshd[5193]: Accepted publickey for core from 4.175.71.9 port 58944 ssh2: RSA SHA256:XRjnN6xbGuPFo5x1ktmnQqeOBg+Z6w3BeaL3tCcJXCo Apr 13 19:26:55.536570 sshd[5193]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 19:26:55.547816 systemd-logind[2102]: New session 22 of user core. Apr 13 19:26:55.559393 systemd[1]: Started session-22.scope - Session 22 of User core. 
Apr 13 19:26:58.778088 containerd[2137]: time="2026-04-13T19:26:58.777988172Z" level=info msg="StopContainer for \"c8e1acfc83ff7add122fac2206782526c6c09be2a8a0e5ceda754e9dc12295aa\" with timeout 30 (s)" Apr 13 19:26:58.782945 containerd[2137]: time="2026-04-13T19:26:58.781350558Z" level=info msg="Stop container \"c8e1acfc83ff7add122fac2206782526c6c09be2a8a0e5ceda754e9dc12295aa\" with signal terminated" Apr 13 19:26:58.795556 systemd[1]: run-containerd-runc-k8s.io-1a1fd094cef0df29ddb9e4230ed13e61dcf291d90675d094a49997ad65ea4f76-runc.hv1Fnd.mount: Deactivated successfully. Apr 13 19:26:58.822031 containerd[2137]: time="2026-04-13T19:26:58.821959124Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Apr 13 19:26:58.846785 containerd[2137]: time="2026-04-13T19:26:58.846719645Z" level=info msg="StopContainer for \"1a1fd094cef0df29ddb9e4230ed13e61dcf291d90675d094a49997ad65ea4f76\" with timeout 2 (s)" Apr 13 19:26:58.847832 containerd[2137]: time="2026-04-13T19:26:58.847648101Z" level=info msg="Stop container \"1a1fd094cef0df29ddb9e4230ed13e61dcf291d90675d094a49997ad65ea4f76\" with signal terminated" Apr 13 19:26:58.863370 systemd-networkd[1689]: lxc_health: Link DOWN Apr 13 19:26:58.863393 systemd-networkd[1689]: lxc_health: Lost carrier Apr 13 19:26:58.891999 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c8e1acfc83ff7add122fac2206782526c6c09be2a8a0e5ceda754e9dc12295aa-rootfs.mount: Deactivated successfully. 
Apr 13 19:26:58.918624 containerd[2137]: time="2026-04-13T19:26:58.918490182Z" level=info msg="shim disconnected" id=c8e1acfc83ff7add122fac2206782526c6c09be2a8a0e5ceda754e9dc12295aa namespace=k8s.io Apr 13 19:26:58.919123 containerd[2137]: time="2026-04-13T19:26:58.919078188Z" level=warning msg="cleaning up after shim disconnected" id=c8e1acfc83ff7add122fac2206782526c6c09be2a8a0e5ceda754e9dc12295aa namespace=k8s.io Apr 13 19:26:58.919330 containerd[2137]: time="2026-04-13T19:26:58.919302728Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 13 19:26:58.947508 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1a1fd094cef0df29ddb9e4230ed13e61dcf291d90675d094a49997ad65ea4f76-rootfs.mount: Deactivated successfully. Apr 13 19:26:58.955888 containerd[2137]: time="2026-04-13T19:26:58.955326563Z" level=info msg="shim disconnected" id=1a1fd094cef0df29ddb9e4230ed13e61dcf291d90675d094a49997ad65ea4f76 namespace=k8s.io Apr 13 19:26:58.955888 containerd[2137]: time="2026-04-13T19:26:58.955882485Z" level=warning msg="cleaning up after shim disconnected" id=1a1fd094cef0df29ddb9e4230ed13e61dcf291d90675d094a49997ad65ea4f76 namespace=k8s.io Apr 13 19:26:58.956491 containerd[2137]: time="2026-04-13T19:26:58.955907145Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 13 19:26:58.966594 containerd[2137]: time="2026-04-13T19:26:58.966530938Z" level=info msg="StopContainer for \"c8e1acfc83ff7add122fac2206782526c6c09be2a8a0e5ceda754e9dc12295aa\" returns successfully" Apr 13 19:26:58.967481 containerd[2137]: time="2026-04-13T19:26:58.967432191Z" level=info msg="StopPodSandbox for \"824339baeb1e5029dfd6f1765e1059a2bf0b63f020e1f8f8cfc4c5b2e0133e5c\"" Apr 13 19:26:58.967621 containerd[2137]: time="2026-04-13T19:26:58.967499921Z" level=info msg="Container to stop \"c8e1acfc83ff7add122fac2206782526c6c09be2a8a0e5ceda754e9dc12295aa\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 13 19:26:58.974060 systemd[1]: 
run-containerd-io.containerd.grpc.v1.cri-sandboxes-824339baeb1e5029dfd6f1765e1059a2bf0b63f020e1f8f8cfc4c5b2e0133e5c-shm.mount: Deactivated successfully. Apr 13 19:26:59.018145 containerd[2137]: time="2026-04-13T19:26:59.016088731Z" level=warning msg="cleanup warnings time=\"2026-04-13T19:26:59Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Apr 13 19:26:59.024427 containerd[2137]: time="2026-04-13T19:26:59.024352921Z" level=info msg="StopContainer for \"1a1fd094cef0df29ddb9e4230ed13e61dcf291d90675d094a49997ad65ea4f76\" returns successfully" Apr 13 19:26:59.025422 containerd[2137]: time="2026-04-13T19:26:59.025361772Z" level=info msg="StopPodSandbox for \"84a814b36b987aea1a82962663b053189d4e944ed9a665ca5d523a2ab7e12e25\"" Apr 13 19:26:59.025560 containerd[2137]: time="2026-04-13T19:26:59.025432633Z" level=info msg="Container to stop \"5ec805881eb2b3b3d264fc855cbe0ef938c167c614a9bdb1a7bc39efae58ad0a\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 13 19:26:59.025560 containerd[2137]: time="2026-04-13T19:26:59.025461059Z" level=info msg="Container to stop \"acf2ef2621f3ff8a571befb386311a0be119872b4ec4801d80aa658f27cafed2\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 13 19:26:59.025560 containerd[2137]: time="2026-04-13T19:26:59.025484291Z" level=info msg="Container to stop \"4810fec1c42f6ddee5b7367190d2719dfc2fa95ed27d8ae74c38465db0c5a5bb\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 13 19:26:59.025560 containerd[2137]: time="2026-04-13T19:26:59.025509646Z" level=info msg="Container to stop \"95a6204e9fa4839e5d2d4779e729f58cc5d0d4ff63cf14b18be66460359466dc\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 13 19:26:59.025560 containerd[2137]: time="2026-04-13T19:26:59.025535589Z" level=info msg="Container to stop 
\"1a1fd094cef0df29ddb9e4230ed13e61dcf291d90675d094a49997ad65ea4f76\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 13 19:26:59.065842 containerd[2137]: time="2026-04-13T19:26:59.065120323Z" level=info msg="shim disconnected" id=824339baeb1e5029dfd6f1765e1059a2bf0b63f020e1f8f8cfc4c5b2e0133e5c namespace=k8s.io Apr 13 19:26:59.065842 containerd[2137]: time="2026-04-13T19:26:59.065244485Z" level=warning msg="cleaning up after shim disconnected" id=824339baeb1e5029dfd6f1765e1059a2bf0b63f020e1f8f8cfc4c5b2e0133e5c namespace=k8s.io Apr 13 19:26:59.065842 containerd[2137]: time="2026-04-13T19:26:59.065269277Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 13 19:26:59.102203 containerd[2137]: time="2026-04-13T19:26:59.102013184Z" level=info msg="shim disconnected" id=84a814b36b987aea1a82962663b053189d4e944ed9a665ca5d523a2ab7e12e25 namespace=k8s.io Apr 13 19:26:59.102462 containerd[2137]: time="2026-04-13T19:26:59.102294203Z" level=warning msg="cleaning up after shim disconnected" id=84a814b36b987aea1a82962663b053189d4e944ed9a665ca5d523a2ab7e12e25 namespace=k8s.io Apr 13 19:26:59.102462 containerd[2137]: time="2026-04-13T19:26:59.102324896Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 13 19:26:59.107486 containerd[2137]: time="2026-04-13T19:26:59.107391965Z" level=warning msg="cleanup warnings time=\"2026-04-13T19:26:59Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Apr 13 19:26:59.109726 containerd[2137]: time="2026-04-13T19:26:59.109487760Z" level=info msg="TearDown network for sandbox \"824339baeb1e5029dfd6f1765e1059a2bf0b63f020e1f8f8cfc4c5b2e0133e5c\" successfully" Apr 13 19:26:59.109726 containerd[2137]: time="2026-04-13T19:26:59.109552756Z" level=info msg="StopPodSandbox for \"824339baeb1e5029dfd6f1765e1059a2bf0b63f020e1f8f8cfc4c5b2e0133e5c\" returns successfully" Apr 13 19:26:59.143301 
containerd[2137]: time="2026-04-13T19:26:59.143241611Z" level=info msg="TearDown network for sandbox \"84a814b36b987aea1a82962663b053189d4e944ed9a665ca5d523a2ab7e12e25\" successfully" Apr 13 19:26:59.143301 containerd[2137]: time="2026-04-13T19:26:59.143296987Z" level=info msg="StopPodSandbox for \"84a814b36b987aea1a82962663b053189d4e944ed9a665ca5d523a2ab7e12e25\" returns successfully" Apr 13 19:26:59.289913 kubelet[3609]: I0413 19:26:59.289849 3609 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/53f259e2-41c9-4fd1-9704-8e6e2fdebb37-clustermesh-secrets\") pod \"53f259e2-41c9-4fd1-9704-8e6e2fdebb37\" (UID: \"53f259e2-41c9-4fd1-9704-8e6e2fdebb37\") " Apr 13 19:26:59.290590 kubelet[3609]: I0413 19:26:59.289920 3609 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/53f259e2-41c9-4fd1-9704-8e6e2fdebb37-hubble-tls\") pod \"53f259e2-41c9-4fd1-9704-8e6e2fdebb37\" (UID: \"53f259e2-41c9-4fd1-9704-8e6e2fdebb37\") " Apr 13 19:26:59.290590 kubelet[3609]: I0413 19:26:59.289962 3609 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/53f259e2-41c9-4fd1-9704-8e6e2fdebb37-cilium-run\") pod \"53f259e2-41c9-4fd1-9704-8e6e2fdebb37\" (UID: \"53f259e2-41c9-4fd1-9704-8e6e2fdebb37\") " Apr 13 19:26:59.290590 kubelet[3609]: I0413 19:26:59.289997 3609 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/53f259e2-41c9-4fd1-9704-8e6e2fdebb37-lib-modules\") pod \"53f259e2-41c9-4fd1-9704-8e6e2fdebb37\" (UID: \"53f259e2-41c9-4fd1-9704-8e6e2fdebb37\") " Apr 13 19:26:59.290590 kubelet[3609]: I0413 19:26:59.290035 3609 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: 
\"kubernetes.io/host-path/53f259e2-41c9-4fd1-9704-8e6e2fdebb37-host-proc-sys-net\") pod \"53f259e2-41c9-4fd1-9704-8e6e2fdebb37\" (UID: \"53f259e2-41c9-4fd1-9704-8e6e2fdebb37\") " Apr 13 19:26:59.290590 kubelet[3609]: I0413 19:26:59.290097 3609 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/53f259e2-41c9-4fd1-9704-8e6e2fdebb37-host-proc-sys-kernel\") pod \"53f259e2-41c9-4fd1-9704-8e6e2fdebb37\" (UID: \"53f259e2-41c9-4fd1-9704-8e6e2fdebb37\") " Apr 13 19:26:59.290590 kubelet[3609]: I0413 19:26:59.290138 3609 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gtnjc\" (UniqueName: \"kubernetes.io/projected/53f259e2-41c9-4fd1-9704-8e6e2fdebb37-kube-api-access-gtnjc\") pod \"53f259e2-41c9-4fd1-9704-8e6e2fdebb37\" (UID: \"53f259e2-41c9-4fd1-9704-8e6e2fdebb37\") " Apr 13 19:26:59.290931 kubelet[3609]: I0413 19:26:59.290298 3609 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/53f259e2-41c9-4fd1-9704-8e6e2fdebb37-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "53f259e2-41c9-4fd1-9704-8e6e2fdebb37" (UID: "53f259e2-41c9-4fd1-9704-8e6e2fdebb37"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 13 19:26:59.290931 kubelet[3609]: I0413 19:26:59.290801 3609 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f07f704c-94aa-4657-bd9e-7b7059b7ffad-cilium-config-path\") pod \"f07f704c-94aa-4657-bd9e-7b7059b7ffad\" (UID: \"f07f704c-94aa-4657-bd9e-7b7059b7ffad\") " Apr 13 19:26:59.290931 kubelet[3609]: I0413 19:26:59.290845 3609 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/53f259e2-41c9-4fd1-9704-8e6e2fdebb37-bpf-maps\") pod \"53f259e2-41c9-4fd1-9704-8e6e2fdebb37\" (UID: \"53f259e2-41c9-4fd1-9704-8e6e2fdebb37\") " Apr 13 19:26:59.290931 kubelet[3609]: I0413 19:26:59.290884 3609 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/53f259e2-41c9-4fd1-9704-8e6e2fdebb37-hostproc\") pod \"53f259e2-41c9-4fd1-9704-8e6e2fdebb37\" (UID: \"53f259e2-41c9-4fd1-9704-8e6e2fdebb37\") " Apr 13 19:26:59.290931 kubelet[3609]: I0413 19:26:59.290915 3609 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/53f259e2-41c9-4fd1-9704-8e6e2fdebb37-cni-path\") pod \"53f259e2-41c9-4fd1-9704-8e6e2fdebb37\" (UID: \"53f259e2-41c9-4fd1-9704-8e6e2fdebb37\") " Apr 13 19:26:59.291228 kubelet[3609]: I0413 19:26:59.290951 3609 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/53f259e2-41c9-4fd1-9704-8e6e2fdebb37-cilium-cgroup\") pod \"53f259e2-41c9-4fd1-9704-8e6e2fdebb37\" (UID: \"53f259e2-41c9-4fd1-9704-8e6e2fdebb37\") " Apr 13 19:26:59.291228 kubelet[3609]: I0413 19:26:59.290986 3609 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: 
\"kubernetes.io/host-path/53f259e2-41c9-4fd1-9704-8e6e2fdebb37-xtables-lock\") pod \"53f259e2-41c9-4fd1-9704-8e6e2fdebb37\" (UID: \"53f259e2-41c9-4fd1-9704-8e6e2fdebb37\") " Apr 13 19:26:59.291228 kubelet[3609]: I0413 19:26:59.291018 3609 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/53f259e2-41c9-4fd1-9704-8e6e2fdebb37-etc-cni-netd\") pod \"53f259e2-41c9-4fd1-9704-8e6e2fdebb37\" (UID: \"53f259e2-41c9-4fd1-9704-8e6e2fdebb37\") " Apr 13 19:26:59.291228 kubelet[3609]: I0413 19:26:59.291061 3609 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/53f259e2-41c9-4fd1-9704-8e6e2fdebb37-cilium-config-path\") pod \"53f259e2-41c9-4fd1-9704-8e6e2fdebb37\" (UID: \"53f259e2-41c9-4fd1-9704-8e6e2fdebb37\") " Apr 13 19:26:59.291228 kubelet[3609]: I0413 19:26:59.291104 3609 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-49zk5\" (UniqueName: \"kubernetes.io/projected/f07f704c-94aa-4657-bd9e-7b7059b7ffad-kube-api-access-49zk5\") pod \"f07f704c-94aa-4657-bd9e-7b7059b7ffad\" (UID: \"f07f704c-94aa-4657-bd9e-7b7059b7ffad\") " Apr 13 19:26:59.291228 kubelet[3609]: I0413 19:26:59.291203 3609 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/53f259e2-41c9-4fd1-9704-8e6e2fdebb37-cilium-run\") on node \"ip-172-31-17-121\" DevicePath \"\"" Apr 13 19:26:59.293218 kubelet[3609]: I0413 19:26:59.293102 3609 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/53f259e2-41c9-4fd1-9704-8e6e2fdebb37-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "53f259e2-41c9-4fd1-9704-8e6e2fdebb37" (UID: "53f259e2-41c9-4fd1-9704-8e6e2fdebb37"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 13 19:26:59.293364 kubelet[3609]: I0413 19:26:59.293225 3609 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/53f259e2-41c9-4fd1-9704-8e6e2fdebb37-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "53f259e2-41c9-4fd1-9704-8e6e2fdebb37" (UID: "53f259e2-41c9-4fd1-9704-8e6e2fdebb37"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 13 19:26:59.293364 kubelet[3609]: I0413 19:26:59.293270 3609 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/53f259e2-41c9-4fd1-9704-8e6e2fdebb37-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "53f259e2-41c9-4fd1-9704-8e6e2fdebb37" (UID: "53f259e2-41c9-4fd1-9704-8e6e2fdebb37"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 13 19:26:59.299129 kubelet[3609]: I0413 19:26:59.299042 3609 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/53f259e2-41c9-4fd1-9704-8e6e2fdebb37-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "53f259e2-41c9-4fd1-9704-8e6e2fdebb37" (UID: "53f259e2-41c9-4fd1-9704-8e6e2fdebb37"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Apr 13 19:26:59.300886 kubelet[3609]: I0413 19:26:59.300804 3609 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/53f259e2-41c9-4fd1-9704-8e6e2fdebb37-kube-api-access-gtnjc" (OuterVolumeSpecName: "kube-api-access-gtnjc") pod "53f259e2-41c9-4fd1-9704-8e6e2fdebb37" (UID: "53f259e2-41c9-4fd1-9704-8e6e2fdebb37"). InnerVolumeSpecName "kube-api-access-gtnjc". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Apr 13 19:26:59.304553 kubelet[3609]: I0413 19:26:59.304454 3609 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/53f259e2-41c9-4fd1-9704-8e6e2fdebb37-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "53f259e2-41c9-4fd1-9704-8e6e2fdebb37" (UID: "53f259e2-41c9-4fd1-9704-8e6e2fdebb37"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Apr 13 19:26:59.304699 kubelet[3609]: I0413 19:26:59.304666 3609 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f07f704c-94aa-4657-bd9e-7b7059b7ffad-kube-api-access-49zk5" (OuterVolumeSpecName: "kube-api-access-49zk5") pod "f07f704c-94aa-4657-bd9e-7b7059b7ffad" (UID: "f07f704c-94aa-4657-bd9e-7b7059b7ffad"). InnerVolumeSpecName "kube-api-access-49zk5". PluginName "kubernetes.io/projected", VolumeGIDValue "" Apr 13 19:26:59.304764 kubelet[3609]: I0413 19:26:59.304732 3609 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/53f259e2-41c9-4fd1-9704-8e6e2fdebb37-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "53f259e2-41c9-4fd1-9704-8e6e2fdebb37" (UID: "53f259e2-41c9-4fd1-9704-8e6e2fdebb37"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 13 19:26:59.304825 kubelet[3609]: I0413 19:26:59.304772 3609 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/53f259e2-41c9-4fd1-9704-8e6e2fdebb37-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "53f259e2-41c9-4fd1-9704-8e6e2fdebb37" (UID: "53f259e2-41c9-4fd1-9704-8e6e2fdebb37"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 13 19:26:59.304825 kubelet[3609]: I0413 19:26:59.304809 3609 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/53f259e2-41c9-4fd1-9704-8e6e2fdebb37-hostproc" (OuterVolumeSpecName: "hostproc") pod "53f259e2-41c9-4fd1-9704-8e6e2fdebb37" (UID: "53f259e2-41c9-4fd1-9704-8e6e2fdebb37"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 13 19:26:59.304943 kubelet[3609]: I0413 19:26:59.304845 3609 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/53f259e2-41c9-4fd1-9704-8e6e2fdebb37-cni-path" (OuterVolumeSpecName: "cni-path") pod "53f259e2-41c9-4fd1-9704-8e6e2fdebb37" (UID: "53f259e2-41c9-4fd1-9704-8e6e2fdebb37"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 13 19:26:59.304943 kubelet[3609]: I0413 19:26:59.304884 3609 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/53f259e2-41c9-4fd1-9704-8e6e2fdebb37-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "53f259e2-41c9-4fd1-9704-8e6e2fdebb37" (UID: "53f259e2-41c9-4fd1-9704-8e6e2fdebb37"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 13 19:26:59.304943 kubelet[3609]: I0413 19:26:59.304919 3609 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/53f259e2-41c9-4fd1-9704-8e6e2fdebb37-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "53f259e2-41c9-4fd1-9704-8e6e2fdebb37" (UID: "53f259e2-41c9-4fd1-9704-8e6e2fdebb37"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 13 19:26:59.307566 kubelet[3609]: I0413 19:26:59.307470 3609 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f07f704c-94aa-4657-bd9e-7b7059b7ffad-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "f07f704c-94aa-4657-bd9e-7b7059b7ffad" (UID: "f07f704c-94aa-4657-bd9e-7b7059b7ffad"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 13 19:26:59.310990 kubelet[3609]: I0413 19:26:59.310925 3609 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/53f259e2-41c9-4fd1-9704-8e6e2fdebb37-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "53f259e2-41c9-4fd1-9704-8e6e2fdebb37" (UID: "53f259e2-41c9-4fd1-9704-8e6e2fdebb37"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 13 19:26:59.392513 kubelet[3609]: I0413 19:26:59.392344 3609 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/53f259e2-41c9-4fd1-9704-8e6e2fdebb37-clustermesh-secrets\") on node \"ip-172-31-17-121\" DevicePath \"\"" Apr 13 19:26:59.392513 kubelet[3609]: I0413 19:26:59.392399 3609 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/53f259e2-41c9-4fd1-9704-8e6e2fdebb37-hubble-tls\") on node \"ip-172-31-17-121\" DevicePath \"\"" Apr 13 19:26:59.392513 kubelet[3609]: I0413 19:26:59.392438 3609 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/53f259e2-41c9-4fd1-9704-8e6e2fdebb37-lib-modules\") on node \"ip-172-31-17-121\" DevicePath \"\"" Apr 13 19:26:59.392513 kubelet[3609]: I0413 19:26:59.392463 3609 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: 
\"kubernetes.io/host-path/53f259e2-41c9-4fd1-9704-8e6e2fdebb37-host-proc-sys-net\") on node \"ip-172-31-17-121\" DevicePath \"\"" Apr 13 19:26:59.392513 kubelet[3609]: I0413 19:26:59.392487 3609 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/53f259e2-41c9-4fd1-9704-8e6e2fdebb37-host-proc-sys-kernel\") on node \"ip-172-31-17-121\" DevicePath \"\"" Apr 13 19:26:59.392513 kubelet[3609]: I0413 19:26:59.392508 3609 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-gtnjc\" (UniqueName: \"kubernetes.io/projected/53f259e2-41c9-4fd1-9704-8e6e2fdebb37-kube-api-access-gtnjc\") on node \"ip-172-31-17-121\" DevicePath \"\"" Apr 13 19:26:59.392901 kubelet[3609]: I0413 19:26:59.392536 3609 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f07f704c-94aa-4657-bd9e-7b7059b7ffad-cilium-config-path\") on node \"ip-172-31-17-121\" DevicePath \"\"" Apr 13 19:26:59.392901 kubelet[3609]: I0413 19:26:59.392559 3609 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/53f259e2-41c9-4fd1-9704-8e6e2fdebb37-bpf-maps\") on node \"ip-172-31-17-121\" DevicePath \"\"" Apr 13 19:26:59.392901 kubelet[3609]: I0413 19:26:59.392581 3609 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/53f259e2-41c9-4fd1-9704-8e6e2fdebb37-hostproc\") on node \"ip-172-31-17-121\" DevicePath \"\"" Apr 13 19:26:59.392901 kubelet[3609]: I0413 19:26:59.392602 3609 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/53f259e2-41c9-4fd1-9704-8e6e2fdebb37-cni-path\") on node \"ip-172-31-17-121\" DevicePath \"\"" Apr 13 19:26:59.392901 kubelet[3609]: I0413 19:26:59.392622 3609 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: 
\"kubernetes.io/host-path/53f259e2-41c9-4fd1-9704-8e6e2fdebb37-cilium-cgroup\") on node \"ip-172-31-17-121\" DevicePath \"\"" Apr 13 19:26:59.392901 kubelet[3609]: I0413 19:26:59.392642 3609 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/53f259e2-41c9-4fd1-9704-8e6e2fdebb37-xtables-lock\") on node \"ip-172-31-17-121\" DevicePath \"\"" Apr 13 19:26:59.392901 kubelet[3609]: I0413 19:26:59.392663 3609 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/53f259e2-41c9-4fd1-9704-8e6e2fdebb37-etc-cni-netd\") on node \"ip-172-31-17-121\" DevicePath \"\"" Apr 13 19:26:59.392901 kubelet[3609]: I0413 19:26:59.392683 3609 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/53f259e2-41c9-4fd1-9704-8e6e2fdebb37-cilium-config-path\") on node \"ip-172-31-17-121\" DevicePath \"\"" Apr 13 19:26:59.393712 kubelet[3609]: I0413 19:26:59.392706 3609 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-49zk5\" (UniqueName: \"kubernetes.io/projected/f07f704c-94aa-4657-bd9e-7b7059b7ffad-kube-api-access-49zk5\") on node \"ip-172-31-17-121\" DevicePath \"\"" Apr 13 19:26:59.773351 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-824339baeb1e5029dfd6f1765e1059a2bf0b63f020e1f8f8cfc4c5b2e0133e5c-rootfs.mount: Deactivated successfully. Apr 13 19:26:59.773882 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-84a814b36b987aea1a82962663b053189d4e944ed9a665ca5d523a2ab7e12e25-rootfs.mount: Deactivated successfully. Apr 13 19:26:59.774251 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-84a814b36b987aea1a82962663b053189d4e944ed9a665ca5d523a2ab7e12e25-shm.mount: Deactivated successfully. 
Apr 13 19:26:59.774601 systemd[1]: var-lib-kubelet-pods-f07f704c\x2d94aa\x2d4657\x2dbd9e\x2d7b7059b7ffad-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d49zk5.mount: Deactivated successfully. Apr 13 19:26:59.774957 systemd[1]: var-lib-kubelet-pods-53f259e2\x2d41c9\x2d4fd1\x2d9704\x2d8e6e2fdebb37-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dgtnjc.mount: Deactivated successfully. Apr 13 19:26:59.775308 systemd[1]: var-lib-kubelet-pods-53f259e2\x2d41c9\x2d4fd1\x2d9704\x2d8e6e2fdebb37-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Apr 13 19:26:59.775650 systemd[1]: var-lib-kubelet-pods-53f259e2\x2d41c9\x2d4fd1\x2d9704\x2d8e6e2fdebb37-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Apr 13 19:26:59.912513 kubelet[3609]: I0413 19:26:59.912113 3609 scope.go:117] "RemoveContainer" containerID="c8e1acfc83ff7add122fac2206782526c6c09be2a8a0e5ceda754e9dc12295aa" Apr 13 19:26:59.917637 containerd[2137]: time="2026-04-13T19:26:59.917428658Z" level=info msg="RemoveContainer for \"c8e1acfc83ff7add122fac2206782526c6c09be2a8a0e5ceda754e9dc12295aa\"" Apr 13 19:26:59.933974 containerd[2137]: time="2026-04-13T19:26:59.933112109Z" level=info msg="RemoveContainer for \"c8e1acfc83ff7add122fac2206782526c6c09be2a8a0e5ceda754e9dc12295aa\" returns successfully" Apr 13 19:26:59.934842 kubelet[3609]: I0413 19:26:59.934724 3609 scope.go:117] "RemoveContainer" containerID="c8e1acfc83ff7add122fac2206782526c6c09be2a8a0e5ceda754e9dc12295aa" Apr 13 19:26:59.935347 containerd[2137]: time="2026-04-13T19:26:59.935193751Z" level=error msg="ContainerStatus for \"c8e1acfc83ff7add122fac2206782526c6c09be2a8a0e5ceda754e9dc12295aa\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c8e1acfc83ff7add122fac2206782526c6c09be2a8a0e5ceda754e9dc12295aa\": not found" Apr 13 19:26:59.935802 kubelet[3609]: E0413 19:26:59.935655 3609 log.go:32] "ContainerStatus from runtime 
service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c8e1acfc83ff7add122fac2206782526c6c09be2a8a0e5ceda754e9dc12295aa\": not found" containerID="c8e1acfc83ff7add122fac2206782526c6c09be2a8a0e5ceda754e9dc12295aa" Apr 13 19:26:59.937273 kubelet[3609]: I0413 19:26:59.935737 3609 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c8e1acfc83ff7add122fac2206782526c6c09be2a8a0e5ceda754e9dc12295aa"} err="failed to get container status \"c8e1acfc83ff7add122fac2206782526c6c09be2a8a0e5ceda754e9dc12295aa\": rpc error: code = NotFound desc = an error occurred when try to find container \"c8e1acfc83ff7add122fac2206782526c6c09be2a8a0e5ceda754e9dc12295aa\": not found" Apr 13 19:26:59.937651 kubelet[3609]: I0413 19:26:59.936089 3609 scope.go:117] "RemoveContainer" containerID="1a1fd094cef0df29ddb9e4230ed13e61dcf291d90675d094a49997ad65ea4f76" Apr 13 19:26:59.944765 containerd[2137]: time="2026-04-13T19:26:59.943584490Z" level=info msg="RemoveContainer for \"1a1fd094cef0df29ddb9e4230ed13e61dcf291d90675d094a49997ad65ea4f76\"" Apr 13 19:26:59.950791 containerd[2137]: time="2026-04-13T19:26:59.950733034Z" level=info msg="RemoveContainer for \"1a1fd094cef0df29ddb9e4230ed13e61dcf291d90675d094a49997ad65ea4f76\" returns successfully" Apr 13 19:26:59.951222 kubelet[3609]: I0413 19:26:59.951064 3609 scope.go:117] "RemoveContainer" containerID="5ec805881eb2b3b3d264fc855cbe0ef938c167c614a9bdb1a7bc39efae58ad0a" Apr 13 19:26:59.954386 containerd[2137]: time="2026-04-13T19:26:59.953258263Z" level=info msg="RemoveContainer for \"5ec805881eb2b3b3d264fc855cbe0ef938c167c614a9bdb1a7bc39efae58ad0a\"" Apr 13 19:26:59.964532 containerd[2137]: time="2026-04-13T19:26:59.964479189Z" level=info msg="RemoveContainer for \"5ec805881eb2b3b3d264fc855cbe0ef938c167c614a9bdb1a7bc39efae58ad0a\" returns successfully" Apr 13 19:26:59.965955 kubelet[3609]: I0413 19:26:59.965833 3609 scope.go:117] "RemoveContainer" 
containerID="95a6204e9fa4839e5d2d4779e729f58cc5d0d4ff63cf14b18be66460359466dc" Apr 13 19:26:59.968626 containerd[2137]: time="2026-04-13T19:26:59.968444994Z" level=info msg="RemoveContainer for \"95a6204e9fa4839e5d2d4779e729f58cc5d0d4ff63cf14b18be66460359466dc\"" Apr 13 19:26:59.975252 containerd[2137]: time="2026-04-13T19:26:59.975142995Z" level=info msg="RemoveContainer for \"95a6204e9fa4839e5d2d4779e729f58cc5d0d4ff63cf14b18be66460359466dc\" returns successfully" Apr 13 19:26:59.976212 kubelet[3609]: I0413 19:26:59.975619 3609 scope.go:117] "RemoveContainer" containerID="acf2ef2621f3ff8a571befb386311a0be119872b4ec4801d80aa658f27cafed2" Apr 13 19:26:59.977509 containerd[2137]: time="2026-04-13T19:26:59.977464506Z" level=info msg="RemoveContainer for \"acf2ef2621f3ff8a571befb386311a0be119872b4ec4801d80aa658f27cafed2\"" Apr 13 19:26:59.984028 containerd[2137]: time="2026-04-13T19:26:59.983868042Z" level=info msg="RemoveContainer for \"acf2ef2621f3ff8a571befb386311a0be119872b4ec4801d80aa658f27cafed2\" returns successfully" Apr 13 19:26:59.984465 kubelet[3609]: I0413 19:26:59.984227 3609 scope.go:117] "RemoveContainer" containerID="4810fec1c42f6ddee5b7367190d2719dfc2fa95ed27d8ae74c38465db0c5a5bb" Apr 13 19:26:59.986876 containerd[2137]: time="2026-04-13T19:26:59.986484090Z" level=info msg="RemoveContainer for \"4810fec1c42f6ddee5b7367190d2719dfc2fa95ed27d8ae74c38465db0c5a5bb\"" Apr 13 19:26:59.994327 containerd[2137]: time="2026-04-13T19:26:59.994278079Z" level=info msg="RemoveContainer for \"4810fec1c42f6ddee5b7367190d2719dfc2fa95ed27d8ae74c38465db0c5a5bb\" returns successfully" Apr 13 19:26:59.995141 kubelet[3609]: I0413 19:26:59.995101 3609 scope.go:117] "RemoveContainer" containerID="1a1fd094cef0df29ddb9e4230ed13e61dcf291d90675d094a49997ad65ea4f76" Apr 13 19:26:59.995990 containerd[2137]: time="2026-04-13T19:26:59.995749155Z" level=error msg="ContainerStatus for \"1a1fd094cef0df29ddb9e4230ed13e61dcf291d90675d094a49997ad65ea4f76\" failed" error="rpc error: code = 
NotFound desc = an error occurred when try to find container \"1a1fd094cef0df29ddb9e4230ed13e61dcf291d90675d094a49997ad65ea4f76\": not found" Apr 13 19:26:59.998246 kubelet[3609]: E0413 19:26:59.997538 3609 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"1a1fd094cef0df29ddb9e4230ed13e61dcf291d90675d094a49997ad65ea4f76\": not found" containerID="1a1fd094cef0df29ddb9e4230ed13e61dcf291d90675d094a49997ad65ea4f76" Apr 13 19:26:59.998246 kubelet[3609]: I0413 19:26:59.997600 3609 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"1a1fd094cef0df29ddb9e4230ed13e61dcf291d90675d094a49997ad65ea4f76"} err="failed to get container status \"1a1fd094cef0df29ddb9e4230ed13e61dcf291d90675d094a49997ad65ea4f76\": rpc error: code = NotFound desc = an error occurred when try to find container \"1a1fd094cef0df29ddb9e4230ed13e61dcf291d90675d094a49997ad65ea4f76\": not found" Apr 13 19:26:59.998246 kubelet[3609]: I0413 19:26:59.997636 3609 scope.go:117] "RemoveContainer" containerID="5ec805881eb2b3b3d264fc855cbe0ef938c167c614a9bdb1a7bc39efae58ad0a" Apr 13 19:26:59.999255 containerd[2137]: time="2026-04-13T19:26:59.999096440Z" level=error msg="ContainerStatus for \"5ec805881eb2b3b3d264fc855cbe0ef938c167c614a9bdb1a7bc39efae58ad0a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"5ec805881eb2b3b3d264fc855cbe0ef938c167c614a9bdb1a7bc39efae58ad0a\": not found" Apr 13 19:26:59.999851 kubelet[3609]: E0413 19:26:59.999664 3609 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"5ec805881eb2b3b3d264fc855cbe0ef938c167c614a9bdb1a7bc39efae58ad0a\": not found" containerID="5ec805881eb2b3b3d264fc855cbe0ef938c167c614a9bdb1a7bc39efae58ad0a" Apr 13 19:26:59.999851 kubelet[3609]: I0413 19:26:59.999719 3609 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"5ec805881eb2b3b3d264fc855cbe0ef938c167c614a9bdb1a7bc39efae58ad0a"} err="failed to get container status \"5ec805881eb2b3b3d264fc855cbe0ef938c167c614a9bdb1a7bc39efae58ad0a\": rpc error: code = NotFound desc = an error occurred when try to find container \"5ec805881eb2b3b3d264fc855cbe0ef938c167c614a9bdb1a7bc39efae58ad0a\": not found" Apr 13 19:26:59.999851 kubelet[3609]: I0413 19:26:59.999753 3609 scope.go:117] "RemoveContainer" containerID="95a6204e9fa4839e5d2d4779e729f58cc5d0d4ff63cf14b18be66460359466dc" Apr 13 19:27:00.000636 containerd[2137]: time="2026-04-13T19:27:00.000554851Z" level=error msg="ContainerStatus for \"95a6204e9fa4839e5d2d4779e729f58cc5d0d4ff63cf14b18be66460359466dc\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"95a6204e9fa4839e5d2d4779e729f58cc5d0d4ff63cf14b18be66460359466dc\": not found" Apr 13 19:27:00.001427 kubelet[3609]: E0413 19:27:00.000881 3609 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"95a6204e9fa4839e5d2d4779e729f58cc5d0d4ff63cf14b18be66460359466dc\": not found" containerID="95a6204e9fa4839e5d2d4779e729f58cc5d0d4ff63cf14b18be66460359466dc" Apr 13 19:27:00.001427 kubelet[3609]: I0413 19:27:00.000930 3609 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"95a6204e9fa4839e5d2d4779e729f58cc5d0d4ff63cf14b18be66460359466dc"} err="failed to get container status \"95a6204e9fa4839e5d2d4779e729f58cc5d0d4ff63cf14b18be66460359466dc\": rpc error: code = NotFound desc = an error occurred when try to find container \"95a6204e9fa4839e5d2d4779e729f58cc5d0d4ff63cf14b18be66460359466dc\": not found" Apr 13 19:27:00.001427 kubelet[3609]: I0413 19:27:00.000967 3609 scope.go:117] "RemoveContainer" 
containerID="acf2ef2621f3ff8a571befb386311a0be119872b4ec4801d80aa658f27cafed2" Apr 13 19:27:00.001969 containerd[2137]: time="2026-04-13T19:27:00.001796554Z" level=error msg="ContainerStatus for \"acf2ef2621f3ff8a571befb386311a0be119872b4ec4801d80aa658f27cafed2\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"acf2ef2621f3ff8a571befb386311a0be119872b4ec4801d80aa658f27cafed2\": not found" Apr 13 19:27:00.002447 kubelet[3609]: E0413 19:27:00.002231 3609 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"acf2ef2621f3ff8a571befb386311a0be119872b4ec4801d80aa658f27cafed2\": not found" containerID="acf2ef2621f3ff8a571befb386311a0be119872b4ec4801d80aa658f27cafed2" Apr 13 19:27:00.002447 kubelet[3609]: I0413 19:27:00.002276 3609 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"acf2ef2621f3ff8a571befb386311a0be119872b4ec4801d80aa658f27cafed2"} err="failed to get container status \"acf2ef2621f3ff8a571befb386311a0be119872b4ec4801d80aa658f27cafed2\": rpc error: code = NotFound desc = an error occurred when try to find container \"acf2ef2621f3ff8a571befb386311a0be119872b4ec4801d80aa658f27cafed2\": not found" Apr 13 19:27:00.002447 kubelet[3609]: I0413 19:27:00.002308 3609 scope.go:117] "RemoveContainer" containerID="4810fec1c42f6ddee5b7367190d2719dfc2fa95ed27d8ae74c38465db0c5a5bb" Apr 13 19:27:00.002711 containerd[2137]: time="2026-04-13T19:27:00.002643246Z" level=error msg="ContainerStatus for \"4810fec1c42f6ddee5b7367190d2719dfc2fa95ed27d8ae74c38465db0c5a5bb\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"4810fec1c42f6ddee5b7367190d2719dfc2fa95ed27d8ae74c38465db0c5a5bb\": not found" Apr 13 19:27:00.003185 kubelet[3609]: E0413 19:27:00.003126 3609 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc 
= an error occurred when try to find container \"4810fec1c42f6ddee5b7367190d2719dfc2fa95ed27d8ae74c38465db0c5a5bb\": not found" containerID="4810fec1c42f6ddee5b7367190d2719dfc2fa95ed27d8ae74c38465db0c5a5bb" Apr 13 19:27:00.003372 kubelet[3609]: I0413 19:27:00.003208 3609 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"4810fec1c42f6ddee5b7367190d2719dfc2fa95ed27d8ae74c38465db0c5a5bb"} err="failed to get container status \"4810fec1c42f6ddee5b7367190d2719dfc2fa95ed27d8ae74c38465db0c5a5bb\": rpc error: code = NotFound desc = an error occurred when try to find container \"4810fec1c42f6ddee5b7367190d2719dfc2fa95ed27d8ae74c38465db0c5a5bb\": not found" Apr 13 19:27:00.838259 sshd[5193]: pam_unix(sshd:session): session closed for user core Apr 13 19:27:00.843719 systemd[1]: sshd@21-172.31.17.121:22-4.175.71.9:58944.service: Deactivated successfully. Apr 13 19:27:00.852137 systemd[1]: session-22.scope: Deactivated successfully. Apr 13 19:27:00.852730 systemd-logind[2102]: Session 22 logged out. Waiting for processes to exit. Apr 13 19:27:00.856203 systemd-logind[2102]: Removed session 22. Apr 13 19:27:00.994977 ntpd[2091]: Deleting interface #10 lxc_health, fe80::68a9:d9ff:fee1:f8ad%8#123, interface stats: received=0, sent=0, dropped=0, active_time=91 secs Apr 13 19:27:00.996625 ntpd[2091]: 13 Apr 19:27:00 ntpd[2091]: Deleting interface #10 lxc_health, fe80::68a9:d9ff:fee1:f8ad%8#123, interface stats: received=0, sent=0, dropped=0, active_time=91 secs Apr 13 19:27:00.996701 systemd[1]: Started sshd@22-172.31.17.121:22-4.175.71.9:52620.service - OpenSSH per-connection server daemon (4.175.71.9:52620). 
Apr 13 19:27:01.420063 kubelet[3609]: I0413 19:27:01.419985 3609 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="53f259e2-41c9-4fd1-9704-8e6e2fdebb37" path="/var/lib/kubelet/pods/53f259e2-41c9-4fd1-9704-8e6e2fdebb37/volumes" Apr 13 19:27:01.421488 containerd[2137]: time="2026-04-13T19:27:01.421416713Z" level=info msg="StopPodSandbox for \"84a814b36b987aea1a82962663b053189d4e944ed9a665ca5d523a2ab7e12e25\"" Apr 13 19:27:01.422022 containerd[2137]: time="2026-04-13T19:27:01.421556910Z" level=info msg="TearDown network for sandbox \"84a814b36b987aea1a82962663b053189d4e944ed9a665ca5d523a2ab7e12e25\" successfully" Apr 13 19:27:01.422022 containerd[2137]: time="2026-04-13T19:27:01.421583537Z" level=info msg="StopPodSandbox for \"84a814b36b987aea1a82962663b053189d4e944ed9a665ca5d523a2ab7e12e25\" returns successfully" Apr 13 19:27:01.422176 kubelet[3609]: I0413 19:27:01.422016 3609 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f07f704c-94aa-4657-bd9e-7b7059b7ffad" path="/var/lib/kubelet/pods/f07f704c-94aa-4657-bd9e-7b7059b7ffad/volumes" Apr 13 19:27:01.423028 containerd[2137]: time="2026-04-13T19:27:01.422971087Z" level=info msg="RemovePodSandbox for \"84a814b36b987aea1a82962663b053189d4e944ed9a665ca5d523a2ab7e12e25\"" Apr 13 19:27:01.423219 containerd[2137]: time="2026-04-13T19:27:01.423031105Z" level=info msg="Forcibly stopping sandbox \"84a814b36b987aea1a82962663b053189d4e944ed9a665ca5d523a2ab7e12e25\"" Apr 13 19:27:01.423219 containerd[2137]: time="2026-04-13T19:27:01.423132526Z" level=info msg="TearDown network for sandbox \"84a814b36b987aea1a82962663b053189d4e944ed9a665ca5d523a2ab7e12e25\" successfully" Apr 13 19:27:01.429743 containerd[2137]: time="2026-04-13T19:27:01.429676093Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"84a814b36b987aea1a82962663b053189d4e944ed9a665ca5d523a2ab7e12e25\": an error occurred when try to find sandbox: not found. 
Sending the event with nil podSandboxStatus." Apr 13 19:27:01.429916 containerd[2137]: time="2026-04-13T19:27:01.429772788Z" level=info msg="RemovePodSandbox \"84a814b36b987aea1a82962663b053189d4e944ed9a665ca5d523a2ab7e12e25\" returns successfully" Apr 13 19:27:01.431248 containerd[2137]: time="2026-04-13T19:27:01.430613496Z" level=info msg="StopPodSandbox for \"824339baeb1e5029dfd6f1765e1059a2bf0b63f020e1f8f8cfc4c5b2e0133e5c\"" Apr 13 19:27:01.431248 containerd[2137]: time="2026-04-13T19:27:01.430738593Z" level=info msg="TearDown network for sandbox \"824339baeb1e5029dfd6f1765e1059a2bf0b63f020e1f8f8cfc4c5b2e0133e5c\" successfully" Apr 13 19:27:01.431248 containerd[2137]: time="2026-04-13T19:27:01.430779505Z" level=info msg="StopPodSandbox for \"824339baeb1e5029dfd6f1765e1059a2bf0b63f020e1f8f8cfc4c5b2e0133e5c\" returns successfully" Apr 13 19:27:01.432168 containerd[2137]: time="2026-04-13T19:27:01.431881034Z" level=info msg="RemovePodSandbox for \"824339baeb1e5029dfd6f1765e1059a2bf0b63f020e1f8f8cfc4c5b2e0133e5c\"" Apr 13 19:27:01.432168 containerd[2137]: time="2026-04-13T19:27:01.432088878Z" level=info msg="Forcibly stopping sandbox \"824339baeb1e5029dfd6f1765e1059a2bf0b63f020e1f8f8cfc4c5b2e0133e5c\"" Apr 13 19:27:01.432530 containerd[2137]: time="2026-04-13T19:27:01.432277495Z" level=info msg="TearDown network for sandbox \"824339baeb1e5029dfd6f1765e1059a2bf0b63f020e1f8f8cfc4c5b2e0133e5c\" successfully" Apr 13 19:27:01.439336 containerd[2137]: time="2026-04-13T19:27:01.439097392Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"824339baeb1e5029dfd6f1765e1059a2bf0b63f020e1f8f8cfc4c5b2e0133e5c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 13 19:27:01.439336 containerd[2137]: time="2026-04-13T19:27:01.439203911Z" level=info msg="RemovePodSandbox \"824339baeb1e5029dfd6f1765e1059a2bf0b63f020e1f8f8cfc4c5b2e0133e5c\" returns successfully" Apr 13 19:27:01.602128 kubelet[3609]: E0413 19:27:01.602074 3609 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 13 19:27:01.967990 sshd[5359]: Accepted publickey for core from 4.175.71.9 port 52620 ssh2: RSA SHA256:XRjnN6xbGuPFo5x1ktmnQqeOBg+Z6w3BeaL3tCcJXCo Apr 13 19:27:01.970740 sshd[5359]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 19:27:01.982436 systemd-logind[2102]: New session 23 of user core. Apr 13 19:27:01.991665 systemd[1]: Started session-23.scope - Session 23 of User core. Apr 13 19:27:03.556573 kubelet[3609]: I0413 19:27:03.556431 3609 setters.go:618] "Node became not ready" node="ip-172-31-17-121" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-13T19:27:03Z","lastTransitionTime":"2026-04-13T19:27:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Apr 13 19:27:04.325302 kubelet[3609]: I0413 19:27:04.324504 3609 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ad562992-2914-4be6-8c47-f34efca9c43e-lib-modules\") pod \"cilium-tb4sh\" (UID: \"ad562992-2914-4be6-8c47-f34efca9c43e\") " pod="kube-system/cilium-tb4sh" Apr 13 19:27:04.325302 kubelet[3609]: I0413 19:27:04.324572 3609 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/ad562992-2914-4be6-8c47-f34efca9c43e-cni-path\") pod \"cilium-tb4sh\" (UID: 
\"ad562992-2914-4be6-8c47-f34efca9c43e\") " pod="kube-system/cilium-tb4sh" Apr 13 19:27:04.325302 kubelet[3609]: I0413 19:27:04.324612 3609 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/ad562992-2914-4be6-8c47-f34efca9c43e-clustermesh-secrets\") pod \"cilium-tb4sh\" (UID: \"ad562992-2914-4be6-8c47-f34efca9c43e\") " pod="kube-system/cilium-tb4sh" Apr 13 19:27:04.325302 kubelet[3609]: I0413 19:27:04.324654 3609 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/ad562992-2914-4be6-8c47-f34efca9c43e-cilium-run\") pod \"cilium-tb4sh\" (UID: \"ad562992-2914-4be6-8c47-f34efca9c43e\") " pod="kube-system/cilium-tb4sh" Apr 13 19:27:04.325302 kubelet[3609]: I0413 19:27:04.324691 3609 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/ad562992-2914-4be6-8c47-f34efca9c43e-hubble-tls\") pod \"cilium-tb4sh\" (UID: \"ad562992-2914-4be6-8c47-f34efca9c43e\") " pod="kube-system/cilium-tb4sh" Apr 13 19:27:04.325302 kubelet[3609]: I0413 19:27:04.324725 3609 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-98hgz\" (UniqueName: \"kubernetes.io/projected/ad562992-2914-4be6-8c47-f34efca9c43e-kube-api-access-98hgz\") pod \"cilium-tb4sh\" (UID: \"ad562992-2914-4be6-8c47-f34efca9c43e\") " pod="kube-system/cilium-tb4sh" Apr 13 19:27:04.325727 kubelet[3609]: I0413 19:27:04.324765 3609 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/ad562992-2914-4be6-8c47-f34efca9c43e-bpf-maps\") pod \"cilium-tb4sh\" (UID: \"ad562992-2914-4be6-8c47-f34efca9c43e\") " pod="kube-system/cilium-tb4sh" Apr 13 19:27:04.325727 kubelet[3609]: I0413 
19:27:04.324817 3609 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/ad562992-2914-4be6-8c47-f34efca9c43e-cilium-ipsec-secrets\") pod \"cilium-tb4sh\" (UID: \"ad562992-2914-4be6-8c47-f34efca9c43e\") " pod="kube-system/cilium-tb4sh" Apr 13 19:27:04.325727 kubelet[3609]: I0413 19:27:04.324853 3609 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/ad562992-2914-4be6-8c47-f34efca9c43e-host-proc-sys-net\") pod \"cilium-tb4sh\" (UID: \"ad562992-2914-4be6-8c47-f34efca9c43e\") " pod="kube-system/cilium-tb4sh" Apr 13 19:27:04.325727 kubelet[3609]: I0413 19:27:04.324895 3609 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/ad562992-2914-4be6-8c47-f34efca9c43e-host-proc-sys-kernel\") pod \"cilium-tb4sh\" (UID: \"ad562992-2914-4be6-8c47-f34efca9c43e\") " pod="kube-system/cilium-tb4sh" Apr 13 19:27:04.325727 kubelet[3609]: I0413 19:27:04.324936 3609 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/ad562992-2914-4be6-8c47-f34efca9c43e-cilium-cgroup\") pod \"cilium-tb4sh\" (UID: \"ad562992-2914-4be6-8c47-f34efca9c43e\") " pod="kube-system/cilium-tb4sh" Apr 13 19:27:04.325727 kubelet[3609]: I0413 19:27:04.324975 3609 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ad562992-2914-4be6-8c47-f34efca9c43e-etc-cni-netd\") pod \"cilium-tb4sh\" (UID: \"ad562992-2914-4be6-8c47-f34efca9c43e\") " pod="kube-system/cilium-tb4sh" Apr 13 19:27:04.326077 kubelet[3609]: I0413 19:27:04.325015 3609 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/ad562992-2914-4be6-8c47-f34efca9c43e-hostproc\") pod \"cilium-tb4sh\" (UID: \"ad562992-2914-4be6-8c47-f34efca9c43e\") " pod="kube-system/cilium-tb4sh" Apr 13 19:27:04.326077 kubelet[3609]: I0413 19:27:04.325049 3609 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ad562992-2914-4be6-8c47-f34efca9c43e-xtables-lock\") pod \"cilium-tb4sh\" (UID: \"ad562992-2914-4be6-8c47-f34efca9c43e\") " pod="kube-system/cilium-tb4sh" Apr 13 19:27:04.326077 kubelet[3609]: I0413 19:27:04.325088 3609 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ad562992-2914-4be6-8c47-f34efca9c43e-cilium-config-path\") pod \"cilium-tb4sh\" (UID: \"ad562992-2914-4be6-8c47-f34efca9c43e\") " pod="kube-system/cilium-tb4sh" Apr 13 19:27:04.345511 sshd[5359]: pam_unix(sshd:session): session closed for user core Apr 13 19:27:04.353942 systemd[1]: sshd@22-172.31.17.121:22-4.175.71.9:52620.service: Deactivated successfully. Apr 13 19:27:04.360769 systemd[1]: session-23.scope: Deactivated successfully. Apr 13 19:27:04.364015 systemd-logind[2102]: Session 23 logged out. Waiting for processes to exit. Apr 13 19:27:04.366439 systemd-logind[2102]: Removed session 23. Apr 13 19:27:04.523741 systemd[1]: Started sshd@23-172.31.17.121:22-4.175.71.9:52622.service - OpenSSH per-connection server daemon (4.175.71.9:52622). Apr 13 19:27:04.531073 containerd[2137]: time="2026-04-13T19:27:04.530834751Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-tb4sh,Uid:ad562992-2914-4be6-8c47-f34efca9c43e,Namespace:kube-system,Attempt:0,}" Apr 13 19:27:04.583096 containerd[2137]: time="2026-04-13T19:27:04.581670768Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 19:27:04.583096 containerd[2137]: time="2026-04-13T19:27:04.581782289Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 19:27:04.583096 containerd[2137]: time="2026-04-13T19:27:04.581854625Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 19:27:04.583786 containerd[2137]: time="2026-04-13T19:27:04.582039488Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 19:27:04.654976 containerd[2137]: time="2026-04-13T19:27:04.654919434Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-tb4sh,Uid:ad562992-2914-4be6-8c47-f34efca9c43e,Namespace:kube-system,Attempt:0,} returns sandbox id \"b4c0024351718d0dbe9e76af593881a5cd5a4f7c7f1e0348699fda5bc13ef313\"" Apr 13 19:27:04.666323 containerd[2137]: time="2026-04-13T19:27:04.666249866Z" level=info msg="CreateContainer within sandbox \"b4c0024351718d0dbe9e76af593881a5cd5a4f7c7f1e0348699fda5bc13ef313\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Apr 13 19:27:04.687314 containerd[2137]: time="2026-04-13T19:27:04.687226029Z" level=info msg="CreateContainer within sandbox \"b4c0024351718d0dbe9e76af593881a5cd5a4f7c7f1e0348699fda5bc13ef313\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"b863863a917c6e26eb541a95c4d941fab53b29d9fef8f70aa5e458624d90be11\"" Apr 13 19:27:04.689244 containerd[2137]: time="2026-04-13T19:27:04.688923096Z" level=info msg="StartContainer for \"b863863a917c6e26eb541a95c4d941fab53b29d9fef8f70aa5e458624d90be11\"" Apr 13 19:27:04.791235 containerd[2137]: time="2026-04-13T19:27:04.789763443Z" level=info msg="StartContainer for \"b863863a917c6e26eb541a95c4d941fab53b29d9fef8f70aa5e458624d90be11\" returns successfully" Apr 13 
19:27:04.863307 containerd[2137]: time="2026-04-13T19:27:04.862981188Z" level=info msg="shim disconnected" id=b863863a917c6e26eb541a95c4d941fab53b29d9fef8f70aa5e458624d90be11 namespace=k8s.io Apr 13 19:27:04.863307 containerd[2137]: time="2026-04-13T19:27:04.863062759Z" level=warning msg="cleaning up after shim disconnected" id=b863863a917c6e26eb541a95c4d941fab53b29d9fef8f70aa5e458624d90be11 namespace=k8s.io Apr 13 19:27:04.863307 containerd[2137]: time="2026-04-13T19:27:04.863085811Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 13 19:27:04.958036 containerd[2137]: time="2026-04-13T19:27:04.957898586Z" level=info msg="CreateContainer within sandbox \"b4c0024351718d0dbe9e76af593881a5cd5a4f7c7f1e0348699fda5bc13ef313\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Apr 13 19:27:04.983922 containerd[2137]: time="2026-04-13T19:27:04.983746447Z" level=info msg="CreateContainer within sandbox \"b4c0024351718d0dbe9e76af593881a5cd5a4f7c7f1e0348699fda5bc13ef313\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"f7017681043c46a357cddb98e3c446338fd05a15757777f9096724e7cf839476\"" Apr 13 19:27:04.987269 containerd[2137]: time="2026-04-13T19:27:04.986091610Z" level=info msg="StartContainer for \"f7017681043c46a357cddb98e3c446338fd05a15757777f9096724e7cf839476\"" Apr 13 19:27:05.096915 containerd[2137]: time="2026-04-13T19:27:05.096817268Z" level=info msg="StartContainer for \"f7017681043c46a357cddb98e3c446338fd05a15757777f9096724e7cf839476\" returns successfully" Apr 13 19:27:05.149272 containerd[2137]: time="2026-04-13T19:27:05.149091450Z" level=info msg="shim disconnected" id=f7017681043c46a357cddb98e3c446338fd05a15757777f9096724e7cf839476 namespace=k8s.io Apr 13 19:27:05.149272 containerd[2137]: time="2026-04-13T19:27:05.149179978Z" level=warning msg="cleaning up after shim disconnected" id=f7017681043c46a357cddb98e3c446338fd05a15757777f9096724e7cf839476 namespace=k8s.io Apr 13 19:27:05.149272 
containerd[2137]: time="2026-04-13T19:27:05.149202622Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 13 19:27:05.546911 sshd[5380]: Accepted publickey for core from 4.175.71.9 port 52622 ssh2: RSA SHA256:XRjnN6xbGuPFo5x1ktmnQqeOBg+Z6w3BeaL3tCcJXCo Apr 13 19:27:05.550424 sshd[5380]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 19:27:05.558099 systemd-logind[2102]: New session 24 of user core. Apr 13 19:27:05.569697 systemd[1]: Started session-24.scope - Session 24 of User core. Apr 13 19:27:05.960720 containerd[2137]: time="2026-04-13T19:27:05.960529031Z" level=info msg="CreateContainer within sandbox \"b4c0024351718d0dbe9e76af593881a5cd5a4f7c7f1e0348699fda5bc13ef313\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Apr 13 19:27:05.993589 containerd[2137]: time="2026-04-13T19:27:05.993354378Z" level=info msg="CreateContainer within sandbox \"b4c0024351718d0dbe9e76af593881a5cd5a4f7c7f1e0348699fda5bc13ef313\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"bc2e5ee65abab5db4fa8d3c28c11a5bfe9591c3a19992ab80f596b6f9bb4cdde\"" Apr 13 19:27:05.999929 containerd[2137]: time="2026-04-13T19:27:05.999706232Z" level=info msg="StartContainer for \"bc2e5ee65abab5db4fa8d3c28c11a5bfe9591c3a19992ab80f596b6f9bb4cdde\"" Apr 13 19:27:06.170324 containerd[2137]: time="2026-04-13T19:27:06.170261704Z" level=info msg="StartContainer for \"bc2e5ee65abab5db4fa8d3c28c11a5bfe9591c3a19992ab80f596b6f9bb4cdde\" returns successfully" Apr 13 19:27:06.223311 containerd[2137]: time="2026-04-13T19:27:06.223066213Z" level=info msg="shim disconnected" id=bc2e5ee65abab5db4fa8d3c28c11a5bfe9591c3a19992ab80f596b6f9bb4cdde namespace=k8s.io Apr 13 19:27:06.223311 containerd[2137]: time="2026-04-13T19:27:06.223141296Z" level=warning msg="cleaning up after shim disconnected" id=bc2e5ee65abab5db4fa8d3c28c11a5bfe9591c3a19992ab80f596b6f9bb4cdde namespace=k8s.io Apr 13 19:27:06.223311 containerd[2137]: 
time="2026-04-13T19:27:06.223198615Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 13 19:27:06.253540 sshd[5380]: pam_unix(sshd:session): session closed for user core Apr 13 19:27:06.260673 systemd[1]: sshd@23-172.31.17.121:22-4.175.71.9:52622.service: Deactivated successfully. Apr 13 19:27:06.267545 systemd[1]: session-24.scope: Deactivated successfully. Apr 13 19:27:06.270734 systemd-logind[2102]: Session 24 logged out. Waiting for processes to exit. Apr 13 19:27:06.272734 systemd-logind[2102]: Removed session 24. Apr 13 19:27:06.407655 systemd[1]: Started sshd@24-172.31.17.121:22-4.175.71.9:40736.service - OpenSSH per-connection server daemon (4.175.71.9:40736). Apr 13 19:27:06.440525 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bc2e5ee65abab5db4fa8d3c28c11a5bfe9591c3a19992ab80f596b6f9bb4cdde-rootfs.mount: Deactivated successfully. Apr 13 19:27:06.603636 kubelet[3609]: E0413 19:27:06.603521 3609 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 13 19:27:06.967224 containerd[2137]: time="2026-04-13T19:27:06.966910455Z" level=info msg="CreateContainer within sandbox \"b4c0024351718d0dbe9e76af593881a5cd5a4f7c7f1e0348699fda5bc13ef313\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Apr 13 19:27:07.026182 containerd[2137]: time="2026-04-13T19:27:07.022609416Z" level=info msg="CreateContainer within sandbox \"b4c0024351718d0dbe9e76af593881a5cd5a4f7c7f1e0348699fda5bc13ef313\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"dc72d4b60bb9f6a8d9ddedc304bc957eaaf14b51826ba9ecc0522127b8b74d04\"" Apr 13 19:27:07.029309 containerd[2137]: time="2026-04-13T19:27:07.027486812Z" level=info msg="StartContainer for \"dc72d4b60bb9f6a8d9ddedc304bc957eaaf14b51826ba9ecc0522127b8b74d04\"" Apr 13 19:27:07.036504 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount1734165785.mount: Deactivated successfully. Apr 13 19:27:07.301194 containerd[2137]: time="2026-04-13T19:27:07.300746619Z" level=info msg="StartContainer for \"dc72d4b60bb9f6a8d9ddedc304bc957eaaf14b51826ba9ecc0522127b8b74d04\" returns successfully" Apr 13 19:27:07.344618 containerd[2137]: time="2026-04-13T19:27:07.344476419Z" level=info msg="shim disconnected" id=dc72d4b60bb9f6a8d9ddedc304bc957eaaf14b51826ba9ecc0522127b8b74d04 namespace=k8s.io Apr 13 19:27:07.344618 containerd[2137]: time="2026-04-13T19:27:07.344557954Z" level=warning msg="cleaning up after shim disconnected" id=dc72d4b60bb9f6a8d9ddedc304bc957eaaf14b51826ba9ecc0522127b8b74d04 namespace=k8s.io Apr 13 19:27:07.344618 containerd[2137]: time="2026-04-13T19:27:07.344580095Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 13 19:27:07.396215 sshd[5610]: Accepted publickey for core from 4.175.71.9 port 40736 ssh2: RSA SHA256:XRjnN6xbGuPFo5x1ktmnQqeOBg+Z6w3BeaL3tCcJXCo Apr 13 19:27:07.398259 sshd[5610]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 19:27:07.407108 systemd-logind[2102]: New session 25 of user core. Apr 13 19:27:07.418398 systemd[1]: Started session-25.scope - Session 25 of User core. Apr 13 19:27:07.440266 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-dc72d4b60bb9f6a8d9ddedc304bc957eaaf14b51826ba9ecc0522127b8b74d04-rootfs.mount: Deactivated successfully. 
Apr 13 19:27:07.994510 containerd[2137]: time="2026-04-13T19:27:07.990746532Z" level=info msg="CreateContainer within sandbox \"b4c0024351718d0dbe9e76af593881a5cd5a4f7c7f1e0348699fda5bc13ef313\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Apr 13 19:27:08.048865 containerd[2137]: time="2026-04-13T19:27:08.048793474Z" level=info msg="CreateContainer within sandbox \"b4c0024351718d0dbe9e76af593881a5cd5a4f7c7f1e0348699fda5bc13ef313\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"4d663fdf4e36631fd80a1bccbb2a3255e31b92acf0c2b1dbcbd753bdee909e56\"" Apr 13 19:27:08.051553 containerd[2137]: time="2026-04-13T19:27:08.051478307Z" level=info msg="StartContainer for \"4d663fdf4e36631fd80a1bccbb2a3255e31b92acf0c2b1dbcbd753bdee909e56\"" Apr 13 19:27:08.197953 containerd[2137]: time="2026-04-13T19:27:08.197882241Z" level=info msg="StartContainer for \"4d663fdf4e36631fd80a1bccbb2a3255e31b92acf0c2b1dbcbd753bdee909e56\" returns successfully" Apr 13 19:27:08.960335 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce)) Apr 13 19:27:10.277794 systemd[1]: run-containerd-runc-k8s.io-4d663fdf4e36631fd80a1bccbb2a3255e31b92acf0c2b1dbcbd753bdee909e56-runc.NOf3a9.mount: Deactivated successfully. Apr 13 19:27:13.450083 systemd-networkd[1689]: lxc_health: Link UP Apr 13 19:27:13.469309 systemd-networkd[1689]: lxc_health: Gained carrier Apr 13 19:27:13.472983 (udev-worker)[6236]: Network interface NamePolicy= disabled on kernel command line. 
Apr 13 19:27:14.583648 kubelet[3609]: I0413 19:27:14.580813 3609 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-tb4sh" podStartSLOduration=10.580711585 podStartE2EDuration="10.580711585s" podCreationTimestamp="2026-04-13 19:27:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-13 19:27:09.023600898 +0000 UTC m=+127.942127676" watchObservedRunningTime="2026-04-13 19:27:14.580711585 +0000 UTC m=+133.499238363" Apr 13 19:27:15.084362 systemd-networkd[1689]: lxc_health: Gained IPv6LL Apr 13 19:27:17.996379 ntpd[2091]: Listen normally on 13 lxc_health [fe80::451:69ff:feb0:1e8e%14]:123 Apr 13 19:27:17.996954 ntpd[2091]: 13 Apr 19:27:17 ntpd[2091]: Listen normally on 13 lxc_health [fe80::451:69ff:feb0:1e8e%14]:123 Apr 13 19:27:19.473687 systemd[1]: run-containerd-runc-k8s.io-4d663fdf4e36631fd80a1bccbb2a3255e31b92acf0c2b1dbcbd753bdee909e56-runc.AJ9rf5.mount: Deactivated successfully. Apr 13 19:27:19.735465 sshd[5610]: pam_unix(sshd:session): session closed for user core Apr 13 19:27:19.744462 systemd[1]: sshd@24-172.31.17.121:22-4.175.71.9:40736.service: Deactivated successfully. Apr 13 19:27:19.755069 systemd[1]: session-25.scope: Deactivated successfully. Apr 13 19:27:19.757567 systemd-logind[2102]: Session 25 logged out. Waiting for processes to exit. Apr 13 19:27:19.762933 systemd-logind[2102]: Removed session 25. Apr 13 19:27:33.820244 kubelet[3609]: E0413 19:27:33.818527 3609 controller.go:195] "Failed to update lease" err="Put \"https://172.31.17.121:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-17-121?timeout=10s\": context deadline exceeded" Apr 13 19:27:35.343611 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e8ec3cccbf70982c77828a6a6a7b7919e9030441ed00e1d10c7eeca3c0794db8-rootfs.mount: Deactivated successfully. 
Apr 13 19:27:35.384172 containerd[2137]: time="2026-04-13T19:27:35.383838920Z" level=info msg="shim disconnected" id=e8ec3cccbf70982c77828a6a6a7b7919e9030441ed00e1d10c7eeca3c0794db8 namespace=k8s.io Apr 13 19:27:35.384172 containerd[2137]: time="2026-04-13T19:27:35.383910620Z" level=warning msg="cleaning up after shim disconnected" id=e8ec3cccbf70982c77828a6a6a7b7919e9030441ed00e1d10c7eeca3c0794db8 namespace=k8s.io Apr 13 19:27:35.384172 containerd[2137]: time="2026-04-13T19:27:35.383930338Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 13 19:27:36.062333 kubelet[3609]: I0413 19:27:36.062286 3609 scope.go:117] "RemoveContainer" containerID="e8ec3cccbf70982c77828a6a6a7b7919e9030441ed00e1d10c7eeca3c0794db8" Apr 13 19:27:36.067213 containerd[2137]: time="2026-04-13T19:27:36.066916688Z" level=info msg="CreateContainer within sandbox \"5523737d95dfe404f93e8048db53832de0d8b9e524f486dc05f21ec9be465a73\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Apr 13 19:27:36.099080 containerd[2137]: time="2026-04-13T19:27:36.098884920Z" level=info msg="CreateContainer within sandbox \"5523737d95dfe404f93e8048db53832de0d8b9e524f486dc05f21ec9be465a73\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"87da118e1be91c4b1d1c90c3028894e2090b93baac5427fcaf67cd5610a5b727\"" Apr 13 19:27:36.099928 containerd[2137]: time="2026-04-13T19:27:36.099865333Z" level=info msg="StartContainer for \"87da118e1be91c4b1d1c90c3028894e2090b93baac5427fcaf67cd5610a5b727\"" Apr 13 19:27:36.220476 containerd[2137]: time="2026-04-13T19:27:36.220392536Z" level=info msg="StartContainer for \"87da118e1be91c4b1d1c90c3028894e2090b93baac5427fcaf67cd5610a5b727\" returns successfully" Apr 13 19:27:39.130075 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-29527e7f7552a7f3dc0b521e8cb6762b33160baff36a591b19634ff9650dfaa2-rootfs.mount: Deactivated successfully. 
Apr 13 19:27:39.141653 containerd[2137]: time="2026-04-13T19:27:39.141212360Z" level=info msg="shim disconnected" id=29527e7f7552a7f3dc0b521e8cb6762b33160baff36a591b19634ff9650dfaa2 namespace=k8s.io Apr 13 19:27:39.141653 containerd[2137]: time="2026-04-13T19:27:39.141285644Z" level=warning msg="cleaning up after shim disconnected" id=29527e7f7552a7f3dc0b521e8cb6762b33160baff36a591b19634ff9650dfaa2 namespace=k8s.io Apr 13 19:27:39.141653 containerd[2137]: time="2026-04-13T19:27:39.141305410Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 13 19:27:40.080999 kubelet[3609]: I0413 19:27:40.080936 3609 scope.go:117] "RemoveContainer" containerID="29527e7f7552a7f3dc0b521e8cb6762b33160baff36a591b19634ff9650dfaa2" Apr 13 19:27:40.083740 containerd[2137]: time="2026-04-13T19:27:40.083682556Z" level=info msg="CreateContainer within sandbox \"986c55709bae36afe794d4c44f4c6dc0c59d0664fd27e899510dc45e386b2f38\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}" Apr 13 19:27:40.109676 containerd[2137]: time="2026-04-13T19:27:40.109567455Z" level=info msg="CreateContainer within sandbox \"986c55709bae36afe794d4c44f4c6dc0c59d0664fd27e899510dc45e386b2f38\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"264bb54101542ebbe77f983b34df1d4a3dcbefac9d9cc6443ceae139c96ffe9a\"" Apr 13 19:27:40.111418 containerd[2137]: time="2026-04-13T19:27:40.110940385Z" level=info msg="StartContainer for \"264bb54101542ebbe77f983b34df1d4a3dcbefac9d9cc6443ceae139c96ffe9a\"" Apr 13 19:27:40.249899 containerd[2137]: time="2026-04-13T19:27:40.249617496Z" level=info msg="StartContainer for \"264bb54101542ebbe77f983b34df1d4a3dcbefac9d9cc6443ceae139c96ffe9a\" returns successfully" Apr 13 19:27:43.819409 kubelet[3609]: E0413 19:27:43.818811 3609 controller.go:195] "Failed to update lease" err="Put \"https://172.31.17.121:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-17-121?timeout=10s\": net/http: request canceled 
(Client.Timeout exceeded while awaiting headers)"