Mar 14 00:12:03.279173 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083]
Mar 14 00:12:03.279227 kernel: Linux version 6.6.127-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Fri Mar 13 22:32:52 -00 2026
Mar 14 00:12:03.279256 kernel: KASLR disabled due to lack of seed
Mar 14 00:12:03.279273 kernel: efi: EFI v2.7 by EDK II
Mar 14 00:12:03.279289 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7b001a98 MEMRESERVE=0x7852ee18
Mar 14 00:12:03.279305 kernel: ACPI: Early table checksum verification disabled
Mar 14 00:12:03.279324 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON)
Mar 14 00:12:03.279340 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001 01000013)
Mar 14 00:12:03.279357 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001)
Mar 14 00:12:03.279373 kernel: ACPI: DSDT 0x0000000078640000 0013D2 (v02 AMAZON AMZNDSDT 00000001 AMZN 00000001)
Mar 14 00:12:03.279395 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Mar 14 00:12:03.279412 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001)
Mar 14 00:12:03.279428 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001)
Mar 14 00:12:03.279445 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001)
Mar 14 00:12:03.279464 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Mar 14 00:12:03.279485 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001)
Mar 14 00:12:03.279503 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001)
Mar 14 00:12:03.279520 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200
Mar 14 00:12:03.279538 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200')
Mar 14 00:12:03.279555 kernel: printk: bootconsole [uart0] enabled
Mar 14 00:12:03.279571 kernel: NUMA: Failed to initialise from firmware
Mar 14 00:12:03.279589 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff]
Mar 14 00:12:03.279606 kernel: NUMA: NODE_DATA [mem 0x4b583f800-0x4b5844fff]
Mar 14 00:12:03.279623 kernel: Zone ranges:
Mar 14 00:12:03.279641 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff]
Mar 14 00:12:03.279658 kernel: DMA32 empty
Mar 14 00:12:03.279679 kernel: Normal [mem 0x0000000100000000-0x00000004b5ffffff]
Mar 14 00:12:03.279696 kernel: Movable zone start for each node
Mar 14 00:12:03.279714 kernel: Early memory node ranges
Mar 14 00:12:03.279730 kernel: node 0: [mem 0x0000000040000000-0x000000007862ffff]
Mar 14 00:12:03.279747 kernel: node 0: [mem 0x0000000078630000-0x000000007863ffff]
Mar 14 00:12:03.279764 kernel: node 0: [mem 0x0000000078640000-0x00000000786effff]
Mar 14 00:12:03.279781 kernel: node 0: [mem 0x00000000786f0000-0x000000007872ffff]
Mar 14 00:12:03.279820 kernel: node 0: [mem 0x0000000078730000-0x000000007bbfffff]
Mar 14 00:12:03.279839 kernel: node 0: [mem 0x000000007bc00000-0x000000007bfdffff]
Mar 14 00:12:03.279857 kernel: node 0: [mem 0x000000007bfe0000-0x000000007fffffff]
Mar 14 00:12:03.279874 kernel: node 0: [mem 0x0000000400000000-0x00000004b5ffffff]
Mar 14 00:12:03.279892 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000004b5ffffff]
Mar 14 00:12:03.279914 kernel: On node 0, zone Normal: 8192 pages in unavailable ranges
Mar 14 00:12:03.279932 kernel: psci: probing for conduit method from ACPI.
Mar 14 00:12:03.279957 kernel: psci: PSCIv1.0 detected in firmware.
Mar 14 00:12:03.279975 kernel: psci: Using standard PSCI v0.2 function IDs
Mar 14 00:12:03.279994 kernel: psci: Trusted OS migration not required
Mar 14 00:12:03.280049 kernel: psci: SMC Calling Convention v1.1
Mar 14 00:12:03.280071 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000001)
Mar 14 00:12:03.280090 kernel: percpu: Embedded 30 pages/cpu s85736 r8192 d28952 u122880
Mar 14 00:12:03.280108 kernel: pcpu-alloc: s85736 r8192 d28952 u122880 alloc=30*4096
Mar 14 00:12:03.280127 kernel: pcpu-alloc: [0] 0 [0] 1
Mar 14 00:12:03.280147 kernel: Detected PIPT I-cache on CPU0
Mar 14 00:12:03.280165 kernel: CPU features: detected: GIC system register CPU interface
Mar 14 00:12:03.280184 kernel: CPU features: detected: Spectre-v2
Mar 14 00:12:03.280202 kernel: CPU features: detected: Spectre-v3a
Mar 14 00:12:03.280220 kernel: CPU features: detected: Spectre-BHB
Mar 14 00:12:03.280238 kernel: CPU features: detected: ARM erratum 1742098
Mar 14 00:12:03.280262 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923
Mar 14 00:12:03.280281 kernel: alternatives: applying boot alternatives
Mar 14 00:12:03.280303 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=704dcf876dede90264a8630d1e6c631c8df8e652c7e2ae2e5d334e632916c980
Mar 14 00:12:03.280321 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Mar 14 00:12:03.280340 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Mar 14 00:12:03.280359 kernel: Fallback order for Node 0: 0
Mar 14 00:12:03.280378 kernel: Built 1 zonelists, mobility grouping on. Total pages: 991872
Mar 14 00:12:03.280398 kernel: Policy zone: Normal
Mar 14 00:12:03.280416 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Mar 14 00:12:03.280436 kernel: software IO TLB: area num 2.
Mar 14 00:12:03.280456 kernel: software IO TLB: mapped [mem 0x000000007c000000-0x0000000080000000] (64MB)
Mar 14 00:12:03.280483 kernel: Memory: 3820096K/4030464K available (10304K kernel code, 2180K rwdata, 8116K rodata, 39424K init, 897K bss, 210368K reserved, 0K cma-reserved)
Mar 14 00:12:03.280502 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Mar 14 00:12:03.280522 kernel: rcu: Preemptible hierarchical RCU implementation.
Mar 14 00:12:03.280543 kernel: rcu: RCU event tracing is enabled.
Mar 14 00:12:03.280564 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Mar 14 00:12:03.280583 kernel: Trampoline variant of Tasks RCU enabled.
Mar 14 00:12:03.280603 kernel: Tracing variant of Tasks RCU enabled.
Mar 14 00:12:03.280623 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Mar 14 00:12:03.280642 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Mar 14 00:12:03.280661 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Mar 14 00:12:03.280680 kernel: GICv3: 96 SPIs implemented
Mar 14 00:12:03.280705 kernel: GICv3: 0 Extended SPIs implemented
Mar 14 00:12:03.280725 kernel: Root IRQ handler: gic_handle_irq
Mar 14 00:12:03.280745 kernel: GICv3: GICv3 features: 16 PPIs
Mar 14 00:12:03.280765 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000
Mar 14 00:12:03.280783 kernel: ITS [mem 0x10080000-0x1009ffff]
Mar 14 00:12:03.280802 kernel: ITS@0x0000000010080000: allocated 8192 Devices @4000b0000 (indirect, esz 8, psz 64K, shr 1)
Mar 14 00:12:03.280823 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @4000c0000 (flat, esz 8, psz 64K, shr 1)
Mar 14 00:12:03.280843 kernel: GICv3: using LPI property table @0x00000004000d0000
Mar 14 00:12:03.280861 kernel: ITS: Using hypervisor restricted LPI range [128]
Mar 14 00:12:03.280880 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000004000e0000
Mar 14 00:12:03.280900 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Mar 14 00:12:03.280923 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt).
Mar 14 00:12:03.280947 kernel: clocksource: arch_sys_counter: mask: 0x1ffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns
Mar 14 00:12:03.280967 kernel: sched_clock: 57 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns
Mar 14 00:12:03.280986 kernel: Console: colour dummy device 80x25
Mar 14 00:12:03.281057 kernel: printk: console [tty1] enabled
Mar 14 00:12:03.282112 kernel: ACPI: Core revision 20230628
Mar 14 00:12:03.282150 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 166.66 BogoMIPS (lpj=83333)
Mar 14 00:12:03.282170 kernel: pid_max: default: 32768 minimum: 301
Mar 14 00:12:03.282189 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Mar 14 00:12:03.282207 kernel: landlock: Up and running.
Mar 14 00:12:03.282234 kernel: SELinux: Initializing.
Mar 14 00:12:03.282253 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 14 00:12:03.282271 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 14 00:12:03.282291 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Mar 14 00:12:03.282310 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Mar 14 00:12:03.282328 kernel: rcu: Hierarchical SRCU implementation.
Mar 14 00:12:03.282349 kernel: rcu: Max phase no-delay instances is 400.
Mar 14 00:12:03.282367 kernel: Platform MSI: ITS@0x10080000 domain created
Mar 14 00:12:03.282385 kernel: PCI/MSI: ITS@0x10080000 domain created
Mar 14 00:12:03.282408 kernel: Remapping and enabling EFI services.
Mar 14 00:12:03.282426 kernel: smp: Bringing up secondary CPUs ...
Mar 14 00:12:03.282444 kernel: Detected PIPT I-cache on CPU1
Mar 14 00:12:03.282462 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000
Mar 14 00:12:03.282481 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000004000f0000
Mar 14 00:12:03.282499 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083]
Mar 14 00:12:03.282517 kernel: smp: Brought up 1 node, 2 CPUs
Mar 14 00:12:03.282535 kernel: SMP: Total of 2 processors activated.
Mar 14 00:12:03.282553 kernel: CPU features: detected: 32-bit EL0 Support
Mar 14 00:12:03.282575 kernel: CPU features: detected: 32-bit EL1 Support
Mar 14 00:12:03.282594 kernel: CPU features: detected: CRC32 instructions
Mar 14 00:12:03.282612 kernel: CPU: All CPU(s) started at EL1
Mar 14 00:12:03.282642 kernel: alternatives: applying system-wide alternatives
Mar 14 00:12:03.282665 kernel: devtmpfs: initialized
Mar 14 00:12:03.282685 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Mar 14 00:12:03.282704 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Mar 14 00:12:03.282723 kernel: pinctrl core: initialized pinctrl subsystem
Mar 14 00:12:03.282742 kernel: SMBIOS 3.0.0 present.
Mar 14 00:12:03.282765 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018
Mar 14 00:12:03.282784 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Mar 14 00:12:03.282803 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Mar 14 00:12:03.282822 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Mar 14 00:12:03.282841 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Mar 14 00:12:03.282860 kernel: audit: initializing netlink subsys (disabled)
Mar 14 00:12:03.282880 kernel: audit: type=2000 audit(0.313:1): state=initialized audit_enabled=0 res=1
Mar 14 00:12:03.282899 kernel: thermal_sys: Registered thermal governor 'step_wise'
Mar 14 00:12:03.282923 kernel: cpuidle: using governor menu
Mar 14 00:12:03.282942 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Mar 14 00:12:03.282962 kernel: ASID allocator initialised with 65536 entries
Mar 14 00:12:03.282981 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Mar 14 00:12:03.283000 kernel: Serial: AMBA PL011 UART driver
Mar 14 00:12:03.283039 kernel: Modules: 17488 pages in range for non-PLT usage
Mar 14 00:12:03.283060 kernel: Modules: 509008 pages in range for PLT usage
Mar 14 00:12:03.283079 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Mar 14 00:12:03.283098 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Mar 14 00:12:03.283123 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Mar 14 00:12:03.283142 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Mar 14 00:12:03.283161 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Mar 14 00:12:03.283180 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Mar 14 00:12:03.283199 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Mar 14 00:12:03.283218 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Mar 14 00:12:03.283237 kernel: ACPI: Added _OSI(Module Device)
Mar 14 00:12:03.283256 kernel: ACPI: Added _OSI(Processor Device)
Mar 14 00:12:03.283275 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Mar 14 00:12:03.283301 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Mar 14 00:12:03.283320 kernel: ACPI: Interpreter enabled
Mar 14 00:12:03.283339 kernel: ACPI: Using GIC for interrupt routing
Mar 14 00:12:03.283358 kernel: ACPI: MCFG table detected, 1 entries
Mar 14 00:12:03.283377 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00])
Mar 14 00:12:03.283683 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Mar 14 00:12:03.286313 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Mar 14 00:12:03.286598 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Mar 14 00:12:03.286817 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x200fffff] reserved by PNP0C02:00
Mar 14 00:12:03.288127 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x200fffff] for [bus 00]
Mar 14 00:12:03.288172 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io 0x0000-0xffff window]
Mar 14 00:12:03.288193 kernel: acpiphp: Slot [1] registered
Mar 14 00:12:03.288212 kernel: acpiphp: Slot [2] registered
Mar 14 00:12:03.288231 kernel: acpiphp: Slot [3] registered
Mar 14 00:12:03.288250 kernel: acpiphp: Slot [4] registered
Mar 14 00:12:03.288269 kernel: acpiphp: Slot [5] registered
Mar 14 00:12:03.288297 kernel: acpiphp: Slot [6] registered
Mar 14 00:12:03.288316 kernel: acpiphp: Slot [7] registered
Mar 14 00:12:03.288335 kernel: acpiphp: Slot [8] registered
Mar 14 00:12:03.288354 kernel: acpiphp: Slot [9] registered
Mar 14 00:12:03.288372 kernel: acpiphp: Slot [10] registered
Mar 14 00:12:03.288391 kernel: acpiphp: Slot [11] registered
Mar 14 00:12:03.288411 kernel: acpiphp: Slot [12] registered
Mar 14 00:12:03.288429 kernel: acpiphp: Slot [13] registered
Mar 14 00:12:03.288448 kernel: acpiphp: Slot [14] registered
Mar 14 00:12:03.288466 kernel: acpiphp: Slot [15] registered
Mar 14 00:12:03.288490 kernel: acpiphp: Slot [16] registered
Mar 14 00:12:03.288509 kernel: acpiphp: Slot [17] registered
Mar 14 00:12:03.288528 kernel: acpiphp: Slot [18] registered
Mar 14 00:12:03.288546 kernel: acpiphp: Slot [19] registered
Mar 14 00:12:03.288565 kernel: acpiphp: Slot [20] registered
Mar 14 00:12:03.288584 kernel: acpiphp: Slot [21] registered
Mar 14 00:12:03.288603 kernel: acpiphp: Slot [22] registered
Mar 14 00:12:03.288622 kernel: acpiphp: Slot [23] registered
Mar 14 00:12:03.288640 kernel: acpiphp: Slot [24] registered
Mar 14 00:12:03.288663 kernel: acpiphp: Slot [25] registered
Mar 14 00:12:03.288682 kernel: acpiphp: Slot [26] registered
Mar 14 00:12:03.288701 kernel: acpiphp: Slot [27] registered
Mar 14 00:12:03.288720 kernel: acpiphp: Slot [28] registered
Mar 14 00:12:03.288739 kernel: acpiphp: Slot [29] registered
Mar 14 00:12:03.288758 kernel: acpiphp: Slot [30] registered
Mar 14 00:12:03.288776 kernel: acpiphp: Slot [31] registered
Mar 14 00:12:03.288795 kernel: PCI host bridge to bus 0000:00
Mar 14 00:12:03.289097 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window]
Mar 14 00:12:03.289301 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Mar 14 00:12:03.289489 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window]
Mar 14 00:12:03.289681 kernel: pci_bus 0000:00: root bus resource [bus 00]
Mar 14 00:12:03.289934 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000
Mar 14 00:12:03.296276 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003
Mar 14 00:12:03.296522 kernel: pci 0000:00:01.0: reg 0x10: [mem 0x80118000-0x80118fff]
Mar 14 00:12:03.296773 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
Mar 14 00:12:03.296983 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80114000-0x80117fff]
Mar 14 00:12:03.299295 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold
Mar 14 00:12:03.299543 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
Mar 14 00:12:03.299763 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80110000-0x80113fff]
Mar 14 00:12:03.299997 kernel: pci 0000:00:05.0: reg 0x18: [mem 0x80000000-0x800fffff pref]
Mar 14 00:12:03.300267 kernel: pci 0000:00:05.0: reg 0x20: [mem 0x80100000-0x8010ffff]
Mar 14 00:12:03.300488 kernel: pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold
Mar 14 00:12:03.300681 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window]
Mar 14 00:12:03.300867 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Mar 14 00:12:03.301075 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window]
Mar 14 00:12:03.301103 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Mar 14 00:12:03.301123 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Mar 14 00:12:03.301143 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Mar 14 00:12:03.301162 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Mar 14 00:12:03.301188 kernel: iommu: Default domain type: Translated
Mar 14 00:12:03.301207 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Mar 14 00:12:03.301227 kernel: efivars: Registered efivars operations
Mar 14 00:12:03.301245 kernel: vgaarb: loaded
Mar 14 00:12:03.301264 kernel: clocksource: Switched to clocksource arch_sys_counter
Mar 14 00:12:03.301283 kernel: VFS: Disk quotas dquot_6.6.0
Mar 14 00:12:03.301302 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Mar 14 00:12:03.301321 kernel: pnp: PnP ACPI init
Mar 14 00:12:03.301543 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved
Mar 14 00:12:03.301575 kernel: pnp: PnP ACPI: found 1 devices
Mar 14 00:12:03.301595 kernel: NET: Registered PF_INET protocol family
Mar 14 00:12:03.301614 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Mar 14 00:12:03.301633 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Mar 14 00:12:03.301653 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Mar 14 00:12:03.301672 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Mar 14 00:12:03.301691 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Mar 14 00:12:03.301710 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Mar 14 00:12:03.301734 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 14 00:12:03.301753 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 14 00:12:03.301773 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Mar 14 00:12:03.301793 kernel: PCI: CLS 0 bytes, default 64
Mar 14 00:12:03.301812 kernel: kvm [1]: HYP mode not available
Mar 14 00:12:03.301834 kernel: Initialise system trusted keyrings
Mar 14 00:12:03.301860 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Mar 14 00:12:03.301880 kernel: Key type asymmetric registered
Mar 14 00:12:03.301900 kernel: Asymmetric key parser 'x509' registered
Mar 14 00:12:03.301925 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Mar 14 00:12:03.301946 kernel: io scheduler mq-deadline registered
Mar 14 00:12:03.301965 kernel: io scheduler kyber registered
Mar 14 00:12:03.301984 kernel: io scheduler bfq registered
Mar 14 00:12:03.303811 kernel: pl061_gpio ARMH0061:00: PL061 GPIO chip registered
Mar 14 00:12:03.303851 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Mar 14 00:12:03.303871 kernel: ACPI: button: Power Button [PWRB]
Mar 14 00:12:03.303891 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input1
Mar 14 00:12:03.303919 kernel: ACPI: button: Sleep Button [SLPB]
Mar 14 00:12:03.303939 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Mar 14 00:12:03.303959 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37
Mar 14 00:12:03.304301 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012)
Mar 14 00:12:03.304331 kernel: printk: console [ttyS0] disabled
Mar 14 00:12:03.304351 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A
Mar 14 00:12:03.304371 kernel: printk: console [ttyS0] enabled
Mar 14 00:12:03.304390 kernel: printk: bootconsole [uart0] disabled
Mar 14 00:12:03.304409 kernel: thunder_xcv, ver 1.0
Mar 14 00:12:03.304436 kernel: thunder_bgx, ver 1.0
Mar 14 00:12:03.304455 kernel: nicpf, ver 1.0
Mar 14 00:12:03.304474 kernel: nicvf, ver 1.0
Mar 14 00:12:03.304697 kernel: rtc-efi rtc-efi.0: registered as rtc0
Mar 14 00:12:03.304902 kernel: rtc-efi rtc-efi.0: setting system clock to 2026-03-14T00:12:02 UTC (1773447122)
Mar 14 00:12:03.304928 kernel: hid: raw HID events driver (C) Jiri Kosina
Mar 14 00:12:03.304948 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 counters available
Mar 14 00:12:03.304967 kernel: watchdog: Delayed init of the lockup detector failed: -19
Mar 14 00:12:03.304992 kernel: watchdog: Hard watchdog permanently disabled
Mar 14 00:12:03.305099 kernel: NET: Registered PF_INET6 protocol family
Mar 14 00:12:03.305121 kernel: Segment Routing with IPv6
Mar 14 00:12:03.305140 kernel: In-situ OAM (IOAM) with IPv6
Mar 14 00:12:03.305159 kernel: NET: Registered PF_PACKET protocol family
Mar 14 00:12:03.305178 kernel: Key type dns_resolver registered
Mar 14 00:12:03.305197 kernel: registered taskstats version 1
Mar 14 00:12:03.306333 kernel: Loading compiled-in X.509 certificates
Mar 14 00:12:03.306368 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.127-flatcar: 16e13a4d63c54048487d2b18c824fa4694264505'
Mar 14 00:12:03.306388 kernel: Key type .fscrypt registered
Mar 14 00:12:03.306418 kernel: Key type fscrypt-provisioning registered
Mar 14 00:12:03.306438 kernel: ima: No TPM chip found, activating TPM-bypass!
Mar 14 00:12:03.306457 kernel: ima: Allocated hash algorithm: sha1
Mar 14 00:12:03.306476 kernel: ima: No architecture policies found
Mar 14 00:12:03.306496 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Mar 14 00:12:03.306515 kernel: clk: Disabling unused clocks
Mar 14 00:12:03.306534 kernel: Freeing unused kernel memory: 39424K
Mar 14 00:12:03.306553 kernel: Run /init as init process
Mar 14 00:12:03.306572 kernel: with arguments:
Mar 14 00:12:03.306596 kernel: /init
Mar 14 00:12:03.306614 kernel: with environment:
Mar 14 00:12:03.306633 kernel: HOME=/
Mar 14 00:12:03.306652 kernel: TERM=linux
Mar 14 00:12:03.306676 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Mar 14 00:12:03.306700 systemd[1]: Detected virtualization amazon.
Mar 14 00:12:03.306722 systemd[1]: Detected architecture arm64.
Mar 14 00:12:03.306747 systemd[1]: Running in initrd.
Mar 14 00:12:03.306767 systemd[1]: No hostname configured, using default hostname.
Mar 14 00:12:03.306787 systemd[1]: Hostname set to .
Mar 14 00:12:03.306809 systemd[1]: Initializing machine ID from VM UUID.
Mar 14 00:12:03.306829 systemd[1]: Queued start job for default target initrd.target.
Mar 14 00:12:03.306850 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 14 00:12:03.306871 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 14 00:12:03.306893 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Mar 14 00:12:03.306918 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 14 00:12:03.306940 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Mar 14 00:12:03.306962 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Mar 14 00:12:03.306986 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Mar 14 00:12:03.307131 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Mar 14 00:12:03.307161 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 14 00:12:03.307183 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 14 00:12:03.307212 systemd[1]: Reached target paths.target - Path Units.
Mar 14 00:12:03.307234 systemd[1]: Reached target slices.target - Slice Units.
Mar 14 00:12:03.307255 systemd[1]: Reached target swap.target - Swaps.
Mar 14 00:12:03.307276 systemd[1]: Reached target timers.target - Timer Units.
Mar 14 00:12:03.307297 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Mar 14 00:12:03.307318 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 14 00:12:03.307340 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Mar 14 00:12:03.307361 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Mar 14 00:12:03.307382 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 14 00:12:03.307408 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 14 00:12:03.307429 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 14 00:12:03.307450 systemd[1]: Reached target sockets.target - Socket Units.
Mar 14 00:12:03.307471 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Mar 14 00:12:03.307492 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 14 00:12:03.307545 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Mar 14 00:12:03.307568 systemd[1]: Starting systemd-fsck-usr.service...
Mar 14 00:12:03.307590 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 14 00:12:03.307618 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 14 00:12:03.307639 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 14 00:12:03.307660 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Mar 14 00:12:03.307681 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 14 00:12:03.307746 systemd-journald[251]: Collecting audit messages is disabled.
Mar 14 00:12:03.307811 systemd[1]: Finished systemd-fsck-usr.service.
Mar 14 00:12:03.307837 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Mar 14 00:12:03.307859 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Mar 14 00:12:03.307879 kernel: Bridge firewalling registered
Mar 14 00:12:03.307904 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 14 00:12:03.307926 systemd-journald[251]: Journal started
Mar 14 00:12:03.307963 systemd-journald[251]: Runtime Journal (/run/log/journal/ec218878868491692cb6be124d13498f) is 8.0M, max 75.3M, 67.3M free.
Mar 14 00:12:03.251263 systemd-modules-load[252]: Inserted module 'overlay'
Mar 14 00:12:03.309326 systemd-modules-load[252]: Inserted module 'br_netfilter'
Mar 14 00:12:03.326070 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 14 00:12:03.327082 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 14 00:12:03.331462 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 14 00:12:03.352370 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 14 00:12:03.364338 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 14 00:12:03.372763 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 14 00:12:03.378557 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 14 00:12:03.418781 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 14 00:12:03.422411 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 14 00:12:03.444716 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 14 00:12:03.450124 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 14 00:12:03.474586 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 14 00:12:03.495487 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Mar 14 00:12:03.536178 dracut-cmdline[291]: dracut-dracut-053
Mar 14 00:12:03.540145 systemd-resolved[284]: Positive Trust Anchors:
Mar 14 00:12:03.540944 systemd-resolved[284]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 14 00:12:03.541051 systemd-resolved[284]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 14 00:12:03.581095 dracut-cmdline[291]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=704dcf876dede90264a8630d1e6c631c8df8e652c7e2ae2e5d334e632916c980
Mar 14 00:12:03.740034 kernel: SCSI subsystem initialized
Mar 14 00:12:03.747052 kernel: Loading iSCSI transport class v2.0-870.
Mar 14 00:12:03.761066 kernel: iscsi: registered transport (tcp)
Mar 14 00:12:03.784823 kernel: iscsi: registered transport (qla4xxx)
Mar 14 00:12:03.784901 kernel: QLogic iSCSI HBA Driver
Mar 14 00:12:03.833318 kernel: random: crng init done
Mar 14 00:12:03.833731 systemd-resolved[284]: Defaulting to hostname 'linux'.
Mar 14 00:12:03.841542 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 14 00:12:03.852066 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 14 00:12:03.879555 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Mar 14 00:12:03.897440 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Mar 14 00:12:03.936062 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Mar 14 00:12:03.939165 kernel: device-mapper: uevent: version 1.0.3
Mar 14 00:12:03.939245 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Mar 14 00:12:04.014109 kernel: raid6: neonx8 gen() 6687 MB/s
Mar 14 00:12:04.032079 kernel: raid6: neonx4 gen() 6458 MB/s
Mar 14 00:12:04.049077 kernel: raid6: neonx2 gen() 5463 MB/s
Mar 14 00:12:04.066075 kernel: raid6: neonx1 gen() 3924 MB/s
Mar 14 00:12:04.084079 kernel: raid6: int64x8 gen() 3770 MB/s
Mar 14 00:12:04.101084 kernel: raid6: int64x4 gen() 3697 MB/s
Mar 14 00:12:04.118075 kernel: raid6: int64x2 gen() 3571 MB/s
Mar 14 00:12:04.135078 kernel: raid6: int64x1 gen() 2752 MB/s
Mar 14 00:12:04.135168 kernel: raid6: using algorithm neonx8 gen() 6687 MB/s
Mar 14 00:12:04.155701 kernel: raid6: .... xor() 4790 MB/s, rmw enabled
Mar 14 00:12:04.155811 kernel: raid6: using neon recovery algorithm
Mar 14 00:12:04.165642 kernel: xor: measuring software checksum speed
Mar 14 00:12:04.165745 kernel: 8regs : 11007 MB/sec
Mar 14 00:12:04.166980 kernel: 32regs : 11944 MB/sec
Mar 14 00:12:04.169467 kernel: arm64_neon : 9188 MB/sec
Mar 14 00:12:04.169543 kernel: xor: using function: 32regs (11944 MB/sec)
Mar 14 00:12:04.256067 kernel: Btrfs loaded, zoned=no, fsverity=no
Mar 14 00:12:04.278105 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Mar 14 00:12:04.291519 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 14 00:12:04.329434 systemd-udevd[472]: Using default interface naming scheme 'v255'.
Mar 14 00:12:04.337926 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 14 00:12:04.362331 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Mar 14 00:12:04.388257 dracut-pre-trigger[483]: rd.md=0: removing MD RAID activation
Mar 14 00:12:04.455144 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 14 00:12:04.468492 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 14 00:12:04.595334 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 14 00:12:04.608433 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Mar 14 00:12:04.649127 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Mar 14 00:12:04.666890 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 14 00:12:04.671951 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 14 00:12:04.674888 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 14 00:12:04.691716 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Mar 14 00:12:04.731725 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Mar 14 00:12:04.809051 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Mar 14 00:12:04.809156 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012)
Mar 14 00:12:04.813603 kernel: ena 0000:00:05.0: ENA device version: 0.10
Mar 14 00:12:04.813965 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
Mar 14 00:12:04.824039 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80110000, mac addr 06:ac:95:91:bb:5f
Mar 14 00:12:04.828542 (udev-worker)[516]: Network interface NamePolicy= disabled on kernel command line.
Mar 14 00:12:04.849274 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 14 00:12:04.849425 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 14 00:12:04.860594 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 14 00:12:04.864087 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 14 00:12:04.864211 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 14 00:12:04.867490 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Mar 14 00:12:04.885739 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 14 00:12:04.903063 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35
Mar 14 00:12:04.905084 kernel: nvme nvme0: pci function 0000:00:04.0
Mar 14 00:12:04.915252 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Mar 14 00:12:04.922103 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Mar 14 00:12:04.922170 kernel: GPT:9289727 != 33554431
Mar 14 00:12:04.922198 kernel: GPT:Alternate GPT header not at the end of the disk.
Mar 14 00:12:04.922225 kernel: GPT:9289727 != 33554431
Mar 14 00:12:04.922250 kernel: GPT: Use GNU Parted to correct GPT errors.
Mar 14 00:12:04.922276 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Mar 14 00:12:04.929365 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 14 00:12:04.946481 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 14 00:12:05.004739 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 14 00:12:05.056047 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/nvme0n1p6 scanned by (udev-worker) (539)
Mar 14 00:12:05.061038 kernel: BTRFS: device fsid df62721e-ebc0-40bc-8956-1227b067a773 devid 1 transid 37 /dev/nvme0n1p3 scanned by (udev-worker) (516)
Mar 14 00:12:05.129695 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT.
Mar 14 00:12:05.175179 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM.
Mar 14 00:12:05.209947 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Mar 14 00:12:05.226249 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A.
Mar 14 00:12:05.237518 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A.
Mar 14 00:12:05.254265 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Mar 14 00:12:05.279484 disk-uuid[662]: Primary Header is updated.
Mar 14 00:12:05.279484 disk-uuid[662]: Secondary Entries is updated.
Mar 14 00:12:05.279484 disk-uuid[662]: Secondary Header is updated.
Mar 14 00:12:05.295146 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Mar 14 00:12:05.305062 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Mar 14 00:12:05.315061 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Mar 14 00:12:06.318040 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Mar 14 00:12:06.320630 disk-uuid[663]: The operation has completed successfully.
Mar 14 00:12:06.510764 systemd[1]: disk-uuid.service: Deactivated successfully.
Mar 14 00:12:06.510994 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Mar 14 00:12:06.548278 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Mar 14 00:12:06.568027 sh[1010]: Success
Mar 14 00:12:06.593063 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Mar 14 00:12:06.704560 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Mar 14 00:12:06.721340 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Mar 14 00:12:06.730859 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Mar 14 00:12:06.751082 kernel: BTRFS info (device dm-0): first mount of filesystem df62721e-ebc0-40bc-8956-1227b067a773
Mar 14 00:12:06.751144 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Mar 14 00:12:06.753116 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Mar 14 00:12:06.754600 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Mar 14 00:12:06.755855 kernel: BTRFS info (device dm-0): using free space tree
Mar 14 00:12:06.904035 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Mar 14 00:12:06.918125 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Mar 14 00:12:06.918794 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Mar 14 00:12:06.933462 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Mar 14 00:12:06.943433 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Mar 14 00:12:06.966398 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 46234e4d-1d66-4ce6-8ed2-e270b1beee70
Mar 14 00:12:06.966492 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Mar 14 00:12:06.967974 kernel: BTRFS info (device nvme0n1p6): using free space tree
Mar 14 00:12:06.986054 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Mar 14 00:12:07.007644 systemd[1]: mnt-oem.mount: Deactivated successfully.
Mar 14 00:12:07.016067 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 46234e4d-1d66-4ce6-8ed2-e270b1beee70
Mar 14 00:12:07.030375 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Mar 14 00:12:07.048335 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Mar 14 00:12:07.195538 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 14 00:12:07.220403 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Mar 14 00:12:07.269127 systemd-networkd[1203]: lo: Link UP
Mar 14 00:12:07.269655 systemd-networkd[1203]: lo: Gained carrier
Mar 14 00:12:07.273274 systemd-networkd[1203]: Enumeration completed
Mar 14 00:12:07.274895 systemd-networkd[1203]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 14 00:12:07.274903 systemd-networkd[1203]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 14 00:12:07.276212 systemd[1]: Started systemd-networkd.service - Network Configuration.
Mar 14 00:12:07.280559 systemd[1]: Reached target network.target - Network.
Mar 14 00:12:07.284987 systemd-networkd[1203]: eth0: Link UP
Mar 14 00:12:07.284996 systemd-networkd[1203]: eth0: Gained carrier
Mar 14 00:12:07.285048 systemd-networkd[1203]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 14 00:12:07.320161 systemd-networkd[1203]: eth0: DHCPv4 address 172.31.24.247/20, gateway 172.31.16.1 acquired from 172.31.16.1
Mar 14 00:12:07.563504 ignition[1113]: Ignition 2.19.0
Mar 14 00:12:07.564138 ignition[1113]: Stage: fetch-offline
Mar 14 00:12:07.565859 ignition[1113]: no configs at "/usr/lib/ignition/base.d"
Mar 14 00:12:07.565885 ignition[1113]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Mar 14 00:12:07.568577 ignition[1113]: Ignition finished successfully
Mar 14 00:12:07.579881 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 14 00:12:07.593532 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Mar 14 00:12:07.626293 ignition[1212]: Ignition 2.19.0
Mar 14 00:12:07.626315 ignition[1212]: Stage: fetch
Mar 14 00:12:07.626979 ignition[1212]: no configs at "/usr/lib/ignition/base.d"
Mar 14 00:12:07.627561 ignition[1212]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Mar 14 00:12:07.627809 ignition[1212]: PUT http://169.254.169.254/latest/api/token: attempt #1
Mar 14 00:12:07.648146 ignition[1212]: PUT result: OK
Mar 14 00:12:07.652298 ignition[1212]: parsed url from cmdline: ""
Mar 14 00:12:07.652316 ignition[1212]: no config URL provided
Mar 14 00:12:07.652335 ignition[1212]: reading system config file "/usr/lib/ignition/user.ign"
Mar 14 00:12:07.652363 ignition[1212]: no config at "/usr/lib/ignition/user.ign"
Mar 14 00:12:07.652402 ignition[1212]: PUT http://169.254.169.254/latest/api/token: attempt #1
Mar 14 00:12:07.663482 ignition[1212]: PUT result: OK
Mar 14 00:12:07.663595 ignition[1212]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1
Mar 14 00:12:07.668893 ignition[1212]: GET result: OK
Mar 14 00:12:07.669157 ignition[1212]: parsing config with SHA512: bcad39a425173138a2280d833c496b0f98e9e5af648d8559f96b3a81e01f3c2bec3a16fb52eb8bd3311753d5a3a8b1fc0c1bad59e389678343d54ba929d2ac09
Mar 14 00:12:07.681392 unknown[1212]: fetched base config from "system"
Mar 14 00:12:07.682324 ignition[1212]: fetch: fetch complete
Mar 14 00:12:07.681417 unknown[1212]: fetched base config from "system"
Mar 14 00:12:07.682338 ignition[1212]: fetch: fetch passed
Mar 14 00:12:07.681432 unknown[1212]: fetched user config from "aws"
Mar 14 00:12:07.682453 ignition[1212]: Ignition finished successfully
Mar 14 00:12:07.687581 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Mar 14 00:12:07.707438 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Mar 14 00:12:07.741471 ignition[1219]: Ignition 2.19.0
Mar 14 00:12:07.744957 ignition[1219]: Stage: kargs
Mar 14 00:12:07.745692 ignition[1219]: no configs at "/usr/lib/ignition/base.d"
Mar 14 00:12:07.745720 ignition[1219]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Mar 14 00:12:07.745892 ignition[1219]: PUT http://169.254.169.254/latest/api/token: attempt #1
Mar 14 00:12:07.757832 ignition[1219]: PUT result: OK
Mar 14 00:12:07.763781 ignition[1219]: kargs: kargs passed
Mar 14 00:12:07.763918 ignition[1219]: Ignition finished successfully
Mar 14 00:12:07.769126 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Mar 14 00:12:07.781466 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Mar 14 00:12:07.810205 ignition[1226]: Ignition 2.19.0
Mar 14 00:12:07.810225 ignition[1226]: Stage: disks
Mar 14 00:12:07.810882 ignition[1226]: no configs at "/usr/lib/ignition/base.d"
Mar 14 00:12:07.810911 ignition[1226]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Mar 14 00:12:07.811793 ignition[1226]: PUT http://169.254.169.254/latest/api/token: attempt #1
Mar 14 00:12:07.828093 ignition[1226]: PUT result: OK
Mar 14 00:12:07.835334 ignition[1226]: disks: disks passed
Mar 14 00:12:07.835450 ignition[1226]: Ignition finished successfully
Mar 14 00:12:07.839682 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Mar 14 00:12:07.850569 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Mar 14 00:12:07.856681 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Mar 14 00:12:07.860363 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 14 00:12:07.863582 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 14 00:12:07.866371 systemd[1]: Reached target basic.target - Basic System.
Mar 14 00:12:07.881569 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Mar 14 00:12:07.936825 systemd-fsck[1234]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Mar 14 00:12:07.943135 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Mar 14 00:12:07.958202 systemd[1]: Mounting sysroot.mount - /sysroot...
Mar 14 00:12:08.057046 kernel: EXT4-fs (nvme0n1p9): mounted filesystem af566013-4e57-4e7f-9689-a2e15898536d r/w with ordered data mode. Quota mode: none.
Mar 14 00:12:08.058700 systemd[1]: Mounted sysroot.mount - /sysroot.
Mar 14 00:12:08.061995 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Mar 14 00:12:08.082221 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 14 00:12:08.093539 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Mar 14 00:12:08.096736 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Mar 14 00:12:08.096838 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Mar 14 00:12:08.096899 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 14 00:12:08.126735 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/nvme0n1p6 scanned by mount (1253)
Mar 14 00:12:08.126820 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 46234e4d-1d66-4ce6-8ed2-e270b1beee70
Mar 14 00:12:08.128842 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Mar 14 00:12:08.132386 kernel: BTRFS info (device nvme0n1p6): using free space tree
Mar 14 00:12:08.132523 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Mar 14 00:12:08.145310 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Mar 14 00:12:08.158063 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Mar 14 00:12:08.159833 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 14 00:12:08.560003 initrd-setup-root[1277]: cut: /sysroot/etc/passwd: No such file or directory
Mar 14 00:12:08.592330 initrd-setup-root[1284]: cut: /sysroot/etc/group: No such file or directory
Mar 14 00:12:08.616728 initrd-setup-root[1291]: cut: /sysroot/etc/shadow: No such file or directory
Mar 14 00:12:08.632107 initrd-setup-root[1298]: cut: /sysroot/etc/gshadow: No such file or directory
Mar 14 00:12:08.868299 systemd-networkd[1203]: eth0: Gained IPv6LL
Mar 14 00:12:09.032421 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Mar 14 00:12:09.043308 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Mar 14 00:12:09.048928 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Mar 14 00:12:09.085131 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 46234e4d-1d66-4ce6-8ed2-e270b1beee70
Mar 14 00:12:09.085111 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Mar 14 00:12:09.127141 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Mar 14 00:12:09.147046 ignition[1366]: INFO : Ignition 2.19.0
Mar 14 00:12:09.147046 ignition[1366]: INFO : Stage: mount
Mar 14 00:12:09.147046 ignition[1366]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 14 00:12:09.147046 ignition[1366]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Mar 14 00:12:09.158084 ignition[1366]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Mar 14 00:12:09.162276 ignition[1366]: INFO : PUT result: OK
Mar 14 00:12:09.172668 ignition[1366]: INFO : mount: mount passed
Mar 14 00:12:09.172668 ignition[1366]: INFO : Ignition finished successfully
Mar 14 00:12:09.177634 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Mar 14 00:12:09.189226 systemd[1]: Starting ignition-files.service - Ignition (files)...
Mar 14 00:12:09.224472 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 14 00:12:09.248061 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 scanned by mount (1377)
Mar 14 00:12:09.248144 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 46234e4d-1d66-4ce6-8ed2-e270b1beee70
Mar 14 00:12:09.248174 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Mar 14 00:12:09.251003 kernel: BTRFS info (device nvme0n1p6): using free space tree
Mar 14 00:12:09.256050 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Mar 14 00:12:09.260320 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 14 00:12:09.301410 ignition[1394]: INFO : Ignition 2.19.0
Mar 14 00:12:09.303719 ignition[1394]: INFO : Stage: files
Mar 14 00:12:09.303719 ignition[1394]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 14 00:12:09.303719 ignition[1394]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Mar 14 00:12:09.303719 ignition[1394]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Mar 14 00:12:09.317190 ignition[1394]: INFO : PUT result: OK
Mar 14 00:12:09.323024 ignition[1394]: DEBUG : files: compiled without relabeling support, skipping
Mar 14 00:12:09.336404 ignition[1394]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Mar 14 00:12:09.336404 ignition[1394]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Mar 14 00:12:09.384561 ignition[1394]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Mar 14 00:12:09.389513 ignition[1394]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Mar 14 00:12:09.394470 unknown[1394]: wrote ssh authorized keys file for user: core
Mar 14 00:12:09.397852 ignition[1394]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Mar 14 00:12:09.409376 ignition[1394]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
Mar 14 00:12:09.414158 ignition[1394]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
Mar 14 00:12:09.414158 ignition[1394]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Mar 14 00:12:09.414158 ignition[1394]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1
Mar 14 00:12:09.513287 ignition[1394]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Mar 14 00:12:09.717377 ignition[1394]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Mar 14 00:12:09.717377 ignition[1394]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Mar 14 00:12:09.728129 ignition[1394]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1
Mar 14 00:12:10.008882 ignition[1394]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK
Mar 14 00:12:10.280553 ignition[1394]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Mar 14 00:12:10.280553 ignition[1394]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/install.sh"
Mar 14 00:12:10.292489 ignition[1394]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/install.sh"
Mar 14 00:12:10.292489 ignition[1394]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nginx.yaml"
Mar 14 00:12:10.292489 ignition[1394]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nginx.yaml"
Mar 14 00:12:10.292489 ignition[1394]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 14 00:12:10.292489 ignition[1394]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 14 00:12:10.292489 ignition[1394]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 14 00:12:10.292489 ignition[1394]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 14 00:12:10.292489 ignition[1394]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/flatcar/update.conf"
Mar 14 00:12:10.292489 ignition[1394]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Mar 14 00:12:10.292489 ignition[1394]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-arm64.raw"
Mar 14 00:12:10.292489 ignition[1394]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-arm64.raw"
Mar 14 00:12:10.292489 ignition[1394]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-arm64.raw"
Mar 14 00:12:10.292489 ignition[1394]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.8-arm64.raw: attempt #1
Mar 14 00:12:10.752072 ignition[1394]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET result: OK
Mar 14 00:12:11.187619 ignition[1394]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-arm64.raw"
Mar 14 00:12:11.198925 ignition[1394]: INFO : files: op(d): [started] processing unit "containerd.service"
Mar 14 00:12:11.198925 ignition[1394]: INFO : files: op(d): op(e): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Mar 14 00:12:11.198925 ignition[1394]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Mar 14 00:12:11.198925 ignition[1394]: INFO : files: op(d): [finished] processing unit "containerd.service"
Mar 14 00:12:11.198925 ignition[1394]: INFO : files: op(f): [started] processing unit "prepare-helm.service"
Mar 14 00:12:11.198925 ignition[1394]: INFO : files: op(f): op(10): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 14 00:12:11.198925 ignition[1394]: INFO : files: op(f): op(10): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 14 00:12:11.198925 ignition[1394]: INFO : files: op(f): [finished] processing unit "prepare-helm.service"
Mar 14 00:12:11.198925 ignition[1394]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Mar 14 00:12:11.198925 ignition[1394]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Mar 14 00:12:11.198925 ignition[1394]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Mar 14 00:12:11.198925 ignition[1394]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Mar 14 00:12:11.198925 ignition[1394]: INFO : files: files passed
Mar 14 00:12:11.198925 ignition[1394]: INFO : Ignition finished successfully
Mar 14 00:12:11.222939 systemd[1]: Finished ignition-files.service - Ignition (files).
Mar 14 00:12:11.277373 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Mar 14 00:12:11.290310 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Mar 14 00:12:11.302307 systemd[1]: ignition-quench.service: Deactivated successfully.
Mar 14 00:12:11.305326 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Mar 14 00:12:11.337753 initrd-setup-root-after-ignition[1423]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 14 00:12:11.337753 initrd-setup-root-after-ignition[1423]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Mar 14 00:12:11.349631 initrd-setup-root-after-ignition[1427]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 14 00:12:11.357164 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 14 00:12:11.362083 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Mar 14 00:12:11.380417 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Mar 14 00:12:11.447088 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Mar 14 00:12:11.450470 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Mar 14 00:12:11.458300 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Mar 14 00:12:11.461963 systemd[1]: Reached target initrd.target - Initrd Default Target.
Mar 14 00:12:11.471831 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Mar 14 00:12:11.483526 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Mar 14 00:12:11.527969 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 14 00:12:11.550178 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Mar 14 00:12:11.579476 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Mar 14 00:12:11.586805 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 14 00:12:11.600253 systemd[1]: Stopped target timers.target - Timer Units.
Mar 14 00:12:11.615727 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Mar 14 00:12:11.616053 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 14 00:12:11.625887 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Mar 14 00:12:11.629763 systemd[1]: Stopped target basic.target - Basic System.
Mar 14 00:12:11.637579 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Mar 14 00:12:11.641448 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 14 00:12:11.650617 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Mar 14 00:12:11.653940 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Mar 14 00:12:11.657332 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 14 00:12:11.661066 systemd[1]: Stopped target sysinit.target - System Initialization.
Mar 14 00:12:11.676662 systemd[1]: Stopped target local-fs.target - Local File Systems.
Mar 14 00:12:11.682958 systemd[1]: Stopped target swap.target - Swaps.
Mar 14 00:12:11.685875 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Mar 14 00:12:11.686175 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Mar 14 00:12:11.698285 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Mar 14 00:12:11.702157 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 14 00:12:11.705755 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Mar 14 00:12:11.706054 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 14 00:12:11.724110 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Mar 14 00:12:11.724729 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Mar 14 00:12:11.733981 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Mar 14 00:12:11.734633 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 14 00:12:11.746581 systemd[1]: ignition-files.service: Deactivated successfully.
Mar 14 00:12:11.746855 systemd[1]: Stopped ignition-files.service - Ignition (files).
Mar 14 00:12:11.765515 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Mar 14 00:12:11.774621 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Mar 14 00:12:11.775435 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Mar 14 00:12:11.787947 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 14 00:12:11.797366 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Mar 14 00:12:11.797628 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 14 00:12:11.817632 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Mar 14 00:12:11.820630 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Mar 14 00:12:11.835204 ignition[1447]: INFO : Ignition 2.19.0 Mar 14 00:12:11.835204 ignition[1447]: INFO : Stage: umount Mar 14 00:12:11.843432 ignition[1447]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 14 00:12:11.843432 ignition[1447]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Mar 14 00:12:11.843432 ignition[1447]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Mar 14 00:12:11.843432 ignition[1447]: INFO : PUT result: OK Mar 14 00:12:11.864153 ignition[1447]: INFO : umount: umount passed Mar 14 00:12:11.864153 ignition[1447]: INFO : Ignition finished successfully Mar 14 00:12:11.867414 systemd[1]: ignition-mount.service: Deactivated successfully. Mar 14 00:12:11.867677 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Mar 14 00:12:11.874294 systemd[1]: ignition-disks.service: Deactivated successfully. Mar 14 00:12:11.874414 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Mar 14 00:12:11.878147 systemd[1]: ignition-kargs.service: Deactivated successfully. Mar 14 00:12:11.878261 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Mar 14 00:12:11.882285 systemd[1]: ignition-fetch.service: Deactivated successfully. Mar 14 00:12:11.882394 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Mar 14 00:12:11.886094 systemd[1]: Stopped target network.target - Network. Mar 14 00:12:11.886289 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Mar 14 00:12:11.887075 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Mar 14 00:12:11.887533 systemd[1]: Stopped target paths.target - Path Units. Mar 14 00:12:11.890688 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Mar 14 00:12:11.912738 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Mar 14 00:12:11.919985 systemd[1]: Stopped target slices.target - Slice Units. 
Mar 14 00:12:11.922821 systemd[1]: Stopped target sockets.target - Socket Units. Mar 14 00:12:11.925919 systemd[1]: iscsid.socket: Deactivated successfully. Mar 14 00:12:11.926040 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Mar 14 00:12:11.929341 systemd[1]: iscsiuio.socket: Deactivated successfully. Mar 14 00:12:11.929435 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Mar 14 00:12:11.932659 systemd[1]: ignition-setup.service: Deactivated successfully. Mar 14 00:12:11.932783 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Mar 14 00:12:11.936198 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Mar 14 00:12:11.936314 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Mar 14 00:12:11.940567 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Mar 14 00:12:11.944748 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Mar 14 00:12:11.958766 systemd[1]: sysroot-boot.mount: Deactivated successfully. Mar 14 00:12:11.963767 systemd-networkd[1203]: eth0: DHCPv6 lease lost Mar 14 00:12:11.964610 systemd[1]: sysroot-boot.service: Deactivated successfully. Mar 14 00:12:11.964846 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Mar 14 00:12:11.968357 systemd[1]: initrd-setup-root.service: Deactivated successfully. Mar 14 00:12:11.968477 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Mar 14 00:12:11.993699 systemd[1]: systemd-networkd.service: Deactivated successfully. Mar 14 00:12:11.994045 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Mar 14 00:12:11.999637 systemd[1]: systemd-networkd.socket: Deactivated successfully. Mar 14 00:12:11.999742 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Mar 14 00:12:12.054499 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Mar 14 00:12:12.092153 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. 
Mar 14 00:12:12.092306 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Mar 14 00:12:12.096782 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 14 00:12:12.101205 systemd[1]: systemd-resolved.service: Deactivated successfully. Mar 14 00:12:12.101445 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Mar 14 00:12:12.109310 systemd[1]: systemd-sysctl.service: Deactivated successfully. Mar 14 00:12:12.109527 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Mar 14 00:12:12.141723 systemd[1]: systemd-modules-load.service: Deactivated successfully. Mar 14 00:12:12.144228 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Mar 14 00:12:12.148384 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Mar 14 00:12:12.148515 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 14 00:12:12.182626 systemd[1]: systemd-udevd.service: Deactivated successfully. Mar 14 00:12:12.184976 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 14 00:12:12.195064 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Mar 14 00:12:12.195199 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Mar 14 00:12:12.198660 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Mar 14 00:12:12.198745 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Mar 14 00:12:12.202231 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Mar 14 00:12:12.202354 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Mar 14 00:12:12.206906 systemd[1]: dracut-cmdline.service: Deactivated successfully. Mar 14 00:12:12.207267 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Mar 14 00:12:12.214542 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. 
Mar 14 00:12:12.214682 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 14 00:12:12.250325 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Mar 14 00:12:12.264202 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Mar 14 00:12:12.264347 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Mar 14 00:12:12.272289 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Mar 14 00:12:12.272419 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Mar 14 00:12:12.276657 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Mar 14 00:12:12.276775 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Mar 14 00:12:12.281398 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 14 00:12:12.281531 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 14 00:12:12.286196 systemd[1]: network-cleanup.service: Deactivated successfully. Mar 14 00:12:12.286405 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Mar 14 00:12:12.290128 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Mar 14 00:12:12.290357 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Mar 14 00:12:12.312638 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Mar 14 00:12:12.355331 systemd[1]: Starting initrd-switch-root.service - Switch Root... Mar 14 00:12:12.403124 systemd[1]: Switching root. Mar 14 00:12:12.447397 systemd-journald[251]: Journal stopped Mar 14 00:12:15.385732 systemd-journald[251]: Received SIGTERM from PID 1 (systemd). 
Mar 14 00:12:15.385899 kernel: SELinux: policy capability network_peer_controls=1 Mar 14 00:12:15.385954 kernel: SELinux: policy capability open_perms=1 Mar 14 00:12:15.385996 kernel: SELinux: policy capability extended_socket_class=1 Mar 14 00:12:15.386090 kernel: SELinux: policy capability always_check_network=0 Mar 14 00:12:15.386132 kernel: SELinux: policy capability cgroup_seclabel=1 Mar 14 00:12:15.386166 kernel: SELinux: policy capability nnp_nosuid_transition=1 Mar 14 00:12:15.386198 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Mar 14 00:12:15.386231 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Mar 14 00:12:15.386267 kernel: audit: type=1403 audit(1773447133.222:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Mar 14 00:12:15.386303 systemd[1]: Successfully loaded SELinux policy in 65.689ms. Mar 14 00:12:15.386374 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 25.979ms. Mar 14 00:12:15.386415 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Mar 14 00:12:15.386451 systemd[1]: Detected virtualization amazon. Mar 14 00:12:15.386492 systemd[1]: Detected architecture arm64. Mar 14 00:12:15.386523 systemd[1]: Detected first boot. Mar 14 00:12:15.386558 systemd[1]: Initializing machine ID from VM UUID. Mar 14 00:12:15.386592 zram_generator::config[1506]: No configuration found. Mar 14 00:12:15.386644 systemd[1]: Populated /etc with preset unit settings. Mar 14 00:12:15.386680 systemd[1]: Queued start job for default target multi-user.target. Mar 14 00:12:15.386711 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. 
Mar 14 00:12:15.386745 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Mar 14 00:12:15.386788 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Mar 14 00:12:15.386825 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Mar 14 00:12:15.386860 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Mar 14 00:12:15.386894 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Mar 14 00:12:15.386926 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Mar 14 00:12:15.386958 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Mar 14 00:12:15.386989 systemd[1]: Created slice user.slice - User and Session Slice. Mar 14 00:12:15.388550 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 14 00:12:15.388616 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Mar 14 00:12:15.388653 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Mar 14 00:12:15.388697 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Mar 14 00:12:15.388732 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Mar 14 00:12:15.388771 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Mar 14 00:12:15.388811 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Mar 14 00:12:15.388843 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 14 00:12:15.388875 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Mar 14 00:12:15.388907 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. 
Mar 14 00:12:15.388950 systemd[1]: Reached target remote-fs.target - Remote File Systems. Mar 14 00:12:15.388988 systemd[1]: Reached target slices.target - Slice Units. Mar 14 00:12:15.391672 systemd[1]: Reached target swap.target - Swaps. Mar 14 00:12:15.391839 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Mar 14 00:12:15.392977 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Mar 14 00:12:15.401179 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Mar 14 00:12:15.401220 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Mar 14 00:12:15.401259 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Mar 14 00:12:15.401296 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Mar 14 00:12:15.401344 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Mar 14 00:12:15.401379 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Mar 14 00:12:15.401429 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Mar 14 00:12:15.401468 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Mar 14 00:12:15.401502 systemd[1]: Mounting media.mount - External Media Directory... Mar 14 00:12:15.401538 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Mar 14 00:12:15.401575 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Mar 14 00:12:15.401610 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Mar 14 00:12:15.401653 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Mar 14 00:12:15.401686 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 14 00:12:15.401720 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... 
Mar 14 00:12:15.401756 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Mar 14 00:12:15.401791 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 14 00:12:15.401826 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Mar 14 00:12:15.401860 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 14 00:12:15.401896 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Mar 14 00:12:15.401932 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 14 00:12:15.401977 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Mar 14 00:12:15.402061 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Mar 14 00:12:15.402110 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.) Mar 14 00:12:15.402144 systemd[1]: Starting systemd-journald.service - Journal Service... Mar 14 00:12:15.402177 kernel: fuse: init (API version 7.39) Mar 14 00:12:15.402210 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Mar 14 00:12:15.402245 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Mar 14 00:12:15.402283 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Mar 14 00:12:15.402323 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Mar 14 00:12:15.402358 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Mar 14 00:12:15.402393 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Mar 14 00:12:15.402426 systemd[1]: Mounted media.mount - External Media Directory. 
Mar 14 00:12:15.402462 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Mar 14 00:12:15.402497 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Mar 14 00:12:15.402532 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Mar 14 00:12:15.402567 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Mar 14 00:12:15.402602 systemd[1]: modprobe@configfs.service: Deactivated successfully. Mar 14 00:12:15.402647 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Mar 14 00:12:15.402681 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 14 00:12:15.402716 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 14 00:12:15.402749 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 14 00:12:15.402863 systemd-journald[1606]: Collecting audit messages is disabled. Mar 14 00:12:15.402932 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 14 00:12:15.402970 systemd[1]: modprobe@fuse.service: Deactivated successfully. Mar 14 00:12:15.406573 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Mar 14 00:12:15.406666 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Mar 14 00:12:15.406705 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Mar 14 00:12:15.406746 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Mar 14 00:12:15.406781 systemd[1]: Reached target network-pre.target - Preparation for Network. Mar 14 00:12:15.406826 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Mar 14 00:12:15.406865 systemd-journald[1606]: Journal started Mar 14 00:12:15.406927 systemd-journald[1606]: Runtime Journal (/run/log/journal/ec218878868491692cb6be124d13498f) is 8.0M, max 75.3M, 67.3M free. 
Mar 14 00:12:15.435076 kernel: ACPI: bus type drm_connector registered Mar 14 00:12:15.435191 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Mar 14 00:12:15.455333 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Mar 14 00:12:15.455460 kernel: loop: module loaded Mar 14 00:12:15.491526 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Mar 14 00:12:15.509695 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 14 00:12:15.538424 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Mar 14 00:12:15.572056 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Mar 14 00:12:15.590058 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Mar 14 00:12:15.607285 systemd[1]: Started systemd-journald.service - Journal Service. Mar 14 00:12:15.614964 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Mar 14 00:12:15.625610 systemd[1]: modprobe@drm.service: Deactivated successfully. Mar 14 00:12:15.632152 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Mar 14 00:12:15.637714 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 14 00:12:15.640569 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 14 00:12:15.644182 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Mar 14 00:12:15.648656 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Mar 14 00:12:15.653844 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Mar 14 00:12:15.729753 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. 
Mar 14 00:12:15.751371 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Mar 14 00:12:15.755319 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Mar 14 00:12:15.756210 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Mar 14 00:12:15.819607 systemd-journald[1606]: Time spent on flushing to /var/log/journal/ec218878868491692cb6be124d13498f is 119.440ms for 896 entries. Mar 14 00:12:15.819607 systemd-journald[1606]: System Journal (/var/log/journal/ec218878868491692cb6be124d13498f) is 8.0M, max 195.6M, 187.6M free. Mar 14 00:12:15.954876 systemd-journald[1606]: Received client request to flush runtime journal. Mar 14 00:12:15.823620 systemd-tmpfiles[1639]: ACLs are not supported, ignoring. Mar 14 00:12:15.823646 systemd-tmpfiles[1639]: ACLs are not supported, ignoring. Mar 14 00:12:15.854905 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Mar 14 00:12:15.872507 systemd[1]: Starting systemd-sysusers.service - Create System Users... Mar 14 00:12:15.892664 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Mar 14 00:12:15.909356 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Mar 14 00:12:15.959406 udevadm[1673]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Mar 14 00:12:15.972542 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Mar 14 00:12:15.997252 systemd[1]: Finished systemd-sysusers.service - Create System Users. Mar 14 00:12:16.010505 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Mar 14 00:12:16.071190 systemd-tmpfiles[1680]: ACLs are not supported, ignoring. 
Mar 14 00:12:16.072207 systemd-tmpfiles[1680]: ACLs are not supported, ignoring. Mar 14 00:12:16.093798 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Mar 14 00:12:16.843934 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Mar 14 00:12:16.865386 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 14 00:12:16.938291 systemd-udevd[1686]: Using default interface naming scheme 'v255'. Mar 14 00:12:17.002672 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 14 00:12:17.036337 systemd[1]: Starting systemd-networkd.service - Network Configuration... Mar 14 00:12:17.097747 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Mar 14 00:12:17.185357 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0. Mar 14 00:12:17.243248 (udev-worker)[1690]: Network interface NamePolicy= disabled on kernel command line. Mar 14 00:12:17.325304 systemd[1]: Started systemd-userdbd.service - User Database Manager. Mar 14 00:12:17.614219 systemd-networkd[1696]: lo: Link UP Mar 14 00:12:17.614243 systemd-networkd[1696]: lo: Gained carrier Mar 14 00:12:17.614756 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 14 00:12:17.622824 systemd-networkd[1696]: Enumeration completed Mar 14 00:12:17.623171 systemd[1]: Started systemd-networkd.service - Network Configuration. Mar 14 00:12:17.628399 systemd-networkd[1696]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 14 00:12:17.628425 systemd-networkd[1696]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Mar 14 00:12:17.633889 systemd-networkd[1696]: eth0: Link UP Mar 14 00:12:17.634375 systemd-networkd[1696]: eth0: Gained carrier Mar 14 00:12:17.634429 systemd-networkd[1696]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 14 00:12:17.644173 systemd-networkd[1696]: eth0: DHCPv4 address 172.31.24.247/20, gateway 172.31.16.1 acquired from 172.31.16.1 Mar 14 00:12:17.644973 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Mar 14 00:12:17.703072 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 37 scanned by (udev-worker) (1708) Mar 14 00:12:17.873410 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 14 00:12:17.967176 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Mar 14 00:12:18.014192 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Mar 14 00:12:18.026341 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Mar 14 00:12:18.068105 lvm[1817]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Mar 14 00:12:18.111034 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Mar 14 00:12:18.116631 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Mar 14 00:12:18.128345 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Mar 14 00:12:18.148072 lvm[1820]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Mar 14 00:12:18.188126 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Mar 14 00:12:18.192304 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. 
Mar 14 00:12:18.195902 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Mar 14 00:12:18.195976 systemd[1]: Reached target local-fs.target - Local File Systems. Mar 14 00:12:18.199296 systemd[1]: Reached target machines.target - Containers. Mar 14 00:12:18.204752 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Mar 14 00:12:18.213657 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Mar 14 00:12:18.229466 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Mar 14 00:12:18.233195 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 14 00:12:18.248396 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Mar 14 00:12:18.257902 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Mar 14 00:12:18.271454 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Mar 14 00:12:18.284710 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Mar 14 00:12:18.326210 kernel: loop0: detected capacity change from 0 to 209336 Mar 14 00:12:18.334840 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Mar 14 00:12:18.369754 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Mar 14 00:12:18.373737 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. 
Mar 14 00:12:18.646310 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Mar 14 00:12:18.713452 kernel: loop1: detected capacity change from 0 to 114432 Mar 14 00:12:18.834052 kernel: loop2: detected capacity change from 0 to 52536 Mar 14 00:12:18.901060 kernel: loop3: detected capacity change from 0 to 114328 Mar 14 00:12:19.040144 kernel: loop4: detected capacity change from 0 to 209336 Mar 14 00:12:19.068138 kernel: loop5: detected capacity change from 0 to 114432 Mar 14 00:12:19.093082 kernel: loop6: detected capacity change from 0 to 52536 Mar 14 00:12:19.113050 kernel: loop7: detected capacity change from 0 to 114328 Mar 14 00:12:19.129294 (sd-merge)[1844]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'. Mar 14 00:12:19.130986 (sd-merge)[1844]: Merged extensions into '/usr'. Mar 14 00:12:19.163361 systemd[1]: Reloading requested from client PID 1827 ('systemd-sysext') (unit systemd-sysext.service)... Mar 14 00:12:19.163390 systemd[1]: Reloading... Mar 14 00:12:19.236329 systemd-networkd[1696]: eth0: Gained IPv6LL Mar 14 00:12:19.360282 zram_generator::config[1873]: No configuration found. Mar 14 00:12:19.742180 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 14 00:12:19.750953 ldconfig[1824]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Mar 14 00:12:19.935055 systemd[1]: Reloading finished in 770 ms. Mar 14 00:12:19.973861 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Mar 14 00:12:19.980929 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Mar 14 00:12:19.985808 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Mar 14 00:12:20.010512 systemd[1]: Starting ensure-sysext.service... 
Mar 14 00:12:20.016980 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Mar 14 00:12:20.036590 systemd[1]: Reloading requested from client PID 1933 ('systemctl') (unit ensure-sysext.service)... Mar 14 00:12:20.036617 systemd[1]: Reloading... Mar 14 00:12:20.096470 systemd-tmpfiles[1934]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Mar 14 00:12:20.098165 systemd-tmpfiles[1934]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Mar 14 00:12:20.100918 systemd-tmpfiles[1934]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Mar 14 00:12:20.104436 systemd-tmpfiles[1934]: ACLs are not supported, ignoring. Mar 14 00:12:20.104914 systemd-tmpfiles[1934]: ACLs are not supported, ignoring. Mar 14 00:12:20.113487 systemd-tmpfiles[1934]: Detected autofs mount point /boot during canonicalization of boot. Mar 14 00:12:20.113527 systemd-tmpfiles[1934]: Skipping /boot Mar 14 00:12:20.170753 systemd-tmpfiles[1934]: Detected autofs mount point /boot during canonicalization of boot. Mar 14 00:12:20.170794 systemd-tmpfiles[1934]: Skipping /boot Mar 14 00:12:20.320085 zram_generator::config[1970]: No configuration found. Mar 14 00:12:20.644434 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 14 00:12:20.934941 systemd[1]: Reloading finished in 897 ms. Mar 14 00:12:20.981852 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 14 00:12:21.013470 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Mar 14 00:12:21.031381 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... 
Mar 14 00:12:21.042416 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Mar 14 00:12:21.059416 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Mar 14 00:12:21.086516 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Mar 14 00:12:21.107568 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 14 00:12:21.117624 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 14 00:12:21.129066 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 14 00:12:21.146302 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 14 00:12:21.150252 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 14 00:12:21.179405 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 14 00:12:21.186914 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 14 00:12:21.220299 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Mar 14 00:12:21.238085 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 14 00:12:21.248303 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Mar 14 00:12:21.251697 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 14 00:12:21.253698 systemd[1]: Reached target time-set.target - System Time Set. Mar 14 00:12:21.281407 systemd[1]: Starting systemd-update-done.service - Update is Completed... 
Mar 14 00:12:21.314590 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 14 00:12:21.320206 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 14 00:12:21.329439 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 14 00:12:21.329923 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 14 00:12:21.338332 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 14 00:12:21.338806 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 14 00:12:21.346092 systemd[1]: modprobe@drm.service: Deactivated successfully. Mar 14 00:12:21.346579 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Mar 14 00:12:21.393310 systemd[1]: Finished ensure-sysext.service. Mar 14 00:12:21.401743 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 14 00:12:21.401933 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Mar 14 00:12:21.407175 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Mar 14 00:12:21.461638 systemd[1]: Finished systemd-update-done.service - Update is Completed. Mar 14 00:12:21.479469 augenrules[2064]: No rules Mar 14 00:12:21.491415 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Mar 14 00:12:21.536003 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Mar 14 00:12:21.540967 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Mar 14 00:12:21.576752 systemd-resolved[2025]: Positive Trust Anchors: Mar 14 00:12:21.576799 systemd-resolved[2025]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 14 00:12:21.576867 systemd-resolved[2025]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Mar 14 00:12:21.591460 systemd-resolved[2025]: Defaulting to hostname 'linux'. Mar 14 00:12:21.596225 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Mar 14 00:12:21.600767 systemd[1]: Reached target network.target - Network. Mar 14 00:12:21.603874 systemd[1]: Reached target network-online.target - Network is Online. Mar 14 00:12:21.607498 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Mar 14 00:12:21.611536 systemd[1]: Reached target sysinit.target - System Initialization. Mar 14 00:12:21.615314 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Mar 14 00:12:21.619181 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Mar 14 00:12:21.624055 systemd[1]: Started logrotate.timer - Daily rotation of log files. Mar 14 00:12:21.628082 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Mar 14 00:12:21.632562 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Mar 14 00:12:21.636805 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Mar 14 00:12:21.636874 systemd[1]: Reached target paths.target - Path Units. Mar 14 00:12:21.639674 systemd[1]: Reached target timers.target - Timer Units. Mar 14 00:12:21.644689 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Mar 14 00:12:21.651903 systemd[1]: Starting docker.socket - Docker Socket for the API... Mar 14 00:12:21.657886 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Mar 14 00:12:21.669576 systemd[1]: Listening on docker.socket - Docker Socket for the API. Mar 14 00:12:21.673495 systemd[1]: Reached target sockets.target - Socket Units. Mar 14 00:12:21.676874 systemd[1]: Reached target basic.target - Basic System. Mar 14 00:12:21.684642 systemd[1]: System is tainted: cgroupsv1 Mar 14 00:12:21.684832 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Mar 14 00:12:21.684951 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Mar 14 00:12:21.692230 systemd[1]: Starting containerd.service - containerd container runtime... Mar 14 00:12:21.708497 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Mar 14 00:12:21.717544 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Mar 14 00:12:21.734180 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Mar 14 00:12:21.752384 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Mar 14 00:12:21.756281 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Mar 14 00:12:21.775743 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 14 00:12:21.816072 jq[2079]: false Mar 14 00:12:21.808347 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... 
Mar 14 00:12:21.830426 systemd[1]: Started ntpd.service - Network Time Service. Mar 14 00:12:21.858637 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Mar 14 00:12:21.876479 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Mar 14 00:12:21.896194 systemd[1]: Starting setup-oem.service - Setup OEM... Mar 14 00:12:21.947450 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Mar 14 00:12:22.001989 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Mar 14 00:12:22.030079 coreos-metadata[2076]: Mar 14 00:12:22.028 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Mar 14 00:12:22.038783 coreos-metadata[2076]: Mar 14 00:12:22.034 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Mar 14 00:12:22.038783 coreos-metadata[2076]: Mar 14 00:12:22.035 INFO Fetch successful Mar 14 00:12:22.038783 coreos-metadata[2076]: Mar 14 00:12:22.035 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Mar 14 00:12:22.039263 coreos-metadata[2076]: Mar 14 00:12:22.038 INFO Fetch successful Mar 14 00:12:22.039263 coreos-metadata[2076]: Mar 14 00:12:22.038 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Mar 14 00:12:22.044496 systemd[1]: Starting systemd-logind.service - User Login Management... 
Mar 14 00:12:22.059355 extend-filesystems[2080]: Found loop4 Mar 14 00:12:22.059355 extend-filesystems[2080]: Found loop5 Mar 14 00:12:22.059355 extend-filesystems[2080]: Found loop6 Mar 14 00:12:22.059355 extend-filesystems[2080]: Found loop7 Mar 14 00:12:22.059355 extend-filesystems[2080]: Found nvme0n1 Mar 14 00:12:22.059355 extend-filesystems[2080]: Found nvme0n1p1 Mar 14 00:12:22.059355 extend-filesystems[2080]: Found nvme0n1p2 Mar 14 00:12:22.059355 extend-filesystems[2080]: Found nvme0n1p3 Mar 14 00:12:22.160349 coreos-metadata[2076]: Mar 14 00:12:22.053 INFO Fetch successful Mar 14 00:12:22.160349 coreos-metadata[2076]: Mar 14 00:12:22.053 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Mar 14 00:12:22.160349 coreos-metadata[2076]: Mar 14 00:12:22.057 INFO Fetch successful Mar 14 00:12:22.160349 coreos-metadata[2076]: Mar 14 00:12:22.057 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Mar 14 00:12:22.160349 coreos-metadata[2076]: Mar 14 00:12:22.065 INFO Fetch failed with 404: resource not found Mar 14 00:12:22.160349 coreos-metadata[2076]: Mar 14 00:12:22.065 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Mar 14 00:12:22.160349 coreos-metadata[2076]: Mar 14 00:12:22.081 INFO Fetch successful Mar 14 00:12:22.160349 coreos-metadata[2076]: Mar 14 00:12:22.081 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Mar 14 00:12:22.160349 coreos-metadata[2076]: Mar 14 00:12:22.100 INFO Fetch successful Mar 14 00:12:22.160349 coreos-metadata[2076]: Mar 14 00:12:22.100 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Mar 14 00:12:22.160349 coreos-metadata[2076]: Mar 14 00:12:22.119 INFO Fetch successful Mar 14 00:12:22.160349 coreos-metadata[2076]: Mar 14 00:12:22.119 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Mar 14 00:12:22.160349 
coreos-metadata[2076]: Mar 14 00:12:22.123 INFO Fetch successful Mar 14 00:12:22.160349 coreos-metadata[2076]: Mar 14 00:12:22.123 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Mar 14 00:12:22.160349 coreos-metadata[2076]: Mar 14 00:12:22.125 INFO Fetch successful Mar 14 00:12:22.051263 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Mar 14 00:12:22.161754 extend-filesystems[2080]: Found usr Mar 14 00:12:22.161754 extend-filesystems[2080]: Found nvme0n1p4 Mar 14 00:12:22.161754 extend-filesystems[2080]: Found nvme0n1p6 Mar 14 00:12:22.161754 extend-filesystems[2080]: Found nvme0n1p7 Mar 14 00:12:22.161754 extend-filesystems[2080]: Found nvme0n1p9 Mar 14 00:12:22.161754 extend-filesystems[2080]: Checking size of /dev/nvme0n1p9 Mar 14 00:12:22.094165 dbus-daemon[2077]: [system] SELinux support is enabled Mar 14 00:12:22.089668 systemd[1]: Starting update-engine.service - Update Engine... Mar 14 00:12:22.141657 dbus-daemon[2077]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1696 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Mar 14 00:12:22.114477 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Mar 14 00:12:22.126666 systemd[1]: Started dbus.service - D-Bus System Message Bus. 
Mar 14 00:12:22.249195 ntpd[2085]: 14 Mar 00:12:22 ntpd[2085]: ntpd 4.2.8p17@1.4004-o Fri Mar 13 21:57:55 UTC 2026 (1): Starting Mar 14 00:12:22.249195 ntpd[2085]: 14 Mar 00:12:22 ntpd[2085]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Mar 14 00:12:22.249195 ntpd[2085]: 14 Mar 00:12:22 ntpd[2085]: ---------------------------------------------------- Mar 14 00:12:22.249195 ntpd[2085]: 14 Mar 00:12:22 ntpd[2085]: ntp-4 is maintained by Network Time Foundation, Mar 14 00:12:22.249195 ntpd[2085]: 14 Mar 00:12:22 ntpd[2085]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Mar 14 00:12:22.249195 ntpd[2085]: 14 Mar 00:12:22 ntpd[2085]: corporation. Support and training for ntp-4 are Mar 14 00:12:22.249195 ntpd[2085]: 14 Mar 00:12:22 ntpd[2085]: available at https://www.nwtime.org/support Mar 14 00:12:22.249195 ntpd[2085]: 14 Mar 00:12:22 ntpd[2085]: ---------------------------------------------------- Mar 14 00:12:22.247920 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Mar 14 00:12:22.248332 ntpd[2085]: ntpd 4.2.8p17@1.4004-o Fri Mar 13 21:57:55 UTC 2026 (1): Starting Mar 14 00:12:22.259956 ntpd[2085]: 14 Mar 00:12:22 ntpd[2085]: proto: precision = 0.096 usec (-23) Mar 14 00:12:22.259956 ntpd[2085]: 14 Mar 00:12:22 ntpd[2085]: basedate set to 2026-03-01 Mar 14 00:12:22.259956 ntpd[2085]: 14 Mar 00:12:22 ntpd[2085]: gps base set to 2026-03-01 (week 2408) Mar 14 00:12:22.249610 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. 
Mar 14 00:12:22.248386 ntpd[2085]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Mar 14 00:12:22.273608 ntpd[2085]: 14 Mar 00:12:22 ntpd[2085]: Listen and drop on 0 v6wildcard [::]:123 Mar 14 00:12:22.273608 ntpd[2085]: 14 Mar 00:12:22 ntpd[2085]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Mar 14 00:12:22.273608 ntpd[2085]: 14 Mar 00:12:22 ntpd[2085]: Listen normally on 2 lo 127.0.0.1:123 Mar 14 00:12:22.273608 ntpd[2085]: 14 Mar 00:12:22 ntpd[2085]: Listen normally on 3 eth0 172.31.24.247:123 Mar 14 00:12:22.273608 ntpd[2085]: 14 Mar 00:12:22 ntpd[2085]: Listen normally on 4 lo [::1]:123 Mar 14 00:12:22.273608 ntpd[2085]: 14 Mar 00:12:22 ntpd[2085]: Listen normally on 5 eth0 [fe80::4ac:95ff:fe91:bb5f%2]:123 Mar 14 00:12:22.273608 ntpd[2085]: 14 Mar 00:12:22 ntpd[2085]: Listening on routing socket on fd #22 for interface updates Mar 14 00:12:22.273608 ntpd[2085]: 14 Mar 00:12:22 ntpd[2085]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Mar 14 00:12:22.273608 ntpd[2085]: 14 Mar 00:12:22 ntpd[2085]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Mar 14 00:12:22.248410 ntpd[2085]: ---------------------------------------------------- Mar 14 00:12:22.248431 ntpd[2085]: ntp-4 is maintained by Network Time Foundation, Mar 14 00:12:22.248452 ntpd[2085]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Mar 14 00:12:22.248472 ntpd[2085]: corporation. Support and training for ntp-4 are
Mar 14 00:12:22.248494 ntpd[2085]: available at https://www.nwtime.org/support Mar 14 00:12:22.248514 ntpd[2085]: ---------------------------------------------------- Mar 14 00:12:22.254703 ntpd[2085]: proto: precision = 0.096 usec (-23) Mar 14 00:12:22.255981 ntpd[2085]: basedate set to 2026-03-01 Mar 14 00:12:22.256098 ntpd[2085]: gps base set to 2026-03-01 (week 2408) Mar 14 00:12:22.261277 ntpd[2085]: Listen and drop on 0 v6wildcard [::]:123 Mar 14 00:12:22.261412 ntpd[2085]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Mar 14 00:12:22.261794 ntpd[2085]: Listen normally on 2 lo 127.0.0.1:123 Mar 14 00:12:22.261884 ntpd[2085]: Listen normally on 3 eth0 172.31.24.247:123 Mar 14 00:12:22.261963 ntpd[2085]: Listen normally on 4 lo [::1]:123 Mar 14 00:12:22.264223 ntpd[2085]: Listen normally on 5 eth0 [fe80::4ac:95ff:fe91:bb5f%2]:123 Mar 14 00:12:22.264346 ntpd[2085]: Listening on routing socket on fd #22 for interface updates Mar 14 00:12:22.272141 ntpd[2085]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Mar 14 00:12:22.272209 ntpd[2085]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Mar 14 00:12:22.284230 systemd[1]: motdgen.service: Deactivated successfully. Mar 14 00:12:22.292477 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Mar 14 00:12:22.299315 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Mar 14 00:12:22.321946 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Mar 14 00:12:22.323254 jq[2111]: true Mar 14 00:12:22.325922 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Mar 14 00:12:22.364567 extend-filesystems[2080]: Resized partition /dev/nvme0n1p9 Mar 14 00:12:22.393963 extend-filesystems[2134]: resize2fs 1.47.1 (20-May-2024) Mar 14 00:12:22.436931 (ntainerd)[2131]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Mar 14 00:12:22.469071 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 3587067 blocks Mar 14 00:12:22.486540 jq[2133]: true Mar 14 00:12:22.529701 update_engine[2106]: I20260314 00:12:22.529215 2106 main.cc:92] Flatcar Update Engine starting Mar 14 00:12:22.537454 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Mar 14 00:12:22.575049 update_engine[2106]: I20260314 00:12:22.574930 2106 update_check_scheduler.cc:74] Next update check in 10m8s Mar 14 00:12:22.627616 dbus-daemon[2077]: [system] Successfully activated service 'org.freedesktop.systemd1' Mar 14 00:12:22.630651 systemd[1]: Started update-engine.service - Update Engine. Mar 14 00:12:22.647070 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Mar 14 00:12:22.648684 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Mar 14 00:12:22.648783 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Mar 14 00:12:22.670448 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Mar 14 00:12:22.679579 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Mar 14 00:12:22.679709 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. 
Mar 14 00:12:22.700537 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Mar 14 00:12:22.706704 systemd[1]: Started locksmithd.service - Cluster reboot manager. Mar 14 00:12:22.751532 tar[2128]: linux-arm64/LICENSE Mar 14 00:12:22.751532 tar[2128]: linux-arm64/helm Mar 14 00:12:22.771000 systemd[1]: Finished setup-oem.service - Setup OEM. Mar 14 00:12:22.804339 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. Mar 14 00:12:22.924280 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 3587067 Mar 14 00:12:22.956469 bash[2187]: Updated "/home/core/.ssh/authorized_keys" Mar 14 00:12:22.974082 extend-filesystems[2134]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Mar 14 00:12:22.974082 extend-filesystems[2134]: old_desc_blocks = 1, new_desc_blocks = 2 Mar 14 00:12:22.974082 extend-filesystems[2134]: The filesystem on /dev/nvme0n1p9 is now 3587067 (4k) blocks long. Mar 14 00:12:22.966547 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Mar 14 00:12:23.009628 extend-filesystems[2080]: Resized filesystem in /dev/nvme0n1p9 Mar 14 00:12:23.064865 systemd[1]: Starting sshkeys.service... Mar 14 00:12:23.073068 systemd[1]: extend-filesystems.service: Deactivated successfully. Mar 14 00:12:23.073743 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Mar 14 00:12:23.114229 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Mar 14 00:12:23.125901 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... 
Mar 14 00:12:23.171056 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 37 scanned by (udev-worker) (2192) Mar 14 00:12:23.332117 systemd-logind[2105]: Watching system buttons on /dev/input/event0 (Power Button) Mar 14 00:12:23.332176 systemd-logind[2105]: Watching system buttons on /dev/input/event1 (Sleep Button) Mar 14 00:12:23.335750 systemd-logind[2105]: New seat seat0. Mar 14 00:12:23.345089 systemd[1]: Started systemd-logind.service - User Login Management. Mar 14 00:12:23.398025 amazon-ssm-agent[2186]: Initializing new seelog logger Mar 14 00:12:23.410750 amazon-ssm-agent[2186]: New Seelog Logger Creation Complete Mar 14 00:12:23.410750 amazon-ssm-agent[2186]: 2026/03/14 00:12:23 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Mar 14 00:12:23.410750 amazon-ssm-agent[2186]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Mar 14 00:12:23.410750 amazon-ssm-agent[2186]: 2026/03/14 00:12:23 processing appconfig overrides Mar 14 00:12:23.421177 amazon-ssm-agent[2186]: 2026/03/14 00:12:23 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Mar 14 00:12:23.421177 amazon-ssm-agent[2186]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Mar 14 00:12:23.421177 amazon-ssm-agent[2186]: 2026/03/14 00:12:23 processing appconfig overrides Mar 14 00:12:23.421177 amazon-ssm-agent[2186]: 2026/03/14 00:12:23 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Mar 14 00:12:23.421177 amazon-ssm-agent[2186]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Mar 14 00:12:23.421177 amazon-ssm-agent[2186]: 2026/03/14 00:12:23 processing appconfig overrides Mar 14 00:12:23.421177 amazon-ssm-agent[2186]: 2026-03-14 00:12:23 INFO Proxy environment variables: Mar 14 00:12:23.447538 amazon-ssm-agent[2186]: 2026/03/14 00:12:23 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. 
Mar 14 00:12:23.447538 amazon-ssm-agent[2186]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Mar 14 00:12:23.447538 amazon-ssm-agent[2186]: 2026/03/14 00:12:23 processing appconfig overrides Mar 14 00:12:23.452477 locksmithd[2166]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Mar 14 00:12:23.507048 containerd[2131]: time="2026-03-14T00:12:23.499891216Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Mar 14 00:12:23.559263 amazon-ssm-agent[2186]: 2026-03-14 00:12:23 INFO https_proxy: Mar 14 00:12:23.656439 amazon-ssm-agent[2186]: 2026-03-14 00:12:23 INFO http_proxy: Mar 14 00:12:23.733608 containerd[2131]: time="2026-03-14T00:12:23.733459073Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Mar 14 00:12:23.757398 amazon-ssm-agent[2186]: 2026-03-14 00:12:23 INFO no_proxy: Mar 14 00:12:23.758698 containerd[2131]: time="2026-03-14T00:12:23.758536289Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.127-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Mar 14 00:12:23.758813 containerd[2131]: time="2026-03-14T00:12:23.758695601Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Mar 14 00:12:23.758813 containerd[2131]: time="2026-03-14T00:12:23.758775965Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Mar 14 00:12:23.763401 containerd[2131]: time="2026-03-14T00:12:23.759536321Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." 
type=io.containerd.warning.v1 Mar 14 00:12:23.763401 containerd[2131]: time="2026-03-14T00:12:23.759631613Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Mar 14 00:12:23.763673 coreos-metadata[2225]: Mar 14 00:12:23.763 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Mar 14 00:12:23.764339 containerd[2131]: time="2026-03-14T00:12:23.759946385Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Mar 14 00:12:23.764339 containerd[2131]: time="2026-03-14T00:12:23.764183513Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Mar 14 00:12:23.767546 coreos-metadata[2225]: Mar 14 00:12:23.765 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Mar 14 00:12:23.767717 containerd[2131]: time="2026-03-14T00:12:23.765820493Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Mar 14 00:12:23.767717 containerd[2131]: time="2026-03-14T00:12:23.765912713Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Mar 14 00:12:23.767717 containerd[2131]: time="2026-03-14T00:12:23.765953225Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Mar 14 00:12:23.767717 containerd[2131]: time="2026-03-14T00:12:23.766044521Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." 
type=io.containerd.snapshotter.v1 Mar 14 00:12:23.767717 containerd[2131]: time="2026-03-14T00:12:23.766408865Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Mar 14 00:12:23.769606 containerd[2131]: time="2026-03-14T00:12:23.768456893Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Mar 14 00:12:23.770323 coreos-metadata[2225]: Mar 14 00:12:23.770 INFO Fetch successful Mar 14 00:12:23.770323 coreos-metadata[2225]: Mar 14 00:12:23.770 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Mar 14 00:12:23.772043 containerd[2131]: time="2026-03-14T00:12:23.770946089Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Mar 14 00:12:23.772043 containerd[2131]: time="2026-03-14T00:12:23.771042821Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Mar 14 00:12:23.772043 containerd[2131]: time="2026-03-14T00:12:23.771354977Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Mar 14 00:12:23.772043 containerd[2131]: time="2026-03-14T00:12:23.771532565Z" level=info msg="metadata content store policy set" policy=shared Mar 14 00:12:23.779674 coreos-metadata[2225]: Mar 14 00:12:23.778 INFO Fetch successful Mar 14 00:12:23.787279 unknown[2225]: wrote ssh authorized keys file for user: core Mar 14 00:12:23.789751 containerd[2131]: time="2026-03-14T00:12:23.787283501Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Mar 14 00:12:23.789751 containerd[2131]: time="2026-03-14T00:12:23.788133473Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." 
type=io.containerd.differ.v1 Mar 14 00:12:23.789751 containerd[2131]: time="2026-03-14T00:12:23.788182625Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Mar 14 00:12:23.789751 containerd[2131]: time="2026-03-14T00:12:23.788222897Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Mar 14 00:12:23.789751 containerd[2131]: time="2026-03-14T00:12:23.788364329Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Mar 14 00:12:23.789751 containerd[2131]: time="2026-03-14T00:12:23.788854721Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Mar 14 00:12:23.798585 containerd[2131]: time="2026-03-14T00:12:23.794788133Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Mar 14 00:12:23.798585 containerd[2131]: time="2026-03-14T00:12:23.797389841Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Mar 14 00:12:23.798585 containerd[2131]: time="2026-03-14T00:12:23.797509973Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Mar 14 00:12:23.798585 containerd[2131]: time="2026-03-14T00:12:23.797583329Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Mar 14 00:12:23.798585 containerd[2131]: time="2026-03-14T00:12:23.797651081Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Mar 14 00:12:23.798585 containerd[2131]: time="2026-03-14T00:12:23.797735141Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." 
type=io.containerd.service.v1 Mar 14 00:12:23.798585 containerd[2131]: time="2026-03-14T00:12:23.797814281Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Mar 14 00:12:23.798585 containerd[2131]: time="2026-03-14T00:12:23.797908361Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Mar 14 00:12:23.805551 containerd[2131]: time="2026-03-14T00:12:23.797986661Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Mar 14 00:12:23.805551 containerd[2131]: time="2026-03-14T00:12:23.802536845Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Mar 14 00:12:23.805551 containerd[2131]: time="2026-03-14T00:12:23.802616789Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Mar 14 00:12:23.805551 containerd[2131]: time="2026-03-14T00:12:23.802663817Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Mar 14 00:12:23.805551 containerd[2131]: time="2026-03-14T00:12:23.802724873Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Mar 14 00:12:23.805551 containerd[2131]: time="2026-03-14T00:12:23.802787093Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Mar 14 00:12:23.805551 containerd[2131]: time="2026-03-14T00:12:23.802833689Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Mar 14 00:12:23.805551 containerd[2131]: time="2026-03-14T00:12:23.802937729Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." 
type=io.containerd.grpc.v1 Mar 14 00:12:23.805551 containerd[2131]: time="2026-03-14T00:12:23.803066069Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Mar 14 00:12:23.805551 containerd[2131]: time="2026-03-14T00:12:23.803110265Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Mar 14 00:12:23.805551 containerd[2131]: time="2026-03-14T00:12:23.803175269Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Mar 14 00:12:23.805551 containerd[2131]: time="2026-03-14T00:12:23.803229797Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Mar 14 00:12:23.805551 containerd[2131]: time="2026-03-14T00:12:23.803287745Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Mar 14 00:12:23.805551 containerd[2131]: time="2026-03-14T00:12:23.803341877Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Mar 14 00:12:23.806356 containerd[2131]: time="2026-03-14T00:12:23.803385017Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Mar 14 00:12:23.806356 containerd[2131]: time="2026-03-14T00:12:23.803429621Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Mar 14 00:12:23.806356 containerd[2131]: time="2026-03-14T00:12:23.803486561Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Mar 14 00:12:23.806356 containerd[2131]: time="2026-03-14T00:12:23.803541497Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Mar 14 00:12:23.806356 containerd[2131]: time="2026-03-14T00:12:23.803610773Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." 
type=io.containerd.grpc.v1 Mar 14 00:12:23.806356 containerd[2131]: time="2026-03-14T00:12:23.803667353Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Mar 14 00:12:23.806356 containerd[2131]: time="2026-03-14T00:12:23.803738681Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Mar 14 00:12:23.806356 containerd[2131]: time="2026-03-14T00:12:23.804078257Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Mar 14 00:12:23.811046 containerd[2131]: time="2026-03-14T00:12:23.808405721Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Mar 14 00:12:23.815642 containerd[2131]: time="2026-03-14T00:12:23.814910465Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Mar 14 00:12:23.815642 containerd[2131]: time="2026-03-14T00:12:23.815045705Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Mar 14 00:12:23.815642 containerd[2131]: time="2026-03-14T00:12:23.815131313Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Mar 14 00:12:23.815642 containerd[2131]: time="2026-03-14T00:12:23.815232677Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Mar 14 00:12:23.815642 containerd[2131]: time="2026-03-14T00:12:23.815285141Z" level=info msg="NRI interface is disabled by configuration." Mar 14 00:12:23.815642 containerd[2131]: time="2026-03-14T00:12:23.815334269Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Mar 14 00:12:23.825321 containerd[2131]: time="2026-03-14T00:12:23.823456577Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 
DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Mar 14 00:12:23.825321 containerd[2131]: time="2026-03-14T00:12:23.823977329Z" level=info msg="Connect containerd service" Mar 14 00:12:23.825321 containerd[2131]: time="2026-03-14T00:12:23.824416289Z" level=info msg="using legacy CRI server" Mar 14 00:12:23.825321 containerd[2131]: time="2026-03-14T00:12:23.824444789Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Mar 14 00:12:23.826230 containerd[2131]: time="2026-03-14T00:12:23.826147997Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Mar 14 00:12:23.838243 containerd[2131]: time="2026-03-14T00:12:23.835372625Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Mar 14 00:12:23.841058 containerd[2131]: time="2026-03-14T00:12:23.839427713Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Mar 14 00:12:23.841058 containerd[2131]: time="2026-03-14T00:12:23.839593277Z" level=info msg=serving... 
address=/run/containerd/containerd.sock Mar 14 00:12:23.841058 containerd[2131]: time="2026-03-14T00:12:23.839899577Z" level=info msg="Start subscribing containerd event" Mar 14 00:12:23.841058 containerd[2131]: time="2026-03-14T00:12:23.839989253Z" level=info msg="Start recovering state" Mar 14 00:12:23.841058 containerd[2131]: time="2026-03-14T00:12:23.840190205Z" level=info msg="Start event monitor" Mar 14 00:12:23.841058 containerd[2131]: time="2026-03-14T00:12:23.840229469Z" level=info msg="Start snapshots syncer" Mar 14 00:12:23.841058 containerd[2131]: time="2026-03-14T00:12:23.840257057Z" level=info msg="Start cni network conf syncer for default" Mar 14 00:12:23.841058 containerd[2131]: time="2026-03-14T00:12:23.840277841Z" level=info msg="Start streaming server" Mar 14 00:12:23.844946 containerd[2131]: time="2026-03-14T00:12:23.841958778Z" level=info msg="containerd successfully booted in 0.348706s" Mar 14 00:12:23.842182 systemd[1]: Started containerd.service - containerd container runtime. Mar 14 00:12:23.862278 amazon-ssm-agent[2186]: 2026-03-14 00:12:23 INFO Checking if agent identity type OnPrem can be assumed Mar 14 00:12:23.883708 dbus-daemon[2077]: [system] Successfully activated service 'org.freedesktop.hostname1' Mar 14 00:12:23.893149 dbus-daemon[2077]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.7' (uid=0 pid=2162 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Mar 14 00:12:23.884052 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Mar 14 00:12:23.934148 systemd[1]: Starting polkit.service - Authorization Manager... 
Mar 14 00:12:23.957409 amazon-ssm-agent[2186]: 2026-03-14 00:12:23 INFO Checking if agent identity type EC2 can be assumed Mar 14 00:12:24.020383 polkitd[2296]: Started polkitd version 121 Mar 14 00:12:24.057307 amazon-ssm-agent[2186]: 2026-03-14 00:12:23 INFO Agent will take identity from EC2 Mar 14 00:12:24.075579 polkitd[2296]: Loading rules from directory /etc/polkit-1/rules.d Mar 14 00:12:24.076002 polkitd[2296]: Loading rules from directory /usr/share/polkit-1/rules.d Mar 14 00:12:24.085128 polkitd[2296]: Finished loading, compiling and executing 2 rules Mar 14 00:12:24.091338 dbus-daemon[2077]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Mar 14 00:12:24.093196 systemd[1]: Started polkit.service - Authorization Manager. Mar 14 00:12:24.102116 polkitd[2296]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Mar 14 00:12:24.114719 update-ssh-keys[2304]: Updated "/home/core/.ssh/authorized_keys" Mar 14 00:12:24.117998 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Mar 14 00:12:24.153879 systemd[1]: Finished sshkeys.service. Mar 14 00:12:24.165286 amazon-ssm-agent[2186]: 2026-03-14 00:12:23 INFO [amazon-ssm-agent] using named pipe channel for IPC Mar 14 00:12:24.262473 systemd-hostnamed[2162]: Hostname set to (transient) Mar 14 00:12:24.262482 systemd-resolved[2025]: System hostname changed to 'ip-172-31-24-247'. 
Mar 14 00:12:24.280055 amazon-ssm-agent[2186]: 2026-03-14 00:12:23 INFO [amazon-ssm-agent] using named pipe channel for IPC Mar 14 00:12:24.389165 amazon-ssm-agent[2186]: 2026-03-14 00:12:23 INFO [amazon-ssm-agent] using named pipe channel for IPC Mar 14 00:12:24.493169 amazon-ssm-agent[2186]: 2026-03-14 00:12:23 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0 Mar 14 00:12:24.593500 amazon-ssm-agent[2186]: 2026-03-14 00:12:23 INFO [amazon-ssm-agent] OS: linux, Arch: arm64 Mar 14 00:12:24.693314 amazon-ssm-agent[2186]: 2026-03-14 00:12:23 INFO [amazon-ssm-agent] Starting Core Agent Mar 14 00:12:24.742665 amazon-ssm-agent[2186]: 2026-03-14 00:12:23 INFO [amazon-ssm-agent] registrar detected. Attempting registration Mar 14 00:12:24.742665 amazon-ssm-agent[2186]: 2026-03-14 00:12:23 INFO [Registrar] Starting registrar module Mar 14 00:12:24.742665 amazon-ssm-agent[2186]: 2026-03-14 00:12:23 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration Mar 14 00:12:24.742886 amazon-ssm-agent[2186]: 2026-03-14 00:12:24 INFO [EC2Identity] EC2 registration was successful. Mar 14 00:12:24.742886 amazon-ssm-agent[2186]: 2026-03-14 00:12:24 INFO [CredentialRefresher] credentialRefresher has started Mar 14 00:12:24.742886 amazon-ssm-agent[2186]: 2026-03-14 00:12:24 INFO [CredentialRefresher] Starting credentials refresher loop Mar 14 00:12:24.742886 amazon-ssm-agent[2186]: 2026-03-14 00:12:24 INFO EC2RoleProvider Successfully connected with instance profile role credentials Mar 14 00:12:24.793364 amazon-ssm-agent[2186]: 2026-03-14 00:12:24 INFO [CredentialRefresher] Next credential rotation will be in 32.1249901178 minutes Mar 14 00:12:24.998439 sshd_keygen[2130]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Mar 14 00:12:25.053805 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Mar 14 00:12:25.075332 systemd[1]: Starting issuegen.service - Generate /run/issue... 
Mar 14 00:12:25.110463 systemd[1]: issuegen.service: Deactivated successfully. Mar 14 00:12:25.117127 systemd[1]: Finished issuegen.service - Generate /run/issue. Mar 14 00:12:25.123179 tar[2128]: linux-arm64/README.md Mar 14 00:12:25.132616 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Mar 14 00:12:25.172434 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Mar 14 00:12:25.188953 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Mar 14 00:12:25.205756 systemd[1]: Started getty@tty1.service - Getty on tty1. Mar 14 00:12:25.222670 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Mar 14 00:12:25.228198 systemd[1]: Reached target getty.target - Login Prompts. Mar 14 00:12:25.784614 amazon-ssm-agent[2186]: 2026-03-14 00:12:25 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Mar 14 00:12:25.885061 amazon-ssm-agent[2186]: 2026-03-14 00:12:25 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2362) started Mar 14 00:12:25.985788 amazon-ssm-agent[2186]: 2026-03-14 00:12:25 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Mar 14 00:12:28.210477 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 14 00:12:28.215990 systemd[1]: Reached target multi-user.target - Multi-User System. Mar 14 00:12:28.222041 systemd[1]: Startup finished in 11.682s (kernel) + 15.063s (userspace) = 26.745s. Mar 14 00:12:28.236376 (kubelet)[2380]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 14 00:12:29.077373 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. 
Mar 14 00:12:29.086576 systemd[1]: Started sshd@0-172.31.24.247:22-68.220.241.50:38204.service - OpenSSH per-connection server daemon (68.220.241.50:38204). Mar 14 00:12:29.502741 systemd-resolved[2025]: Clock change detected. Flushing caches. Mar 14 00:12:29.855200 sshd[2389]: Accepted publickey for core from 68.220.241.50 port 38204 ssh2: RSA SHA256:wTcZPyU9bRq4OYS8Q3ttppxvBQbw+A1YvhVCQAQQbeI Mar 14 00:12:29.859813 sshd[2389]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 14 00:12:29.888662 systemd-logind[2105]: New session 1 of user core. Mar 14 00:12:29.892005 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Mar 14 00:12:29.903682 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Mar 14 00:12:29.936886 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Mar 14 00:12:29.955551 systemd[1]: Starting user@500.service - User Manager for UID 500... Mar 14 00:12:29.978404 (systemd)[2395]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Mar 14 00:12:30.163860 kubelet[2380]: E0314 00:12:30.163548 2380 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 14 00:12:30.172146 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 14 00:12:30.172610 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 14 00:12:30.234899 systemd[2395]: Queued start job for default target default.target. Mar 14 00:12:30.235663 systemd[2395]: Created slice app.slice - User Application Slice. Mar 14 00:12:30.235720 systemd[2395]: Reached target paths.target - Paths. 
Mar 14 00:12:30.235753 systemd[2395]: Reached target timers.target - Timers. Mar 14 00:12:30.244250 systemd[2395]: Starting dbus.socket - D-Bus User Message Bus Socket... Mar 14 00:12:30.261279 systemd[2395]: Listening on dbus.socket - D-Bus User Message Bus Socket. Mar 14 00:12:30.261417 systemd[2395]: Reached target sockets.target - Sockets. Mar 14 00:12:30.261450 systemd[2395]: Reached target basic.target - Basic System. Mar 14 00:12:30.261553 systemd[2395]: Reached target default.target - Main User Target. Mar 14 00:12:30.261615 systemd[2395]: Startup finished in 268ms. Mar 14 00:12:30.262397 systemd[1]: Started user@500.service - User Manager for UID 500. Mar 14 00:12:30.268588 systemd[1]: Started session-1.scope - Session 1 of User core. Mar 14 00:12:30.635966 systemd[1]: Started sshd@1-172.31.24.247:22-68.220.241.50:38206.service - OpenSSH per-connection server daemon (68.220.241.50:38206). Mar 14 00:12:31.150088 sshd[2411]: Accepted publickey for core from 68.220.241.50 port 38206 ssh2: RSA SHA256:wTcZPyU9bRq4OYS8Q3ttppxvBQbw+A1YvhVCQAQQbeI Mar 14 00:12:31.152761 sshd[2411]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 14 00:12:31.163423 systemd-logind[2105]: New session 2 of user core. Mar 14 00:12:31.171806 systemd[1]: Started session-2.scope - Session 2 of User core. Mar 14 00:12:31.512651 sshd[2411]: pam_unix(sshd:session): session closed for user core Mar 14 00:12:31.520650 systemd[1]: sshd@1-172.31.24.247:22-68.220.241.50:38206.service: Deactivated successfully. Mar 14 00:12:31.525868 systemd[1]: session-2.scope: Deactivated successfully. Mar 14 00:12:31.527688 systemd-logind[2105]: Session 2 logged out. Waiting for processes to exit. Mar 14 00:12:31.529841 systemd-logind[2105]: Removed session 2. Mar 14 00:12:31.602618 systemd[1]: Started sshd@2-172.31.24.247:22-68.220.241.50:38220.service - OpenSSH per-connection server daemon (68.220.241.50:38220). 
Mar 14 00:12:32.107090 sshd[2419]: Accepted publickey for core from 68.220.241.50 port 38220 ssh2: RSA SHA256:wTcZPyU9bRq4OYS8Q3ttppxvBQbw+A1YvhVCQAQQbeI Mar 14 00:12:32.108851 sshd[2419]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 14 00:12:32.118129 systemd-logind[2105]: New session 3 of user core. Mar 14 00:12:32.131531 systemd[1]: Started session-3.scope - Session 3 of User core. Mar 14 00:12:32.459402 sshd[2419]: pam_unix(sshd:session): session closed for user core Mar 14 00:12:32.467889 systemd[1]: sshd@2-172.31.24.247:22-68.220.241.50:38220.service: Deactivated successfully. Mar 14 00:12:32.474212 systemd-logind[2105]: Session 3 logged out. Waiting for processes to exit. Mar 14 00:12:32.475318 systemd[1]: session-3.scope: Deactivated successfully. Mar 14 00:12:32.477206 systemd-logind[2105]: Removed session 3. Mar 14 00:12:32.542605 systemd[1]: Started sshd@3-172.31.24.247:22-68.220.241.50:33778.service - OpenSSH per-connection server daemon (68.220.241.50:33778). Mar 14 00:12:33.049075 sshd[2427]: Accepted publickey for core from 68.220.241.50 port 33778 ssh2: RSA SHA256:wTcZPyU9bRq4OYS8Q3ttppxvBQbw+A1YvhVCQAQQbeI Mar 14 00:12:33.051159 sshd[2427]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 14 00:12:33.060680 systemd-logind[2105]: New session 4 of user core. Mar 14 00:12:33.066683 systemd[1]: Started session-4.scope - Session 4 of User core. Mar 14 00:12:33.403498 sshd[2427]: pam_unix(sshd:session): session closed for user core Mar 14 00:12:33.410978 systemd[1]: sshd@3-172.31.24.247:22-68.220.241.50:33778.service: Deactivated successfully. Mar 14 00:12:33.416129 systemd[1]: session-4.scope: Deactivated successfully. Mar 14 00:12:33.417541 systemd-logind[2105]: Session 4 logged out. Waiting for processes to exit. Mar 14 00:12:33.420289 systemd-logind[2105]: Removed session 4. 
Mar 14 00:12:33.496511 systemd[1]: Started sshd@4-172.31.24.247:22-68.220.241.50:33780.service - OpenSSH per-connection server daemon (68.220.241.50:33780). Mar 14 00:12:33.986946 sshd[2435]: Accepted publickey for core from 68.220.241.50 port 33780 ssh2: RSA SHA256:wTcZPyU9bRq4OYS8Q3ttppxvBQbw+A1YvhVCQAQQbeI Mar 14 00:12:33.989725 sshd[2435]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 14 00:12:33.998994 systemd-logind[2105]: New session 5 of user core. Mar 14 00:12:34.005571 systemd[1]: Started session-5.scope - Session 5 of User core. Mar 14 00:12:34.310986 sudo[2439]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Mar 14 00:12:34.311658 sudo[2439]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 14 00:12:34.328102 sudo[2439]: pam_unix(sudo:session): session closed for user root Mar 14 00:12:34.406495 sshd[2435]: pam_unix(sshd:session): session closed for user core Mar 14 00:12:34.416115 systemd-logind[2105]: Session 5 logged out. Waiting for processes to exit. Mar 14 00:12:34.418608 systemd[1]: sshd@4-172.31.24.247:22-68.220.241.50:33780.service: Deactivated successfully. Mar 14 00:12:34.423537 systemd[1]: session-5.scope: Deactivated successfully. Mar 14 00:12:34.425927 systemd-logind[2105]: Removed session 5. Mar 14 00:12:34.490570 systemd[1]: Started sshd@5-172.31.24.247:22-68.220.241.50:33796.service - OpenSSH per-connection server daemon (68.220.241.50:33796). Mar 14 00:12:35.004085 sshd[2444]: Accepted publickey for core from 68.220.241.50 port 33796 ssh2: RSA SHA256:wTcZPyU9bRq4OYS8Q3ttppxvBQbw+A1YvhVCQAQQbeI Mar 14 00:12:35.006244 sshd[2444]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 14 00:12:35.015487 systemd-logind[2105]: New session 6 of user core. Mar 14 00:12:35.023637 systemd[1]: Started session-6.scope - Session 6 of User core. 
Mar 14 00:12:35.286301 sudo[2449]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Mar 14 00:12:35.287016 sudo[2449]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 14 00:12:35.294521 sudo[2449]: pam_unix(sudo:session): session closed for user root Mar 14 00:12:35.305191 sudo[2448]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Mar 14 00:12:35.305965 sudo[2448]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 14 00:12:35.332669 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Mar 14 00:12:35.338418 auditctl[2452]: No rules Mar 14 00:12:35.339582 systemd[1]: audit-rules.service: Deactivated successfully. Mar 14 00:12:35.340174 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Mar 14 00:12:35.350778 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Mar 14 00:12:35.414809 augenrules[2471]: No rules Mar 14 00:12:35.417599 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Mar 14 00:12:35.421547 sudo[2448]: pam_unix(sudo:session): session closed for user root Mar 14 00:12:35.503369 sshd[2444]: pam_unix(sshd:session): session closed for user core Mar 14 00:12:35.508624 systemd[1]: sshd@5-172.31.24.247:22-68.220.241.50:33796.service: Deactivated successfully. Mar 14 00:12:35.515316 systemd-logind[2105]: Session 6 logged out. Waiting for processes to exit. Mar 14 00:12:35.516693 systemd[1]: session-6.scope: Deactivated successfully. Mar 14 00:12:35.518352 systemd-logind[2105]: Removed session 6. Mar 14 00:12:35.588539 systemd[1]: Started sshd@6-172.31.24.247:22-68.220.241.50:33800.service - OpenSSH per-connection server daemon (68.220.241.50:33800). 
Mar 14 00:12:36.099087 sshd[2480]: Accepted publickey for core from 68.220.241.50 port 33800 ssh2: RSA SHA256:wTcZPyU9bRq4OYS8Q3ttppxvBQbw+A1YvhVCQAQQbeI Mar 14 00:12:36.100931 sshd[2480]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 14 00:12:36.108588 systemd-logind[2105]: New session 7 of user core. Mar 14 00:12:36.116627 systemd[1]: Started session-7.scope - Session 7 of User core. Mar 14 00:12:36.381871 sudo[2484]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Mar 14 00:12:36.383889 sudo[2484]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 14 00:12:37.028580 systemd[1]: Starting docker.service - Docker Application Container Engine... Mar 14 00:12:37.040801 (dockerd)[2499]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Mar 14 00:12:37.620250 dockerd[2499]: time="2026-03-14T00:12:37.620145088Z" level=info msg="Starting up" Mar 14 00:12:37.804294 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport428087124-merged.mount: Deactivated successfully. Mar 14 00:12:38.095624 dockerd[2499]: time="2026-03-14T00:12:38.095338658Z" level=info msg="Loading containers: start." Mar 14 00:12:38.301087 kernel: Initializing XFRM netlink socket Mar 14 00:12:38.368636 (udev-worker)[2564]: Network interface NamePolicy= disabled on kernel command line. Mar 14 00:12:38.456337 systemd-networkd[1696]: docker0: Link UP Mar 14 00:12:38.492819 dockerd[2499]: time="2026-03-14T00:12:38.491585380Z" level=info msg="Loading containers: done." 
Mar 14 00:12:38.532011 dockerd[2499]: time="2026-03-14T00:12:38.531948748Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Mar 14 00:12:38.532530 dockerd[2499]: time="2026-03-14T00:12:38.532486024Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Mar 14 00:12:38.532884 dockerd[2499]: time="2026-03-14T00:12:38.532838944Z" level=info msg="Daemon has completed initialization" Mar 14 00:12:38.608071 dockerd[2499]: time="2026-03-14T00:12:38.607820981Z" level=info msg="API listen on /run/docker.sock" Mar 14 00:12:38.610418 systemd[1]: Started docker.service - Docker Application Container Engine. Mar 14 00:12:40.117873 containerd[2131]: time="2026-03-14T00:12:40.117354628Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.9\"" Mar 14 00:12:40.264832 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Mar 14 00:12:40.275389 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 14 00:12:40.651537 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Mar 14 00:12:40.663753 (kubelet)[2649]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 14 00:12:40.761378 kubelet[2649]: E0314 00:12:40.760201 2649 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 14 00:12:40.767998 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 14 00:12:40.768468 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 14 00:12:40.854427 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3026701187.mount: Deactivated successfully. Mar 14 00:12:43.063941 containerd[2131]: time="2026-03-14T00:12:43.063878023Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:12:43.066864 containerd[2131]: time="2026-03-14T00:12:43.066123859Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.9: active requests=0, bytes read=27390174" Mar 14 00:12:43.066864 containerd[2131]: time="2026-03-14T00:12:43.066794923Z" level=info msg="ImageCreate event name:\"sha256:6dbc3c6e88c8bca1294fa5fafe73dbe01fb58d40e562dbfc8b8b4195940270c8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:12:43.073083 containerd[2131]: time="2026-03-14T00:12:43.072928591Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:a1fe354f8b36dbce37fef26c3731e2376fb8eb7375e7df3068df7ad11656f022\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:12:43.075570 containerd[2131]: time="2026-03-14T00:12:43.075516211Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.9\" with image id 
\"sha256:6dbc3c6e88c8bca1294fa5fafe73dbe01fb58d40e562dbfc8b8b4195940270c8\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:a1fe354f8b36dbce37fef26c3731e2376fb8eb7375e7df3068df7ad11656f022\", size \"27386773\" in 2.958102927s" Mar 14 00:12:43.076133 containerd[2131]: time="2026-03-14T00:12:43.075759739Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.9\" returns image reference \"sha256:6dbc3c6e88c8bca1294fa5fafe73dbe01fb58d40e562dbfc8b8b4195940270c8\"" Mar 14 00:12:43.077308 containerd[2131]: time="2026-03-14T00:12:43.077246467Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.9\"" Mar 14 00:12:45.395637 containerd[2131]: time="2026-03-14T00:12:45.395561554Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:12:45.397779 containerd[2131]: time="2026-03-14T00:12:45.397678546Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.9: active requests=0, bytes read=23552106" Mar 14 00:12:45.399554 containerd[2131]: time="2026-03-14T00:12:45.398731438Z" level=info msg="ImageCreate event name:\"sha256:c58be92c40cc41b6c83c361b92110b587104386f93c5b7a9fc66dffdd1523d17\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:12:45.405633 containerd[2131]: time="2026-03-14T00:12:45.405538618Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:a495c9f30cfd4d57ae6c27cb21e477b9b1ddebdace61762e80a06fe264a0d61a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:12:45.409762 containerd[2131]: time="2026-03-14T00:12:45.408863567Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.9\" with image id \"sha256:c58be92c40cc41b6c83c361b92110b587104386f93c5b7a9fc66dffdd1523d17\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.9\", repo digest 
\"registry.k8s.io/kube-controller-manager@sha256:a495c9f30cfd4d57ae6c27cb21e477b9b1ddebdace61762e80a06fe264a0d61a\", size \"25136510\" in 2.331550152s"
Mar 14 00:12:45.409762 containerd[2131]: time="2026-03-14T00:12:45.408944771Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.9\" returns image reference \"sha256:c58be92c40cc41b6c83c361b92110b587104386f93c5b7a9fc66dffdd1523d17\""
Mar 14 00:12:45.410497 containerd[2131]: time="2026-03-14T00:12:45.410448419Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.9\""
Mar 14 00:12:47.336920 containerd[2131]: time="2026-03-14T00:12:47.336850296Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:12:47.339304 containerd[2131]: time="2026-03-14T00:12:47.339235272Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.9: active requests=0, bytes read=18301305"
Mar 14 00:12:47.341340 containerd[2131]: time="2026-03-14T00:12:47.341212584Z" level=info msg="ImageCreate event name:\"sha256:5dcd4a0c93d95bd92241ba240a130ffbde67814e2b417a13c25738a7b0204e95\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:12:47.351970 containerd[2131]: time="2026-03-14T00:12:47.351699600Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:d1533368d3acd772e3d11225337a61be319b5ecf7523adeff7ebfe4107ab05b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:12:47.357572 containerd[2131]: time="2026-03-14T00:12:47.357495684Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.9\" with image id \"sha256:5dcd4a0c93d95bd92241ba240a130ffbde67814e2b417a13c25738a7b0204e95\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:d1533368d3acd772e3d11225337a61be319b5ecf7523adeff7ebfe4107ab05b5\", size \"19885727\" in 1.946794149s"
Mar 14 00:12:47.357933 containerd[2131]: time="2026-03-14T00:12:47.357752976Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.9\" returns image reference \"sha256:5dcd4a0c93d95bd92241ba240a130ffbde67814e2b417a13c25738a7b0204e95\""
Mar 14 00:12:47.358731 containerd[2131]: time="2026-03-14T00:12:47.358646208Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.9\""
Mar 14 00:12:48.942184 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount637498258.mount: Deactivated successfully.
Mar 14 00:12:49.621559 containerd[2131]: time="2026-03-14T00:12:49.621474927Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:12:49.624067 containerd[2131]: time="2026-03-14T00:12:49.623639031Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.9: active requests=0, bytes read=28148870"
Mar 14 00:12:49.626923 containerd[2131]: time="2026-03-14T00:12:49.626330091Z" level=info msg="ImageCreate event name:\"sha256:fb4f3cb8cccaec5975890c2ee802236a557e3f108da9c3c66ebec335ac73dcc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:12:49.631474 containerd[2131]: time="2026-03-14T00:12:49.631397919Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:079ba0e77e457dbf755e78bf3a6d736b7eb73d021fe53b853a0b82bbb2c17322\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:12:49.633427 containerd[2131]: time="2026-03-14T00:12:49.633365979Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.9\" with image id \"sha256:fb4f3cb8cccaec5975890c2ee802236a557e3f108da9c3c66ebec335ac73dcc9\", repo tag \"registry.k8s.io/kube-proxy:v1.33.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:079ba0e77e457dbf755e78bf3a6d736b7eb73d021fe53b853a0b82bbb2c17322\", size \"28147889\" in 2.274633011s"
Mar 14 00:12:49.633687 containerd[2131]: time="2026-03-14T00:12:49.633641583Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.9\" returns image reference \"sha256:fb4f3cb8cccaec5975890c2ee802236a557e3f108da9c3c66ebec335ac73dcc9\""
Mar 14 00:12:49.634920 containerd[2131]: time="2026-03-14T00:12:49.634874847Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\""
Mar 14 00:12:50.280010 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3302464478.mount: Deactivated successfully.
Mar 14 00:12:51.014854 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Mar 14 00:12:51.030417 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 14 00:12:51.411364 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 14 00:12:51.429174 (kubelet)[2793]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 14 00:12:51.544544 kubelet[2793]: E0314 00:12:51.543326 2793 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 14 00:12:51.552617 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 14 00:12:51.554126 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 14 00:12:52.096104 containerd[2131]: time="2026-03-14T00:12:52.095731660Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:12:52.098321 containerd[2131]: time="2026-03-14T00:12:52.098259256Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=19152117"
Mar 14 00:12:52.099077 containerd[2131]: time="2026-03-14T00:12:52.098769496Z" level=info msg="ImageCreate event name:\"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:12:52.105804 containerd[2131]: time="2026-03-14T00:12:52.105744064Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:12:52.110101 containerd[2131]: time="2026-03-14T00:12:52.108724624Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"19148915\" in 2.473476889s"
Mar 14 00:12:52.110101 containerd[2131]: time="2026-03-14T00:12:52.108805672Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\""
Mar 14 00:12:52.110101 containerd[2131]: time="2026-03-14T00:12:52.109724800Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Mar 14 00:12:52.590795 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3619627407.mount: Deactivated successfully.
Mar 14 00:12:52.599767 containerd[2131]: time="2026-03-14T00:12:52.598147062Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:12:52.601643 containerd[2131]: time="2026-03-14T00:12:52.601579062Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268703"
Mar 14 00:12:52.603507 containerd[2131]: time="2026-03-14T00:12:52.603442650Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:12:52.607945 containerd[2131]: time="2026-03-14T00:12:52.607865118Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:12:52.610301 containerd[2131]: time="2026-03-14T00:12:52.610236534Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 500.428526ms"
Mar 14 00:12:52.610574 containerd[2131]: time="2026-03-14T00:12:52.610528854Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\""
Mar 14 00:12:52.611736 containerd[2131]: time="2026-03-14T00:12:52.611677098Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\""
Mar 14 00:12:53.228667 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1067428704.mount: Deactivated successfully.
Mar 14 00:12:54.551594 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Mar 14 00:12:55.268469 containerd[2131]: time="2026-03-14T00:12:55.268370155Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.24-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:12:55.270947 containerd[2131]: time="2026-03-14T00:12:55.270855079Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.24-0: active requests=0, bytes read=21885780"
Mar 14 00:12:55.273835 containerd[2131]: time="2026-03-14T00:12:55.273158300Z" level=info msg="ImageCreate event name:\"sha256:1211402d28f5813ed906916bfcdd0a7404c2f9048ef5bb54387a6745bc410eca\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:12:55.280368 containerd[2131]: time="2026-03-14T00:12:55.280273844Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:12:55.284182 containerd[2131]: time="2026-03-14T00:12:55.283051856Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.24-0\" with image id \"sha256:1211402d28f5813ed906916bfcdd0a7404c2f9048ef5bb54387a6745bc410eca\", repo tag \"registry.k8s.io/etcd:3.5.24-0\", repo digest \"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\", size \"21882972\" in 2.671276082s"
Mar 14 00:12:55.284182 containerd[2131]: time="2026-03-14T00:12:55.283126784Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\" returns image reference \"sha256:1211402d28f5813ed906916bfcdd0a7404c2f9048ef5bb54387a6745bc410eca\""
Mar 14 00:13:01.764782 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Mar 14 00:13:01.774417 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 14 00:13:02.171501 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 14 00:13:02.181843 (kubelet)[2910]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 14 00:13:02.259998 kubelet[2910]: E0314 00:13:02.259930 2910 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 14 00:13:02.266509 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 14 00:13:02.266912 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 14 00:13:03.017115 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 14 00:13:03.025541 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 14 00:13:03.090218 systemd[1]: Reloading requested from client PID 2926 ('systemctl') (unit session-7.scope)...
Mar 14 00:13:03.090254 systemd[1]: Reloading...
Mar 14 00:13:03.300119 zram_generator::config[2969]: No configuration found.
Mar 14 00:13:03.591241 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 14 00:13:03.778172 systemd[1]: Reloading finished in 687 ms.
Mar 14 00:13:03.878243 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Mar 14 00:13:03.878830 systemd[1]: kubelet.service: Failed with result 'signal'.
Mar 14 00:13:03.879624 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 14 00:13:03.890551 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 14 00:13:04.686442 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 14 00:13:04.694246 (kubelet)[3041]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Mar 14 00:13:04.768006 kubelet[3041]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 14 00:13:04.768006 kubelet[3041]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Mar 14 00:13:04.768006 kubelet[3041]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 14 00:13:04.768006 kubelet[3041]: I0314 00:13:04.766553 3041 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Mar 14 00:13:06.489684 kubelet[3041]: I0314 00:13:06.489605 3041 server.go:530] "Kubelet version" kubeletVersion="v1.33.8"
Mar 14 00:13:06.489684 kubelet[3041]: I0314 00:13:06.489664 3041 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Mar 14 00:13:06.495096 kubelet[3041]: I0314 00:13:06.494579 3041 server.go:956] "Client rotation is on, will bootstrap in background"
Mar 14 00:13:06.546193 kubelet[3041]: E0314 00:13:06.546125 3041 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://172.31.24.247:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.24.247:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Mar 14 00:13:06.548173 kubelet[3041]: I0314 00:13:06.548114 3041 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Mar 14 00:13:06.567469 kubelet[3041]: E0314 00:13:06.567340 3041 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Mar 14 00:13:06.567469 kubelet[3041]: I0314 00:13:06.567453 3041 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Mar 14 00:13:06.574622 kubelet[3041]: I0314 00:13:06.574558 3041 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Mar 14 00:13:06.575591 kubelet[3041]: I0314 00:13:06.575529 3041 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Mar 14 00:13:06.575881 kubelet[3041]: I0314 00:13:06.575588 3041 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-24-247","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1}
Mar 14 00:13:06.575881 kubelet[3041]: I0314 00:13:06.575880 3041 topology_manager.go:138] "Creating topology manager with none policy"
Mar 14 00:13:06.576170 kubelet[3041]: I0314 00:13:06.575903 3041 container_manager_linux.go:303] "Creating device plugin manager"
Mar 14 00:13:06.576322 kubelet[3041]: I0314 00:13:06.576283 3041 state_mem.go:36] "Initialized new in-memory state store"
Mar 14 00:13:06.582430 kubelet[3041]: I0314 00:13:06.582354 3041 kubelet.go:480] "Attempting to sync node with API server"
Mar 14 00:13:06.582430 kubelet[3041]: I0314 00:13:06.582432 3041 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
Mar 14 00:13:06.584116 kubelet[3041]: I0314 00:13:06.583427 3041 kubelet.go:386] "Adding apiserver pod source"
Mar 14 00:13:06.585909 kubelet[3041]: I0314 00:13:06.585845 3041 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Mar 14 00:13:06.594749 kubelet[3041]: E0314 00:13:06.594688 3041 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.31.24.247:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-24-247&limit=500&resourceVersion=0\": dial tcp 172.31.24.247:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Mar 14 00:13:06.596173 kubelet[3041]: I0314 00:13:06.595186 3041 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Mar 14 00:13:06.596616 kubelet[3041]: I0314 00:13:06.596582 3041 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Mar 14 00:13:06.597009 kubelet[3041]: W0314 00:13:06.596985 3041 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Mar 14 00:13:06.608969 kubelet[3041]: I0314 00:13:06.608914 3041 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Mar 14 00:13:06.609311 kubelet[3041]: I0314 00:13:06.609273 3041 server.go:1289] "Started kubelet"
Mar 14 00:13:06.613152 kubelet[3041]: E0314 00:13:06.613012 3041 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.31.24.247:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.24.247:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Mar 14 00:13:06.613335 kubelet[3041]: I0314 00:13:06.613163 3041 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Mar 14 00:13:06.620081 kubelet[3041]: I0314 00:13:06.619382 3041 server.go:317] "Adding debug handlers to kubelet server"
Mar 14 00:13:06.620081 kubelet[3041]: I0314 00:13:06.619361 3041 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Mar 14 00:13:06.620081 kubelet[3041]: I0314 00:13:06.620002 3041 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Mar 14 00:13:06.628643 kubelet[3041]: I0314 00:13:06.628556 3041 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Mar 14 00:13:06.632099 kubelet[3041]: E0314 00:13:06.627856 3041 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.24.247:6443/api/v1/namespaces/default/events\": dial tcp 172.31.24.247:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-24-247.189c8cd1f1c22068 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-24-247,UID:ip-172-31-24-247,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-24-247,},FirstTimestamp:2026-03-14 00:13:06.60921764 +0000 UTC m=+1.907734835,LastTimestamp:2026-03-14 00:13:06.60921764 +0000 UTC m=+1.907734835,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-24-247,}"
Mar 14 00:13:06.632677 kubelet[3041]: I0314 00:13:06.632634 3041 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Mar 14 00:13:06.637243 kubelet[3041]: I0314 00:13:06.637203 3041 volume_manager.go:297] "Starting Kubelet Volume Manager"
Mar 14 00:13:06.637852 kubelet[3041]: E0314 00:13:06.637792 3041 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-24-247\" not found"
Mar 14 00:13:06.639844 kubelet[3041]: I0314 00:13:06.639118 3041 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Mar 14 00:13:06.639844 kubelet[3041]: I0314 00:13:06.639229 3041 reconciler.go:26] "Reconciler: start to sync state"
Mar 14 00:13:06.641900 kubelet[3041]: E0314 00:13:06.641821 3041 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.31.24.247:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.24.247:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Mar 14 00:13:06.642195 kubelet[3041]: E0314 00:13:06.642077 3041 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.24.247:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-24-247?timeout=10s\": dial tcp 172.31.24.247:6443: connect: connection refused" interval="200ms"
Mar 14 00:13:06.644328 kubelet[3041]: E0314 00:13:06.644278 3041 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Mar 14 00:13:06.650515 kubelet[3041]: I0314 00:13:06.650451 3041 factory.go:223] Registration of the containerd container factory successfully
Mar 14 00:13:06.650515 kubelet[3041]: I0314 00:13:06.650492 3041 factory.go:223] Registration of the systemd container factory successfully
Mar 14 00:13:06.650696 kubelet[3041]: I0314 00:13:06.650665 3041 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Mar 14 00:13:06.709776 kubelet[3041]: I0314 00:13:06.709737 3041 cpu_manager.go:221] "Starting CPU manager" policy="none"
Mar 14 00:13:06.710022 kubelet[3041]: I0314 00:13:06.709989 3041 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Mar 14 00:13:06.710778 kubelet[3041]: I0314 00:13:06.710745 3041 state_mem.go:36] "Initialized new in-memory state store"
Mar 14 00:13:06.711575 kubelet[3041]: I0314 00:13:06.711498 3041 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
Mar 14 00:13:06.713983 kubelet[3041]: I0314 00:13:06.713460 3041 policy_none.go:49] "None policy: Start"
Mar 14 00:13:06.713983 kubelet[3041]: I0314 00:13:06.713526 3041 memory_manager.go:186] "Starting memorymanager" policy="None"
Mar 14 00:13:06.713983 kubelet[3041]: I0314 00:13:06.713558 3041 state_mem.go:35] "Initializing new in-memory state store"
Mar 14 00:13:06.714292 kubelet[3041]: I0314 00:13:06.714005 3041 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
Mar 14 00:13:06.714292 kubelet[3041]: I0314 00:13:06.714083 3041 status_manager.go:230] "Starting to sync pod status with apiserver"
Mar 14 00:13:06.714292 kubelet[3041]: I0314 00:13:06.714120 3041 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Mar 14 00:13:06.714292 kubelet[3041]: I0314 00:13:06.714134 3041 kubelet.go:2436] "Starting kubelet main sync loop"
Mar 14 00:13:06.714292 kubelet[3041]: E0314 00:13:06.714209 3041 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Mar 14 00:13:06.723989 kubelet[3041]: E0314 00:13:06.723934 3041 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Mar 14 00:13:06.729745 kubelet[3041]: E0314 00:13:06.729678 3041 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.31.24.247:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.24.247:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Mar 14 00:13:06.730598 kubelet[3041]: I0314 00:13:06.730546 3041 eviction_manager.go:189] "Eviction manager: starting control loop"
Mar 14 00:13:06.730791 kubelet[3041]: I0314 00:13:06.730588 3041 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Mar 14 00:13:06.737438 kubelet[3041]: I0314 00:13:06.737381 3041 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Mar 14 00:13:06.743303 kubelet[3041]: E0314 00:13:06.741703 3041 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Mar 14 00:13:06.743303 kubelet[3041]: E0314 00:13:06.741796 3041 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-24-247\" not found"
Mar 14 00:13:06.828079 kubelet[3041]: E0314 00:13:06.827519 3041 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-24-247\" not found" node="ip-172-31-24-247"
Mar 14 00:13:06.833840 kubelet[3041]: I0314 00:13:06.833774 3041 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-24-247"
Mar 14 00:13:06.837015 kubelet[3041]: E0314 00:13:06.835084 3041 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.24.247:6443/api/v1/nodes\": dial tcp 172.31.24.247:6443: connect: connection refused" node="ip-172-31-24-247"
Mar 14 00:13:06.841535 kubelet[3041]: E0314 00:13:06.841461 3041 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-24-247\" not found" node="ip-172-31-24-247"
Mar 14 00:13:06.844360 kubelet[3041]: E0314 00:13:06.844303 3041 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.24.247:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-24-247?timeout=10s\": dial tcp 172.31.24.247:6443: connect: connection refused" interval="400ms"
Mar 14 00:13:06.853810 kubelet[3041]: E0314 00:13:06.853755 3041 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-24-247\" not found" node="ip-172-31-24-247"
Mar 14 00:13:06.940786 kubelet[3041]: I0314 00:13:06.940728 3041 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7c122da292975f7c5b4c8b9bd6ee2c07-kubeconfig\") pod \"kube-scheduler-ip-172-31-24-247\" (UID: \"7c122da292975f7c5b4c8b9bd6ee2c07\") " pod="kube-system/kube-scheduler-ip-172-31-24-247"
Mar 14 00:13:06.941252 kubelet[3041]: I0314 00:13:06.941211 3041 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2b1a2b988bfa7ac0c3a3e2d33e17c2d4-ca-certs\") pod \"kube-apiserver-ip-172-31-24-247\" (UID: \"2b1a2b988bfa7ac0c3a3e2d33e17c2d4\") " pod="kube-system/kube-apiserver-ip-172-31-24-247"
Mar 14 00:13:06.941676 kubelet[3041]: I0314 00:13:06.941574 3041 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/6e2b0d10ecb1403d9e96074120f3f742-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-24-247\" (UID: \"6e2b0d10ecb1403d9e96074120f3f742\") " pod="kube-system/kube-controller-manager-ip-172-31-24-247"
Mar 14 00:13:06.941977 kubelet[3041]: I0314 00:13:06.941889 3041 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2b1a2b988bfa7ac0c3a3e2d33e17c2d4-k8s-certs\") pod \"kube-apiserver-ip-172-31-24-247\" (UID: \"2b1a2b988bfa7ac0c3a3e2d33e17c2d4\") " pod="kube-system/kube-apiserver-ip-172-31-24-247"
Mar 14 00:13:06.942277 kubelet[3041]: I0314 00:13:06.942195 3041 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2b1a2b988bfa7ac0c3a3e2d33e17c2d4-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-24-247\" (UID: \"2b1a2b988bfa7ac0c3a3e2d33e17c2d4\") " pod="kube-system/kube-apiserver-ip-172-31-24-247"
Mar 14 00:13:06.942517 kubelet[3041]: I0314 00:13:06.942437 3041 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6e2b0d10ecb1403d9e96074120f3f742-ca-certs\") pod \"kube-controller-manager-ip-172-31-24-247\" (UID: \"6e2b0d10ecb1403d9e96074120f3f742\") " pod="kube-system/kube-controller-manager-ip-172-31-24-247"
Mar 14 00:13:06.942786 kubelet[3041]: I0314 00:13:06.942704 3041 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6e2b0d10ecb1403d9e96074120f3f742-k8s-certs\") pod \"kube-controller-manager-ip-172-31-24-247\" (UID: \"6e2b0d10ecb1403d9e96074120f3f742\") " pod="kube-system/kube-controller-manager-ip-172-31-24-247"
Mar 14 00:13:06.943095 kubelet[3041]: I0314 00:13:06.942977 3041 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6e2b0d10ecb1403d9e96074120f3f742-kubeconfig\") pod \"kube-controller-manager-ip-172-31-24-247\" (UID: \"6e2b0d10ecb1403d9e96074120f3f742\") " pod="kube-system/kube-controller-manager-ip-172-31-24-247"
Mar 14 00:13:06.943362 kubelet[3041]: I0314 00:13:06.943283 3041 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6e2b0d10ecb1403d9e96074120f3f742-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-24-247\" (UID: \"6e2b0d10ecb1403d9e96074120f3f742\") " pod="kube-system/kube-controller-manager-ip-172-31-24-247"
Mar 14 00:13:07.039588 kubelet[3041]: I0314 00:13:07.038858 3041 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-24-247"
Mar 14 00:13:07.039588 kubelet[3041]: E0314 00:13:07.039489 3041 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.24.247:6443/api/v1/nodes\": dial tcp 172.31.24.247:6443: connect: connection refused" node="ip-172-31-24-247"
Mar 14 00:13:07.129628 containerd[2131]: time="2026-03-14T00:13:07.129556674Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-24-247,Uid:7c122da292975f7c5b4c8b9bd6ee2c07,Namespace:kube-system,Attempt:0,}"
Mar 14 00:13:07.145410 containerd[2131]: time="2026-03-14T00:13:07.144992322Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-24-247,Uid:2b1a2b988bfa7ac0c3a3e2d33e17c2d4,Namespace:kube-system,Attempt:0,}"
Mar 14 00:13:07.156289 containerd[2131]: time="2026-03-14T00:13:07.156183307Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-24-247,Uid:6e2b0d10ecb1403d9e96074120f3f742,Namespace:kube-system,Attempt:0,}"
Mar 14 00:13:07.245988 kubelet[3041]: E0314 00:13:07.245858 3041 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.24.247:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-24-247?timeout=10s\": dial tcp 172.31.24.247:6443: connect: connection refused" interval="800ms"
Mar 14 00:13:07.442250 kubelet[3041]: I0314 00:13:07.442106 3041 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-24-247"
Mar 14 00:13:07.443322 kubelet[3041]: E0314 00:13:07.443202 3041 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.24.247:6443/api/v1/nodes\": dial tcp 172.31.24.247:6443: connect: connection refused" node="ip-172-31-24-247"
Mar 14 00:13:07.644188 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1556462642.mount: Deactivated successfully.
Mar 14 00:13:07.652816 containerd[2131]: time="2026-03-14T00:13:07.652680537Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 14 00:13:07.655654 containerd[2131]: time="2026-03-14T00:13:07.655562709Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 14 00:13:07.664565 containerd[2131]: time="2026-03-14T00:13:07.664464393Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 14 00:13:07.665284 containerd[2131]: time="2026-03-14T00:13:07.665177121Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Mar 14 00:13:07.668534 containerd[2131]: time="2026-03-14T00:13:07.668344077Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Mar 14 00:13:07.668534 containerd[2131]: time="2026-03-14T00:13:07.668426289Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173"
Mar 14 00:13:07.672098 containerd[2131]: time="2026-03-14T00:13:07.670449153Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 14 00:13:07.680532 containerd[2131]: time="2026-03-14T00:13:07.680445969Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 14 00:13:07.684632 containerd[2131]: time="2026-03-14T00:13:07.684546777Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 554.862183ms"
Mar 14 00:13:07.689858 containerd[2131]: time="2026-03-14T00:13:07.689788449Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 544.639695ms"
Mar 14 00:13:07.693778 containerd[2131]: time="2026-03-14T00:13:07.692838609Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 536.526386ms"
Mar 14 00:13:07.711080 kubelet[3041]: E0314 00:13:07.709830 3041 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.31.24.247:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.24.247:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Mar 14 00:13:07.723498 kubelet[3041]: E0314 00:13:07.722965 3041 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.31.24.247:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.24.247:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Mar 14 00:13:07.739752 update_engine[2106]: I20260314 00:13:07.739658 2106 update_attempter.cc:509] Updating boot flags...
Mar 14 00:13:07.843164 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 37 scanned by (udev-worker) (3097)
Mar 14 00:13:07.876391 kubelet[3041]: E0314 00:13:07.876301 3041 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.31.24.247:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.24.247:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Mar 14 00:13:07.909087 kubelet[3041]: E0314 00:13:07.907994 3041 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.31.24.247:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-24-247&limit=500&resourceVersion=0\": dial tcp 172.31.24.247:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Mar 14 00:13:08.052094 kubelet[3041]: E0314 00:13:08.051782 3041 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.24.247:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-24-247?timeout=10s\": dial tcp 172.31.24.247:6443: connect: connection refused" interval="1.6s"
Mar 14 00:13:08.124174 containerd[2131]: time="2026-03-14T00:13:08.123643627Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 14 00:13:08.124174 containerd[2131]: time="2026-03-14T00:13:08.123784783Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 14 00:13:08.124174 containerd[2131]: time="2026-03-14T00:13:08.123818179Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..."
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:13:08.130201 containerd[2131]: time="2026-03-14T00:13:08.129829771Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:13:08.144256 containerd[2131]: time="2026-03-14T00:13:08.142927003Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 14 00:13:08.150931 containerd[2131]: time="2026-03-14T00:13:08.147324775Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 14 00:13:08.150931 containerd[2131]: time="2026-03-14T00:13:08.147818263Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:13:08.150931 containerd[2131]: time="2026-03-14T00:13:08.145022071Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 14 00:13:08.154741 containerd[2131]: time="2026-03-14T00:13:08.154568995Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:13:08.156085 containerd[2131]: time="2026-03-14T00:13:08.155247043Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 14 00:13:08.156085 containerd[2131]: time="2026-03-14T00:13:08.155318251Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:13:08.158231 containerd[2131]: time="2026-03-14T00:13:08.155540287Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:13:08.254853 kubelet[3041]: I0314 00:13:08.254615 3041 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-24-247" Mar 14 00:13:08.259080 kubelet[3041]: E0314 00:13:08.258676 3041 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.24.247:6443/api/v1/nodes\": dial tcp 172.31.24.247:6443: connect: connection refused" node="ip-172-31-24-247" Mar 14 00:13:08.364108 containerd[2131]: time="2026-03-14T00:13:08.363192825Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-24-247,Uid:2b1a2b988bfa7ac0c3a3e2d33e17c2d4,Namespace:kube-system,Attempt:0,} returns sandbox id \"ddf249c227de4341aee1f86ee90e2b8e5a9d3c51470c9c2c77b41ff3762828de\"" Mar 14 00:13:08.374627 containerd[2131]: time="2026-03-14T00:13:08.374565429Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-24-247,Uid:7c122da292975f7c5b4c8b9bd6ee2c07,Namespace:kube-system,Attempt:0,} returns sandbox id \"cde0e43e856853a0b311fbe76837ac7dc368b028f6a0b53bf30944d1711b4723\"" Mar 14 00:13:08.381308 containerd[2131]: time="2026-03-14T00:13:08.381239577Z" level=info msg="CreateContainer within sandbox \"ddf249c227de4341aee1f86ee90e2b8e5a9d3c51470c9c2c77b41ff3762828de\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Mar 14 00:13:08.386645 containerd[2131]: time="2026-03-14T00:13:08.386392341Z" level=info msg="CreateContainer within sandbox \"cde0e43e856853a0b311fbe76837ac7dc368b028f6a0b53bf30944d1711b4723\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Mar 14 00:13:08.402535 containerd[2131]: time="2026-03-14T00:13:08.402457845Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-24-247,Uid:6e2b0d10ecb1403d9e96074120f3f742,Namespace:kube-system,Attempt:0,} returns sandbox id \"c9a1187bfbd24a8b39330214835805d9caa9944b7aeecbb9e91522bbaa69e8fb\"" 
Mar 14 00:13:08.413769 containerd[2131]: time="2026-03-14T00:13:08.413696193Z" level=info msg="CreateContainer within sandbox \"ddf249c227de4341aee1f86ee90e2b8e5a9d3c51470c9c2c77b41ff3762828de\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"944fe34c8bda085aa6daa99a91a1bf90e2c702bb0a197938cb27eef150b8926b\"" Mar 14 00:13:08.415533 containerd[2131]: time="2026-03-14T00:13:08.415463133Z" level=info msg="StartContainer for \"944fe34c8bda085aa6daa99a91a1bf90e2c702bb0a197938cb27eef150b8926b\"" Mar 14 00:13:08.417466 containerd[2131]: time="2026-03-14T00:13:08.417359937Z" level=info msg="CreateContainer within sandbox \"c9a1187bfbd24a8b39330214835805d9caa9944b7aeecbb9e91522bbaa69e8fb\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Mar 14 00:13:08.421476 containerd[2131]: time="2026-03-14T00:13:08.421396713Z" level=info msg="CreateContainer within sandbox \"cde0e43e856853a0b311fbe76837ac7dc368b028f6a0b53bf30944d1711b4723\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"c4886083bc7ca292ef6524379713edce293d166ac9dc7e8d3ebf0d89eb81b75e\"" Mar 14 00:13:08.423336 containerd[2131]: time="2026-03-14T00:13:08.423204705Z" level=info msg="StartContainer for \"c4886083bc7ca292ef6524379713edce293d166ac9dc7e8d3ebf0d89eb81b75e\"" Mar 14 00:13:08.453430 containerd[2131]: time="2026-03-14T00:13:08.453210705Z" level=info msg="CreateContainer within sandbox \"c9a1187bfbd24a8b39330214835805d9caa9944b7aeecbb9e91522bbaa69e8fb\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"58566ec2d83091d1d59eff6cdfb7213482e4da71ac56a8ff8932f9e5b8737807\"" Mar 14 00:13:08.454882 containerd[2131]: time="2026-03-14T00:13:08.454716093Z" level=info msg="StartContainer for \"58566ec2d83091d1d59eff6cdfb7213482e4da71ac56a8ff8932f9e5b8737807\"" Mar 14 00:13:08.635005 kubelet[3041]: E0314 00:13:08.633673 3041 certificate_manager.go:596] "Failed while requesting a signed certificate from 
the control plane" err="cannot create certificate signing request: Post \"https://172.31.24.247:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.24.247:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Mar 14 00:13:08.667458 containerd[2131]: time="2026-03-14T00:13:08.667402066Z" level=info msg="StartContainer for \"944fe34c8bda085aa6daa99a91a1bf90e2c702bb0a197938cb27eef150b8926b\" returns successfully" Mar 14 00:13:08.685900 containerd[2131]: time="2026-03-14T00:13:08.685549966Z" level=info msg="StartContainer for \"c4886083bc7ca292ef6524379713edce293d166ac9dc7e8d3ebf0d89eb81b75e\" returns successfully" Mar 14 00:13:08.704744 containerd[2131]: time="2026-03-14T00:13:08.704664766Z" level=info msg="StartContainer for \"58566ec2d83091d1d59eff6cdfb7213482e4da71ac56a8ff8932f9e5b8737807\" returns successfully" Mar 14 00:13:08.775256 kubelet[3041]: E0314 00:13:08.774643 3041 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-24-247\" not found" node="ip-172-31-24-247" Mar 14 00:13:08.794405 kubelet[3041]: E0314 00:13:08.793521 3041 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-24-247\" not found" node="ip-172-31-24-247" Mar 14 00:13:08.801125 kubelet[3041]: E0314 00:13:08.800569 3041 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-24-247\" not found" node="ip-172-31-24-247" Mar 14 00:13:09.805233 kubelet[3041]: E0314 00:13:09.804870 3041 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-24-247\" not found" node="ip-172-31-24-247" Mar 14 00:13:09.812941 kubelet[3041]: E0314 00:13:09.809946 3041 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node 
\"ip-172-31-24-247\" not found" node="ip-172-31-24-247" Mar 14 00:13:09.812941 kubelet[3041]: E0314 00:13:09.810882 3041 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-24-247\" not found" node="ip-172-31-24-247" Mar 14 00:13:09.868527 kubelet[3041]: I0314 00:13:09.867711 3041 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-24-247" Mar 14 00:13:12.036064 kubelet[3041]: E0314 00:13:12.035502 3041 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-24-247\" not found" node="ip-172-31-24-247" Mar 14 00:13:12.135120 kubelet[3041]: E0314 00:13:12.133280 3041 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ip-172-31-24-247.189c8cd1f1c22068 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-24-247,UID:ip-172-31-24-247,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-24-247,},FirstTimestamp:2026-03-14 00:13:06.60921764 +0000 UTC m=+1.907734835,LastTimestamp:2026-03-14 00:13:06.60921764 +0000 UTC m=+1.907734835,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-24-247,}" Mar 14 00:13:12.148085 kubelet[3041]: I0314 00:13:12.146729 3041 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-24-247" Mar 14 00:13:12.239295 kubelet[3041]: I0314 00:13:12.239245 3041 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-24-247" Mar 14 00:13:12.291094 kubelet[3041]: E0314 00:13:12.290333 3041 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-24-247\" is forbidden: no PriorityClass with name 
system-node-critical was found" pod="kube-system/kube-scheduler-ip-172-31-24-247" Mar 14 00:13:12.291094 kubelet[3041]: I0314 00:13:12.290389 3041 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-24-247" Mar 14 00:13:12.301394 kubelet[3041]: E0314 00:13:12.300574 3041 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-24-247\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ip-172-31-24-247" Mar 14 00:13:12.301394 kubelet[3041]: I0314 00:13:12.300622 3041 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-24-247" Mar 14 00:13:12.319172 kubelet[3041]: E0314 00:13:12.318963 3041 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ip-172-31-24-247\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ip-172-31-24-247" Mar 14 00:13:12.609791 kubelet[3041]: I0314 00:13:12.609587 3041 apiserver.go:52] "Watching apiserver" Mar 14 00:13:12.641726 kubelet[3041]: I0314 00:13:12.641667 3041 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Mar 14 00:13:14.528167 systemd[1]: Reloading requested from client PID 3423 ('systemctl') (unit session-7.scope)... Mar 14 00:13:14.528201 systemd[1]: Reloading... Mar 14 00:13:14.732110 zram_generator::config[3475]: No configuration found. Mar 14 00:13:14.992943 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 14 00:13:15.192164 kubelet[3041]: I0314 00:13:15.191056 3041 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-24-247" Mar 14 00:13:15.197740 systemd[1]: Reloading finished in 668 ms. 
Mar 14 00:13:15.258252 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Mar 14 00:13:15.276698 systemd[1]: kubelet.service: Deactivated successfully. Mar 14 00:13:15.278419 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 14 00:13:15.289217 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 14 00:13:15.693420 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 14 00:13:15.705744 (kubelet)[3533]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 14 00:13:15.806633 kubelet[3533]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 14 00:13:15.807891 kubelet[3533]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Mar 14 00:13:15.807891 kubelet[3533]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Mar 14 00:13:15.807891 kubelet[3533]: I0314 00:13:15.807418 3533 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 14 00:13:15.826064 kubelet[3533]: I0314 00:13:15.824190 3533 server.go:530] "Kubelet version" kubeletVersion="v1.33.8" Mar 14 00:13:15.826064 kubelet[3533]: I0314 00:13:15.824233 3533 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 14 00:13:15.826064 kubelet[3533]: I0314 00:13:15.824661 3533 server.go:956] "Client rotation is on, will bootstrap in background" Mar 14 00:13:15.827658 kubelet[3533]: I0314 00:13:15.827592 3533 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Mar 14 00:13:15.845651 kubelet[3533]: I0314 00:13:15.845565 3533 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 14 00:13:15.854684 kubelet[3533]: E0314 00:13:15.854609 3533 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Mar 14 00:13:15.854684 kubelet[3533]: I0314 00:13:15.854677 3533 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Mar 14 00:13:15.867440 kubelet[3533]: I0314 00:13:15.862240 3533 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Mar 14 00:13:15.867440 kubelet[3533]: I0314 00:13:15.863204 3533 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 14 00:13:15.867440 kubelet[3533]: I0314 00:13:15.863247 3533 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-24-247","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Mar 14 00:13:15.867440 kubelet[3533]: I0314 00:13:15.863492 3533 topology_manager.go:138] "Creating topology manager with none policy" Mar 14 
00:13:15.866992 sudo[3547]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Mar 14 00:13:15.868394 kubelet[3533]: I0314 00:13:15.863509 3533 container_manager_linux.go:303] "Creating device plugin manager" Mar 14 00:13:15.868394 kubelet[3533]: I0314 00:13:15.863588 3533 state_mem.go:36] "Initialized new in-memory state store" Mar 14 00:13:15.868394 kubelet[3533]: I0314 00:13:15.863843 3533 kubelet.go:480] "Attempting to sync node with API server" Mar 14 00:13:15.868394 kubelet[3533]: I0314 00:13:15.863869 3533 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 14 00:13:15.868394 kubelet[3533]: I0314 00:13:15.863917 3533 kubelet.go:386] "Adding apiserver pod source" Mar 14 00:13:15.868394 kubelet[3533]: I0314 00:13:15.863944 3533 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 14 00:13:15.869321 sudo[3547]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Mar 14 00:13:15.892007 kubelet[3533]: I0314 00:13:15.891966 3533 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Mar 14 00:13:15.896050 kubelet[3533]: I0314 00:13:15.894993 3533 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Mar 14 00:13:15.902512 kubelet[3533]: I0314 00:13:15.902480 3533 watchdog_linux.go:99] "Systemd watchdog is not enabled" Mar 14 00:13:15.902700 kubelet[3533]: I0314 00:13:15.902682 3533 server.go:1289] "Started kubelet" Mar 14 00:13:15.904312 kubelet[3533]: I0314 00:13:15.904263 3533 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Mar 14 00:13:15.906014 kubelet[3533]: I0314 00:13:15.905981 3533 server.go:317] "Adding debug handlers to kubelet server" Mar 14 00:13:15.913883 kubelet[3533]: I0314 00:13:15.913839 3533 fs_resource_analyzer.go:67] "Starting FS 
ResourceAnalyzer" Mar 14 00:13:15.924379 kubelet[3533]: I0314 00:13:15.924331 3533 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Mar 14 00:13:15.927808 kubelet[3533]: I0314 00:13:15.927714 3533 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 14 00:13:15.930059 kubelet[3533]: I0314 00:13:15.929384 3533 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 14 00:13:15.931894 kubelet[3533]: I0314 00:13:15.931861 3533 volume_manager.go:297] "Starting Kubelet Volume Manager" Mar 14 00:13:15.934560 kubelet[3533]: E0314 00:13:15.934515 3533 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-24-247\" not found" Mar 14 00:13:15.936837 kubelet[3533]: I0314 00:13:15.936788 3533 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Mar 14 00:13:15.946148 kubelet[3533]: I0314 00:13:15.946007 3533 reconciler.go:26] "Reconciler: start to sync state" Mar 14 00:13:15.949785 kubelet[3533]: I0314 00:13:15.949670 3533 factory.go:223] Registration of the systemd container factory successfully Mar 14 00:13:15.951083 kubelet[3533]: I0314 00:13:15.950402 3533 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 14 00:13:15.959838 kubelet[3533]: I0314 00:13:15.959499 3533 factory.go:223] Registration of the containerd container factory successfully Mar 14 00:13:16.019976 kubelet[3533]: I0314 00:13:16.019904 3533 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Mar 14 00:13:16.034317 kubelet[3533]: I0314 00:13:16.034271 3533 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv6" Mar 14 00:13:16.035228 kubelet[3533]: I0314 00:13:16.034519 3533 status_manager.go:230] "Starting to sync pod status with apiserver" Mar 14 00:13:16.035228 kubelet[3533]: I0314 00:13:16.034567 3533 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Mar 14 00:13:16.035228 kubelet[3533]: I0314 00:13:16.034585 3533 kubelet.go:2436] "Starting kubelet main sync loop" Mar 14 00:13:16.035228 kubelet[3533]: E0314 00:13:16.034677 3533 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 14 00:13:16.134962 kubelet[3533]: E0314 00:13:16.134912 3533 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Mar 14 00:13:16.222232 kubelet[3533]: I0314 00:13:16.220065 3533 cpu_manager.go:221] "Starting CPU manager" policy="none" Mar 14 00:13:16.222232 kubelet[3533]: I0314 00:13:16.220103 3533 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Mar 14 00:13:16.222232 kubelet[3533]: I0314 00:13:16.220143 3533 state_mem.go:36] "Initialized new in-memory state store" Mar 14 00:13:16.222232 kubelet[3533]: I0314 00:13:16.220391 3533 state_mem.go:88] "Updated default CPUSet" cpuSet="" Mar 14 00:13:16.222232 kubelet[3533]: I0314 00:13:16.220414 3533 state_mem.go:96] "Updated CPUSet assignments" assignments={} Mar 14 00:13:16.222232 kubelet[3533]: I0314 00:13:16.220452 3533 policy_none.go:49] "None policy: Start" Mar 14 00:13:16.222232 kubelet[3533]: I0314 00:13:16.220474 3533 memory_manager.go:186] "Starting memorymanager" policy="None" Mar 14 00:13:16.222232 kubelet[3533]: I0314 00:13:16.220501 3533 state_mem.go:35] "Initializing new in-memory state store" Mar 14 00:13:16.222232 kubelet[3533]: I0314 00:13:16.220703 3533 state_mem.go:75] "Updated machine memory state" Mar 14 00:13:16.225047 kubelet[3533]: E0314 00:13:16.224893 
3533 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Mar 14 00:13:16.225526 kubelet[3533]: I0314 00:13:16.225489 3533 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 14 00:13:16.225756 kubelet[3533]: I0314 00:13:16.225695 3533 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 14 00:13:16.230161 kubelet[3533]: I0314 00:13:16.230113 3533 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 14 00:13:16.240639 kubelet[3533]: E0314 00:13:16.240567 3533 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Mar 14 00:13:16.339083 kubelet[3533]: I0314 00:13:16.338388 3533 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-24-247" Mar 14 00:13:16.339347 kubelet[3533]: I0314 00:13:16.338388 3533 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-24-247" Mar 14 00:13:16.343263 kubelet[3533]: I0314 00:13:16.343217 3533 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-24-247" Mar 14 00:13:16.352058 kubelet[3533]: I0314 00:13:16.350958 3533 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2b1a2b988bfa7ac0c3a3e2d33e17c2d4-k8s-certs\") pod \"kube-apiserver-ip-172-31-24-247\" (UID: \"2b1a2b988bfa7ac0c3a3e2d33e17c2d4\") " pod="kube-system/kube-apiserver-ip-172-31-24-247" Mar 14 00:13:16.352058 kubelet[3533]: I0314 00:13:16.351242 3533 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6e2b0d10ecb1403d9e96074120f3f742-ca-certs\") pod \"kube-controller-manager-ip-172-31-24-247\" 
(UID: \"6e2b0d10ecb1403d9e96074120f3f742\") " pod="kube-system/kube-controller-manager-ip-172-31-24-247" Mar 14 00:13:16.352058 kubelet[3533]: I0314 00:13:16.351314 3533 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/6e2b0d10ecb1403d9e96074120f3f742-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-24-247\" (UID: \"6e2b0d10ecb1403d9e96074120f3f742\") " pod="kube-system/kube-controller-manager-ip-172-31-24-247" Mar 14 00:13:16.352058 kubelet[3533]: I0314 00:13:16.351355 3533 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6e2b0d10ecb1403d9e96074120f3f742-kubeconfig\") pod \"kube-controller-manager-ip-172-31-24-247\" (UID: \"6e2b0d10ecb1403d9e96074120f3f742\") " pod="kube-system/kube-controller-manager-ip-172-31-24-247" Mar 14 00:13:16.352058 kubelet[3533]: I0314 00:13:16.351392 3533 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7c122da292975f7c5b4c8b9bd6ee2c07-kubeconfig\") pod \"kube-scheduler-ip-172-31-24-247\" (UID: \"7c122da292975f7c5b4c8b9bd6ee2c07\") " pod="kube-system/kube-scheduler-ip-172-31-24-247" Mar 14 00:13:16.352442 kubelet[3533]: I0314 00:13:16.351430 3533 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2b1a2b988bfa7ac0c3a3e2d33e17c2d4-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-24-247\" (UID: \"2b1a2b988bfa7ac0c3a3e2d33e17c2d4\") " pod="kube-system/kube-apiserver-ip-172-31-24-247" Mar 14 00:13:16.352442 kubelet[3533]: I0314 00:13:16.351468 3533 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/6e2b0d10ecb1403d9e96074120f3f742-k8s-certs\") pod \"kube-controller-manager-ip-172-31-24-247\" (UID: \"6e2b0d10ecb1403d9e96074120f3f742\") " pod="kube-system/kube-controller-manager-ip-172-31-24-247" Mar 14 00:13:16.352442 kubelet[3533]: I0314 00:13:16.351508 3533 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6e2b0d10ecb1403d9e96074120f3f742-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-24-247\" (UID: \"6e2b0d10ecb1403d9e96074120f3f742\") " pod="kube-system/kube-controller-manager-ip-172-31-24-247" Mar 14 00:13:16.352442 kubelet[3533]: I0314 00:13:16.351745 3533 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2b1a2b988bfa7ac0c3a3e2d33e17c2d4-ca-certs\") pod \"kube-apiserver-ip-172-31-24-247\" (UID: \"2b1a2b988bfa7ac0c3a3e2d33e17c2d4\") " pod="kube-system/kube-apiserver-ip-172-31-24-247" Mar 14 00:13:16.355883 kubelet[3533]: E0314 00:13:16.352977 3533 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-24-247\" already exists" pod="kube-system/kube-apiserver-ip-172-31-24-247" Mar 14 00:13:16.356882 kubelet[3533]: I0314 00:13:16.356826 3533 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-24-247" Mar 14 00:13:16.376595 kubelet[3533]: I0314 00:13:16.376519 3533 kubelet_node_status.go:124] "Node was previously registered" node="ip-172-31-24-247" Mar 14 00:13:16.376728 kubelet[3533]: I0314 00:13:16.376654 3533 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-24-247" Mar 14 00:13:16.872247 kubelet[3533]: I0314 00:13:16.871658 3533 apiserver.go:52] "Watching apiserver" Mar 14 00:13:16.899461 sudo[3547]: pam_unix(sudo:session): session closed for user root Mar 14 00:13:16.940098 kubelet[3533]: I0314 00:13:16.940007 3533 
desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Mar 14 00:13:17.116062 kubelet[3533]: I0314 00:13:17.114692 3533 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-24-247" Mar 14 00:13:17.132600 kubelet[3533]: E0314 00:13:17.131692 3533 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-24-247\" already exists" pod="kube-system/kube-apiserver-ip-172-31-24-247" Mar 14 00:13:17.225069 kubelet[3533]: I0314 00:13:17.223423 3533 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-24-247" podStartSLOduration=2.223400813 podStartE2EDuration="2.223400813s" podCreationTimestamp="2026-03-14 00:13:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 00:13:17.221828129 +0000 UTC m=+1.503983277" watchObservedRunningTime="2026-03-14 00:13:17.223400813 +0000 UTC m=+1.505555937" Mar 14 00:13:17.296507 kubelet[3533]: I0314 00:13:17.296133 3533 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-24-247" podStartSLOduration=1.296109857 podStartE2EDuration="1.296109857s" podCreationTimestamp="2026-03-14 00:13:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 00:13:17.276463301 +0000 UTC m=+1.558618461" watchObservedRunningTime="2026-03-14 00:13:17.296109857 +0000 UTC m=+1.578265029" Mar 14 00:13:17.323091 kubelet[3533]: I0314 00:13:17.321299 3533 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-24-247" podStartSLOduration=1.321275765 podStartE2EDuration="1.321275765s" podCreationTimestamp="2026-03-14 00:13:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 00:13:17.299412389 +0000 UTC m=+1.581567537" watchObservedRunningTime="2026-03-14 00:13:17.321275765 +0000 UTC m=+1.603430901" Mar 14 00:13:18.962167 kubelet[3533]: I0314 00:13:18.961402 3533 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Mar 14 00:13:18.964681 containerd[2131]: time="2026-03-14T00:13:18.964449561Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Mar 14 00:13:18.966251 kubelet[3533]: I0314 00:13:18.964900 3533 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Mar 14 00:13:20.004538 kubelet[3533]: E0314 00:13:20.004447 3533 reflector.go:200] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:ip-172-31-24-247\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ip-172-31-24-247' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-root-ca.crt\"" type="*v1.ConfigMap" Mar 14 00:13:20.013295 kubelet[3533]: I0314 00:13:20.010925 3533 status_manager.go:895] "Failed to get status for pod" podUID="8cdfc83a-ba59-4244-99af-65a449dc891c" pod="kube-system/kube-proxy-649js" err="pods \"kube-proxy-649js\" is forbidden: User \"system:node:ip-172-31-24-247\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ip-172-31-24-247' and this object" Mar 14 00:13:20.013295 kubelet[3533]: E0314 00:13:20.011131 3533 reflector.go:200] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"kube-proxy\" is forbidden: User \"system:node:ip-172-31-24-247\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ip-172-31-24-247' and this 
object" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-proxy\"" type="*v1.ConfigMap" Mar 14 00:13:20.090906 kubelet[3533]: I0314 00:13:20.088587 3533 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/d2128d7a-5792-4da1-af5d-caf312b35cca-hubble-tls\") pod \"cilium-mjrmp\" (UID: \"d2128d7a-5792-4da1-af5d-caf312b35cca\") " pod="kube-system/cilium-mjrmp" Mar 14 00:13:20.090906 kubelet[3533]: I0314 00:13:20.088653 3533 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d2128d7a-5792-4da1-af5d-caf312b35cca-lib-modules\") pod \"cilium-mjrmp\" (UID: \"d2128d7a-5792-4da1-af5d-caf312b35cca\") " pod="kube-system/cilium-mjrmp" Mar 14 00:13:20.090906 kubelet[3533]: I0314 00:13:20.088695 3533 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d2128d7a-5792-4da1-af5d-caf312b35cca-cilium-config-path\") pod \"cilium-mjrmp\" (UID: \"d2128d7a-5792-4da1-af5d-caf312b35cca\") " pod="kube-system/cilium-mjrmp" Mar 14 00:13:20.090906 kubelet[3533]: I0314 00:13:20.088737 3533 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/d2128d7a-5792-4da1-af5d-caf312b35cca-host-proc-sys-net\") pod \"cilium-mjrmp\" (UID: \"d2128d7a-5792-4da1-af5d-caf312b35cca\") " pod="kube-system/cilium-mjrmp" Mar 14 00:13:20.090906 kubelet[3533]: I0314 00:13:20.088794 3533 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d2128d7a-5792-4da1-af5d-caf312b35cca-etc-cni-netd\") pod \"cilium-mjrmp\" (UID: \"d2128d7a-5792-4da1-af5d-caf312b35cca\") " pod="kube-system/cilium-mjrmp" Mar 14 00:13:20.090906 
kubelet[3533]: I0314 00:13:20.088836 3533 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/d2128d7a-5792-4da1-af5d-caf312b35cca-bpf-maps\") pod \"cilium-mjrmp\" (UID: \"d2128d7a-5792-4da1-af5d-caf312b35cca\") " pod="kube-system/cilium-mjrmp" Mar 14 00:13:20.091552 kubelet[3533]: I0314 00:13:20.088886 3533 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/d2128d7a-5792-4da1-af5d-caf312b35cca-hostproc\") pod \"cilium-mjrmp\" (UID: \"d2128d7a-5792-4da1-af5d-caf312b35cca\") " pod="kube-system/cilium-mjrmp" Mar 14 00:13:20.091552 kubelet[3533]: I0314 00:13:20.088925 3533 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/d2128d7a-5792-4da1-af5d-caf312b35cca-cni-path\") pod \"cilium-mjrmp\" (UID: \"d2128d7a-5792-4da1-af5d-caf312b35cca\") " pod="kube-system/cilium-mjrmp" Mar 14 00:13:20.091552 kubelet[3533]: I0314 00:13:20.088968 3533 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d2128d7a-5792-4da1-af5d-caf312b35cca-xtables-lock\") pod \"cilium-mjrmp\" (UID: \"d2128d7a-5792-4da1-af5d-caf312b35cca\") " pod="kube-system/cilium-mjrmp" Mar 14 00:13:20.091552 kubelet[3533]: I0314 00:13:20.089003 3533 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/d2128d7a-5792-4da1-af5d-caf312b35cca-clustermesh-secrets\") pod \"cilium-mjrmp\" (UID: \"d2128d7a-5792-4da1-af5d-caf312b35cca\") " pod="kube-system/cilium-mjrmp" Mar 14 00:13:20.091552 kubelet[3533]: I0314 00:13:20.089074 3533 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" 
(UniqueName: \"kubernetes.io/host-path/d2128d7a-5792-4da1-af5d-caf312b35cca-cilium-run\") pod \"cilium-mjrmp\" (UID: \"d2128d7a-5792-4da1-af5d-caf312b35cca\") " pod="kube-system/cilium-mjrmp" Mar 14 00:13:20.091552 kubelet[3533]: I0314 00:13:20.089125 3533 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/d2128d7a-5792-4da1-af5d-caf312b35cca-cilium-cgroup\") pod \"cilium-mjrmp\" (UID: \"d2128d7a-5792-4da1-af5d-caf312b35cca\") " pod="kube-system/cilium-mjrmp" Mar 14 00:13:20.091849 kubelet[3533]: I0314 00:13:20.089164 3533 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/d2128d7a-5792-4da1-af5d-caf312b35cca-host-proc-sys-kernel\") pod \"cilium-mjrmp\" (UID: \"d2128d7a-5792-4da1-af5d-caf312b35cca\") " pod="kube-system/cilium-mjrmp" Mar 14 00:13:20.134005 sudo[2484]: pam_unix(sudo:session): session closed for user root Mar 14 00:13:20.192530 kubelet[3533]: I0314 00:13:20.190264 3533 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q4dsd\" (UniqueName: \"kubernetes.io/projected/e4d379db-3d6e-46e7-8fbe-6ee3981918d5-kube-api-access-q4dsd\") pod \"cilium-operator-6c4d7847fc-tc7kd\" (UID: \"e4d379db-3d6e-46e7-8fbe-6ee3981918d5\") " pod="kube-system/cilium-operator-6c4d7847fc-tc7kd" Mar 14 00:13:20.192530 kubelet[3533]: I0314 00:13:20.190397 3533 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e4d379db-3d6e-46e7-8fbe-6ee3981918d5-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-tc7kd\" (UID: \"e4d379db-3d6e-46e7-8fbe-6ee3981918d5\") " pod="kube-system/cilium-operator-6c4d7847fc-tc7kd" Mar 14 00:13:20.192530 kubelet[3533]: I0314 00:13:20.190537 3533 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x9kw5\" (UniqueName: \"kubernetes.io/projected/d2128d7a-5792-4da1-af5d-caf312b35cca-kube-api-access-x9kw5\") pod \"cilium-mjrmp\" (UID: \"d2128d7a-5792-4da1-af5d-caf312b35cca\") " pod="kube-system/cilium-mjrmp" Mar 14 00:13:20.192530 kubelet[3533]: I0314 00:13:20.190937 3533 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/8cdfc83a-ba59-4244-99af-65a449dc891c-kube-proxy\") pod \"kube-proxy-649js\" (UID: \"8cdfc83a-ba59-4244-99af-65a449dc891c\") " pod="kube-system/kube-proxy-649js" Mar 14 00:13:20.192530 kubelet[3533]: I0314 00:13:20.190984 3533 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8cdfc83a-ba59-4244-99af-65a449dc891c-xtables-lock\") pod \"kube-proxy-649js\" (UID: \"8cdfc83a-ba59-4244-99af-65a449dc891c\") " pod="kube-system/kube-proxy-649js" Mar 14 00:13:20.192934 kubelet[3533]: I0314 00:13:20.191021 3533 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8cdfc83a-ba59-4244-99af-65a449dc891c-lib-modules\") pod \"kube-proxy-649js\" (UID: \"8cdfc83a-ba59-4244-99af-65a449dc891c\") " pod="kube-system/kube-proxy-649js" Mar 14 00:13:20.192934 kubelet[3533]: I0314 00:13:20.191583 3533 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mh5f6\" (UniqueName: \"kubernetes.io/projected/8cdfc83a-ba59-4244-99af-65a449dc891c-kube-api-access-mh5f6\") pod \"kube-proxy-649js\" (UID: \"8cdfc83a-ba59-4244-99af-65a449dc891c\") " pod="kube-system/kube-proxy-649js" Mar 14 00:13:20.224561 sshd[2480]: pam_unix(sshd:session): session closed for user core Mar 14 00:13:20.266319 systemd[1]: 
sshd@6-172.31.24.247:22-68.220.241.50:33800.service: Deactivated successfully. Mar 14 00:13:20.266782 systemd-logind[2105]: Session 7 logged out. Waiting for processes to exit. Mar 14 00:13:20.279785 systemd[1]: session-7.scope: Deactivated successfully. Mar 14 00:13:20.287735 systemd-logind[2105]: Removed session 7. Mar 14 00:13:20.975566 containerd[2131]: time="2026-03-14T00:13:20.975323219Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-mjrmp,Uid:d2128d7a-5792-4da1-af5d-caf312b35cca,Namespace:kube-system,Attempt:0,}" Mar 14 00:13:21.026065 containerd[2131]: time="2026-03-14T00:13:21.025470883Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 14 00:13:21.026065 containerd[2131]: time="2026-03-14T00:13:21.025560319Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 14 00:13:21.026065 containerd[2131]: time="2026-03-14T00:13:21.025585663Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:13:21.026065 containerd[2131]: time="2026-03-14T00:13:21.025739551Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:13:21.056937 containerd[2131]: time="2026-03-14T00:13:21.056858948Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-tc7kd,Uid:e4d379db-3d6e-46e7-8fbe-6ee3981918d5,Namespace:kube-system,Attempt:0,}" Mar 14 00:13:21.116136 containerd[2131]: time="2026-03-14T00:13:21.115546676Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-mjrmp,Uid:d2128d7a-5792-4da1-af5d-caf312b35cca,Namespace:kube-system,Attempt:0,} returns sandbox id \"e427cdabd1a228c3e37b5c3c10acd852d00c36cd3acf069a9e1a7e66f906b7ae\"" Mar 14 00:13:21.132504 containerd[2131]: time="2026-03-14T00:13:21.132432296Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Mar 14 00:13:21.157146 containerd[2131]: time="2026-03-14T00:13:21.155056280Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 14 00:13:21.157146 containerd[2131]: time="2026-03-14T00:13:21.156663020Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 14 00:13:21.157146 containerd[2131]: time="2026-03-14T00:13:21.156695948Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:13:21.157146 containerd[2131]: time="2026-03-14T00:13:21.156950528Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:13:21.298395 kubelet[3533]: E0314 00:13:21.298335 3533 configmap.go:193] Couldn't get configMap kube-system/kube-proxy: failed to sync configmap cache: timed out waiting for the condition Mar 14 00:13:21.299108 kubelet[3533]: E0314 00:13:21.298478 3533 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/8cdfc83a-ba59-4244-99af-65a449dc891c-kube-proxy podName:8cdfc83a-ba59-4244-99af-65a449dc891c nodeName:}" failed. No retries permitted until 2026-03-14 00:13:21.798444665 +0000 UTC m=+6.080599813 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/8cdfc83a-ba59-4244-99af-65a449dc891c-kube-proxy") pod "kube-proxy-649js" (UID: "8cdfc83a-ba59-4244-99af-65a449dc891c") : failed to sync configmap cache: timed out waiting for the condition Mar 14 00:13:21.351842 containerd[2131]: time="2026-03-14T00:13:21.351784845Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-tc7kd,Uid:e4d379db-3d6e-46e7-8fbe-6ee3981918d5,Namespace:kube-system,Attempt:0,} returns sandbox id \"c4d72a4c7dd56c56087ed36c882a957806a1acce2aad7a2b95d5607fbd1b70a4\"" Mar 14 00:13:22.111931 containerd[2131]: time="2026-03-14T00:13:22.111674745Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-649js,Uid:8cdfc83a-ba59-4244-99af-65a449dc891c,Namespace:kube-system,Attempt:0,}" Mar 14 00:13:22.159202 containerd[2131]: time="2026-03-14T00:13:22.156923697Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 14 00:13:22.159202 containerd[2131]: time="2026-03-14T00:13:22.157069077Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 14 00:13:22.159202 containerd[2131]: time="2026-03-14T00:13:22.157155969Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:13:22.165317 containerd[2131]: time="2026-03-14T00:13:22.161508465Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:13:22.291669 containerd[2131]: time="2026-03-14T00:13:22.291594130Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-649js,Uid:8cdfc83a-ba59-4244-99af-65a449dc891c,Namespace:kube-system,Attempt:0,} returns sandbox id \"1bbee404c8305dbec8b0f0bbbb9535d393cdf41c461a9bb9c356948a7bbdc967\"" Mar 14 00:13:22.302775 containerd[2131]: time="2026-03-14T00:13:22.302718526Z" level=info msg="CreateContainer within sandbox \"1bbee404c8305dbec8b0f0bbbb9535d393cdf41c461a9bb9c356948a7bbdc967\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Mar 14 00:13:22.342000 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2216545713.mount: Deactivated successfully. 
Mar 14 00:13:22.365736 containerd[2131]: time="2026-03-14T00:13:22.364811194Z" level=info msg="CreateContainer within sandbox \"1bbee404c8305dbec8b0f0bbbb9535d393cdf41c461a9bb9c356948a7bbdc967\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"8957f77ee0dc4f6fb2fe3c09b4d96e95775e5ae6f532b4ccd2197f461c17ca91\"" Mar 14 00:13:22.374054 containerd[2131]: time="2026-03-14T00:13:22.373375330Z" level=info msg="StartContainer for \"8957f77ee0dc4f6fb2fe3c09b4d96e95775e5ae6f532b4ccd2197f461c17ca91\"" Mar 14 00:13:22.546190 containerd[2131]: time="2026-03-14T00:13:22.543460715Z" level=info msg="StartContainer for \"8957f77ee0dc4f6fb2fe3c09b4d96e95775e5ae6f532b4ccd2197f461c17ca91\" returns successfully" Mar 14 00:13:23.224656 systemd[1]: run-containerd-runc-k8s.io-8957f77ee0dc4f6fb2fe3c09b4d96e95775e5ae6f532b4ccd2197f461c17ca91-runc.vsP0Xl.mount: Deactivated successfully. Mar 14 00:13:27.627612 kubelet[3533]: I0314 00:13:27.626183 3533 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-649js" podStartSLOduration=8.626159884 podStartE2EDuration="8.626159884s" podCreationTimestamp="2026-03-14 00:13:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 00:13:23.247516882 +0000 UTC m=+7.529672030" watchObservedRunningTime="2026-03-14 00:13:27.626159884 +0000 UTC m=+11.908315020" Mar 14 00:13:27.852722 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2730367106.mount: Deactivated successfully. 
Mar 14 00:13:30.648383 containerd[2131]: time="2026-03-14T00:13:30.648315295Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:13:30.650900 containerd[2131]: time="2026-03-14T00:13:30.650836927Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710" Mar 14 00:13:30.653415 containerd[2131]: time="2026-03-14T00:13:30.653312383Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:13:30.657285 containerd[2131]: time="2026-03-14T00:13:30.656823667Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 9.524312927s" Mar 14 00:13:30.657285 containerd[2131]: time="2026-03-14T00:13:30.656895355Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Mar 14 00:13:30.661226 containerd[2131]: time="2026-03-14T00:13:30.660771475Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Mar 14 00:13:30.666926 containerd[2131]: time="2026-03-14T00:13:30.666871471Z" level=info msg="CreateContainer within sandbox \"e427cdabd1a228c3e37b5c3c10acd852d00c36cd3acf069a9e1a7e66f906b7ae\" for container 
&ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Mar 14 00:13:30.695087 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2143494301.mount: Deactivated successfully. Mar 14 00:13:30.700386 containerd[2131]: time="2026-03-14T00:13:30.700174399Z" level=info msg="CreateContainer within sandbox \"e427cdabd1a228c3e37b5c3c10acd852d00c36cd3acf069a9e1a7e66f906b7ae\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"9c3010c4fe9b2e45745638e770f270553b1fe8960a57cee9ef5a36d89beb6538\"" Mar 14 00:13:30.702150 containerd[2131]: time="2026-03-14T00:13:30.701628127Z" level=info msg="StartContainer for \"9c3010c4fe9b2e45745638e770f270553b1fe8960a57cee9ef5a36d89beb6538\"" Mar 14 00:13:30.812769 containerd[2131]: time="2026-03-14T00:13:30.812551340Z" level=info msg="StartContainer for \"9c3010c4fe9b2e45745638e770f270553b1fe8960a57cee9ef5a36d89beb6538\" returns successfully" Mar 14 00:13:31.685593 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9c3010c4fe9b2e45745638e770f270553b1fe8960a57cee9ef5a36d89beb6538-rootfs.mount: Deactivated successfully. Mar 14 00:13:33.382832 containerd[2131]: time="2026-03-14T00:13:33.382543953Z" level=info msg="shim disconnected" id=9c3010c4fe9b2e45745638e770f270553b1fe8960a57cee9ef5a36d89beb6538 namespace=k8s.io Mar 14 00:13:33.382832 containerd[2131]: time="2026-03-14T00:13:33.382617585Z" level=warning msg="cleaning up after shim disconnected" id=9c3010c4fe9b2e45745638e770f270553b1fe8960a57cee9ef5a36d89beb6538 namespace=k8s.io Mar 14 00:13:33.382832 containerd[2131]: time="2026-03-14T00:13:33.382640313Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 14 00:13:33.586372 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount173525628.mount: Deactivated successfully. 
Mar 14 00:13:34.268073 containerd[2131]: time="2026-03-14T00:13:34.266919993Z" level=info msg="CreateContainer within sandbox \"e427cdabd1a228c3e37b5c3c10acd852d00c36cd3acf069a9e1a7e66f906b7ae\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Mar 14 00:13:34.296605 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3630153228.mount: Deactivated successfully. Mar 14 00:13:34.305566 containerd[2131]: time="2026-03-14T00:13:34.305467269Z" level=info msg="CreateContainer within sandbox \"e427cdabd1a228c3e37b5c3c10acd852d00c36cd3acf069a9e1a7e66f906b7ae\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"17049e4331edb4ac97a50b4f8311d2eb3f65107ff8c4745d6ee036248de0b837\"" Mar 14 00:13:34.306430 containerd[2131]: time="2026-03-14T00:13:34.306382917Z" level=info msg="StartContainer for \"17049e4331edb4ac97a50b4f8311d2eb3f65107ff8c4745d6ee036248de0b837\"" Mar 14 00:13:34.406509 containerd[2131]: time="2026-03-14T00:13:34.406455406Z" level=info msg="StartContainer for \"17049e4331edb4ac97a50b4f8311d2eb3f65107ff8c4745d6ee036248de0b837\" returns successfully" Mar 14 00:13:34.436858 systemd[1]: systemd-sysctl.service: Deactivated successfully. Mar 14 00:13:34.439913 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Mar 14 00:13:34.440502 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Mar 14 00:13:34.458307 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Mar 14 00:13:34.514456 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Mar 14 00:13:34.538153 containerd[2131]: time="2026-03-14T00:13:34.537910283Z" level=info msg="shim disconnected" id=17049e4331edb4ac97a50b4f8311d2eb3f65107ff8c4745d6ee036248de0b837 namespace=k8s.io Mar 14 00:13:34.538153 containerd[2131]: time="2026-03-14T00:13:34.537988091Z" level=warning msg="cleaning up after shim disconnected" id=17049e4331edb4ac97a50b4f8311d2eb3f65107ff8c4745d6ee036248de0b837 namespace=k8s.io Mar 14 00:13:34.538153 containerd[2131]: time="2026-03-14T00:13:34.538012391Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 14 00:13:34.575423 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-17049e4331edb4ac97a50b4f8311d2eb3f65107ff8c4745d6ee036248de0b837-rootfs.mount: Deactivated successfully. Mar 14 00:13:35.113522 containerd[2131]: time="2026-03-14T00:13:35.113437137Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:13:35.115938 containerd[2131]: time="2026-03-14T00:13:35.115867521Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306" Mar 14 00:13:35.117371 containerd[2131]: time="2026-03-14T00:13:35.117274869Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:13:35.120705 containerd[2131]: time="2026-03-14T00:13:35.120501837Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", 
size \"17128551\" in 4.45966393s" Mar 14 00:13:35.120705 containerd[2131]: time="2026-03-14T00:13:35.120571557Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Mar 14 00:13:35.128932 containerd[2131]: time="2026-03-14T00:13:35.128855745Z" level=info msg="CreateContainer within sandbox \"c4d72a4c7dd56c56087ed36c882a957806a1acce2aad7a2b95d5607fbd1b70a4\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Mar 14 00:13:35.148571 containerd[2131]: time="2026-03-14T00:13:35.148288102Z" level=info msg="CreateContainer within sandbox \"c4d72a4c7dd56c56087ed36c882a957806a1acce2aad7a2b95d5607fbd1b70a4\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"25429b629538c7c59f05bc5e4b0be64b6f9d6d0e3aa18b8ce2995827e8630064\"" Mar 14 00:13:35.152773 containerd[2131]: time="2026-03-14T00:13:35.152407318Z" level=info msg="StartContainer for \"25429b629538c7c59f05bc5e4b0be64b6f9d6d0e3aa18b8ce2995827e8630064\"" Mar 14 00:13:35.293072 containerd[2131]: time="2026-03-14T00:13:35.289427194Z" level=info msg="CreateContainer within sandbox \"e427cdabd1a228c3e37b5c3c10acd852d00c36cd3acf069a9e1a7e66f906b7ae\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Mar 14 00:13:35.305277 containerd[2131]: time="2026-03-14T00:13:35.304010602Z" level=info msg="StartContainer for \"25429b629538c7c59f05bc5e4b0be64b6f9d6d0e3aa18b8ce2995827e8630064\" returns successfully" Mar 14 00:13:35.335746 containerd[2131]: time="2026-03-14T00:13:35.335391886Z" level=info msg="CreateContainer within sandbox \"e427cdabd1a228c3e37b5c3c10acd852d00c36cd3acf069a9e1a7e66f906b7ae\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"632c3ac077a5bbfff47d598260f14d51d85c5570bf3054bf2ea7687285c0a21a\"" Mar 14 00:13:35.336769 containerd[2131]: 
time="2026-03-14T00:13:35.336673691Z" level=info msg="StartContainer for \"632c3ac077a5bbfff47d598260f14d51d85c5570bf3054bf2ea7687285c0a21a\"" Mar 14 00:13:35.476219 containerd[2131]: time="2026-03-14T00:13:35.475475027Z" level=info msg="StartContainer for \"632c3ac077a5bbfff47d598260f14d51d85c5570bf3054bf2ea7687285c0a21a\" returns successfully" Mar 14 00:13:35.622057 containerd[2131]: time="2026-03-14T00:13:35.621919272Z" level=info msg="shim disconnected" id=632c3ac077a5bbfff47d598260f14d51d85c5570bf3054bf2ea7687285c0a21a namespace=k8s.io Mar 14 00:13:35.622057 containerd[2131]: time="2026-03-14T00:13:35.622000668Z" level=warning msg="cleaning up after shim disconnected" id=632c3ac077a5bbfff47d598260f14d51d85c5570bf3054bf2ea7687285c0a21a namespace=k8s.io Mar 14 00:13:35.622057 containerd[2131]: time="2026-03-14T00:13:35.622022676Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 14 00:13:36.314675 containerd[2131]: time="2026-03-14T00:13:36.313489187Z" level=info msg="CreateContainer within sandbox \"e427cdabd1a228c3e37b5c3c10acd852d00c36cd3acf069a9e1a7e66f906b7ae\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Mar 14 00:13:36.501409 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1909925847.mount: Deactivated successfully. 
Mar 14 00:13:36.531801 containerd[2131]: time="2026-03-14T00:13:36.529472028Z" level=info msg="CreateContainer within sandbox \"e427cdabd1a228c3e37b5c3c10acd852d00c36cd3acf069a9e1a7e66f906b7ae\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"1bb0b907781265e1d118ab1516cf2c1c9e0c868d64ab1b20af04ed3ddb3360a3\""
Mar 14 00:13:36.539998 containerd[2131]: time="2026-03-14T00:13:36.533977884Z" level=info msg="StartContainer for \"1bb0b907781265e1d118ab1516cf2c1c9e0c868d64ab1b20af04ed3ddb3360a3\""
Mar 14 00:13:36.886148 containerd[2131]: time="2026-03-14T00:13:36.884307230Z" level=info msg="StartContainer for \"1bb0b907781265e1d118ab1516cf2c1c9e0c868d64ab1b20af04ed3ddb3360a3\" returns successfully"
Mar 14 00:13:36.990933 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1bb0b907781265e1d118ab1516cf2c1c9e0c868d64ab1b20af04ed3ddb3360a3-rootfs.mount: Deactivated successfully.
Mar 14 00:13:37.000022 containerd[2131]: time="2026-03-14T00:13:36.999901815Z" level=info msg="shim disconnected" id=1bb0b907781265e1d118ab1516cf2c1c9e0c868d64ab1b20af04ed3ddb3360a3 namespace=k8s.io
Mar 14 00:13:37.000022 containerd[2131]: time="2026-03-14T00:13:36.999989379Z" level=warning msg="cleaning up after shim disconnected" id=1bb0b907781265e1d118ab1516cf2c1c9e0c868d64ab1b20af04ed3ddb3360a3 namespace=k8s.io
Mar 14 00:13:37.000022 containerd[2131]: time="2026-03-14T00:13:37.000012599Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 14 00:13:37.335908 containerd[2131]: time="2026-03-14T00:13:37.334739604Z" level=info msg="CreateContainer within sandbox \"e427cdabd1a228c3e37b5c3c10acd852d00c36cd3acf069a9e1a7e66f906b7ae\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Mar 14 00:13:37.367408 kubelet[3533]: I0314 00:13:37.364924 3533 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-tc7kd" podStartSLOduration=3.597140089 podStartE2EDuration="17.364773409s" podCreationTimestamp="2026-03-14 00:13:20 +0000 UTC" firstStartedPulling="2026-03-14 00:13:21.354880449 +0000 UTC m=+5.637035585" lastFinishedPulling="2026-03-14 00:13:35.122513769 +0000 UTC m=+19.404668905" observedRunningTime="2026-03-14 00:13:36.487674636 +0000 UTC m=+20.769829772" watchObservedRunningTime="2026-03-14 00:13:37.364773409 +0000 UTC m=+21.646928569"
Mar 14 00:13:37.370000 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1847885026.mount: Deactivated successfully.
Mar 14 00:13:37.371820 containerd[2131]: time="2026-03-14T00:13:37.371731693Z" level=info msg="CreateContainer within sandbox \"e427cdabd1a228c3e37b5c3c10acd852d00c36cd3acf069a9e1a7e66f906b7ae\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"e7288496a468a5aeb28ffc9791097e1b5aa4be542a245374b8daec0a99ae2379\""
Mar 14 00:13:37.375546 containerd[2131]: time="2026-03-14T00:13:37.375448549Z" level=info msg="StartContainer for \"e7288496a468a5aeb28ffc9791097e1b5aa4be542a245374b8daec0a99ae2379\""
Mar 14 00:13:37.477268 containerd[2131]: time="2026-03-14T00:13:37.477187717Z" level=info msg="StartContainer for \"e7288496a468a5aeb28ffc9791097e1b5aa4be542a245374b8daec0a99ae2379\" returns successfully"
Mar 14 00:13:37.613162 kubelet[3533]: I0314 00:13:37.612999 3533 kubelet_node_status.go:501] "Fast updating node status as it just became ready"
Mar 14 00:13:37.750643 kubelet[3533]: I0314 00:13:37.750541 3533 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/de9bdd27-a3eb-41f7-bd85-d62470d9254b-config-volume\") pod \"coredns-674b8bbfcf-vcpc8\" (UID: \"de9bdd27-a3eb-41f7-bd85-d62470d9254b\") " pod="kube-system/coredns-674b8bbfcf-vcpc8"
Mar 14 00:13:37.750643 kubelet[3533]: I0314 00:13:37.750625 3533 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vv4vv\" (UniqueName: \"kubernetes.io/projected/de9bdd27-a3eb-41f7-bd85-d62470d9254b-kube-api-access-vv4vv\") pod \"coredns-674b8bbfcf-vcpc8\" (UID: \"de9bdd27-a3eb-41f7-bd85-d62470d9254b\") " pod="kube-system/coredns-674b8bbfcf-vcpc8"
Mar 14 00:13:37.751953 kubelet[3533]: I0314 00:13:37.750668 3533 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rr4gd\" (UniqueName: \"kubernetes.io/projected/d37b2ddf-127a-49fd-8b3f-5ec9bf2131f1-kube-api-access-rr4gd\") pod \"coredns-674b8bbfcf-5scdf\" (UID: \"d37b2ddf-127a-49fd-8b3f-5ec9bf2131f1\") " pod="kube-system/coredns-674b8bbfcf-5scdf"
Mar 14 00:13:37.751953 kubelet[3533]: I0314 00:13:37.750720 3533 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d37b2ddf-127a-49fd-8b3f-5ec9bf2131f1-config-volume\") pod \"coredns-674b8bbfcf-5scdf\" (UID: \"d37b2ddf-127a-49fd-8b3f-5ec9bf2131f1\") " pod="kube-system/coredns-674b8bbfcf-5scdf"
Mar 14 00:13:38.001446 containerd[2131]: time="2026-03-14T00:13:38.001221024Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-5scdf,Uid:d37b2ddf-127a-49fd-8b3f-5ec9bf2131f1,Namespace:kube-system,Attempt:0,}"
Mar 14 00:13:38.022947 containerd[2131]: time="2026-03-14T00:13:38.022352436Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-vcpc8,Uid:de9bdd27-a3eb-41f7-bd85-d62470d9254b,Namespace:kube-system,Attempt:0,}"
Mar 14 00:13:40.827676 (udev-worker)[4350]: Network interface NamePolicy= disabled on kernel command line.
Mar 14 00:13:40.829850 (udev-worker)[4384]: Network interface NamePolicy= disabled on kernel command line.
Mar 14 00:13:40.841370 systemd-networkd[1696]: cilium_host: Link UP
Mar 14 00:13:40.841878 systemd-networkd[1696]: cilium_net: Link UP
Mar 14 00:13:40.841886 systemd-networkd[1696]: cilium_net: Gained carrier
Mar 14 00:13:40.845835 systemd-networkd[1696]: cilium_host: Gained carrier
Mar 14 00:13:40.847173 systemd-networkd[1696]: cilium_host: Gained IPv6LL
Mar 14 00:13:41.023638 (udev-worker)[4398]: Network interface NamePolicy= disabled on kernel command line.
Mar 14 00:13:41.043409 systemd-networkd[1696]: cilium_vxlan: Link UP
Mar 14 00:13:41.043424 systemd-networkd[1696]: cilium_vxlan: Gained carrier
Mar 14 00:13:41.150629 systemd-networkd[1696]: cilium_net: Gained IPv6LL
Mar 14 00:13:41.764078 kernel: NET: Registered PF_ALG protocol family
Mar 14 00:13:43.070378 systemd-networkd[1696]: cilium_vxlan: Gained IPv6LL
Mar 14 00:13:43.275754 systemd-networkd[1696]: lxc_health: Link UP
Mar 14 00:13:43.285469 systemd-networkd[1696]: lxc_health: Gained carrier
Mar 14 00:13:43.288876 (udev-worker)[4397]: Network interface NamePolicy= disabled on kernel command line.
Mar 14 00:13:43.670010 systemd-networkd[1696]: lxcc24c4bd2a0b9: Link UP
Mar 14 00:13:43.687674 kernel: eth0: renamed from tmp215c2
Mar 14 00:13:43.693054 systemd-networkd[1696]: lxcc24c4bd2a0b9: Gained carrier
Mar 14 00:13:43.740071 systemd-networkd[1696]: lxc64fea6aefd28: Link UP
Mar 14 00:13:43.758147 kernel: eth0: renamed from tmp0ffe9
Mar 14 00:13:43.774264 systemd-networkd[1696]: lxc64fea6aefd28: Gained carrier
Mar 14 00:13:45.019081 kubelet[3533]: I0314 00:13:45.014783 3533 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-mjrmp" podStartSLOduration=16.483415728 podStartE2EDuration="26.014762947s" podCreationTimestamp="2026-03-14 00:13:19 +0000 UTC" firstStartedPulling="2026-03-14 00:13:21.127859312 +0000 UTC m=+5.410014436" lastFinishedPulling="2026-03-14 00:13:30.659206519 +0000 UTC m=+14.941361655" observedRunningTime="2026-03-14 00:13:38.527323202 +0000 UTC m=+22.809478362" watchObservedRunningTime="2026-03-14 00:13:45.014762947 +0000 UTC m=+29.296918071"
Mar 14 00:13:45.119687 systemd-networkd[1696]: lxc_health: Gained IPv6LL
Mar 14 00:13:45.310281 systemd-networkd[1696]: lxcc24c4bd2a0b9: Gained IPv6LL
Mar 14 00:13:45.502313 systemd-networkd[1696]: lxc64fea6aefd28: Gained IPv6LL
Mar 14 00:13:48.502601 ntpd[2085]: Listen normally on 6 cilium_host 192.168.0.145:123
Mar 14 00:13:48.502742 ntpd[2085]: Listen normally on 7 cilium_net [fe80::e05a:6aff:fe0c:3ff6%4]:123
Mar 14 00:13:48.503328 ntpd[2085]: 14 Mar 00:13:48 ntpd[2085]: Listen normally on 6 cilium_host 192.168.0.145:123
Mar 14 00:13:48.503328 ntpd[2085]: 14 Mar 00:13:48 ntpd[2085]: Listen normally on 7 cilium_net [fe80::e05a:6aff:fe0c:3ff6%4]:123
Mar 14 00:13:48.503328 ntpd[2085]: 14 Mar 00:13:48 ntpd[2085]: Listen normally on 8 cilium_host [fe80::50a6:6bff:fe59:c2f6%5]:123
Mar 14 00:13:48.503328 ntpd[2085]: 14 Mar 00:13:48 ntpd[2085]: Listen normally on 9 cilium_vxlan [fe80::f04d:1aff:fe95:c093%6]:123
Mar 14 00:13:48.503328 ntpd[2085]: 14 Mar 00:13:48 ntpd[2085]: Listen normally on 10 lxc_health [fe80::3401:90ff:fe4a:1abe%8]:123
Mar 14 00:13:48.503328 ntpd[2085]: 14 Mar 00:13:48 ntpd[2085]: Listen normally on 11 lxcc24c4bd2a0b9 [fe80::c8ef:deff:fe96:71c2%10]:123
Mar 14 00:13:48.503328 ntpd[2085]: 14 Mar 00:13:48 ntpd[2085]: Listen normally on 12 lxc64fea6aefd28 [fe80::3431:89ff:fe45:9154%12]:123
Mar 14 00:13:48.502827 ntpd[2085]: Listen normally on 8 cilium_host [fe80::50a6:6bff:fe59:c2f6%5]:123
Mar 14 00:13:48.502897 ntpd[2085]: Listen normally on 9 cilium_vxlan [fe80::f04d:1aff:fe95:c093%6]:123
Mar 14 00:13:48.502965 ntpd[2085]: Listen normally on 10 lxc_health [fe80::3401:90ff:fe4a:1abe%8]:123
Mar 14 00:13:48.503080 ntpd[2085]: Listen normally on 11 lxcc24c4bd2a0b9 [fe80::c8ef:deff:fe96:71c2%10]:123
Mar 14 00:13:48.503172 ntpd[2085]: Listen normally on 12 lxc64fea6aefd28 [fe80::3431:89ff:fe45:9154%12]:123
Mar 14 00:13:48.677317 kubelet[3533]: I0314 00:13:48.677246 3533 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Mar 14 00:13:52.548645 containerd[2131]: time="2026-03-14T00:13:52.548482360Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 14 00:13:52.549786 containerd[2131]: time="2026-03-14T00:13:52.548810272Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 14 00:13:52.549786 containerd[2131]: time="2026-03-14T00:13:52.548978764Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 14 00:13:52.553073 containerd[2131]: time="2026-03-14T00:13:52.550727800Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 14 00:13:52.686047 containerd[2131]: time="2026-03-14T00:13:52.685456385Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 14 00:13:52.686047 containerd[2131]: time="2026-03-14T00:13:52.685575005Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 14 00:13:52.686047 containerd[2131]: time="2026-03-14T00:13:52.685612865Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 14 00:13:52.686047 containerd[2131]: time="2026-03-14T00:13:52.685803701Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 14 00:13:52.742425 containerd[2131]: time="2026-03-14T00:13:52.742209617Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-vcpc8,Uid:de9bdd27-a3eb-41f7-bd85-d62470d9254b,Namespace:kube-system,Attempt:0,} returns sandbox id \"0ffe94516414a5a595b843a9c57885d612dedff89d6c3cc2ab0371dd2d379acd\""
Mar 14 00:13:52.752717 containerd[2131]: time="2026-03-14T00:13:52.752644445Z" level=info msg="CreateContainer within sandbox \"0ffe94516414a5a595b843a9c57885d612dedff89d6c3cc2ab0371dd2d379acd\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Mar 14 00:13:52.794371 containerd[2131]: time="2026-03-14T00:13:52.793010297Z" level=info msg="CreateContainer within sandbox \"0ffe94516414a5a595b843a9c57885d612dedff89d6c3cc2ab0371dd2d379acd\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"281462f1a86ee2ae06e75e7491421a29eac5da1da0d7f7a2bb01ff96e53eb218\""
Mar 14 00:13:52.798553 containerd[2131]: time="2026-03-14T00:13:52.798488705Z" level=info msg="StartContainer for \"281462f1a86ee2ae06e75e7491421a29eac5da1da0d7f7a2bb01ff96e53eb218\""
Mar 14 00:13:52.896194 containerd[2131]: time="2026-03-14T00:13:52.895632438Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-5scdf,Uid:d37b2ddf-127a-49fd-8b3f-5ec9bf2131f1,Namespace:kube-system,Attempt:0,} returns sandbox id \"215c2781dccfc36f8a3d531798f2aa7442fc1327765b79dd3e1efa7820331e8d\""
Mar 14 00:13:52.913310 containerd[2131]: time="2026-03-14T00:13:52.912282342Z" level=info msg="CreateContainer within sandbox \"215c2781dccfc36f8a3d531798f2aa7442fc1327765b79dd3e1efa7820331e8d\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Mar 14 00:13:52.977127 containerd[2131]: time="2026-03-14T00:13:52.973195194Z" level=info msg="CreateContainer within sandbox \"215c2781dccfc36f8a3d531798f2aa7442fc1327765b79dd3e1efa7820331e8d\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"236a8c778ae96a9fc50a710a91963d2b331ae525077be09e994d6281fdbf34ec\""
Mar 14 00:13:52.977127 containerd[2131]: time="2026-03-14T00:13:52.975427230Z" level=info msg="StartContainer for \"236a8c778ae96a9fc50a710a91963d2b331ae525077be09e994d6281fdbf34ec\""
Mar 14 00:13:53.039325 containerd[2131]: time="2026-03-14T00:13:53.039267758Z" level=info msg="StartContainer for \"281462f1a86ee2ae06e75e7491421a29eac5da1da0d7f7a2bb01ff96e53eb218\" returns successfully"
Mar 14 00:13:53.146005 containerd[2131]: time="2026-03-14T00:13:53.145766103Z" level=info msg="StartContainer for \"236a8c778ae96a9fc50a710a91963d2b331ae525077be09e994d6281fdbf34ec\" returns successfully"
Mar 14 00:13:53.447351 kubelet[3533]: I0314 00:13:53.447202 3533 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-vcpc8" podStartSLOduration=33.447137236 podStartE2EDuration="33.447137236s" podCreationTimestamp="2026-03-14 00:13:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 00:13:53.439738444 +0000 UTC m=+37.721893592" watchObservedRunningTime="2026-03-14 00:13:53.447137236 +0000 UTC m=+37.729292432"
Mar 14 00:13:53.567763 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount570768928.mount: Deactivated successfully.
Mar 14 00:14:03.131499 systemd[1]: Started sshd@7-172.31.24.247:22-68.220.241.50:43504.service - OpenSSH per-connection server daemon (68.220.241.50:43504).
Mar 14 00:14:03.667197 sshd[4921]: Accepted publickey for core from 68.220.241.50 port 43504 ssh2: RSA SHA256:wTcZPyU9bRq4OYS8Q3ttppxvBQbw+A1YvhVCQAQQbeI
Mar 14 00:14:03.669867 sshd[4921]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:14:03.679934 systemd-logind[2105]: New session 8 of user core.
Mar 14 00:14:03.690283 systemd[1]: Started session-8.scope - Session 8 of User core.
Mar 14 00:14:04.188228 sshd[4921]: pam_unix(sshd:session): session closed for user core
Mar 14 00:14:04.196911 systemd[1]: sshd@7-172.31.24.247:22-68.220.241.50:43504.service: Deactivated successfully.
Mar 14 00:14:04.205729 systemd[1]: session-8.scope: Deactivated successfully.
Mar 14 00:14:04.207974 systemd-logind[2105]: Session 8 logged out. Waiting for processes to exit.
Mar 14 00:14:04.210507 systemd-logind[2105]: Removed session 8.
Mar 14 00:14:09.265735 systemd[1]: Started sshd@8-172.31.24.247:22-68.220.241.50:43516.service - OpenSSH per-connection server daemon (68.220.241.50:43516).
Mar 14 00:14:09.770085 sshd[4938]: Accepted publickey for core from 68.220.241.50 port 43516 ssh2: RSA SHA256:wTcZPyU9bRq4OYS8Q3ttppxvBQbw+A1YvhVCQAQQbeI
Mar 14 00:14:09.771913 sshd[4938]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:14:09.781535 systemd-logind[2105]: New session 9 of user core.
Mar 14 00:14:09.787578 systemd[1]: Started session-9.scope - Session 9 of User core.
Mar 14 00:14:10.250390 sshd[4938]: pam_unix(sshd:session): session closed for user core
Mar 14 00:14:10.256587 systemd[1]: sshd@8-172.31.24.247:22-68.220.241.50:43516.service: Deactivated successfully.
Mar 14 00:14:10.256952 systemd-logind[2105]: Session 9 logged out. Waiting for processes to exit.
Mar 14 00:14:10.266378 systemd[1]: session-9.scope: Deactivated successfully.
Mar 14 00:14:10.268992 systemd-logind[2105]: Removed session 9.
Mar 14 00:14:15.337554 systemd[1]: Started sshd@9-172.31.24.247:22-68.220.241.50:42254.service - OpenSSH per-connection server daemon (68.220.241.50:42254).
Mar 14 00:14:15.853233 sshd[4952]: Accepted publickey for core from 68.220.241.50 port 42254 ssh2: RSA SHA256:wTcZPyU9bRq4OYS8Q3ttppxvBQbw+A1YvhVCQAQQbeI
Mar 14 00:14:15.856000 sshd[4952]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:14:15.864131 systemd-logind[2105]: New session 10 of user core.
Mar 14 00:14:15.871702 systemd[1]: Started session-10.scope - Session 10 of User core.
Mar 14 00:14:16.349419 sshd[4952]: pam_unix(sshd:session): session closed for user core
Mar 14 00:14:16.357645 systemd[1]: sshd@9-172.31.24.247:22-68.220.241.50:42254.service: Deactivated successfully.
Mar 14 00:14:16.365731 systemd[1]: session-10.scope: Deactivated successfully.
Mar 14 00:14:16.367903 systemd-logind[2105]: Session 10 logged out. Waiting for processes to exit.
Mar 14 00:14:16.370190 systemd-logind[2105]: Removed session 10.
Mar 14 00:14:21.435634 systemd[1]: Started sshd@10-172.31.24.247:22-68.220.241.50:42256.service - OpenSSH per-connection server daemon (68.220.241.50:42256).
Mar 14 00:14:21.950555 sshd[4968]: Accepted publickey for core from 68.220.241.50 port 42256 ssh2: RSA SHA256:wTcZPyU9bRq4OYS8Q3ttppxvBQbw+A1YvhVCQAQQbeI
Mar 14 00:14:21.953354 sshd[4968]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:14:21.962889 systemd-logind[2105]: New session 11 of user core.
Mar 14 00:14:21.972680 systemd[1]: Started session-11.scope - Session 11 of User core.
Mar 14 00:14:22.439390 sshd[4968]: pam_unix(sshd:session): session closed for user core
Mar 14 00:14:22.447586 systemd[1]: sshd@10-172.31.24.247:22-68.220.241.50:42256.service: Deactivated successfully.
Mar 14 00:14:22.454081 systemd-logind[2105]: Session 11 logged out. Waiting for processes to exit.
Mar 14 00:14:22.454497 systemd[1]: session-11.scope: Deactivated successfully.
Mar 14 00:14:22.460692 systemd-logind[2105]: Removed session 11.
Mar 14 00:14:22.542484 systemd[1]: Started sshd@11-172.31.24.247:22-68.220.241.50:34692.service - OpenSSH per-connection server daemon (68.220.241.50:34692).
Mar 14 00:14:23.088087 sshd[4982]: Accepted publickey for core from 68.220.241.50 port 34692 ssh2: RSA SHA256:wTcZPyU9bRq4OYS8Q3ttppxvBQbw+A1YvhVCQAQQbeI
Mar 14 00:14:23.091449 sshd[4982]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:14:23.100921 systemd-logind[2105]: New session 12 of user core.
Mar 14 00:14:23.115591 systemd[1]: Started session-12.scope - Session 12 of User core.
Mar 14 00:14:23.688401 sshd[4982]: pam_unix(sshd:session): session closed for user core
Mar 14 00:14:23.695229 systemd[1]: sshd@11-172.31.24.247:22-68.220.241.50:34692.service: Deactivated successfully.
Mar 14 00:14:23.703908 systemd[1]: session-12.scope: Deactivated successfully.
Mar 14 00:14:23.707301 systemd-logind[2105]: Session 12 logged out. Waiting for processes to exit.
Mar 14 00:14:23.709728 systemd-logind[2105]: Removed session 12.
Mar 14 00:14:23.769250 systemd[1]: Started sshd@12-172.31.24.247:22-68.220.241.50:34702.service - OpenSSH per-connection server daemon (68.220.241.50:34702).
Mar 14 00:14:24.283925 sshd[4996]: Accepted publickey for core from 68.220.241.50 port 34702 ssh2: RSA SHA256:wTcZPyU9bRq4OYS8Q3ttppxvBQbw+A1YvhVCQAQQbeI
Mar 14 00:14:24.286631 sshd[4996]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:14:24.294919 systemd-logind[2105]: New session 13 of user core.
Mar 14 00:14:24.304695 systemd[1]: Started session-13.scope - Session 13 of User core.
Mar 14 00:14:24.755417 sshd[4996]: pam_unix(sshd:session): session closed for user core
Mar 14 00:14:24.763615 systemd[1]: sshd@12-172.31.24.247:22-68.220.241.50:34702.service: Deactivated successfully.
Mar 14 00:14:24.770330 systemd[1]: session-13.scope: Deactivated successfully.
Mar 14 00:14:24.773518 systemd-logind[2105]: Session 13 logged out. Waiting for processes to exit.
Mar 14 00:14:24.775950 systemd-logind[2105]: Removed session 13.
Mar 14 00:14:29.841521 systemd[1]: Started sshd@13-172.31.24.247:22-68.220.241.50:34716.service - OpenSSH per-connection server daemon (68.220.241.50:34716).
Mar 14 00:14:30.372922 sshd[5014]: Accepted publickey for core from 68.220.241.50 port 34716 ssh2: RSA SHA256:wTcZPyU9bRq4OYS8Q3ttppxvBQbw+A1YvhVCQAQQbeI
Mar 14 00:14:30.375652 sshd[5014]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:14:30.386198 systemd-logind[2105]: New session 14 of user core.
Mar 14 00:14:30.391571 systemd[1]: Started session-14.scope - Session 14 of User core.
Mar 14 00:14:30.846082 sshd[5014]: pam_unix(sshd:session): session closed for user core
Mar 14 00:14:30.853789 systemd[1]: sshd@13-172.31.24.247:22-68.220.241.50:34716.service: Deactivated successfully.
Mar 14 00:14:30.862866 systemd[1]: session-14.scope: Deactivated successfully.
Mar 14 00:14:30.865173 systemd-logind[2105]: Session 14 logged out. Waiting for processes to exit.
Mar 14 00:14:30.869755 systemd-logind[2105]: Removed session 14.
Mar 14 00:14:35.931509 systemd[1]: Started sshd@14-172.31.24.247:22-68.220.241.50:44100.service - OpenSSH per-connection server daemon (68.220.241.50:44100).
Mar 14 00:14:36.440087 sshd[5028]: Accepted publickey for core from 68.220.241.50 port 44100 ssh2: RSA SHA256:wTcZPyU9bRq4OYS8Q3ttppxvBQbw+A1YvhVCQAQQbeI
Mar 14 00:14:36.443382 sshd[5028]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:14:36.453408 systemd-logind[2105]: New session 15 of user core.
Mar 14 00:14:36.458613 systemd[1]: Started session-15.scope - Session 15 of User core.
Mar 14 00:14:36.921444 sshd[5028]: pam_unix(sshd:session): session closed for user core
Mar 14 00:14:36.927875 systemd[1]: sshd@14-172.31.24.247:22-68.220.241.50:44100.service: Deactivated successfully.
Mar 14 00:14:36.928171 systemd-logind[2105]: Session 15 logged out. Waiting for processes to exit.
Mar 14 00:14:36.936420 systemd[1]: session-15.scope: Deactivated successfully.
Mar 14 00:14:36.939775 systemd-logind[2105]: Removed session 15.
Mar 14 00:14:42.009907 systemd[1]: Started sshd@15-172.31.24.247:22-68.220.241.50:44116.service - OpenSSH per-connection server daemon (68.220.241.50:44116).
Mar 14 00:14:42.518332 sshd[5043]: Accepted publickey for core from 68.220.241.50 port 44116 ssh2: RSA SHA256:wTcZPyU9bRq4OYS8Q3ttppxvBQbw+A1YvhVCQAQQbeI
Mar 14 00:14:42.520993 sshd[5043]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:14:42.528489 systemd-logind[2105]: New session 16 of user core.
Mar 14 00:14:42.539653 systemd[1]: Started session-16.scope - Session 16 of User core.
Mar 14 00:14:42.995336 sshd[5043]: pam_unix(sshd:session): session closed for user core
Mar 14 00:14:43.003530 systemd[1]: sshd@15-172.31.24.247:22-68.220.241.50:44116.service: Deactivated successfully.
Mar 14 00:14:43.010817 systemd[1]: session-16.scope: Deactivated successfully.
Mar 14 00:14:43.012968 systemd-logind[2105]: Session 16 logged out. Waiting for processes to exit.
Mar 14 00:14:43.015217 systemd-logind[2105]: Removed session 16.
Mar 14 00:14:43.080525 systemd[1]: Started sshd@16-172.31.24.247:22-68.220.241.50:57828.service - OpenSSH per-connection server daemon (68.220.241.50:57828).
Mar 14 00:14:43.599533 sshd[5057]: Accepted publickey for core from 68.220.241.50 port 57828 ssh2: RSA SHA256:wTcZPyU9bRq4OYS8Q3ttppxvBQbw+A1YvhVCQAQQbeI
Mar 14 00:14:43.602449 sshd[5057]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:14:43.610192 systemd-logind[2105]: New session 17 of user core.
Mar 14 00:14:43.620695 systemd[1]: Started session-17.scope - Session 17 of User core.
Mar 14 00:14:44.161722 sshd[5057]: pam_unix(sshd:session): session closed for user core
Mar 14 00:14:44.169814 systemd[1]: sshd@16-172.31.24.247:22-68.220.241.50:57828.service: Deactivated successfully.
Mar 14 00:14:44.176668 systemd-logind[2105]: Session 17 logged out. Waiting for processes to exit.
Mar 14 00:14:44.177448 systemd[1]: session-17.scope: Deactivated successfully.
Mar 14 00:14:44.182895 systemd-logind[2105]: Removed session 17.
Mar 14 00:14:44.250547 systemd[1]: Started sshd@17-172.31.24.247:22-68.220.241.50:57840.service - OpenSSH per-connection server daemon (68.220.241.50:57840).
Mar 14 00:14:44.761099 sshd[5069]: Accepted publickey for core from 68.220.241.50 port 57840 ssh2: RSA SHA256:wTcZPyU9bRq4OYS8Q3ttppxvBQbw+A1YvhVCQAQQbeI
Mar 14 00:14:44.763252 sshd[5069]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:14:44.773345 systemd-logind[2105]: New session 18 of user core.
Mar 14 00:14:44.782692 systemd[1]: Started session-18.scope - Session 18 of User core.
Mar 14 00:14:46.088702 sshd[5069]: pam_unix(sshd:session): session closed for user core
Mar 14 00:14:46.099445 systemd[1]: sshd@17-172.31.24.247:22-68.220.241.50:57840.service: Deactivated successfully.
Mar 14 00:14:46.106511 systemd[1]: session-18.scope: Deactivated successfully.
Mar 14 00:14:46.108243 systemd-logind[2105]: Session 18 logged out. Waiting for processes to exit.
Mar 14 00:14:46.113355 systemd-logind[2105]: Removed session 18.
Mar 14 00:14:46.187429 systemd[1]: Started sshd@18-172.31.24.247:22-68.220.241.50:57848.service - OpenSSH per-connection server daemon (68.220.241.50:57848).
Mar 14 00:14:46.745227 sshd[5088]: Accepted publickey for core from 68.220.241.50 port 57848 ssh2: RSA SHA256:wTcZPyU9bRq4OYS8Q3ttppxvBQbw+A1YvhVCQAQQbeI
Mar 14 00:14:46.748435 sshd[5088]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:14:46.756807 systemd-logind[2105]: New session 19 of user core.
Mar 14 00:14:46.765721 systemd[1]: Started session-19.scope - Session 19 of User core.
Mar 14 00:14:47.507342 sshd[5088]: pam_unix(sshd:session): session closed for user core
Mar 14 00:14:47.512966 systemd[1]: sshd@18-172.31.24.247:22-68.220.241.50:57848.service: Deactivated successfully.
Mar 14 00:14:47.521892 systemd-logind[2105]: Session 19 logged out. Waiting for processes to exit.
Mar 14 00:14:47.522966 systemd[1]: session-19.scope: Deactivated successfully.
Mar 14 00:14:47.527683 systemd-logind[2105]: Removed session 19.
Mar 14 00:14:47.585556 systemd[1]: Started sshd@19-172.31.24.247:22-68.220.241.50:57858.service - OpenSSH per-connection server daemon (68.220.241.50:57858).
Mar 14 00:14:48.101621 sshd[5100]: Accepted publickey for core from 68.220.241.50 port 57858 ssh2: RSA SHA256:wTcZPyU9bRq4OYS8Q3ttppxvBQbw+A1YvhVCQAQQbeI
Mar 14 00:14:48.105265 sshd[5100]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:14:48.115540 systemd-logind[2105]: New session 20 of user core.
Mar 14 00:14:48.121578 systemd[1]: Started session-20.scope - Session 20 of User core.
Mar 14 00:14:48.574299 sshd[5100]: pam_unix(sshd:session): session closed for user core
Mar 14 00:14:48.581836 systemd[1]: sshd@19-172.31.24.247:22-68.220.241.50:57858.service: Deactivated successfully.
Mar 14 00:14:48.590013 systemd[1]: session-20.scope: Deactivated successfully.
Mar 14 00:14:48.593983 systemd-logind[2105]: Session 20 logged out. Waiting for processes to exit.
Mar 14 00:14:48.596166 systemd-logind[2105]: Removed session 20.
Mar 14 00:14:53.659572 systemd[1]: Started sshd@20-172.31.24.247:22-68.220.241.50:51198.service - OpenSSH per-connection server daemon (68.220.241.50:51198).
Mar 14 00:14:54.162336 sshd[5118]: Accepted publickey for core from 68.220.241.50 port 51198 ssh2: RSA SHA256:wTcZPyU9bRq4OYS8Q3ttppxvBQbw+A1YvhVCQAQQbeI
Mar 14 00:14:54.165751 sshd[5118]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:14:54.174208 systemd-logind[2105]: New session 21 of user core.
Mar 14 00:14:54.182864 systemd[1]: Started session-21.scope - Session 21 of User core.
Mar 14 00:14:54.631355 sshd[5118]: pam_unix(sshd:session): session closed for user core
Mar 14 00:14:54.637512 systemd[1]: sshd@20-172.31.24.247:22-68.220.241.50:51198.service: Deactivated successfully.
Mar 14 00:14:54.646491 systemd[1]: session-21.scope: Deactivated successfully.
Mar 14 00:14:54.649933 systemd-logind[2105]: Session 21 logged out. Waiting for processes to exit.
Mar 14 00:14:54.651909 systemd-logind[2105]: Removed session 21.
Mar 14 00:14:59.718737 systemd[1]: Started sshd@21-172.31.24.247:22-68.220.241.50:51204.service - OpenSSH per-connection server daemon (68.220.241.50:51204).
Mar 14 00:15:00.223980 sshd[5132]: Accepted publickey for core from 68.220.241.50 port 51204 ssh2: RSA SHA256:wTcZPyU9bRq4OYS8Q3ttppxvBQbw+A1YvhVCQAQQbeI
Mar 14 00:15:00.226921 sshd[5132]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:15:00.236309 systemd-logind[2105]: New session 22 of user core.
Mar 14 00:15:00.241931 systemd[1]: Started session-22.scope - Session 22 of User core.
Mar 14 00:15:00.697785 sshd[5132]: pam_unix(sshd:session): session closed for user core
Mar 14 00:15:00.706283 systemd-logind[2105]: Session 22 logged out. Waiting for processes to exit.
Mar 14 00:15:00.708801 systemd[1]: sshd@21-172.31.24.247:22-68.220.241.50:51204.service: Deactivated successfully.
Mar 14 00:15:00.716226 systemd[1]: session-22.scope: Deactivated successfully.
Mar 14 00:15:00.719432 systemd-logind[2105]: Removed session 22.
Mar 14 00:15:00.784509 systemd[1]: Started sshd@22-172.31.24.247:22-68.220.241.50:51208.service - OpenSSH per-connection server daemon (68.220.241.50:51208).
Mar 14 00:15:01.301099 sshd[5145]: Accepted publickey for core from 68.220.241.50 port 51208 ssh2: RSA SHA256:wTcZPyU9bRq4OYS8Q3ttppxvBQbw+A1YvhVCQAQQbeI
Mar 14 00:15:01.305442 sshd[5145]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:15:01.314222 systemd-logind[2105]: New session 23 of user core.
Mar 14 00:15:01.325713 systemd[1]: Started session-23.scope - Session 23 of User core.
Mar 14 00:15:04.157543 kubelet[3533]: I0314 00:15:04.157441 3533 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-5scdf" podStartSLOduration=104.1574208 podStartE2EDuration="1m44.1574208s" podCreationTimestamp="2026-03-14 00:13:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 00:13:53.508201937 +0000 UTC m=+37.790357097" watchObservedRunningTime="2026-03-14 00:15:04.1574208 +0000 UTC m=+108.439575936"
Mar 14 00:15:04.222302 containerd[2131]: time="2026-03-14T00:15:04.221182872Z" level=info msg="StopContainer for \"25429b629538c7c59f05bc5e4b0be64b6f9d6d0e3aa18b8ce2995827e8630064\" with timeout 30 (s)"
Mar 14 00:15:04.222435 systemd[1]: run-containerd-runc-k8s.io-e7288496a468a5aeb28ffc9791097e1b5aa4be542a245374b8daec0a99ae2379-runc.snUbFt.mount: Deactivated successfully.
Mar 14 00:15:04.229410 containerd[2131]: time="2026-03-14T00:15:04.227135832Z" level=info msg="Stop container \"25429b629538c7c59f05bc5e4b0be64b6f9d6d0e3aa18b8ce2995827e8630064\" with signal terminated"
Mar 14 00:15:04.253800 containerd[2131]: time="2026-03-14T00:15:04.253489812Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Mar 14 00:15:04.278878 containerd[2131]: time="2026-03-14T00:15:04.278792316Z" level=info msg="StopContainer for \"e7288496a468a5aeb28ffc9791097e1b5aa4be542a245374b8daec0a99ae2379\" with timeout 2 (s)"
Mar 14 00:15:04.282287 containerd[2131]: time="2026-03-14T00:15:04.280671396Z" level=info msg="Stop container \"e7288496a468a5aeb28ffc9791097e1b5aa4be542a245374b8daec0a99ae2379\" with signal terminated"
Mar 14 00:15:04.309349 systemd-networkd[1696]: lxc_health: Link DOWN
Mar 14 00:15:04.309363 systemd-networkd[1696]: lxc_health: Lost carrier
Mar 14 00:15:04.361269 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-25429b629538c7c59f05bc5e4b0be64b6f9d6d0e3aa18b8ce2995827e8630064-rootfs.mount: Deactivated successfully.
Mar 14 00:15:04.385323 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e7288496a468a5aeb28ffc9791097e1b5aa4be542a245374b8daec0a99ae2379-rootfs.mount: Deactivated successfully.
Mar 14 00:15:04.425117 containerd[2131]: time="2026-03-14T00:15:04.423884365Z" level=info msg="shim disconnected" id=e7288496a468a5aeb28ffc9791097e1b5aa4be542a245374b8daec0a99ae2379 namespace=k8s.io
Mar 14 00:15:04.425117 containerd[2131]: time="2026-03-14T00:15:04.423958717Z" level=warning msg="cleaning up after shim disconnected" id=e7288496a468a5aeb28ffc9791097e1b5aa4be542a245374b8daec0a99ae2379 namespace=k8s.io
Mar 14 00:15:04.425117 containerd[2131]: time="2026-03-14T00:15:04.423978817Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 14 00:15:04.426096 containerd[2131]: time="2026-03-14T00:15:04.425812393Z" level=info msg="shim disconnected" id=25429b629538c7c59f05bc5e4b0be64b6f9d6d0e3aa18b8ce2995827e8630064 namespace=k8s.io
Mar 14 00:15:04.426096 containerd[2131]: time="2026-03-14T00:15:04.425893045Z" level=warning msg="cleaning up after shim disconnected" id=25429b629538c7c59f05bc5e4b0be64b6f9d6d0e3aa18b8ce2995827e8630064 namespace=k8s.io
Mar 14 00:15:04.426096 containerd[2131]: time="2026-03-14T00:15:04.425914429Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 14 00:15:04.461061 containerd[2131]: time="2026-03-14T00:15:04.460925869Z" level=info msg="StopContainer for \"25429b629538c7c59f05bc5e4b0be64b6f9d6d0e3aa18b8ce2995827e8630064\" returns successfully"
Mar 14 00:15:04.462618 containerd[2131]: time="2026-03-14T00:15:04.462404629Z" level=info msg="StopPodSandbox for \"c4d72a4c7dd56c56087ed36c882a957806a1acce2aad7a2b95d5607fbd1b70a4\""
Mar 14 00:15:04.462618 containerd[2131]: time="2026-03-14T00:15:04.462477529Z" level=info msg="Container to stop \"25429b629538c7c59f05bc5e4b0be64b6f9d6d0e3aa18b8ce2995827e8630064\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 14 00:15:04.467965 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c4d72a4c7dd56c56087ed36c882a957806a1acce2aad7a2b95d5607fbd1b70a4-shm.mount: Deactivated successfully.
Mar 14 00:15:04.471058 containerd[2131]: time="2026-03-14T00:15:04.469620505Z" level=info msg="StopContainer for \"e7288496a468a5aeb28ffc9791097e1b5aa4be542a245374b8daec0a99ae2379\" returns successfully" Mar 14 00:15:04.471944 containerd[2131]: time="2026-03-14T00:15:04.471824617Z" level=info msg="StopPodSandbox for \"e427cdabd1a228c3e37b5c3c10acd852d00c36cd3acf069a9e1a7e66f906b7ae\"" Mar 14 00:15:04.472155 containerd[2131]: time="2026-03-14T00:15:04.472116841Z" level=info msg="Container to stop \"9c3010c4fe9b2e45745638e770f270553b1fe8960a57cee9ef5a36d89beb6538\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 14 00:15:04.472155 containerd[2131]: time="2026-03-14T00:15:04.472151329Z" level=info msg="Container to stop \"17049e4331edb4ac97a50b4f8311d2eb3f65107ff8c4745d6ee036248de0b837\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 14 00:15:04.472427 containerd[2131]: time="2026-03-14T00:15:04.472175089Z" level=info msg="Container to stop \"632c3ac077a5bbfff47d598260f14d51d85c5570bf3054bf2ea7687285c0a21a\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 14 00:15:04.472427 containerd[2131]: time="2026-03-14T00:15:04.472198837Z" level=info msg="Container to stop \"e7288496a468a5aeb28ffc9791097e1b5aa4be542a245374b8daec0a99ae2379\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 14 00:15:04.472427 containerd[2131]: time="2026-03-14T00:15:04.472222249Z" level=info msg="Container to stop \"1bb0b907781265e1d118ab1516cf2c1c9e0c868d64ab1b20af04ed3ddb3360a3\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 14 00:15:04.548257 containerd[2131]: time="2026-03-14T00:15:04.547882346Z" level=info msg="shim disconnected" id=c4d72a4c7dd56c56087ed36c882a957806a1acce2aad7a2b95d5607fbd1b70a4 namespace=k8s.io Mar 14 00:15:04.548257 containerd[2131]: time="2026-03-14T00:15:04.547963562Z" level=warning msg="cleaning up after shim disconnected" 
id=c4d72a4c7dd56c56087ed36c882a957806a1acce2aad7a2b95d5607fbd1b70a4 namespace=k8s.io Mar 14 00:15:04.548257 containerd[2131]: time="2026-03-14T00:15:04.547984610Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 14 00:15:04.561960 containerd[2131]: time="2026-03-14T00:15:04.561586550Z" level=info msg="shim disconnected" id=e427cdabd1a228c3e37b5c3c10acd852d00c36cd3acf069a9e1a7e66f906b7ae namespace=k8s.io Mar 14 00:15:04.561960 containerd[2131]: time="2026-03-14T00:15:04.561665378Z" level=warning msg="cleaning up after shim disconnected" id=e427cdabd1a228c3e37b5c3c10acd852d00c36cd3acf069a9e1a7e66f906b7ae namespace=k8s.io Mar 14 00:15:04.561960 containerd[2131]: time="2026-03-14T00:15:04.561687266Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 14 00:15:04.589273 containerd[2131]: time="2026-03-14T00:15:04.589100522Z" level=info msg="TearDown network for sandbox \"c4d72a4c7dd56c56087ed36c882a957806a1acce2aad7a2b95d5607fbd1b70a4\" successfully" Mar 14 00:15:04.589273 containerd[2131]: time="2026-03-14T00:15:04.589200410Z" level=info msg="StopPodSandbox for \"c4d72a4c7dd56c56087ed36c882a957806a1acce2aad7a2b95d5607fbd1b70a4\" returns successfully" Mar 14 00:15:04.598012 containerd[2131]: time="2026-03-14T00:15:04.597555182Z" level=info msg="TearDown network for sandbox \"e427cdabd1a228c3e37b5c3c10acd852d00c36cd3acf069a9e1a7e66f906b7ae\" successfully" Mar 14 00:15:04.598012 containerd[2131]: time="2026-03-14T00:15:04.597604778Z" level=info msg="StopPodSandbox for \"e427cdabd1a228c3e37b5c3c10acd852d00c36cd3acf069a9e1a7e66f906b7ae\" returns successfully" Mar 14 00:15:04.626223 kubelet[3533]: I0314 00:15:04.626161 3533 scope.go:117] "RemoveContainer" containerID="e7288496a468a5aeb28ffc9791097e1b5aa4be542a245374b8daec0a99ae2379" Mar 14 00:15:04.632994 containerd[2131]: time="2026-03-14T00:15:04.632943818Z" level=info msg="RemoveContainer for \"e7288496a468a5aeb28ffc9791097e1b5aa4be542a245374b8daec0a99ae2379\"" Mar 14 00:15:04.641563 containerd[2131]: 
time="2026-03-14T00:15:04.641334638Z" level=info msg="RemoveContainer for \"e7288496a468a5aeb28ffc9791097e1b5aa4be542a245374b8daec0a99ae2379\" returns successfully" Mar 14 00:15:04.642355 kubelet[3533]: I0314 00:15:04.642086 3533 scope.go:117] "RemoveContainer" containerID="1bb0b907781265e1d118ab1516cf2c1c9e0c868d64ab1b20af04ed3ddb3360a3" Mar 14 00:15:04.645759 containerd[2131]: time="2026-03-14T00:15:04.645613778Z" level=info msg="RemoveContainer for \"1bb0b907781265e1d118ab1516cf2c1c9e0c868d64ab1b20af04ed3ddb3360a3\"" Mar 14 00:15:04.653517 containerd[2131]: time="2026-03-14T00:15:04.653368838Z" level=info msg="RemoveContainer for \"1bb0b907781265e1d118ab1516cf2c1c9e0c868d64ab1b20af04ed3ddb3360a3\" returns successfully" Mar 14 00:15:04.654232 kubelet[3533]: I0314 00:15:04.654171 3533 scope.go:117] "RemoveContainer" containerID="632c3ac077a5bbfff47d598260f14d51d85c5570bf3054bf2ea7687285c0a21a" Mar 14 00:15:04.659018 containerd[2131]: time="2026-03-14T00:15:04.658796726Z" level=info msg="RemoveContainer for \"632c3ac077a5bbfff47d598260f14d51d85c5570bf3054bf2ea7687285c0a21a\"" Mar 14 00:15:04.665287 containerd[2131]: time="2026-03-14T00:15:04.665185226Z" level=info msg="RemoveContainer for \"632c3ac077a5bbfff47d598260f14d51d85c5570bf3054bf2ea7687285c0a21a\" returns successfully" Mar 14 00:15:04.666017 kubelet[3533]: I0314 00:15:04.665892 3533 scope.go:117] "RemoveContainer" containerID="17049e4331edb4ac97a50b4f8311d2eb3f65107ff8c4745d6ee036248de0b837" Mar 14 00:15:04.668362 containerd[2131]: time="2026-03-14T00:15:04.667989338Z" level=info msg="RemoveContainer for \"17049e4331edb4ac97a50b4f8311d2eb3f65107ff8c4745d6ee036248de0b837\"" Mar 14 00:15:04.672502 containerd[2131]: time="2026-03-14T00:15:04.672443642Z" level=info msg="RemoveContainer for \"17049e4331edb4ac97a50b4f8311d2eb3f65107ff8c4745d6ee036248de0b837\" returns successfully" Mar 14 00:15:04.673235 kubelet[3533]: I0314 00:15:04.673087 3533 scope.go:117] "RemoveContainer" 
containerID="9c3010c4fe9b2e45745638e770f270553b1fe8960a57cee9ef5a36d89beb6538" Mar 14 00:15:04.677672 containerd[2131]: time="2026-03-14T00:15:04.675325766Z" level=info msg="RemoveContainer for \"9c3010c4fe9b2e45745638e770f270553b1fe8960a57cee9ef5a36d89beb6538\"" Mar 14 00:15:04.681714 containerd[2131]: time="2026-03-14T00:15:04.681500690Z" level=info msg="RemoveContainer for \"9c3010c4fe9b2e45745638e770f270553b1fe8960a57cee9ef5a36d89beb6538\" returns successfully" Mar 14 00:15:04.682379 kubelet[3533]: I0314 00:15:04.682324 3533 scope.go:117] "RemoveContainer" containerID="e7288496a468a5aeb28ffc9791097e1b5aa4be542a245374b8daec0a99ae2379" Mar 14 00:15:04.684593 containerd[2131]: time="2026-03-14T00:15:04.684404174Z" level=error msg="ContainerStatus for \"e7288496a468a5aeb28ffc9791097e1b5aa4be542a245374b8daec0a99ae2379\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e7288496a468a5aeb28ffc9791097e1b5aa4be542a245374b8daec0a99ae2379\": not found" Mar 14 00:15:04.684854 kubelet[3533]: E0314 00:15:04.684770 3533 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e7288496a468a5aeb28ffc9791097e1b5aa4be542a245374b8daec0a99ae2379\": not found" containerID="e7288496a468a5aeb28ffc9791097e1b5aa4be542a245374b8daec0a99ae2379" Mar 14 00:15:04.684995 kubelet[3533]: I0314 00:15:04.684829 3533 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e7288496a468a5aeb28ffc9791097e1b5aa4be542a245374b8daec0a99ae2379"} err="failed to get container status \"e7288496a468a5aeb28ffc9791097e1b5aa4be542a245374b8daec0a99ae2379\": rpc error: code = NotFound desc = an error occurred when try to find container \"e7288496a468a5aeb28ffc9791097e1b5aa4be542a245374b8daec0a99ae2379\": not found" Mar 14 00:15:04.684995 kubelet[3533]: I0314 00:15:04.684941 3533 scope.go:117] "RemoveContainer" 
containerID="1bb0b907781265e1d118ab1516cf2c1c9e0c868d64ab1b20af04ed3ddb3360a3" Mar 14 00:15:04.685616 containerd[2131]: time="2026-03-14T00:15:04.685547882Z" level=error msg="ContainerStatus for \"1bb0b907781265e1d118ab1516cf2c1c9e0c868d64ab1b20af04ed3ddb3360a3\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"1bb0b907781265e1d118ab1516cf2c1c9e0c868d64ab1b20af04ed3ddb3360a3\": not found" Mar 14 00:15:04.686001 kubelet[3533]: E0314 00:15:04.685952 3533 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"1bb0b907781265e1d118ab1516cf2c1c9e0c868d64ab1b20af04ed3ddb3360a3\": not found" containerID="1bb0b907781265e1d118ab1516cf2c1c9e0c868d64ab1b20af04ed3ddb3360a3" Mar 14 00:15:04.686147 kubelet[3533]: I0314 00:15:04.686015 3533 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"1bb0b907781265e1d118ab1516cf2c1c9e0c868d64ab1b20af04ed3ddb3360a3"} err="failed to get container status \"1bb0b907781265e1d118ab1516cf2c1c9e0c868d64ab1b20af04ed3ddb3360a3\": rpc error: code = NotFound desc = an error occurred when try to find container \"1bb0b907781265e1d118ab1516cf2c1c9e0c868d64ab1b20af04ed3ddb3360a3\": not found" Mar 14 00:15:04.686147 kubelet[3533]: I0314 00:15:04.686081 3533 scope.go:117] "RemoveContainer" containerID="632c3ac077a5bbfff47d598260f14d51d85c5570bf3054bf2ea7687285c0a21a" Mar 14 00:15:04.686856 kubelet[3533]: E0314 00:15:04.686784 3533 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"632c3ac077a5bbfff47d598260f14d51d85c5570bf3054bf2ea7687285c0a21a\": not found" containerID="632c3ac077a5bbfff47d598260f14d51d85c5570bf3054bf2ea7687285c0a21a" Mar 14 00:15:04.686856 kubelet[3533]: I0314 00:15:04.686837 3533 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"containerd","ID":"632c3ac077a5bbfff47d598260f14d51d85c5570bf3054bf2ea7687285c0a21a"} err="failed to get container status \"632c3ac077a5bbfff47d598260f14d51d85c5570bf3054bf2ea7687285c0a21a\": rpc error: code = NotFound desc = an error occurred when try to find container \"632c3ac077a5bbfff47d598260f14d51d85c5570bf3054bf2ea7687285c0a21a\": not found" Mar 14 00:15:04.686984 containerd[2131]: time="2026-03-14T00:15:04.686484302Z" level=error msg="ContainerStatus for \"632c3ac077a5bbfff47d598260f14d51d85c5570bf3054bf2ea7687285c0a21a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"632c3ac077a5bbfff47d598260f14d51d85c5570bf3054bf2ea7687285c0a21a\": not found" Mar 14 00:15:04.687086 kubelet[3533]: I0314 00:15:04.686876 3533 scope.go:117] "RemoveContainer" containerID="17049e4331edb4ac97a50b4f8311d2eb3f65107ff8c4745d6ee036248de0b837" Mar 14 00:15:04.687584 containerd[2131]: time="2026-03-14T00:15:04.687427874Z" level=error msg="ContainerStatus for \"17049e4331edb4ac97a50b4f8311d2eb3f65107ff8c4745d6ee036248de0b837\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"17049e4331edb4ac97a50b4f8311d2eb3f65107ff8c4745d6ee036248de0b837\": not found" Mar 14 00:15:04.687789 kubelet[3533]: E0314 00:15:04.687734 3533 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"17049e4331edb4ac97a50b4f8311d2eb3f65107ff8c4745d6ee036248de0b837\": not found" containerID="17049e4331edb4ac97a50b4f8311d2eb3f65107ff8c4745d6ee036248de0b837" Mar 14 00:15:04.687865 kubelet[3533]: I0314 00:15:04.687799 3533 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"17049e4331edb4ac97a50b4f8311d2eb3f65107ff8c4745d6ee036248de0b837"} err="failed to get container status \"17049e4331edb4ac97a50b4f8311d2eb3f65107ff8c4745d6ee036248de0b837\": rpc error: code = NotFound 
desc = an error occurred when try to find container \"17049e4331edb4ac97a50b4f8311d2eb3f65107ff8c4745d6ee036248de0b837\": not found" Mar 14 00:15:04.687865 kubelet[3533]: I0314 00:15:04.687835 3533 scope.go:117] "RemoveContainer" containerID="9c3010c4fe9b2e45745638e770f270553b1fe8960a57cee9ef5a36d89beb6538" Mar 14 00:15:04.688697 containerd[2131]: time="2026-03-14T00:15:04.688252886Z" level=error msg="ContainerStatus for \"9c3010c4fe9b2e45745638e770f270553b1fe8960a57cee9ef5a36d89beb6538\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"9c3010c4fe9b2e45745638e770f270553b1fe8960a57cee9ef5a36d89beb6538\": not found" Mar 14 00:15:04.689250 kubelet[3533]: E0314 00:15:04.689198 3533 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"9c3010c4fe9b2e45745638e770f270553b1fe8960a57cee9ef5a36d89beb6538\": not found" containerID="9c3010c4fe9b2e45745638e770f270553b1fe8960a57cee9ef5a36d89beb6538" Mar 14 00:15:04.689722 kubelet[3533]: I0314 00:15:04.689478 3533 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"9c3010c4fe9b2e45745638e770f270553b1fe8960a57cee9ef5a36d89beb6538"} err="failed to get container status \"9c3010c4fe9b2e45745638e770f270553b1fe8960a57cee9ef5a36d89beb6538\": rpc error: code = NotFound desc = an error occurred when try to find container \"9c3010c4fe9b2e45745638e770f270553b1fe8960a57cee9ef5a36d89beb6538\": not found" Mar 14 00:15:04.689722 kubelet[3533]: I0314 00:15:04.689572 3533 scope.go:117] "RemoveContainer" containerID="25429b629538c7c59f05bc5e4b0be64b6f9d6d0e3aa18b8ce2995827e8630064" Mar 14 00:15:04.692795 containerd[2131]: time="2026-03-14T00:15:04.692725070Z" level=info msg="RemoveContainer for \"25429b629538c7c59f05bc5e4b0be64b6f9d6d0e3aa18b8ce2995827e8630064\"" Mar 14 00:15:04.694064 kubelet[3533]: I0314 00:15:04.693231 3533 reconciler_common.go:162] 
"operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/d2128d7a-5792-4da1-af5d-caf312b35cca-host-proc-sys-net\") pod \"d2128d7a-5792-4da1-af5d-caf312b35cca\" (UID: \"d2128d7a-5792-4da1-af5d-caf312b35cca\") " Mar 14 00:15:04.694064 kubelet[3533]: I0314 00:15:04.693300 3533 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/d2128d7a-5792-4da1-af5d-caf312b35cca-cni-path\") pod \"d2128d7a-5792-4da1-af5d-caf312b35cca\" (UID: \"d2128d7a-5792-4da1-af5d-caf312b35cca\") " Mar 14 00:15:04.694064 kubelet[3533]: I0314 00:15:04.693384 3533 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q4dsd\" (UniqueName: \"kubernetes.io/projected/e4d379db-3d6e-46e7-8fbe-6ee3981918d5-kube-api-access-q4dsd\") pod \"e4d379db-3d6e-46e7-8fbe-6ee3981918d5\" (UID: \"e4d379db-3d6e-46e7-8fbe-6ee3981918d5\") " Mar 14 00:15:04.694064 kubelet[3533]: I0314 00:15:04.693404 3533 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d2128d7a-5792-4da1-af5d-caf312b35cca-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "d2128d7a-5792-4da1-af5d-caf312b35cca" (UID: "d2128d7a-5792-4da1-af5d-caf312b35cca"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 14 00:15:04.694064 kubelet[3533]: I0314 00:15:04.693435 3533 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d2128d7a-5792-4da1-af5d-caf312b35cca-xtables-lock\") pod \"d2128d7a-5792-4da1-af5d-caf312b35cca\" (UID: \"d2128d7a-5792-4da1-af5d-caf312b35cca\") " Mar 14 00:15:04.694064 kubelet[3533]: I0314 00:15:04.693497 3533 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/d2128d7a-5792-4da1-af5d-caf312b35cca-clustermesh-secrets\") pod \"d2128d7a-5792-4da1-af5d-caf312b35cca\" (UID: \"d2128d7a-5792-4da1-af5d-caf312b35cca\") " Mar 14 00:15:04.694495 kubelet[3533]: I0314 00:15:04.693533 3533 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/d2128d7a-5792-4da1-af5d-caf312b35cca-cilium-run\") pod \"d2128d7a-5792-4da1-af5d-caf312b35cca\" (UID: \"d2128d7a-5792-4da1-af5d-caf312b35cca\") " Mar 14 00:15:04.694495 kubelet[3533]: I0314 00:15:04.693569 3533 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/d2128d7a-5792-4da1-af5d-caf312b35cca-cilium-cgroup\") pod \"d2128d7a-5792-4da1-af5d-caf312b35cca\" (UID: \"d2128d7a-5792-4da1-af5d-caf312b35cca\") " Mar 14 00:15:04.694495 kubelet[3533]: I0314 00:15:04.693610 3533 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/d2128d7a-5792-4da1-af5d-caf312b35cca-hubble-tls\") pod \"d2128d7a-5792-4da1-af5d-caf312b35cca\" (UID: \"d2128d7a-5792-4da1-af5d-caf312b35cca\") " Mar 14 00:15:04.694495 kubelet[3533]: I0314 00:15:04.693643 3533 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/d2128d7a-5792-4da1-af5d-caf312b35cca-lib-modules\") pod \"d2128d7a-5792-4da1-af5d-caf312b35cca\" (UID: \"d2128d7a-5792-4da1-af5d-caf312b35cca\") " Mar 14 00:15:04.694495 kubelet[3533]: I0314 00:15:04.693682 3533 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d2128d7a-5792-4da1-af5d-caf312b35cca-cilium-config-path\") pod \"d2128d7a-5792-4da1-af5d-caf312b35cca\" (UID: \"d2128d7a-5792-4da1-af5d-caf312b35cca\") " Mar 14 00:15:04.694495 kubelet[3533]: I0314 00:15:04.693725 3533 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x9kw5\" (UniqueName: \"kubernetes.io/projected/d2128d7a-5792-4da1-af5d-caf312b35cca-kube-api-access-x9kw5\") pod \"d2128d7a-5792-4da1-af5d-caf312b35cca\" (UID: \"d2128d7a-5792-4da1-af5d-caf312b35cca\") " Mar 14 00:15:04.694837 kubelet[3533]: I0314 00:15:04.693762 3533 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/d2128d7a-5792-4da1-af5d-caf312b35cca-host-proc-sys-kernel\") pod \"d2128d7a-5792-4da1-af5d-caf312b35cca\" (UID: \"d2128d7a-5792-4da1-af5d-caf312b35cca\") " Mar 14 00:15:04.694837 kubelet[3533]: I0314 00:15:04.693794 3533 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/d2128d7a-5792-4da1-af5d-caf312b35cca-bpf-maps\") pod \"d2128d7a-5792-4da1-af5d-caf312b35cca\" (UID: \"d2128d7a-5792-4da1-af5d-caf312b35cca\") " Mar 14 00:15:04.694837 kubelet[3533]: I0314 00:15:04.693828 3533 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/d2128d7a-5792-4da1-af5d-caf312b35cca-hostproc\") pod \"d2128d7a-5792-4da1-af5d-caf312b35cca\" (UID: \"d2128d7a-5792-4da1-af5d-caf312b35cca\") " Mar 14 00:15:04.694837 kubelet[3533]: I0314 00:15:04.693878 
3533 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e4d379db-3d6e-46e7-8fbe-6ee3981918d5-cilium-config-path\") pod \"e4d379db-3d6e-46e7-8fbe-6ee3981918d5\" (UID: \"e4d379db-3d6e-46e7-8fbe-6ee3981918d5\") " Mar 14 00:15:04.694837 kubelet[3533]: I0314 00:15:04.693918 3533 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d2128d7a-5792-4da1-af5d-caf312b35cca-etc-cni-netd\") pod \"d2128d7a-5792-4da1-af5d-caf312b35cca\" (UID: \"d2128d7a-5792-4da1-af5d-caf312b35cca\") " Mar 14 00:15:04.694837 kubelet[3533]: I0314 00:15:04.694005 3533 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/d2128d7a-5792-4da1-af5d-caf312b35cca-host-proc-sys-net\") on node \"ip-172-31-24-247\" DevicePath \"\"" Mar 14 00:15:04.695254 kubelet[3533]: I0314 00:15:04.694100 3533 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d2128d7a-5792-4da1-af5d-caf312b35cca-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "d2128d7a-5792-4da1-af5d-caf312b35cca" (UID: "d2128d7a-5792-4da1-af5d-caf312b35cca"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 14 00:15:04.695254 kubelet[3533]: I0314 00:15:04.694153 3533 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d2128d7a-5792-4da1-af5d-caf312b35cca-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "d2128d7a-5792-4da1-af5d-caf312b35cca" (UID: "d2128d7a-5792-4da1-af5d-caf312b35cca"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 14 00:15:04.697088 kubelet[3533]: I0314 00:15:04.695684 3533 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d2128d7a-5792-4da1-af5d-caf312b35cca-cni-path" (OuterVolumeSpecName: "cni-path") pod "d2128d7a-5792-4da1-af5d-caf312b35cca" (UID: "d2128d7a-5792-4da1-af5d-caf312b35cca"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 14 00:15:04.699987 containerd[2131]: time="2026-03-14T00:15:04.699910478Z" level=info msg="RemoveContainer for \"25429b629538c7c59f05bc5e4b0be64b6f9d6d0e3aa18b8ce2995827e8630064\" returns successfully" Mar 14 00:15:04.701274 kubelet[3533]: I0314 00:15:04.701208 3533 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d2128d7a-5792-4da1-af5d-caf312b35cca-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "d2128d7a-5792-4da1-af5d-caf312b35cca" (UID: "d2128d7a-5792-4da1-af5d-caf312b35cca"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 14 00:15:04.702270 kubelet[3533]: I0314 00:15:04.702195 3533 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d2128d7a-5792-4da1-af5d-caf312b35cca-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "d2128d7a-5792-4da1-af5d-caf312b35cca" (UID: "d2128d7a-5792-4da1-af5d-caf312b35cca"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 14 00:15:04.702463 kubelet[3533]: I0314 00:15:04.702293 3533 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d2128d7a-5792-4da1-af5d-caf312b35cca-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "d2128d7a-5792-4da1-af5d-caf312b35cca" (UID: "d2128d7a-5792-4da1-af5d-caf312b35cca"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 14 00:15:04.702463 kubelet[3533]: I0314 00:15:04.702339 3533 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d2128d7a-5792-4da1-af5d-caf312b35cca-hostproc" (OuterVolumeSpecName: "hostproc") pod "d2128d7a-5792-4da1-af5d-caf312b35cca" (UID: "d2128d7a-5792-4da1-af5d-caf312b35cca"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 14 00:15:04.706549 kubelet[3533]: I0314 00:15:04.706278 3533 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d2128d7a-5792-4da1-af5d-caf312b35cca-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "d2128d7a-5792-4da1-af5d-caf312b35cca" (UID: "d2128d7a-5792-4da1-af5d-caf312b35cca"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 14 00:15:04.707338 kubelet[3533]: I0314 00:15:04.707174 3533 scope.go:117] "RemoveContainer" containerID="25429b629538c7c59f05bc5e4b0be64b6f9d6d0e3aa18b8ce2995827e8630064" Mar 14 00:15:04.708175 kubelet[3533]: I0314 00:15:04.708094 3533 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d2128d7a-5792-4da1-af5d-caf312b35cca-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "d2128d7a-5792-4da1-af5d-caf312b35cca" (UID: "d2128d7a-5792-4da1-af5d-caf312b35cca"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 14 00:15:04.708892 containerd[2131]: time="2026-03-14T00:15:04.708743114Z" level=error msg="ContainerStatus for \"25429b629538c7c59f05bc5e4b0be64b6f9d6d0e3aa18b8ce2995827e8630064\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"25429b629538c7c59f05bc5e4b0be64b6f9d6d0e3aa18b8ce2995827e8630064\": not found" Mar 14 00:15:04.709595 kubelet[3533]: E0314 00:15:04.709552 3533 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"25429b629538c7c59f05bc5e4b0be64b6f9d6d0e3aa18b8ce2995827e8630064\": not found" containerID="25429b629538c7c59f05bc5e4b0be64b6f9d6d0e3aa18b8ce2995827e8630064" Mar 14 00:15:04.711186 kubelet[3533]: I0314 00:15:04.711110 3533 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"25429b629538c7c59f05bc5e4b0be64b6f9d6d0e3aa18b8ce2995827e8630064"} err="failed to get container status \"25429b629538c7c59f05bc5e4b0be64b6f9d6d0e3aa18b8ce2995827e8630064\": rpc error: code = NotFound desc = an error occurred when try to find container \"25429b629538c7c59f05bc5e4b0be64b6f9d6d0e3aa18b8ce2995827e8630064\": not found" Mar 14 00:15:04.715527 kubelet[3533]: I0314 00:15:04.715275 3533 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e4d379db-3d6e-46e7-8fbe-6ee3981918d5-kube-api-access-q4dsd" (OuterVolumeSpecName: "kube-api-access-q4dsd") pod "e4d379db-3d6e-46e7-8fbe-6ee3981918d5" (UID: "e4d379db-3d6e-46e7-8fbe-6ee3981918d5"). InnerVolumeSpecName "kube-api-access-q4dsd". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 14 00:15:04.715527 kubelet[3533]: I0314 00:15:04.715418 3533 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d2128d7a-5792-4da1-af5d-caf312b35cca-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "d2128d7a-5792-4da1-af5d-caf312b35cca" (UID: "d2128d7a-5792-4da1-af5d-caf312b35cca"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Mar 14 00:15:04.717350 kubelet[3533]: I0314 00:15:04.717227 3533 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d2128d7a-5792-4da1-af5d-caf312b35cca-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "d2128d7a-5792-4da1-af5d-caf312b35cca" (UID: "d2128d7a-5792-4da1-af5d-caf312b35cca"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 14 00:15:04.718087 kubelet[3533]: I0314 00:15:04.717973 3533 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d2128d7a-5792-4da1-af5d-caf312b35cca-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "d2128d7a-5792-4da1-af5d-caf312b35cca" (UID: "d2128d7a-5792-4da1-af5d-caf312b35cca"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 14 00:15:04.718234 kubelet[3533]: I0314 00:15:04.718011 3533 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d2128d7a-5792-4da1-af5d-caf312b35cca-kube-api-access-x9kw5" (OuterVolumeSpecName: "kube-api-access-x9kw5") pod "d2128d7a-5792-4da1-af5d-caf312b35cca" (UID: "d2128d7a-5792-4da1-af5d-caf312b35cca"). InnerVolumeSpecName "kube-api-access-x9kw5". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 14 00:15:04.718557 kubelet[3533]: I0314 00:15:04.718516 3533 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e4d379db-3d6e-46e7-8fbe-6ee3981918d5-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "e4d379db-3d6e-46e7-8fbe-6ee3981918d5" (UID: "e4d379db-3d6e-46e7-8fbe-6ee3981918d5"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 14 00:15:04.794498 kubelet[3533]: I0314 00:15:04.794448 3533 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/d2128d7a-5792-4da1-af5d-caf312b35cca-cilium-run\") on node \"ip-172-31-24-247\" DevicePath \"\"" Mar 14 00:15:04.794498 kubelet[3533]: I0314 00:15:04.794501 3533 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/d2128d7a-5792-4da1-af5d-caf312b35cca-cilium-cgroup\") on node \"ip-172-31-24-247\" DevicePath \"\"" Mar 14 00:15:04.794737 kubelet[3533]: I0314 00:15:04.794528 3533 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/d2128d7a-5792-4da1-af5d-caf312b35cca-hubble-tls\") on node \"ip-172-31-24-247\" DevicePath \"\"" Mar 14 00:15:04.794737 kubelet[3533]: I0314 00:15:04.794550 3533 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d2128d7a-5792-4da1-af5d-caf312b35cca-lib-modules\") on node \"ip-172-31-24-247\" DevicePath \"\"" Mar 14 00:15:04.794737 kubelet[3533]: I0314 00:15:04.794573 3533 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d2128d7a-5792-4da1-af5d-caf312b35cca-cilium-config-path\") on node \"ip-172-31-24-247\" DevicePath \"\"" Mar 14 00:15:04.794737 kubelet[3533]: I0314 00:15:04.794596 3533 reconciler_common.go:299] "Volume detached for volume 
\"kube-api-access-x9kw5\" (UniqueName: \"kubernetes.io/projected/d2128d7a-5792-4da1-af5d-caf312b35cca-kube-api-access-x9kw5\") on node \"ip-172-31-24-247\" DevicePath \"\"" Mar 14 00:15:04.794737 kubelet[3533]: I0314 00:15:04.794620 3533 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/d2128d7a-5792-4da1-af5d-caf312b35cca-host-proc-sys-kernel\") on node \"ip-172-31-24-247\" DevicePath \"\"" Mar 14 00:15:04.794737 kubelet[3533]: I0314 00:15:04.794654 3533 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/d2128d7a-5792-4da1-af5d-caf312b35cca-bpf-maps\") on node \"ip-172-31-24-247\" DevicePath \"\"" Mar 14 00:15:04.794737 kubelet[3533]: I0314 00:15:04.794675 3533 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/d2128d7a-5792-4da1-af5d-caf312b35cca-hostproc\") on node \"ip-172-31-24-247\" DevicePath \"\"" Mar 14 00:15:04.794737 kubelet[3533]: I0314 00:15:04.794695 3533 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e4d379db-3d6e-46e7-8fbe-6ee3981918d5-cilium-config-path\") on node \"ip-172-31-24-247\" DevicePath \"\"" Mar 14 00:15:04.795211 kubelet[3533]: I0314 00:15:04.794718 3533 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d2128d7a-5792-4da1-af5d-caf312b35cca-etc-cni-netd\") on node \"ip-172-31-24-247\" DevicePath \"\"" Mar 14 00:15:04.795211 kubelet[3533]: I0314 00:15:04.794738 3533 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/d2128d7a-5792-4da1-af5d-caf312b35cca-cni-path\") on node \"ip-172-31-24-247\" DevicePath \"\"" Mar 14 00:15:04.795211 kubelet[3533]: I0314 00:15:04.794759 3533 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-q4dsd\" (UniqueName: 
\"kubernetes.io/projected/e4d379db-3d6e-46e7-8fbe-6ee3981918d5-kube-api-access-q4dsd\") on node \"ip-172-31-24-247\" DevicePath \"\"" Mar 14 00:15:04.795211 kubelet[3533]: I0314 00:15:04.794781 3533 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d2128d7a-5792-4da1-af5d-caf312b35cca-xtables-lock\") on node \"ip-172-31-24-247\" DevicePath \"\"" Mar 14 00:15:04.795211 kubelet[3533]: I0314 00:15:04.794804 3533 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/d2128d7a-5792-4da1-af5d-caf312b35cca-clustermesh-secrets\") on node \"ip-172-31-24-247\" DevicePath \"\"" Mar 14 00:15:05.200941 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c4d72a4c7dd56c56087ed36c882a957806a1acce2aad7a2b95d5607fbd1b70a4-rootfs.mount: Deactivated successfully. Mar 14 00:15:05.201267 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e427cdabd1a228c3e37b5c3c10acd852d00c36cd3acf069a9e1a7e66f906b7ae-rootfs.mount: Deactivated successfully. Mar 14 00:15:05.201518 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e427cdabd1a228c3e37b5c3c10acd852d00c36cd3acf069a9e1a7e66f906b7ae-shm.mount: Deactivated successfully. Mar 14 00:15:05.201763 systemd[1]: var-lib-kubelet-pods-e4d379db\x2d3d6e\x2d46e7\x2d8fbe\x2d6ee3981918d5-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dq4dsd.mount: Deactivated successfully. Mar 14 00:15:05.202011 systemd[1]: var-lib-kubelet-pods-d2128d7a\x2d5792\x2d4da1\x2daf5d\x2dcaf312b35cca-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dx9kw5.mount: Deactivated successfully. Mar 14 00:15:05.202285 systemd[1]: var-lib-kubelet-pods-d2128d7a\x2d5792\x2d4da1\x2daf5d\x2dcaf312b35cca-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. 
Mar 14 00:15:05.202520 systemd[1]: var-lib-kubelet-pods-d2128d7a\x2d5792\x2d4da1\x2daf5d\x2dcaf312b35cca-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Mar 14 00:15:06.039405 kubelet[3533]: I0314 00:15:06.039333 3533 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d2128d7a-5792-4da1-af5d-caf312b35cca" path="/var/lib/kubelet/pods/d2128d7a-5792-4da1-af5d-caf312b35cca/volumes" Mar 14 00:15:06.040978 kubelet[3533]: I0314 00:15:06.040914 3533 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e4d379db-3d6e-46e7-8fbe-6ee3981918d5" path="/var/lib/kubelet/pods/e4d379db-3d6e-46e7-8fbe-6ee3981918d5/volumes" Mar 14 00:15:06.167805 sshd[5145]: pam_unix(sshd:session): session closed for user core Mar 14 00:15:06.176440 systemd[1]: sshd@22-172.31.24.247:22-68.220.241.50:51208.service: Deactivated successfully. Mar 14 00:15:06.182188 systemd-logind[2105]: Session 23 logged out. Waiting for processes to exit. Mar 14 00:15:06.182880 systemd[1]: session-23.scope: Deactivated successfully. Mar 14 00:15:06.188735 systemd-logind[2105]: Removed session 23. Mar 14 00:15:06.254509 systemd[1]: Started sshd@23-172.31.24.247:22-68.220.241.50:58918.service - OpenSSH per-connection server daemon (68.220.241.50:58918). 
Mar 14 00:15:06.278901 kubelet[3533]: E0314 00:15:06.278801 3533 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Mar 14 00:15:06.502607 ntpd[2085]: Deleting interface #10 lxc_health, fe80::3401:90ff:fe4a:1abe%8#123, interface stats: received=0, sent=0, dropped=0, active_time=78 secs Mar 14 00:15:06.773863 sshd[5315]: Accepted publickey for core from 68.220.241.50 port 58918 ssh2: RSA SHA256:wTcZPyU9bRq4OYS8Q3ttppxvBQbw+A1YvhVCQAQQbeI Mar 14 00:15:06.776919 sshd[5315]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 14 00:15:06.785742 systemd-logind[2105]: New session 24 of user core. Mar 14 00:15:06.801707 systemd[1]: Started session-24.scope - Session 24 of User core. 
Mar 14 00:15:08.303065 kubelet[3533]: I0314 00:15:08.299203 3533 setters.go:618] "Node became not ready" node="ip-172-31-24-247" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T00:15:08Z","lastTransitionTime":"2026-03-14T00:15:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Mar 14 00:15:09.716440 sshd[5315]: pam_unix(sshd:session): session closed for user core Mar 14 00:15:09.731390 kubelet[3533]: I0314 00:15:09.731330 3533 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/5d95d793-f034-4709-b19c-34ea26c1ebe3-cilium-run\") pod \"cilium-xc2vt\" (UID: \"5d95d793-f034-4709-b19c-34ea26c1ebe3\") " pod="kube-system/cilium-xc2vt" Mar 14 00:15:09.731985 kubelet[3533]: I0314 00:15:09.731402 3533 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/5d95d793-f034-4709-b19c-34ea26c1ebe3-hostproc\") pod \"cilium-xc2vt\" (UID: \"5d95d793-f034-4709-b19c-34ea26c1ebe3\") " pod="kube-system/cilium-xc2vt" Mar 14 00:15:09.731985 kubelet[3533]: I0314 00:15:09.731448 3533 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5d95d793-f034-4709-b19c-34ea26c1ebe3-xtables-lock\") pod \"cilium-xc2vt\" (UID: \"5d95d793-f034-4709-b19c-34ea26c1ebe3\") " pod="kube-system/cilium-xc2vt" Mar 14 00:15:09.731985 kubelet[3533]: I0314 00:15:09.731491 3533 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fsmxh\" (UniqueName: \"kubernetes.io/projected/5d95d793-f034-4709-b19c-34ea26c1ebe3-kube-api-access-fsmxh\") pod \"cilium-xc2vt\" (UID: \"5d95d793-f034-4709-b19c-34ea26c1ebe3\") " 
pod="kube-system/cilium-xc2vt" Mar 14 00:15:09.731985 kubelet[3533]: I0314 00:15:09.731531 3533 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/5d95d793-f034-4709-b19c-34ea26c1ebe3-bpf-maps\") pod \"cilium-xc2vt\" (UID: \"5d95d793-f034-4709-b19c-34ea26c1ebe3\") " pod="kube-system/cilium-xc2vt" Mar 14 00:15:09.731985 kubelet[3533]: I0314 00:15:09.731568 3533 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5d95d793-f034-4709-b19c-34ea26c1ebe3-lib-modules\") pod \"cilium-xc2vt\" (UID: \"5d95d793-f034-4709-b19c-34ea26c1ebe3\") " pod="kube-system/cilium-xc2vt" Mar 14 00:15:09.731985 kubelet[3533]: I0314 00:15:09.731607 3533 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/5d95d793-f034-4709-b19c-34ea26c1ebe3-cilium-ipsec-secrets\") pod \"cilium-xc2vt\" (UID: \"5d95d793-f034-4709-b19c-34ea26c1ebe3\") " pod="kube-system/cilium-xc2vt" Mar 14 00:15:09.737202 kubelet[3533]: I0314 00:15:09.731644 3533 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/5d95d793-f034-4709-b19c-34ea26c1ebe3-host-proc-sys-kernel\") pod \"cilium-xc2vt\" (UID: \"5d95d793-f034-4709-b19c-34ea26c1ebe3\") " pod="kube-system/cilium-xc2vt" Mar 14 00:15:09.737202 kubelet[3533]: I0314 00:15:09.731685 3533 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/5d95d793-f034-4709-b19c-34ea26c1ebe3-cilium-cgroup\") pod \"cilium-xc2vt\" (UID: \"5d95d793-f034-4709-b19c-34ea26c1ebe3\") " pod="kube-system/cilium-xc2vt" Mar 14 00:15:09.737202 kubelet[3533]: I0314 00:15:09.731723 3533 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5d95d793-f034-4709-b19c-34ea26c1ebe3-etc-cni-netd\") pod \"cilium-xc2vt\" (UID: \"5d95d793-f034-4709-b19c-34ea26c1ebe3\") " pod="kube-system/cilium-xc2vt" Mar 14 00:15:09.737202 kubelet[3533]: I0314 00:15:09.731759 3533 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/5d95d793-f034-4709-b19c-34ea26c1ebe3-clustermesh-secrets\") pod \"cilium-xc2vt\" (UID: \"5d95d793-f034-4709-b19c-34ea26c1ebe3\") " pod="kube-system/cilium-xc2vt" Mar 14 00:15:09.737202 kubelet[3533]: I0314 00:15:09.731800 3533 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/5d95d793-f034-4709-b19c-34ea26c1ebe3-host-proc-sys-net\") pod \"cilium-xc2vt\" (UID: \"5d95d793-f034-4709-b19c-34ea26c1ebe3\") " pod="kube-system/cilium-xc2vt" Mar 14 00:15:09.737202 kubelet[3533]: I0314 00:15:09.731837 3533 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/5d95d793-f034-4709-b19c-34ea26c1ebe3-hubble-tls\") pod \"cilium-xc2vt\" (UID: \"5d95d793-f034-4709-b19c-34ea26c1ebe3\") " pod="kube-system/cilium-xc2vt" Mar 14 00:15:09.732305 systemd[1]: sshd@23-172.31.24.247:22-68.220.241.50:58918.service: Deactivated successfully. 
Mar 14 00:15:09.744395 kubelet[3533]: I0314 00:15:09.731874 3533 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5d95d793-f034-4709-b19c-34ea26c1ebe3-cilium-config-path\") pod \"cilium-xc2vt\" (UID: \"5d95d793-f034-4709-b19c-34ea26c1ebe3\") " pod="kube-system/cilium-xc2vt" Mar 14 00:15:09.744395 kubelet[3533]: I0314 00:15:09.731913 3533 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/5d95d793-f034-4709-b19c-34ea26c1ebe3-cni-path\") pod \"cilium-xc2vt\" (UID: \"5d95d793-f034-4709-b19c-34ea26c1ebe3\") " pod="kube-system/cilium-xc2vt" Mar 14 00:15:09.748916 systemd[1]: session-24.scope: Deactivated successfully. Mar 14 00:15:09.758348 systemd-logind[2105]: Session 24 logged out. Waiting for processes to exit. Mar 14 00:15:09.763922 systemd-logind[2105]: Removed session 24. Mar 14 00:15:09.805529 systemd[1]: Started sshd@24-172.31.24.247:22-68.220.241.50:58934.service - OpenSSH per-connection server daemon (68.220.241.50:58934). Mar 14 00:15:10.000870 containerd[2131]: time="2026-03-14T00:15:10.000162965Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-xc2vt,Uid:5d95d793-f034-4709-b19c-34ea26c1ebe3,Namespace:kube-system,Attempt:0,}" Mar 14 00:15:10.042656 containerd[2131]: time="2026-03-14T00:15:10.042483401Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 14 00:15:10.042656 containerd[2131]: time="2026-03-14T00:15:10.042606317Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 14 00:15:10.042656 containerd[2131]: time="2026-03-14T00:15:10.042644849Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:15:10.043643 containerd[2131]: time="2026-03-14T00:15:10.043093433Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:15:10.109205 containerd[2131]: time="2026-03-14T00:15:10.109051397Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-xc2vt,Uid:5d95d793-f034-4709-b19c-34ea26c1ebe3,Namespace:kube-system,Attempt:0,} returns sandbox id \"fd66cffee90496aa33efb79ecd5b6dc9a18cbd9a9e2ba366e4986ec956e12878\"" Mar 14 00:15:10.118046 containerd[2131]: time="2026-03-14T00:15:10.117949565Z" level=info msg="CreateContainer within sandbox \"fd66cffee90496aa33efb79ecd5b6dc9a18cbd9a9e2ba366e4986ec956e12878\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Mar 14 00:15:10.133267 containerd[2131]: time="2026-03-14T00:15:10.133136477Z" level=info msg="CreateContainer within sandbox \"fd66cffee90496aa33efb79ecd5b6dc9a18cbd9a9e2ba366e4986ec956e12878\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"e1397f5336f1ff9de89a9f52da2c5353fb5d0909a1f3758d57e1537e670ca642\"" Mar 14 00:15:10.134830 containerd[2131]: time="2026-03-14T00:15:10.134245325Z" level=info msg="StartContainer for \"e1397f5336f1ff9de89a9f52da2c5353fb5d0909a1f3758d57e1537e670ca642\"" Mar 14 00:15:10.251783 containerd[2131]: time="2026-03-14T00:15:10.251616774Z" level=info msg="StartContainer for \"e1397f5336f1ff9de89a9f52da2c5353fb5d0909a1f3758d57e1537e670ca642\" returns successfully" Mar 14 00:15:10.320073 containerd[2131]: time="2026-03-14T00:15:10.319891050Z" level=info msg="shim disconnected" id=e1397f5336f1ff9de89a9f52da2c5353fb5d0909a1f3758d57e1537e670ca642 namespace=k8s.io Mar 14 00:15:10.320073 containerd[2131]: time="2026-03-14T00:15:10.319965594Z" level=warning msg="cleaning up after shim disconnected" id=e1397f5336f1ff9de89a9f52da2c5353fb5d0909a1f3758d57e1537e670ca642 namespace=k8s.io Mar 14 
00:15:10.320073 containerd[2131]: time="2026-03-14T00:15:10.319988874Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 14 00:15:10.323094 sshd[5327]: Accepted publickey for core from 68.220.241.50 port 58934 ssh2: RSA SHA256:wTcZPyU9bRq4OYS8Q3ttppxvBQbw+A1YvhVCQAQQbeI Mar 14 00:15:10.328604 sshd[5327]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 14 00:15:10.339475 systemd-logind[2105]: New session 25 of user core. Mar 14 00:15:10.349006 systemd[1]: Started session-25.scope - Session 25 of User core. Mar 14 00:15:10.668727 containerd[2131]: time="2026-03-14T00:15:10.668654324Z" level=info msg="CreateContainer within sandbox \"fd66cffee90496aa33efb79ecd5b6dc9a18cbd9a9e2ba366e4986ec956e12878\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Mar 14 00:15:10.680519 sshd[5327]: pam_unix(sshd:session): session closed for user core Mar 14 00:15:10.699625 systemd[1]: sshd@24-172.31.24.247:22-68.220.241.50:58934.service: Deactivated successfully. Mar 14 00:15:10.710465 systemd[1]: session-25.scope: Deactivated successfully. Mar 14 00:15:10.713464 systemd-logind[2105]: Session 25 logged out. Waiting for processes to exit. Mar 14 00:15:10.718413 systemd-logind[2105]: Removed session 25. Mar 14 00:15:10.725710 containerd[2131]: time="2026-03-14T00:15:10.725519612Z" level=info msg="CreateContainer within sandbox \"fd66cffee90496aa33efb79ecd5b6dc9a18cbd9a9e2ba366e4986ec956e12878\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"23e86f471d4c39e48443cc178f7277d2c9f60ce6131ab9474dd85617ddb0ba9a\"" Mar 14 00:15:10.726790 containerd[2131]: time="2026-03-14T00:15:10.726547280Z" level=info msg="StartContainer for \"23e86f471d4c39e48443cc178f7277d2c9f60ce6131ab9474dd85617ddb0ba9a\"" Mar 14 00:15:10.766528 systemd[1]: Started sshd@25-172.31.24.247:22-68.220.241.50:58950.service - OpenSSH per-connection server daemon (68.220.241.50:58950). 
Mar 14 00:15:10.829312 containerd[2131]: time="2026-03-14T00:15:10.829212993Z" level=info msg="StartContainer for \"23e86f471d4c39e48443cc178f7277d2c9f60ce6131ab9474dd85617ddb0ba9a\" returns successfully" Mar 14 00:15:10.908480 containerd[2131]: time="2026-03-14T00:15:10.908318325Z" level=info msg="shim disconnected" id=23e86f471d4c39e48443cc178f7277d2c9f60ce6131ab9474dd85617ddb0ba9a namespace=k8s.io Mar 14 00:15:10.908480 containerd[2131]: time="2026-03-14T00:15:10.908395005Z" level=warning msg="cleaning up after shim disconnected" id=23e86f471d4c39e48443cc178f7277d2c9f60ce6131ab9474dd85617ddb0ba9a namespace=k8s.io Mar 14 00:15:10.908480 containerd[2131]: time="2026-03-14T00:15:10.908420049Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 14 00:15:10.908836 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-23e86f471d4c39e48443cc178f7277d2c9f60ce6131ab9474dd85617ddb0ba9a-rootfs.mount: Deactivated successfully. Mar 14 00:15:11.277590 sshd[5460]: Accepted publickey for core from 68.220.241.50 port 58950 ssh2: RSA SHA256:wTcZPyU9bRq4OYS8Q3ttppxvBQbw+A1YvhVCQAQQbeI Mar 14 00:15:11.280330 kubelet[3533]: E0314 00:15:11.280215 3533 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Mar 14 00:15:11.281571 sshd[5460]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 14 00:15:11.289683 systemd-logind[2105]: New session 26 of user core. Mar 14 00:15:11.297567 systemd[1]: Started session-26.scope - Session 26 of User core. 
Mar 14 00:15:11.681755 containerd[2131]: time="2026-03-14T00:15:11.681397065Z" level=info msg="CreateContainer within sandbox \"fd66cffee90496aa33efb79ecd5b6dc9a18cbd9a9e2ba366e4986ec956e12878\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Mar 14 00:15:11.710544 containerd[2131]: time="2026-03-14T00:15:11.707980005Z" level=info msg="CreateContainer within sandbox \"fd66cffee90496aa33efb79ecd5b6dc9a18cbd9a9e2ba366e4986ec956e12878\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"eb400cd6fb90c0593040bd8298aed5458225ac892122e84f59ffb70a35cecf76\"" Mar 14 00:15:11.718106 containerd[2131]: time="2026-03-14T00:15:11.714832317Z" level=info msg="StartContainer for \"eb400cd6fb90c0593040bd8298aed5458225ac892122e84f59ffb70a35cecf76\"" Mar 14 00:15:11.842012 containerd[2131]: time="2026-03-14T00:15:11.841932250Z" level=info msg="StartContainer for \"eb400cd6fb90c0593040bd8298aed5458225ac892122e84f59ffb70a35cecf76\" returns successfully" Mar 14 00:15:11.896677 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-eb400cd6fb90c0593040bd8298aed5458225ac892122e84f59ffb70a35cecf76-rootfs.mount: Deactivated successfully. 
Mar 14 00:15:11.899542 containerd[2131]: time="2026-03-14T00:15:11.899469550Z" level=info msg="shim disconnected" id=eb400cd6fb90c0593040bd8298aed5458225ac892122e84f59ffb70a35cecf76 namespace=k8s.io Mar 14 00:15:11.899879 containerd[2131]: time="2026-03-14T00:15:11.899731150Z" level=warning msg="cleaning up after shim disconnected" id=eb400cd6fb90c0593040bd8298aed5458225ac892122e84f59ffb70a35cecf76 namespace=k8s.io Mar 14 00:15:11.899879 containerd[2131]: time="2026-03-14T00:15:11.899759782Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 14 00:15:12.686841 containerd[2131]: time="2026-03-14T00:15:12.686784742Z" level=info msg="CreateContainer within sandbox \"fd66cffee90496aa33efb79ecd5b6dc9a18cbd9a9e2ba366e4986ec956e12878\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Mar 14 00:15:12.729911 containerd[2131]: time="2026-03-14T00:15:12.729836122Z" level=info msg="CreateContainer within sandbox \"fd66cffee90496aa33efb79ecd5b6dc9a18cbd9a9e2ba366e4986ec956e12878\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"8f276530f9affcb71ed1ccb5620116921d93369d44f40dd150c54789eeb78328\"" Mar 14 00:15:12.735614 containerd[2131]: time="2026-03-14T00:15:12.735232546Z" level=info msg="StartContainer for \"8f276530f9affcb71ed1ccb5620116921d93369d44f40dd150c54789eeb78328\"" Mar 14 00:15:12.886675 containerd[2131]: time="2026-03-14T00:15:12.886595699Z" level=info msg="StartContainer for \"8f276530f9affcb71ed1ccb5620116921d93369d44f40dd150c54789eeb78328\" returns successfully" Mar 14 00:15:12.940447 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8f276530f9affcb71ed1ccb5620116921d93369d44f40dd150c54789eeb78328-rootfs.mount: Deactivated successfully. 
Mar 14 00:15:12.943550 containerd[2131]: time="2026-03-14T00:15:12.940698503Z" level=info msg="shim disconnected" id=8f276530f9affcb71ed1ccb5620116921d93369d44f40dd150c54789eeb78328 namespace=k8s.io Mar 14 00:15:12.943550 containerd[2131]: time="2026-03-14T00:15:12.940772063Z" level=warning msg="cleaning up after shim disconnected" id=8f276530f9affcb71ed1ccb5620116921d93369d44f40dd150c54789eeb78328 namespace=k8s.io Mar 14 00:15:12.943550 containerd[2131]: time="2026-03-14T00:15:12.940792643Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 14 00:15:13.700071 containerd[2131]: time="2026-03-14T00:15:13.699990191Z" level=info msg="CreateContainer within sandbox \"fd66cffee90496aa33efb79ecd5b6dc9a18cbd9a9e2ba366e4986ec956e12878\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Mar 14 00:15:13.727678 containerd[2131]: time="2026-03-14T00:15:13.726824435Z" level=info msg="CreateContainer within sandbox \"fd66cffee90496aa33efb79ecd5b6dc9a18cbd9a9e2ba366e4986ec956e12878\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"4d4c05bc2d2f10ac5907e6ca58b4cae8604c35388373526a38a834c9a148c047\"" Mar 14 00:15:13.731969 containerd[2131]: time="2026-03-14T00:15:13.731530943Z" level=info msg="StartContainer for \"4d4c05bc2d2f10ac5907e6ca58b4cae8604c35388373526a38a834c9a148c047\"" Mar 14 00:15:13.862842 containerd[2131]: time="2026-03-14T00:15:13.862757436Z" level=info msg="StartContainer for \"4d4c05bc2d2f10ac5907e6ca58b4cae8604c35388373526a38a834c9a148c047\" returns successfully" Mar 14 00:15:14.036251 kubelet[3533]: E0314 00:15:14.035576 3533 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-674b8bbfcf-5scdf" podUID="d37b2ddf-127a-49fd-8b3f-5ec9bf2131f1" Mar 14 00:15:14.758951 kubelet[3533]: I0314 00:15:14.758697 3533 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-xc2vt" podStartSLOduration=5.758674116 podStartE2EDuration="5.758674116s" podCreationTimestamp="2026-03-14 00:15:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 00:15:14.757988712 +0000 UTC m=+119.040143848" watchObservedRunningTime="2026-03-14 00:15:14.758674116 +0000 UTC m=+119.040829276" Mar 14 00:15:14.760090 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce)) Mar 14 00:15:15.991166 containerd[2131]: time="2026-03-14T00:15:15.991016438Z" level=info msg="StopPodSandbox for \"e427cdabd1a228c3e37b5c3c10acd852d00c36cd3acf069a9e1a7e66f906b7ae\"" Mar 14 00:15:15.991793 containerd[2131]: time="2026-03-14T00:15:15.991191002Z" level=info msg="TearDown network for sandbox \"e427cdabd1a228c3e37b5c3c10acd852d00c36cd3acf069a9e1a7e66f906b7ae\" successfully" Mar 14 00:15:15.991793 containerd[2131]: time="2026-03-14T00:15:15.991216994Z" level=info msg="StopPodSandbox for \"e427cdabd1a228c3e37b5c3c10acd852d00c36cd3acf069a9e1a7e66f906b7ae\" returns successfully" Mar 14 00:15:15.993273 containerd[2131]: time="2026-03-14T00:15:15.991967414Z" level=info msg="RemovePodSandbox for \"e427cdabd1a228c3e37b5c3c10acd852d00c36cd3acf069a9e1a7e66f906b7ae\"" Mar 14 00:15:15.993273 containerd[2131]: time="2026-03-14T00:15:15.992076722Z" level=info msg="Forcibly stopping sandbox \"e427cdabd1a228c3e37b5c3c10acd852d00c36cd3acf069a9e1a7e66f906b7ae\"" Mar 14 00:15:15.993273 containerd[2131]: time="2026-03-14T00:15:15.992186582Z" level=info msg="TearDown network for sandbox \"e427cdabd1a228c3e37b5c3c10acd852d00c36cd3acf069a9e1a7e66f906b7ae\" successfully" Mar 14 00:15:15.997230 containerd[2131]: time="2026-03-14T00:15:15.997097858Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID 
\"e427cdabd1a228c3e37b5c3c10acd852d00c36cd3acf069a9e1a7e66f906b7ae\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Mar 14 00:15:15.997508 containerd[2131]: time="2026-03-14T00:15:15.997255358Z" level=info msg="RemovePodSandbox \"e427cdabd1a228c3e37b5c3c10acd852d00c36cd3acf069a9e1a7e66f906b7ae\" returns successfully" Mar 14 00:15:16.000837 containerd[2131]: time="2026-03-14T00:15:16.000762323Z" level=info msg="StopPodSandbox for \"c4d72a4c7dd56c56087ed36c882a957806a1acce2aad7a2b95d5607fbd1b70a4\"" Mar 14 00:15:16.001075 containerd[2131]: time="2026-03-14T00:15:16.000948887Z" level=info msg="TearDown network for sandbox \"c4d72a4c7dd56c56087ed36c882a957806a1acce2aad7a2b95d5607fbd1b70a4\" successfully" Mar 14 00:15:16.001075 containerd[2131]: time="2026-03-14T00:15:16.000980303Z" level=info msg="StopPodSandbox for \"c4d72a4c7dd56c56087ed36c882a957806a1acce2aad7a2b95d5607fbd1b70a4\" returns successfully" Mar 14 00:15:16.004410 containerd[2131]: time="2026-03-14T00:15:16.004346699Z" level=info msg="RemovePodSandbox for \"c4d72a4c7dd56c56087ed36c882a957806a1acce2aad7a2b95d5607fbd1b70a4\"" Mar 14 00:15:16.004605 containerd[2131]: time="2026-03-14T00:15:16.004414907Z" level=info msg="Forcibly stopping sandbox \"c4d72a4c7dd56c56087ed36c882a957806a1acce2aad7a2b95d5607fbd1b70a4\"" Mar 14 00:15:16.004605 containerd[2131]: time="2026-03-14T00:15:16.004535543Z" level=info msg="TearDown network for sandbox \"c4d72a4c7dd56c56087ed36c882a957806a1acce2aad7a2b95d5607fbd1b70a4\" successfully" Mar 14 00:15:16.015113 containerd[2131]: time="2026-03-14T00:15:16.013299935Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c4d72a4c7dd56c56087ed36c882a957806a1acce2aad7a2b95d5607fbd1b70a4\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 14 00:15:16.015113 containerd[2131]: time="2026-03-14T00:15:16.013402319Z" level=info msg="RemovePodSandbox \"c4d72a4c7dd56c56087ed36c882a957806a1acce2aad7a2b95d5607fbd1b70a4\" returns successfully" Mar 14 00:15:16.037103 kubelet[3533]: E0314 00:15:16.036437 3533 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-674b8bbfcf-5scdf" podUID="d37b2ddf-127a-49fd-8b3f-5ec9bf2131f1" Mar 14 00:15:18.224679 systemd[1]: run-containerd-runc-k8s.io-4d4c05bc2d2f10ac5907e6ca58b4cae8604c35388373526a38a834c9a148c047-runc.suQ2cn.mount: Deactivated successfully. Mar 14 00:15:19.135147 systemd-networkd[1696]: lxc_health: Link UP Mar 14 00:15:19.147316 systemd-networkd[1696]: lxc_health: Gained carrier Mar 14 00:15:19.164204 (udev-worker)[6176]: Network interface NamePolicy= disabled on kernel command line. Mar 14 00:15:20.522677 systemd[1]: run-containerd-runc-k8s.io-4d4c05bc2d2f10ac5907e6ca58b4cae8604c35388373526a38a834c9a148c047-runc.luVg07.mount: Deactivated successfully. Mar 14 00:15:20.991225 systemd-networkd[1696]: lxc_health: Gained IPv6LL Mar 14 00:15:23.502781 ntpd[2085]: Listen normally on 13 lxc_health [fe80::c4be:d5ff:fe69:d301%14]:123 Mar 14 00:15:25.380415 sshd[5460]: pam_unix(sshd:session): session closed for user core Mar 14 00:15:25.389696 systemd[1]: sshd@25-172.31.24.247:22-68.220.241.50:58950.service: Deactivated successfully. Mar 14 00:15:25.401021 systemd[1]: session-26.scope: Deactivated successfully. Mar 14 00:15:25.401321 systemd-logind[2105]: Session 26 logged out. Waiting for processes to exit. Mar 14 00:15:25.409917 systemd-logind[2105]: Removed session 26. 
Mar 14 00:15:40.695226 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-58566ec2d83091d1d59eff6cdfb7213482e4da71ac56a8ff8932f9e5b8737807-rootfs.mount: Deactivated successfully. Mar 14 00:15:40.725409 containerd[2131]: time="2026-03-14T00:15:40.725308825Z" level=info msg="shim disconnected" id=58566ec2d83091d1d59eff6cdfb7213482e4da71ac56a8ff8932f9e5b8737807 namespace=k8s.io Mar 14 00:15:40.725409 containerd[2131]: time="2026-03-14T00:15:40.725389453Z" level=warning msg="cleaning up after shim disconnected" id=58566ec2d83091d1d59eff6cdfb7213482e4da71ac56a8ff8932f9e5b8737807 namespace=k8s.io Mar 14 00:15:40.725409 containerd[2131]: time="2026-03-14T00:15:40.725413153Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 14 00:15:40.798102 kubelet[3533]: I0314 00:15:40.796600 3533 scope.go:117] "RemoveContainer" containerID="58566ec2d83091d1d59eff6cdfb7213482e4da71ac56a8ff8932f9e5b8737807" Mar 14 00:15:40.801081 containerd[2131]: time="2026-03-14T00:15:40.800713970Z" level=info msg="CreateContainer within sandbox \"c9a1187bfbd24a8b39330214835805d9caa9944b7aeecbb9e91522bbaa69e8fb\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Mar 14 00:15:40.822922 containerd[2131]: time="2026-03-14T00:15:40.822837122Z" level=info msg="CreateContainer within sandbox \"c9a1187bfbd24a8b39330214835805d9caa9944b7aeecbb9e91522bbaa69e8fb\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"1942c578a6b9846f6cbfa6c5c0481a7477ae6093efe72d94cd72dbaf04b0cd8a\"" Mar 14 00:15:40.824392 containerd[2131]: time="2026-03-14T00:15:40.824340134Z" level=info msg="StartContainer for \"1942c578a6b9846f6cbfa6c5c0481a7477ae6093efe72d94cd72dbaf04b0cd8a\"" Mar 14 00:15:40.826244 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2869767220.mount: Deactivated successfully. 
Mar 14 00:15:40.949480 containerd[2131]: time="2026-03-14T00:15:40.948845858Z" level=info msg="StartContainer for \"1942c578a6b9846f6cbfa6c5c0481a7477ae6093efe72d94cd72dbaf04b0cd8a\" returns successfully" Mar 14 00:15:41.696926 systemd[1]: run-containerd-runc-k8s.io-1942c578a6b9846f6cbfa6c5c0481a7477ae6093efe72d94cd72dbaf04b0cd8a-runc.ROitu4.mount: Deactivated successfully. Mar 14 00:15:45.820159 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c4886083bc7ca292ef6524379713edce293d166ac9dc7e8d3ebf0d89eb81b75e-rootfs.mount: Deactivated successfully. Mar 14 00:15:45.835819 containerd[2131]: time="2026-03-14T00:15:45.835684075Z" level=info msg="shim disconnected" id=c4886083bc7ca292ef6524379713edce293d166ac9dc7e8d3ebf0d89eb81b75e namespace=k8s.io Mar 14 00:15:45.835819 containerd[2131]: time="2026-03-14T00:15:45.835768219Z" level=warning msg="cleaning up after shim disconnected" id=c4886083bc7ca292ef6524379713edce293d166ac9dc7e8d3ebf0d89eb81b75e namespace=k8s.io Mar 14 00:15:45.835819 containerd[2131]: time="2026-03-14T00:15:45.835794343Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 14 00:15:45.856529 containerd[2131]: time="2026-03-14T00:15:45.856463287Z" level=warning msg="cleanup warnings time=\"2026-03-14T00:15:45Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Mar 14 00:15:46.824429 kubelet[3533]: I0314 00:15:46.823942 3533 scope.go:117] "RemoveContainer" containerID="c4886083bc7ca292ef6524379713edce293d166ac9dc7e8d3ebf0d89eb81b75e" Mar 14 00:15:46.828165 containerd[2131]: time="2026-03-14T00:15:46.827258816Z" level=info msg="CreateContainer within sandbox \"cde0e43e856853a0b311fbe76837ac7dc368b028f6a0b53bf30944d1711b4723\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}" Mar 14 00:15:46.848016 containerd[2131]: time="2026-03-14T00:15:46.847293032Z" level=info msg="CreateContainer within sandbox 
\"cde0e43e856853a0b311fbe76837ac7dc368b028f6a0b53bf30944d1711b4723\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"8b26ef63aa05b5fcea05072a51c4c0576f36ead292bc5ad507d47eec5cf40f53\"" Mar 14 00:15:46.849874 containerd[2131]: time="2026-03-14T00:15:46.848899124Z" level=info msg="StartContainer for \"8b26ef63aa05b5fcea05072a51c4c0576f36ead292bc5ad507d47eec5cf40f53\"" Mar 14 00:15:46.962869 containerd[2131]: time="2026-03-14T00:15:46.962808872Z" level=info msg="StartContainer for \"8b26ef63aa05b5fcea05072a51c4c0576f36ead292bc5ad507d47eec5cf40f53\" returns successfully" Mar 14 00:15:48.517998 kubelet[3533]: E0314 00:15:48.517498 3533 controller.go:195] "Failed to update lease" err="Put \"https://172.31.24.247:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-24-247?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 14 00:15:58.519132 kubelet[3533]: E0314 00:15:58.517779 3533 controller.go:195] "Failed to update lease" err="Put \"https://172.31.24.247:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-24-247?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"