Mar 6 00:56:17.125728 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083]
Mar 6 00:56:17.125778 kernel: Linux version 6.12.74-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT Thu Mar 5 23:10:47 -00 2026
Mar 6 00:56:17.125804 kernel: KASLR disabled due to lack of seed
Mar 6 00:56:17.125821 kernel: efi: EFI v2.7 by EDK II
Mar 6 00:56:17.125837 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7a734a98 MEMRESERVE=0x78557598
Mar 6 00:56:17.125853 kernel: secureboot: Secure boot disabled
Mar 6 00:56:17.125871 kernel: ACPI: Early table checksum verification disabled
Mar 6 00:56:17.125886 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON)
Mar 6 00:56:17.125902 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001 01000013)
Mar 6 00:56:17.125918 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001)
Mar 6 00:56:17.125935 kernel: ACPI: DSDT 0x0000000078640000 0013D2 (v02 AMAZON AMZNDSDT 00000001 AMZN 00000001)
Mar 6 00:56:17.125956 kernel: ACPI: FACS 0x0000000078630000 000040
Mar 6 00:56:17.125972 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Mar 6 00:56:17.125987 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001)
Mar 6 00:56:17.126007 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001)
Mar 6 00:56:17.126023 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001)
Mar 6 00:56:17.126045 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Mar 6 00:56:17.126062 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001)
Mar 6 00:56:17.126078 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001)
Mar 6 00:56:17.126095 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200
Mar 6 00:56:17.126112 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200')
Mar 6 00:56:17.126128 kernel: printk: legacy bootconsole [uart0] enabled
Mar 6 00:56:17.126145 kernel: ACPI: Use ACPI SPCR as default console: Yes
Mar 6 00:56:17.130275 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff]
Mar 6 00:56:17.130297 kernel: NODE_DATA(0) allocated [mem 0x4b584da00-0x4b5854fff]
Mar 6 00:56:17.130315 kernel: Zone ranges:
Mar 6 00:56:17.130334 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff]
Mar 6 00:56:17.130364 kernel: DMA32 empty
Mar 6 00:56:17.130382 kernel: Normal [mem 0x0000000100000000-0x00000004b5ffffff]
Mar 6 00:56:17.130399 kernel: Device empty
Mar 6 00:56:17.130416 kernel: Movable zone start for each node
Mar 6 00:56:17.130434 kernel: Early memory node ranges
Mar 6 00:56:17.130451 kernel: node 0: [mem 0x0000000040000000-0x000000007862ffff]
Mar 6 00:56:17.130467 kernel: node 0: [mem 0x0000000078630000-0x000000007863ffff]
Mar 6 00:56:17.130484 kernel: node 0: [mem 0x0000000078640000-0x00000000786effff]
Mar 6 00:56:17.130501 kernel: node 0: [mem 0x00000000786f0000-0x000000007872ffff]
Mar 6 00:56:17.130517 kernel: node 0: [mem 0x0000000078730000-0x000000007bbfffff]
Mar 6 00:56:17.130532 kernel: node 0: [mem 0x000000007bc00000-0x000000007bfdffff]
Mar 6 00:56:17.130548 kernel: node 0: [mem 0x000000007bfe0000-0x000000007fffffff]
Mar 6 00:56:17.130572 kernel: node 0: [mem 0x0000000400000000-0x00000004b5ffffff]
Mar 6 00:56:17.130596 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000004b5ffffff]
Mar 6 00:56:17.130615 kernel: On node 0, zone Normal: 8192 pages in unavailable ranges
Mar 6 00:56:17.130634 kernel: cma: Reserved 16 MiB at 0x000000007f000000 on node -1
Mar 6 00:56:17.130652 kernel: psci: probing for conduit method from ACPI.
Mar 6 00:56:17.130674 kernel: psci: PSCIv1.0 detected in firmware.
Mar 6 00:56:17.130692 kernel: psci: Using standard PSCI v0.2 function IDs
Mar 6 00:56:17.130709 kernel: psci: Trusted OS migration not required
Mar 6 00:56:17.130727 kernel: psci: SMC Calling Convention v1.1
Mar 6 00:56:17.130745 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000001)
Mar 6 00:56:17.130764 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168
Mar 6 00:56:17.130781 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096
Mar 6 00:56:17.130801 kernel: pcpu-alloc: [0] 0 [0] 1
Mar 6 00:56:17.130819 kernel: Detected PIPT I-cache on CPU0
Mar 6 00:56:17.130837 kernel: CPU features: detected: GIC system register CPU interface
Mar 6 00:56:17.130881 kernel: CPU features: detected: Spectre-v2
Mar 6 00:56:17.130910 kernel: CPU features: detected: Spectre-v3a
Mar 6 00:56:17.130929 kernel: CPU features: detected: Spectre-BHB
Mar 6 00:56:17.130947 kernel: CPU features: detected: ARM erratum 1742098
Mar 6 00:56:17.130965 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923
Mar 6 00:56:17.130983 kernel: alternatives: applying boot alternatives
Mar 6 00:56:17.131004 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=68c9ef230e3eed1360dd8114dada95b6a934f07952c3a5d42725f3006977f027
Mar 6 00:56:17.131023 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Mar 6 00:56:17.131042 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Mar 6 00:56:17.131061 kernel: Fallback order for Node 0: 0
Mar 6 00:56:17.131079 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1007616
Mar 6 00:56:17.131097 kernel: Policy zone: Normal
Mar 6 00:56:17.131122 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Mar 6 00:56:17.131139 kernel: software IO TLB: area num 2.
Mar 6 00:56:17.131196 kernel: software IO TLB: mapped [mem 0x0000000074557000-0x0000000078557000] (64MB)
Mar 6 00:56:17.131218 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Mar 6 00:56:17.131236 kernel: rcu: Preemptible hierarchical RCU implementation.
Mar 6 00:56:17.131256 kernel: rcu: RCU event tracing is enabled.
Mar 6 00:56:17.131274 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Mar 6 00:56:17.131293 kernel: Trampoline variant of Tasks RCU enabled.
Mar 6 00:56:17.131311 kernel: Tracing variant of Tasks RCU enabled.
Mar 6 00:56:17.131329 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Mar 6 00:56:17.131347 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Mar 6 00:56:17.131374 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Mar 6 00:56:17.131394 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Mar 6 00:56:17.131411 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Mar 6 00:56:17.131429 kernel: GICv3: 96 SPIs implemented
Mar 6 00:56:17.131447 kernel: GICv3: 0 Extended SPIs implemented
Mar 6 00:56:17.131464 kernel: Root IRQ handler: gic_handle_irq
Mar 6 00:56:17.131482 kernel: GICv3: GICv3 features: 16 PPIs
Mar 6 00:56:17.131499 kernel: GICv3: GICD_CTRL.DS=1, SCR_EL3.FIQ=0
Mar 6 00:56:17.131517 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000
Mar 6 00:56:17.131535 kernel: ITS [mem 0x10080000-0x1009ffff]
Mar 6 00:56:17.131553 kernel: ITS@0x0000000010080000: allocated 8192 Devices @4000f0000 (indirect, esz 8, psz 64K, shr 1)
Mar 6 00:56:17.131573 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @400100000 (flat, esz 8, psz 64K, shr 1)
Mar 6 00:56:17.131599 kernel: GICv3: using LPI property table @0x0000000400110000
Mar 6 00:56:17.131618 kernel: ITS: Using hypervisor restricted LPI range [128]
Mar 6 00:56:17.131635 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000400120000
Mar 6 00:56:17.131653 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Mar 6 00:56:17.131671 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt).
Mar 6 00:56:17.131689 kernel: clocksource: arch_sys_counter: mask: 0x1ffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns
Mar 6 00:56:17.131707 kernel: sched_clock: 57 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns
Mar 6 00:56:17.131724 kernel: Console: colour dummy device 80x25
Mar 6 00:56:17.131743 kernel: printk: legacy console [tty1] enabled
Mar 6 00:56:17.131762 kernel: ACPI: Core revision 20240827
Mar 6 00:56:17.131781 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 166.66 BogoMIPS (lpj=83333)
Mar 6 00:56:17.131805 kernel: pid_max: default: 32768 minimum: 301
Mar 6 00:56:17.131823 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Mar 6 00:56:17.131841 kernel: landlock: Up and running.
Mar 6 00:56:17.131859 kernel: SELinux: Initializing.
Mar 6 00:56:17.131878 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 6 00:56:17.131896 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 6 00:56:17.131914 kernel: rcu: Hierarchical SRCU implementation.
Mar 6 00:56:17.131933 kernel: rcu: Max phase no-delay instances is 400.
Mar 6 00:56:17.131957 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Mar 6 00:56:17.131976 kernel: Remapping and enabling EFI services.
Mar 6 00:56:17.131994 kernel: smp: Bringing up secondary CPUs ...
Mar 6 00:56:17.132012 kernel: Detected PIPT I-cache on CPU1
Mar 6 00:56:17.132031 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000
Mar 6 00:56:17.132049 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000400130000
Mar 6 00:56:17.132067 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083]
Mar 6 00:56:17.132084 kernel: smp: Brought up 1 node, 2 CPUs
Mar 6 00:56:17.132102 kernel: SMP: Total of 2 processors activated.
Mar 6 00:56:17.132126 kernel: CPU: All CPU(s) started at EL1
Mar 6 00:56:17.136218 kernel: CPU features: detected: 32-bit EL0 Support
Mar 6 00:56:17.136253 kernel: CPU features: detected: 32-bit EL1 Support
Mar 6 00:56:17.136277 kernel: CPU features: detected: CRC32 instructions
Mar 6 00:56:17.136297 kernel: alternatives: applying system-wide alternatives
Mar 6 00:56:17.136317 kernel: Memory: 3796332K/4030464K available (11200K kernel code, 2458K rwdata, 9088K rodata, 39552K init, 1038K bss, 212788K reserved, 16384K cma-reserved)
Mar 6 00:56:17.136337 kernel: devtmpfs: initialized
Mar 6 00:56:17.136356 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Mar 6 00:56:17.136379 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Mar 6 00:56:17.136398 kernel: 16880 pages in range for non-PLT usage
Mar 6 00:56:17.136416 kernel: 508400 pages in range for PLT usage
Mar 6 00:56:17.136434 kernel: pinctrl core: initialized pinctrl subsystem
Mar 6 00:56:17.136452 kernel: SMBIOS 3.0.0 present.
Mar 6 00:56:17.136470 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018
Mar 6 00:56:17.136488 kernel: DMI: Memory slots populated: 0/0
Mar 6 00:56:17.136506 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Mar 6 00:56:17.136525 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Mar 6 00:56:17.136548 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Mar 6 00:56:17.136567 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Mar 6 00:56:17.136586 kernel: audit: initializing netlink subsys (disabled)
Mar 6 00:56:17.136604 kernel: audit: type=2000 audit(0.231:1): state=initialized audit_enabled=0 res=1
Mar 6 00:56:17.136623 kernel: thermal_sys: Registered thermal governor 'step_wise'
Mar 6 00:56:17.136642 kernel: cpuidle: using governor menu
Mar 6 00:56:17.136660 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Mar 6 00:56:17.136679 kernel: ASID allocator initialised with 65536 entries
Mar 6 00:56:17.136698 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Mar 6 00:56:17.136720 kernel: Serial: AMBA PL011 UART driver
Mar 6 00:56:17.136739 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Mar 6 00:56:17.136758 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Mar 6 00:56:17.136776 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Mar 6 00:56:17.136794 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Mar 6 00:56:17.136813 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Mar 6 00:56:17.136832 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Mar 6 00:56:17.136850 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Mar 6 00:56:17.136869 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Mar 6 00:56:17.136892 kernel: ACPI: Added _OSI(Module Device)
Mar 6 00:56:17.136911 kernel: ACPI: Added _OSI(Processor Device)
Mar 6 00:56:17.136930 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Mar 6 00:56:17.136948 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Mar 6 00:56:17.136966 kernel: ACPI: Interpreter enabled
Mar 6 00:56:17.136985 kernel: ACPI: Using GIC for interrupt routing
Mar 6 00:56:17.137003 kernel: ACPI: MCFG table detected, 1 entries
Mar 6 00:56:17.137022 kernel: ACPI: CPU0 has been hot-added
Mar 6 00:56:17.137040 kernel: ACPI: CPU1 has been hot-added
Mar 6 00:56:17.137063 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00])
Mar 6 00:56:17.137434 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Mar 6 00:56:17.137655 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Mar 6 00:56:17.137855 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Mar 6 00:56:17.138050 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x200fffff] reserved by PNP0C02:00
Mar 6 00:56:17.142421 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x200fffff] for [bus 00]
Mar 6 00:56:17.142470 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io 0x0000-0xffff window]
Mar 6 00:56:17.142502 kernel: acpiphp: Slot [1] registered
Mar 6 00:56:17.142522 kernel: acpiphp: Slot [2] registered
Mar 6 00:56:17.142541 kernel: acpiphp: Slot [3] registered
Mar 6 00:56:17.142560 kernel: acpiphp: Slot [4] registered
Mar 6 00:56:17.142578 kernel: acpiphp: Slot [5] registered
Mar 6 00:56:17.142596 kernel: acpiphp: Slot [6] registered
Mar 6 00:56:17.142614 kernel: acpiphp: Slot [7] registered
Mar 6 00:56:17.142632 kernel: acpiphp: Slot [8] registered
Mar 6 00:56:17.142650 kernel: acpiphp: Slot [9] registered
Mar 6 00:56:17.142668 kernel: acpiphp: Slot [10] registered
Mar 6 00:56:17.142691 kernel: acpiphp: Slot [11] registered
Mar 6 00:56:17.142710 kernel: acpiphp: Slot [12] registered
Mar 6 00:56:17.142728 kernel: acpiphp: Slot [13] registered
Mar 6 00:56:17.142747 kernel: acpiphp: Slot [14] registered
Mar 6 00:56:17.142766 kernel: acpiphp: Slot [15] registered
Mar 6 00:56:17.142807 kernel: acpiphp: Slot [16] registered
Mar 6 00:56:17.142828 kernel: acpiphp: Slot [17] registered
Mar 6 00:56:17.142866 kernel: acpiphp: Slot [18] registered
Mar 6 00:56:17.142891 kernel: acpiphp: Slot [19] registered
Mar 6 00:56:17.142917 kernel: acpiphp: Slot [20] registered
Mar 6 00:56:17.142936 kernel: acpiphp: Slot [21] registered
Mar 6 00:56:17.142955 kernel: acpiphp: Slot [22] registered
Mar 6 00:56:17.142974 kernel: acpiphp: Slot [23] registered
Mar 6 00:56:17.142991 kernel: acpiphp: Slot [24] registered
Mar 6 00:56:17.143010 kernel: acpiphp: Slot [25] registered
Mar 6 00:56:17.143029 kernel: acpiphp: Slot [26] registered
Mar 6 00:56:17.143047 kernel: acpiphp: Slot [27] registered
Mar 6 00:56:17.143065 kernel: acpiphp: Slot [28] registered
Mar 6 00:56:17.143083 kernel: acpiphp: Slot [29] registered
Mar 6 00:56:17.143106 kernel: acpiphp: Slot [30] registered
Mar 6 00:56:17.143125 kernel: acpiphp: Slot [31] registered
Mar 6 00:56:17.143143 kernel: PCI host bridge to bus 0000:00
Mar 6 00:56:17.144475 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window]
Mar 6 00:56:17.144657 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Mar 6 00:56:17.144829 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window]
Mar 6 00:56:17.145007 kernel: pci_bus 0000:00: root bus resource [bus 00]
Mar 6 00:56:17.145421 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000 conventional PCI endpoint
Mar 6 00:56:17.145676 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003 conventional PCI endpoint
Mar 6 00:56:17.145886 kernel: pci 0000:00:01.0: BAR 0 [mem 0x80118000-0x80118fff]
Mar 6 00:56:17.146111 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802 PCIe Root Complex Integrated Endpoint
Mar 6 00:56:17.146352 kernel: pci 0000:00:04.0: BAR 0 [mem 0x80114000-0x80117fff]
Mar 6 00:56:17.146549 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold
Mar 6 00:56:17.146774 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000 PCIe Root Complex Integrated Endpoint
Mar 6 00:56:17.147001 kernel: pci 0000:00:05.0: BAR 0 [mem 0x80110000-0x80113fff]
Mar 6 00:56:17.147682 kernel: pci 0000:00:05.0: BAR 2 [mem 0x80000000-0x800fffff pref]
Mar 6 00:56:17.151373 kernel: pci 0000:00:05.0: BAR 4 [mem 0x80100000-0x8010ffff]
Mar 6 00:56:17.151618 kernel: pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold
Mar 6 00:56:17.151815 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window]
Mar 6 00:56:17.151991 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Mar 6 00:56:17.152305 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window]
Mar 6 00:56:17.152336 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Mar 6 00:56:17.152357 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Mar 6 00:56:17.152376 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Mar 6 00:56:17.152395 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Mar 6 00:56:17.152414 kernel: iommu: Default domain type: Translated
Mar 6 00:56:17.152433 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Mar 6 00:56:17.152452 kernel: efivars: Registered efivars operations
Mar 6 00:56:17.152470 kernel: vgaarb: loaded
Mar 6 00:56:17.152496 kernel: clocksource: Switched to clocksource arch_sys_counter
Mar 6 00:56:17.152515 kernel: VFS: Disk quotas dquot_6.6.0
Mar 6 00:56:17.152534 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Mar 6 00:56:17.152553 kernel: pnp: PnP ACPI init
Mar 6 00:56:17.153436 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved
Mar 6 00:56:17.153484 kernel: pnp: PnP ACPI: found 1 devices
Mar 6 00:56:17.153504 kernel: NET: Registered PF_INET protocol family
Mar 6 00:56:17.153523 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Mar 6 00:56:17.153553 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Mar 6 00:56:17.153573 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Mar 6 00:56:17.153592 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Mar 6 00:56:17.153610 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Mar 6 00:56:17.153630 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Mar 6 00:56:17.153649 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 6 00:56:17.153669 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 6 00:56:17.153688 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Mar 6 00:56:17.153707 kernel: PCI: CLS 0 bytes, default 64
Mar 6 00:56:17.153731 kernel: kvm [1]: HYP mode not available
Mar 6 00:56:17.153750 kernel: Initialise system trusted keyrings
Mar 6 00:56:17.153768 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Mar 6 00:56:17.153787 kernel: Key type asymmetric registered
Mar 6 00:56:17.153806 kernel: Asymmetric key parser 'x509' registered
Mar 6 00:56:17.153824 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Mar 6 00:56:17.153842 kernel: io scheduler mq-deadline registered
Mar 6 00:56:17.153860 kernel: io scheduler kyber registered
Mar 6 00:56:17.153878 kernel: io scheduler bfq registered
Mar 6 00:56:17.154115 kernel: pl061_gpio ARMH0061:00: PL061 GPIO chip registered
Mar 6 00:56:17.155186 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Mar 6 00:56:17.155235 kernel: ACPI: button: Power Button [PWRB]
Mar 6 00:56:17.155256 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input1
Mar 6 00:56:17.155275 kernel: ACPI: button: Sleep Button [SLPB]
Mar 6 00:56:17.155293 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Mar 6 00:56:17.155313 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37
Mar 6 00:56:17.155571 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012)
Mar 6 00:56:17.155611 kernel: printk: legacy console [ttyS0] disabled
Mar 6 00:56:17.155632 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A
Mar 6 00:56:17.155650 kernel: printk: legacy console [ttyS0] enabled
Mar 6 00:56:17.155669 kernel: printk: legacy bootconsole [uart0] disabled
Mar 6 00:56:17.155687 kernel: thunder_xcv, ver 1.0
Mar 6 00:56:17.155706 kernel: thunder_bgx, ver 1.0
Mar 6 00:56:17.155724 kernel: nicpf, ver 1.0
Mar 6 00:56:17.155742 kernel: nicvf, ver 1.0
Mar 6 00:56:17.155961 kernel: rtc-efi rtc-efi.0: registered as rtc0
Mar 6 00:56:17.156177 kernel: rtc-efi rtc-efi.0: setting system clock to 2026-03-06T00:56:16 UTC (1772758576)
Mar 6 00:56:17.156206 kernel: hid: raw HID events driver (C) Jiri Kosina
Mar 6 00:56:17.156226 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 (0,80000003) counters available
Mar 6 00:56:17.156244 kernel: watchdog: NMI not fully supported
Mar 6 00:56:17.156263 kernel: NET: Registered PF_INET6 protocol family
Mar 6 00:56:17.156282 kernel: watchdog: Hard watchdog permanently disabled
Mar 6 00:56:17.156301 kernel: Segment Routing with IPv6
Mar 6 00:56:17.156319 kernel: In-situ OAM (IOAM) with IPv6
Mar 6 00:56:17.156338 kernel: NET: Registered PF_PACKET protocol family
Mar 6 00:56:17.156364 kernel: Key type dns_resolver registered
Mar 6 00:56:17.156383 kernel: registered taskstats version 1
Mar 6 00:56:17.156402 kernel: Loading compiled-in X.509 certificates
Mar 6 00:56:17.156420 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.74-flatcar: 3a2ba669b0bb3660035f2ce1faaa856d46d520ff'
Mar 6 00:56:17.156438 kernel: Demotion targets for Node 0: null
Mar 6 00:56:17.156457 kernel: Key type .fscrypt registered
Mar 6 00:56:17.156475 kernel: Key type fscrypt-provisioning registered
Mar 6 00:56:17.156493 kernel: ima: No TPM chip found, activating TPM-bypass!
Mar 6 00:56:17.156511 kernel: ima: Allocated hash algorithm: sha1
Mar 6 00:56:17.156533 kernel: ima: No architecture policies found
Mar 6 00:56:17.156551 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Mar 6 00:56:17.156568 kernel: clk: Disabling unused clocks
Mar 6 00:56:17.156586 kernel: PM: genpd: Disabling unused power domains
Mar 6 00:56:17.156604 kernel: Warning: unable to open an initial console.
Mar 6 00:56:17.156622 kernel: Freeing unused kernel memory: 39552K
Mar 6 00:56:17.156640 kernel: Run /init as init process
Mar 6 00:56:17.156657 kernel: with arguments:
Mar 6 00:56:17.156675 kernel: /init
Mar 6 00:56:17.156696 kernel: with environment:
Mar 6 00:56:17.156713 kernel: HOME=/
Mar 6 00:56:17.156731 kernel: TERM=linux
Mar 6 00:56:17.156751 systemd[1]: Successfully made /usr/ read-only.
Mar 6 00:56:17.156775 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Mar 6 00:56:17.156795 systemd[1]: Detected virtualization amazon.
Mar 6 00:56:17.156814 systemd[1]: Detected architecture arm64.
Mar 6 00:56:17.156836 systemd[1]: Running in initrd.
Mar 6 00:56:17.156855 systemd[1]: No hostname configured, using default hostname.
Mar 6 00:56:17.156875 systemd[1]: Hostname set to .
Mar 6 00:56:17.156894 systemd[1]: Initializing machine ID from VM UUID.
Mar 6 00:56:17.156912 systemd[1]: Queued start job for default target initrd.target.
Mar 6 00:56:17.156931 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 6 00:56:17.156950 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 6 00:56:17.156970 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Mar 6 00:56:17.156994 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 6 00:56:17.157013 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Mar 6 00:56:17.157034 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Mar 6 00:56:17.157055 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Mar 6 00:56:17.157075 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Mar 6 00:56:17.157094 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 6 00:56:17.157113 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 6 00:56:17.157137 systemd[1]: Reached target paths.target - Path Units.
Mar 6 00:56:17.158232 systemd[1]: Reached target slices.target - Slice Units.
Mar 6 00:56:17.158265 systemd[1]: Reached target swap.target - Swaps.
Mar 6 00:56:17.158287 systemd[1]: Reached target timers.target - Timer Units.
Mar 6 00:56:17.158309 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Mar 6 00:56:17.158330 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 6 00:56:17.158350 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Mar 6 00:56:17.158371 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Mar 6 00:56:17.158391 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 6 00:56:17.158421 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 6 00:56:17.158442 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 6 00:56:17.158461 systemd[1]: Reached target sockets.target - Socket Units.
Mar 6 00:56:17.158481 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Mar 6 00:56:17.158503 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 6 00:56:17.158524 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Mar 6 00:56:17.158546 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Mar 6 00:56:17.158566 systemd[1]: Starting systemd-fsck-usr.service...
Mar 6 00:56:17.158593 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 6 00:56:17.158616 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 6 00:56:17.158636 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 6 00:56:17.158656 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Mar 6 00:56:17.158677 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 6 00:56:17.158702 systemd[1]: Finished systemd-fsck-usr.service.
Mar 6 00:56:17.158722 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Mar 6 00:56:17.158808 systemd-journald[258]: Collecting audit messages is disabled.
Mar 6 00:56:17.158878 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Mar 6 00:56:17.158912 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 6 00:56:17.158933 kernel: Bridge firewalling registered
Mar 6 00:56:17.158954 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 6 00:56:17.158975 systemd-journald[258]: Journal started
Mar 6 00:56:17.159014 systemd-journald[258]: Runtime Journal (/run/log/journal/ec29cf345b61ce461e6f4d742104eb2f) is 8M, max 75.3M, 67.3M free.
Mar 6 00:56:17.090542 systemd-modules-load[259]: Inserted module 'overlay'
Mar 6 00:56:17.152752 systemd-modules-load[259]: Inserted module 'br_netfilter'
Mar 6 00:56:17.178378 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 6 00:56:17.187981 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 6 00:56:17.188603 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 6 00:56:17.197818 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 6 00:56:17.211623 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 6 00:56:17.215602 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 6 00:56:17.256678 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 6 00:56:17.270309 systemd-tmpfiles[282]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Mar 6 00:56:17.274833 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 6 00:56:17.288447 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 6 00:56:17.300502 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 6 00:56:17.303760 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 6 00:56:17.311330 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Mar 6 00:56:17.366355 dracut-cmdline[300]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=68c9ef230e3eed1360dd8114dada95b6a934f07952c3a5d42725f3006977f027
Mar 6 00:56:17.428437 systemd-resolved[299]: Positive Trust Anchors:
Mar 6 00:56:17.428474 systemd-resolved[299]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 6 00:56:17.428536 systemd-resolved[299]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 6 00:56:17.560206 kernel: SCSI subsystem initialized
Mar 6 00:56:17.569189 kernel: Loading iSCSI transport class v2.0-870.
Mar 6 00:56:17.583191 kernel: iscsi: registered transport (tcp)
Mar 6 00:56:17.608198 kernel: iscsi: registered transport (qla4xxx)
Mar 6 00:56:17.608275 kernel: QLogic iSCSI HBA Driver
Mar 6 00:56:17.647400 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Mar 6 00:56:17.686768 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Mar 6 00:56:17.703854 systemd[1]: Reached target network-pre.target - Preparation for Network.
Mar 6 00:56:17.733203 kernel: random: crng init done
Mar 6 00:56:17.732680 systemd-resolved[299]: Defaulting to hostname 'linux'.
Mar 6 00:56:17.736988 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 6 00:56:17.737453 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 6 00:56:17.831194 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Mar 6 00:56:17.839293 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Mar 6 00:56:17.935217 kernel: raid6: neonx8 gen() 6544 MB/s
Mar 6 00:56:17.952220 kernel: raid6: neonx4 gen() 6470 MB/s
Mar 6 00:56:17.970228 kernel: raid6: neonx2 gen() 5383 MB/s
Mar 6 00:56:17.987216 kernel: raid6: neonx1 gen() 3881 MB/s
Mar 6 00:56:18.004213 kernel: raid6: int64x8 gen() 3637 MB/s
Mar 6 00:56:18.021213 kernel: raid6: int64x4 gen() 3687 MB/s
Mar 6 00:56:18.038217 kernel: raid6: int64x2 gen() 3584 MB/s
Mar 6 00:56:18.056370 kernel: raid6: int64x1 gen() 2737 MB/s
Mar 6 00:56:18.056454 kernel: raid6: using algorithm neonx8 gen() 6544 MB/s
Mar 6 00:56:18.075266 kernel: raid6: .... xor() 4580 MB/s, rmw enabled
Mar 6 00:56:18.075350 kernel: raid6: using neon recovery algorithm
Mar 6 00:56:18.083200 kernel: xor: measuring software checksum speed
Mar 6 00:56:18.085563 kernel: 8regs : 11832 MB/sec
Mar 6 00:56:18.085604 kernel: 32regs : 13020 MB/sec
Mar 6 00:56:18.086873 kernel: arm64_neon : 9181 MB/sec
Mar 6 00:56:18.086915 kernel: xor: using function: 32regs (13020 MB/sec)
Mar 6 00:56:18.183209 kernel: Btrfs loaded, zoned=no, fsverity=no
Mar 6 00:56:18.195701 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Mar 6 00:56:18.210325 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 6 00:56:18.262665 systemd-udevd[508]: Using default interface naming scheme 'v255'.
Mar 6 00:56:18.273491 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 6 00:56:18.280388 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Mar 6 00:56:18.329291 dracut-pre-trigger[512]: rd.md=0: removing MD RAID activation
Mar 6 00:56:18.378582 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 6 00:56:18.383398 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 6 00:56:18.508642 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 6 00:56:18.524894 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Mar 6 00:56:18.700900 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Mar 6 00:56:18.700979 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012)
Mar 6 00:56:18.712407 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35
Mar 6 00:56:18.712484 kernel: ena 0000:00:05.0: ENA device version: 0.10
Mar 6 00:56:18.716722 kernel: nvme nvme0: pci function 0000:00:04.0
Mar 6 00:56:18.717062 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
Mar 6 00:56:18.728712 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 6 00:56:18.739395 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80110000, mac addr 06:5e:75:cc:05:8b
Mar 6 00:56:18.739723 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Mar 6 00:56:18.729072 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 6 00:56:18.752393 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Mar 6 00:56:18.752445 kernel: GPT:9289727 != 33554431
Mar 6 00:56:18.752471 kernel: GPT:Alternate GPT header not at the end of the disk.
Mar 6 00:56:18.733607 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Mar 6 00:56:18.762357 kernel: GPT:9289727 != 33554431
Mar 6 00:56:18.762396 kernel: GPT: Use GNU Parted to correct GPT errors.
Mar 6 00:56:18.762422 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Mar 6 00:56:18.753030 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 6 00:56:18.771015 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Mar 6 00:56:18.783737 (udev-worker)[562]: Network interface NamePolicy= disabled on kernel command line.
Mar 6 00:56:18.825444 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 6 00:56:18.849214 kernel: nvme nvme0: using unchecked data buffer
Mar 6 00:56:18.997210 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM.
Mar 6 00:56:19.092328 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Mar 6 00:56:19.101231 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Mar 6 00:56:19.136758 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT.
Mar 6 00:56:19.166041 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A.
Mar 6 00:56:19.176013 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A.
Mar 6 00:56:19.179824 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 6 00:56:19.195005 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 6 00:56:19.206421 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 6 00:56:19.214249 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Mar 6 00:56:19.228113 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Mar 6 00:56:19.263934 disk-uuid[688]: Primary Header is updated.
Mar 6 00:56:19.263934 disk-uuid[688]: Secondary Entries is updated.
Mar 6 00:56:19.263934 disk-uuid[688]: Secondary Header is updated.
Mar 6 00:56:19.285223 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Mar 6 00:56:19.294746 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Mar 6 00:56:19.309265 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Mar 6 00:56:20.307213 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Mar 6 00:56:20.312479 disk-uuid[690]: The operation has completed successfully.
Mar 6 00:56:20.497079 systemd[1]: disk-uuid.service: Deactivated successfully.
Mar 6 00:56:20.500900 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Mar 6 00:56:20.584991 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Mar 6 00:56:20.627928 sh[955]: Success
Mar 6 00:56:20.657921 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Mar 6 00:56:20.658003 kernel: device-mapper: uevent: version 1.0.3
Mar 6 00:56:20.661195 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Mar 6 00:56:20.674222 kernel: device-mapper: verity: sha256 using shash "sha256-ce"
Mar 6 00:56:20.792725 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Mar 6 00:56:20.799989 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Mar 6 00:56:20.826551 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Mar 6 00:56:20.851187 kernel: BTRFS: device fsid fcb4e7bf-1206-4803-90fb-6606b15e3aea devid 1 transid 38 /dev/mapper/usr (254:0) scanned by mount (978)
Mar 6 00:56:20.856702 kernel: BTRFS info (device dm-0): first mount of filesystem fcb4e7bf-1206-4803-90fb-6606b15e3aea
Mar 6 00:56:20.856799 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Mar 6 00:56:20.887125 kernel: BTRFS info (device dm-0 state E): enabling ssd optimizations
Mar 6 00:56:20.887233 kernel: BTRFS info (device dm-0 state E): disabling log replay at mount time
Mar 6 00:56:20.888706 kernel: BTRFS info (device dm-0 state E): enabling free space tree
Mar 6 00:56:20.904792 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Mar 6 00:56:20.912868 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Mar 6 00:56:20.919203 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Mar 6 00:56:20.925880 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Mar 6 00:56:20.934530 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Mar 6 00:56:20.999211 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (1011)
Mar 6 00:56:21.004324 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 890f9900-ea91-473b-9515-ad9b05b1880b
Mar 6 00:56:21.004404 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Mar 6 00:56:21.025773 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Mar 6 00:56:21.025850 kernel: BTRFS info (device nvme0n1p6): enabling free space tree
Mar 6 00:56:21.035244 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 890f9900-ea91-473b-9515-ad9b05b1880b
Mar 6 00:56:21.038144 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Mar 6 00:56:21.049821 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Mar 6 00:56:21.154336 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 6 00:56:21.165440 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Mar 6 00:56:21.260587 systemd-networkd[1148]: lo: Link UP
Mar 6 00:56:21.260612 systemd-networkd[1148]: lo: Gained carrier
Mar 6 00:56:21.266316 systemd-networkd[1148]: Enumeration completed
Mar 6 00:56:21.266496 systemd[1]: Started systemd-networkd.service - Network Configuration.
Mar 6 00:56:21.272237 systemd[1]: Reached target network.target - Network.
Mar 6 00:56:21.272916 systemd-networkd[1148]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 6 00:56:21.272933 systemd-networkd[1148]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 6 00:56:21.298465 systemd-networkd[1148]: eth0: Link UP
Mar 6 00:56:21.298832 systemd-networkd[1148]: eth0: Gained carrier
Mar 6 00:56:21.298929 systemd-networkd[1148]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 6 00:56:21.326306 systemd-networkd[1148]: eth0: DHCPv4 address 172.31.16.50/20, gateway 172.31.16.1 acquired from 172.31.16.1
Mar 6 00:56:21.372407 ignition[1080]: Ignition 2.22.0
Mar 6 00:56:21.372438 ignition[1080]: Stage: fetch-offline
Mar 6 00:56:21.373719 ignition[1080]: no configs at "/usr/lib/ignition/base.d"
Mar 6 00:56:21.373953 ignition[1080]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Mar 6 00:56:21.375550 ignition[1080]: Ignition finished successfully
Mar 6 00:56:21.390248 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 6 00:56:21.400331 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Mar 6 00:56:21.456017 ignition[1158]: Ignition 2.22.0
Mar 6 00:56:21.456579 ignition[1158]: Stage: fetch
Mar 6 00:56:21.457169 ignition[1158]: no configs at "/usr/lib/ignition/base.d"
Mar 6 00:56:21.457197 ignition[1158]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Mar 6 00:56:21.457319 ignition[1158]: PUT http://169.254.169.254/latest/api/token: attempt #1
Mar 6 00:56:21.475887 ignition[1158]: PUT result: OK
Mar 6 00:56:21.485084 ignition[1158]: parsed url from cmdline: ""
Mar 6 00:56:21.485108 ignition[1158]: no config URL provided
Mar 6 00:56:21.485125 ignition[1158]: reading system config file "/usr/lib/ignition/user.ign"
Mar 6 00:56:21.485188 ignition[1158]: no config at "/usr/lib/ignition/user.ign"
Mar 6 00:56:21.485229 ignition[1158]: PUT http://169.254.169.254/latest/api/token: attempt #1
Mar 6 00:56:21.494867 ignition[1158]: PUT result: OK
Mar 6 00:56:21.494980 ignition[1158]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1
Mar 6 00:56:21.505220 ignition[1158]: GET result: OK
Mar 6 00:56:21.505437 ignition[1158]: parsing config with SHA512: dfa869d95496536545bb82b6772287f62c276ebe919321cbbc0b6fe2b80c60e8d1e377ff1e9440be3d10b2f8f07bf988c2b57e7e48d45b1fd0bb7fe4ca804ee2
Mar 6 00:56:21.522126 unknown[1158]: fetched base config from "system"
Mar 6 00:56:21.522682 unknown[1158]: fetched base config from "system"
Mar 6 00:56:21.523498 ignition[1158]: fetch: fetch complete
Mar 6 00:56:21.522696 unknown[1158]: fetched user config from "aws"
Mar 6 00:56:21.523511 ignition[1158]: fetch: fetch passed
Mar 6 00:56:21.523624 ignition[1158]: Ignition finished successfully
Mar 6 00:56:21.539133 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Mar 6 00:56:21.545420 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Mar 6 00:56:21.614356 ignition[1164]: Ignition 2.22.0
Mar 6 00:56:21.614391 ignition[1164]: Stage: kargs
Mar 6 00:56:21.615126 ignition[1164]: no configs at "/usr/lib/ignition/base.d"
Mar 6 00:56:21.615654 ignition[1164]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Mar 6 00:56:21.616791 ignition[1164]: PUT http://169.254.169.254/latest/api/token: attempt #1
Mar 6 00:56:21.627301 ignition[1164]: PUT result: OK
Mar 6 00:56:21.634386 ignition[1164]: kargs: kargs passed
Mar 6 00:56:21.634528 ignition[1164]: Ignition finished successfully
Mar 6 00:56:21.643635 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Mar 6 00:56:21.656054 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Mar 6 00:56:21.719076 ignition[1170]: Ignition 2.22.0
Mar 6 00:56:21.719103 ignition[1170]: Stage: disks
Mar 6 00:56:21.720430 ignition[1170]: no configs at "/usr/lib/ignition/base.d"
Mar 6 00:56:21.720457 ignition[1170]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Mar 6 00:56:21.720609 ignition[1170]: PUT http://169.254.169.254/latest/api/token: attempt #1
Mar 6 00:56:21.732111 ignition[1170]: PUT result: OK
Mar 6 00:56:21.737994 ignition[1170]: disks: disks passed
Mar 6 00:56:21.738444 ignition[1170]: Ignition finished successfully
Mar 6 00:56:21.746603 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Mar 6 00:56:21.754080 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Mar 6 00:56:21.760683 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Mar 6 00:56:21.770608 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 6 00:56:21.774891 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 6 00:56:21.782895 systemd[1]: Reached target basic.target - Basic System.
Mar 6 00:56:21.793914 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Mar 6 00:56:21.868173 systemd-fsck[1178]: ROOT: clean, 15/553520 files, 52789/553472 blocks
Mar 6 00:56:21.875499 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Mar 6 00:56:21.886534 systemd[1]: Mounting sysroot.mount - /sysroot...
Mar 6 00:56:22.051182 kernel: EXT4-fs (nvme0n1p9): mounted filesystem f0884ab3-756d-49e8-9d95-af187b4f35fb r/w with ordered data mode. Quota mode: none.
Mar 6 00:56:22.052417 systemd[1]: Mounted sysroot.mount - /sysroot.
Mar 6 00:56:22.055632 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Mar 6 00:56:22.063722 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 6 00:56:22.073600 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Mar 6 00:56:22.077861 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Mar 6 00:56:22.077963 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Mar 6 00:56:22.078018 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 6 00:56:22.116555 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Mar 6 00:56:22.124463 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Mar 6 00:56:22.152202 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (1197)
Mar 6 00:56:22.160019 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 890f9900-ea91-473b-9515-ad9b05b1880b
Mar 6 00:56:22.160094 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Mar 6 00:56:22.172258 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Mar 6 00:56:22.172366 kernel: BTRFS info (device nvme0n1p6): enabling free space tree
Mar 6 00:56:22.176096 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 6 00:56:22.249105 initrd-setup-root[1221]: cut: /sysroot/etc/passwd: No such file or directory
Mar 6 00:56:22.259121 initrd-setup-root[1228]: cut: /sysroot/etc/group: No such file or directory
Mar 6 00:56:22.269991 initrd-setup-root[1235]: cut: /sysroot/etc/shadow: No such file or directory
Mar 6 00:56:22.281361 initrd-setup-root[1242]: cut: /sysroot/etc/gshadow: No such file or directory
Mar 6 00:56:22.478452 systemd-networkd[1148]: eth0: Gained IPv6LL
Mar 6 00:56:22.489446 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Mar 6 00:56:22.498216 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Mar 6 00:56:22.507363 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Mar 6 00:56:22.534169 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Mar 6 00:56:22.537791 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 890f9900-ea91-473b-9515-ad9b05b1880b
Mar 6 00:56:22.577225 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Mar 6 00:56:22.601920 ignition[1311]: INFO : Ignition 2.22.0
Mar 6 00:56:22.601920 ignition[1311]: INFO : Stage: mount
Mar 6 00:56:22.607331 ignition[1311]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 6 00:56:22.607331 ignition[1311]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Mar 6 00:56:22.607331 ignition[1311]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Mar 6 00:56:22.619044 ignition[1311]: INFO : PUT result: OK
Mar 6 00:56:22.624571 ignition[1311]: INFO : mount: mount passed
Mar 6 00:56:22.627132 ignition[1311]: INFO : Ignition finished successfully
Mar 6 00:56:22.630497 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Mar 6 00:56:22.638180 systemd[1]: Starting ignition-files.service - Ignition (files)...
Mar 6 00:56:23.056521 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 6 00:56:23.098213 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (1322)
Mar 6 00:56:23.103703 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 890f9900-ea91-473b-9515-ad9b05b1880b
Mar 6 00:56:23.103772 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Mar 6 00:56:23.110821 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Mar 6 00:56:23.110910 kernel: BTRFS info (device nvme0n1p6): enabling free space tree
Mar 6 00:56:23.114366 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 6 00:56:23.172583 ignition[1339]: INFO : Ignition 2.22.0
Mar 6 00:56:23.172583 ignition[1339]: INFO : Stage: files
Mar 6 00:56:23.177111 ignition[1339]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 6 00:56:23.177111 ignition[1339]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Mar 6 00:56:23.177111 ignition[1339]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Mar 6 00:56:23.188299 ignition[1339]: INFO : PUT result: OK
Mar 6 00:56:23.193359 ignition[1339]: DEBUG : files: compiled without relabeling support, skipping
Mar 6 00:56:23.197220 ignition[1339]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Mar 6 00:56:23.197220 ignition[1339]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Mar 6 00:56:23.210595 ignition[1339]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Mar 6 00:56:23.214961 ignition[1339]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Mar 6 00:56:23.219431 unknown[1339]: wrote ssh authorized keys file for user: core
Mar 6 00:56:23.222300 ignition[1339]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Mar 6 00:56:23.229999 ignition[1339]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Mar 6 00:56:23.235728 ignition[1339]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1
Mar 6 00:56:23.318764 ignition[1339]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Mar 6 00:56:23.470976 ignition[1339]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Mar 6 00:56:23.470976 ignition[1339]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Mar 6 00:56:23.482039 ignition[1339]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1
Mar 6 00:56:23.732800 ignition[1339]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Mar 6 00:56:23.886206 ignition[1339]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Mar 6 00:56:23.886206 ignition[1339]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Mar 6 00:56:23.897761 ignition[1339]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Mar 6 00:56:23.897761 ignition[1339]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Mar 6 00:56:23.897761 ignition[1339]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Mar 6 00:56:23.897761 ignition[1339]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 6 00:56:23.897761 ignition[1339]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 6 00:56:23.897761 ignition[1339]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 6 00:56:23.897761 ignition[1339]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 6 00:56:23.933043 ignition[1339]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Mar 6 00:56:23.933043 ignition[1339]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Mar 6 00:56:23.933043 ignition[1339]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.4-arm64.raw"
Mar 6 00:56:23.933043 ignition[1339]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.4-arm64.raw"
Mar 6 00:56:23.933043 ignition[1339]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.4-arm64.raw"
Mar 6 00:56:23.933043 ignition[1339]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.4-arm64.raw: attempt #1
Mar 6 00:56:24.245405 ignition[1339]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Mar 6 00:56:24.714079 ignition[1339]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.4-arm64.raw"
Mar 6 00:56:24.714079 ignition[1339]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Mar 6 00:56:24.724018 ignition[1339]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 6 00:56:24.724018 ignition[1339]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 6 00:56:24.724018 ignition[1339]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Mar 6 00:56:24.724018 ignition[1339]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service"
Mar 6 00:56:24.724018 ignition[1339]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service"
Mar 6 00:56:24.724018 ignition[1339]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json"
Mar 6 00:56:24.750243 ignition[1339]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json"
Mar 6 00:56:24.750243 ignition[1339]: INFO : files: files passed
Mar 6 00:56:24.750243 ignition[1339]: INFO : Ignition finished successfully
Mar 6 00:56:24.763578 systemd[1]: Finished ignition-files.service - Ignition (files).
Mar 6 00:56:24.770005 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Mar 6 00:56:24.775770 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Mar 6 00:56:24.803829 systemd[1]: ignition-quench.service: Deactivated successfully.
Mar 6 00:56:24.807465 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Mar 6 00:56:24.824613 initrd-setup-root-after-ignition[1369]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 6 00:56:24.824613 initrd-setup-root-after-ignition[1369]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Mar 6 00:56:24.835261 initrd-setup-root-after-ignition[1372]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 6 00:56:24.837381 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 6 00:56:24.852335 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Mar 6 00:56:24.862451 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Mar 6 00:56:24.944289 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Mar 6 00:56:24.948322 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Mar 6 00:56:24.952615 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Mar 6 00:56:24.958333 systemd[1]: Reached target initrd.target - Initrd Default Target.
Mar 6 00:56:24.962579 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Mar 6 00:56:24.974686 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Mar 6 00:56:25.034961 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 6 00:56:25.043269 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Mar 6 00:56:25.101105 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Mar 6 00:56:25.105735 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 6 00:56:25.115332 systemd[1]: Stopped target timers.target - Timer Units.
Mar 6 00:56:25.118204 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Mar 6 00:56:25.118895 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 6 00:56:25.130127 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Mar 6 00:56:25.136497 systemd[1]: Stopped target basic.target - Basic System.
Mar 6 00:56:25.141880 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Mar 6 00:56:25.145626 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 6 00:56:25.155411 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Mar 6 00:56:25.159598 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Mar 6 00:56:25.168336 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Mar 6 00:56:25.171475 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 6 00:56:25.180601 systemd[1]: Stopped target sysinit.target - System Initialization.
Mar 6 00:56:25.186435 systemd[1]: Stopped target local-fs.target - Local File Systems.
Mar 6 00:56:25.189670 systemd[1]: Stopped target swap.target - Swaps.
Mar 6 00:56:25.198200 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Mar 6 00:56:25.198493 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Mar 6 00:56:25.205628 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Mar 6 00:56:25.216184 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 6 00:56:25.221602 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Mar 6 00:56:25.228463 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 6 00:56:25.233071 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Mar 6 00:56:25.233483 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Mar 6 00:56:25.245599 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Mar 6 00:56:25.245937 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 6 00:56:25.259055 systemd[1]: ignition-files.service: Deactivated successfully.
Mar 6 00:56:25.259321 systemd[1]: Stopped ignition-files.service - Ignition (files).
Mar 6 00:56:25.272820 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Mar 6 00:56:25.279949 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Mar 6 00:56:25.286413 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Mar 6 00:56:25.291184 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 6 00:56:25.299352 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Mar 6 00:56:25.303493 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 6 00:56:25.326750 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Mar 6 00:56:25.331143 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Mar 6 00:56:25.374202 ignition[1393]: INFO : Ignition 2.22.0
Mar 6 00:56:25.374202 ignition[1393]: INFO : Stage: umount
Mar 6 00:56:25.374202 ignition[1393]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 6 00:56:25.374202 ignition[1393]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Mar 6 00:56:25.386838 ignition[1393]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Mar 6 00:56:25.377008 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Mar 6 00:56:25.396225 ignition[1393]: INFO : PUT result: OK
Mar 6 00:56:25.401345 systemd[1]: sysroot-boot.service: Deactivated successfully.
Mar 6 00:56:25.401913 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Mar 6 00:56:25.426937 ignition[1393]: INFO : umount: umount passed
Mar 6 00:56:25.430416 ignition[1393]: INFO : Ignition finished successfully
Mar 6 00:56:25.436810 systemd[1]: ignition-mount.service: Deactivated successfully.
Mar 6 00:56:25.437356 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Mar 6 00:56:25.445443 systemd[1]: ignition-disks.service: Deactivated successfully.
Mar 6 00:56:25.445579 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Mar 6 00:56:25.450511 systemd[1]: ignition-kargs.service: Deactivated successfully.
Mar 6 00:56:25.450621 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Mar 6 00:56:25.459541 systemd[1]: ignition-fetch.service: Deactivated successfully.
Mar 6 00:56:25.459646 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Mar 6 00:56:25.468902 systemd[1]: Stopped target network.target - Network.
Mar 6 00:56:25.477498 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Mar 6 00:56:25.477630 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 6 00:56:25.481639 systemd[1]: Stopped target paths.target - Path Units.
Mar 6 00:56:25.489652 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Mar 6 00:56:25.495259 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 6 00:56:25.499028 systemd[1]: Stopped target slices.target - Slice Units.
Mar 6 00:56:25.502136 systemd[1]: Stopped target sockets.target - Socket Units.
Mar 6 00:56:25.512794 systemd[1]: iscsid.socket: Deactivated successfully.
Mar 6 00:56:25.512881 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Mar 6 00:56:25.521469 systemd[1]: iscsiuio.socket: Deactivated successfully.
Mar 6 00:56:25.521555 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 6 00:56:25.525082 systemd[1]: ignition-setup.service: Deactivated successfully.
Mar 6 00:56:25.525232 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Mar 6 00:56:25.534052 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Mar 6 00:56:25.534198 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Mar 6 00:56:25.534394 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Mar 6 00:56:25.534483 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Mar 6 00:56:25.543361 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Mar 6 00:56:25.546636 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Mar 6 00:56:25.588562 systemd[1]: systemd-resolved.service: Deactivated successfully.
Mar 6 00:56:25.589077 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Mar 6 00:56:25.603893 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Mar 6 00:56:25.604592 systemd[1]: systemd-networkd.service: Deactivated successfully.
Mar 6 00:56:25.604863 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Mar 6 00:56:25.621336 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Mar 6 00:56:25.622522 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Mar 6 00:56:25.630117 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Mar 6 00:56:25.630229 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Mar 6 00:56:25.635488 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Mar 6 00:56:25.638617 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Mar 6 00:56:25.638747 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 6 00:56:25.646373 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Mar 6 00:56:25.646485 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Mar 6 00:56:25.664754 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Mar 6 00:56:25.664863 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Mar 6 00:56:25.675261 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Mar 6 00:56:25.675373 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 6 00:56:25.683456 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 6 00:56:25.696627 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Mar 6 00:56:25.696777 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Mar 6 00:56:25.733813 systemd[1]: systemd-udevd.service: Deactivated successfully.
Mar 6 00:56:25.734432 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 6 00:56:25.745773 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Mar 6 00:56:25.746469 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Mar 6 00:56:25.752403 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Mar 6 00:56:25.752481 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 6 00:56:25.756512 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Mar 6 00:56:25.756629 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Mar 6 00:56:25.770566 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Mar 6 00:56:25.770683 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Mar 6 00:56:25.780458 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 6 00:56:25.780577 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 6 00:56:25.791233 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Mar 6 00:56:25.804426 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Mar 6 00:56:25.811835 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Mar 6 00:56:25.816759 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Mar 6 00:56:25.816874 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 6 00:56:25.830918 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Mar 6 00:56:25.831025 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 6 00:56:25.837123 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Mar 6 00:56:25.837256 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 6 00:56:25.853904 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 6 00:56:25.854022 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 6 00:56:25.869505 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully.
Mar 6 00:56:25.869632 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev\x2dearly.service.mount: Deactivated successfully.
Mar 6 00:56:25.869711 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Mar 6 00:56:25.869793 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Mar 6 00:56:25.870789 systemd[1]: network-cleanup.service: Deactivated successfully.
Mar 6 00:56:25.873273 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Mar 6 00:56:25.887256 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Mar 6 00:56:25.887439 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Mar 6 00:56:25.893704 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Mar 6 00:56:25.902596 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Mar 6 00:56:25.954008 systemd[1]: Switching root.
Mar 6 00:56:26.003176 systemd-journald[258]: Received SIGTERM from PID 1 (systemd).
Mar 6 00:56:26.003269 systemd-journald[258]: Journal stopped
Mar 6 00:56:28.395388 kernel: SELinux: policy capability network_peer_controls=1
Mar 6 00:56:28.395576 kernel: SELinux: policy capability open_perms=1
Mar 6 00:56:28.395668 kernel: SELinux: policy capability extended_socket_class=1
Mar 6 00:56:28.395739 kernel: SELinux: policy capability always_check_network=0
Mar 6 00:56:28.395798 kernel: SELinux: policy capability cgroup_seclabel=1
Mar 6 00:56:28.395848 kernel: SELinux: policy capability nnp_nosuid_transition=1
Mar 6 00:56:28.395905 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Mar 6 00:56:28.395953 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Mar 6 00:56:28.395996 kernel: SELinux: policy capability userspace_initial_context=0
Mar 6 00:56:28.396045 kernel: audit: type=1403 audit(1772758586.390:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Mar 6 00:56:28.396123 systemd[1]: Successfully loaded SELinux policy in 90.846ms.
Mar 6 00:56:28.396273 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 15.525ms.
Mar 6 00:56:28.396336 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Mar 6 00:56:28.396379 systemd[1]: Detected virtualization amazon.
Mar 6 00:56:28.396435 systemd[1]: Detected architecture arm64.
Mar 6 00:56:28.396492 systemd[1]: Detected first boot.
Mar 6 00:56:28.396541 systemd[1]: Initializing machine ID from VM UUID.
Mar 6 00:56:28.396600 zram_generator::config[1440]: No configuration found.
Mar 6 00:56:28.396658 kernel: NET: Registered PF_VSOCK protocol family
Mar 6 00:56:28.396721 systemd[1]: Populated /etc with preset unit settings.
Mar 6 00:56:28.396762 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Mar 6 00:56:28.396818 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Mar 6 00:56:28.396874 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Mar 6 00:56:28.396924 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Mar 6 00:56:28.396981 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Mar 6 00:56:28.397039 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Mar 6 00:56:28.397095 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Mar 6 00:56:28.397136 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Mar 6 00:56:28.397244 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Mar 6 00:56:28.397306 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Mar 6 00:56:28.397358 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Mar 6 00:56:28.397413 systemd[1]: Created slice user.slice - User and Session Slice.
Mar 6 00:56:28.397468 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 6 00:56:28.397515 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 6 00:56:28.397562 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Mar 6 00:56:28.397623 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Mar 6 00:56:28.397666 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Mar 6 00:56:28.397704 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 6 00:56:28.397740 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Mar 6 00:56:28.397773 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 6 00:56:28.397806 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 6 00:56:28.397843 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Mar 6 00:56:28.397877 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Mar 6 00:56:28.397907 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Mar 6 00:56:28.397943 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Mar 6 00:56:28.397974 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 6 00:56:28.398009 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 6 00:56:28.398043 systemd[1]: Reached target slices.target - Slice Units.
Mar 6 00:56:28.398075 systemd[1]: Reached target swap.target - Swaps.
Mar 6 00:56:28.398106 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Mar 6 00:56:28.398136 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Mar 6 00:56:28.398206 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Mar 6 00:56:28.398245 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 6 00:56:28.398286 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 6 00:56:28.398316 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 6 00:56:28.398345 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Mar 6 00:56:28.398374 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Mar 6 00:56:28.398403 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Mar 6 00:56:28.398434 systemd[1]: Mounting media.mount - External Media Directory...
Mar 6 00:56:28.398467 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Mar 6 00:56:28.398496 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Mar 6 00:56:28.398525 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Mar 6 00:56:28.398564 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Mar 6 00:56:28.398596 systemd[1]: Reached target machines.target - Containers.
Mar 6 00:56:28.398626 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Mar 6 00:56:28.398659 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 6 00:56:28.398688 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 6 00:56:28.398717 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Mar 6 00:56:28.398746 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 6 00:56:28.398782 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 6 00:56:28.398839 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 6 00:56:28.398874 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Mar 6 00:56:28.398904 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 6 00:56:28.398934 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Mar 6 00:56:28.398970 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Mar 6 00:56:28.399014 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Mar 6 00:56:28.399048 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Mar 6 00:56:28.399080 systemd[1]: Stopped systemd-fsck-usr.service.
Mar 6 00:56:28.399112 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Mar 6 00:56:28.399191 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 6 00:56:28.399232 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 6 00:56:28.399263 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Mar 6 00:56:28.399296 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Mar 6 00:56:28.399326 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Mar 6 00:56:28.399355 kernel: loop: module loaded
Mar 6 00:56:28.399392 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 6 00:56:28.399428 systemd[1]: verity-setup.service: Deactivated successfully.
Mar 6 00:56:28.399460 systemd[1]: Stopped verity-setup.service.
Mar 6 00:56:28.399490 kernel: fuse: init (API version 7.41)
Mar 6 00:56:28.399525 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Mar 6 00:56:28.400283 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Mar 6 00:56:28.400352 systemd[1]: Mounted media.mount - External Media Directory.
Mar 6 00:56:28.400384 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Mar 6 00:56:28.400415 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Mar 6 00:56:28.400447 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Mar 6 00:56:28.400484 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 6 00:56:28.400517 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Mar 6 00:56:28.400548 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Mar 6 00:56:28.400591 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 6 00:56:28.400622 kernel: ACPI: bus type drm_connector registered
Mar 6 00:56:28.400656 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 6 00:56:28.400688 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 6 00:56:28.400719 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 6 00:56:28.400754 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Mar 6 00:56:28.400785 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Mar 6 00:56:28.400815 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 6 00:56:28.400844 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 6 00:56:28.400879 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 6 00:56:28.400912 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 6 00:56:28.400945 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 6 00:56:28.400976 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Mar 6 00:56:28.401007 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Mar 6 00:56:28.401036 systemd[1]: Reached target network-pre.target - Preparation for Network.
Mar 6 00:56:28.401065 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Mar 6 00:56:28.401098 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Mar 6 00:56:28.402380 systemd-journald[1523]: Collecting audit messages is disabled.
Mar 6 00:56:28.402488 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Mar 6 00:56:28.402525 systemd-journald[1523]: Journal started
Mar 6 00:56:28.402576 systemd-journald[1523]: Runtime Journal (/run/log/journal/ec29cf345b61ce461e6f4d742104eb2f) is 8M, max 75.3M, 67.3M free.
Mar 6 00:56:27.603464 systemd[1]: Queued start job for default target multi-user.target.
Mar 6 00:56:27.620390 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6.
Mar 6 00:56:27.621333 systemd[1]: systemd-journald.service: Deactivated successfully.
Mar 6 00:56:28.406205 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 6 00:56:28.429742 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Mar 6 00:56:28.442908 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Mar 6 00:56:28.443036 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 6 00:56:28.456638 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Mar 6 00:56:28.456781 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 6 00:56:28.474793 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Mar 6 00:56:28.480786 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 6 00:56:28.491203 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 6 00:56:28.504144 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Mar 6 00:56:28.521341 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Mar 6 00:56:28.521445 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 6 00:56:28.533795 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Mar 6 00:56:28.540252 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Mar 6 00:56:28.545716 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Mar 6 00:56:28.554689 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Mar 6 00:56:28.589287 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Mar 6 00:56:28.636465 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Mar 6 00:56:28.643810 kernel: loop0: detected capacity change from 0 to 200864
Mar 6 00:56:28.650581 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Mar 6 00:56:28.666726 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Mar 6 00:56:28.707080 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 6 00:56:28.735341 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 6 00:56:28.737314 systemd-tmpfiles[1556]: ACLs are not supported, ignoring.
Mar 6 00:56:28.737360 systemd-tmpfiles[1556]: ACLs are not supported, ignoring.
Mar 6 00:56:28.753848 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Mar 6 00:56:28.755985 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 6 00:56:28.772201 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Mar 6 00:56:28.789786 kernel: loop1: detected capacity change from 0 to 61264
Mar 6 00:56:28.790144 systemd-journald[1523]: Time spent on flushing to /var/log/journal/ec29cf345b61ce461e6f4d742104eb2f is 51.470ms for 944 entries.
Mar 6 00:56:28.790144 systemd-journald[1523]: System Journal (/var/log/journal/ec29cf345b61ce461e6f4d742104eb2f) is 8M, max 195.6M, 187.6M free.
Mar 6 00:56:28.859457 systemd-journald[1523]: Received client request to flush runtime journal.
Mar 6 00:56:28.788044 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Mar 6 00:56:28.792828 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Mar 6 00:56:28.871256 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Mar 6 00:56:28.909961 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Mar 6 00:56:28.924630 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 6 00:56:28.940824 kernel: loop2: detected capacity change from 0 to 100632
Mar 6 00:56:29.010209 kernel: loop3: detected capacity change from 0 to 119840
Mar 6 00:56:29.020348 systemd-tmpfiles[1597]: ACLs are not supported, ignoring.
Mar 6 00:56:29.020393 systemd-tmpfiles[1597]: ACLs are not supported, ignoring.
Mar 6 00:56:29.037949 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 6 00:56:29.080253 kernel: loop4: detected capacity change from 0 to 200864
Mar 6 00:56:29.118229 kernel: loop5: detected capacity change from 0 to 61264
Mar 6 00:56:29.152338 kernel: loop6: detected capacity change from 0 to 100632
Mar 6 00:56:29.204244 kernel: loop7: detected capacity change from 0 to 119840
Mar 6 00:56:29.261898 (sd-merge)[1603]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'.
Mar 6 00:56:29.272227 (sd-merge)[1603]: Merged extensions into '/usr'.
Mar 6 00:56:29.289822 systemd[1]: Reload requested from client PID 1555 ('systemd-sysext') (unit systemd-sysext.service)...
Mar 6 00:56:29.290275 systemd[1]: Reloading...
Mar 6 00:56:29.603111 zram_generator::config[1629]: No configuration found.
Mar 6 00:56:29.765295 ldconfig[1551]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Mar 6 00:56:30.080577 systemd[1]: Reloading finished in 788 ms.
Mar 6 00:56:30.102144 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Mar 6 00:56:30.108672 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Mar 6 00:56:30.114238 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Mar 6 00:56:30.139846 systemd[1]: Starting ensure-sysext.service...
Mar 6 00:56:30.145677 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 6 00:56:30.158668 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 6 00:56:30.211267 systemd[1]: Reload requested from client PID 1682 ('systemctl') (unit ensure-sysext.service)...
Mar 6 00:56:30.211311 systemd[1]: Reloading...
Mar 6 00:56:30.217880 systemd-tmpfiles[1683]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Mar 6 00:56:30.218651 systemd-tmpfiles[1683]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Mar 6 00:56:30.219623 systemd-tmpfiles[1683]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Mar 6 00:56:30.220562 systemd-tmpfiles[1683]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Mar 6 00:56:30.223819 systemd-tmpfiles[1683]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Mar 6 00:56:30.225102 systemd-tmpfiles[1683]: ACLs are not supported, ignoring.
Mar 6 00:56:30.226067 systemd-tmpfiles[1683]: ACLs are not supported, ignoring.
Mar 6 00:56:30.237904 systemd-tmpfiles[1683]: Detected autofs mount point /boot during canonicalization of boot.
Mar 6 00:56:30.238103 systemd-tmpfiles[1683]: Skipping /boot
Mar 6 00:56:30.267335 systemd-tmpfiles[1683]: Detected autofs mount point /boot during canonicalization of boot.
Mar 6 00:56:30.267529 systemd-tmpfiles[1683]: Skipping /boot
Mar 6 00:56:30.337800 systemd-udevd[1684]: Using default interface naming scheme 'v255'.
Mar 6 00:56:30.469279 zram_generator::config[1711]: No configuration found.
Mar 6 00:56:30.804434 (udev-worker)[1734]: Network interface NamePolicy= disabled on kernel command line.
Mar 6 00:56:31.177527 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Mar 6 00:56:31.178336 systemd[1]: Reloading finished in 966 ms.
Mar 6 00:56:31.228234 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 6 00:56:31.236260 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 6 00:56:31.336974 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Mar 6 00:56:31.347572 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Mar 6 00:56:31.362677 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Mar 6 00:56:31.388077 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Mar 6 00:56:31.400699 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 6 00:56:31.408608 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Mar 6 00:56:31.428502 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 6 00:56:31.435927 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 6 00:56:31.446741 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 6 00:56:31.464677 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 6 00:56:31.472923 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 6 00:56:31.473288 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Mar 6 00:56:31.476858 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 6 00:56:31.479839 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 6 00:56:31.501110 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 6 00:56:31.508788 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 6 00:56:31.515475 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 6 00:56:31.516494 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Mar 6 00:56:31.529788 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Mar 6 00:56:31.548610 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 6 00:56:31.561761 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 6 00:56:31.567559 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 6 00:56:31.567852 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Mar 6 00:56:31.569348 systemd[1]: Reached target time-set.target - System Time Set.
Mar 6 00:56:31.597417 systemd[1]: Finished ensure-sysext.service.
Mar 6 00:56:31.605399 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Mar 6 00:56:31.617413 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Mar 6 00:56:31.649108 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Mar 6 00:56:31.692130 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Mar 6 00:56:31.700076 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 6 00:56:31.700585 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 6 00:56:31.745387 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 6 00:56:31.752990 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Mar 6 00:56:31.833879 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 6 00:56:31.834630 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 6 00:56:31.845746 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 6 00:56:31.846315 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 6 00:56:31.855369 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Mar 6 00:56:31.871005 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 6 00:56:31.872138 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 6 00:56:31.902275 augenrules[1939]: No rules
Mar 6 00:56:31.909333 systemd[1]: audit-rules.service: Deactivated successfully.
Mar 6 00:56:31.910000 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Mar 6 00:56:31.943136 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Mar 6 00:56:31.958234 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Mar 6 00:56:31.966466 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 6 00:56:31.970557 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 6 00:56:32.048303 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Mar 6 00:56:32.114753 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Mar 6 00:56:32.205541 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 6 00:56:32.289918 systemd-networkd[1895]: lo: Link UP
Mar 6 00:56:32.289944 systemd-networkd[1895]: lo: Gained carrier
Mar 6 00:56:32.293550 systemd-networkd[1895]: Enumeration completed
Mar 6 00:56:32.293822 systemd[1]: Started systemd-networkd.service - Network Configuration.
Mar 6 00:56:32.294778 systemd-networkd[1895]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 6 00:56:32.294787 systemd-networkd[1895]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 6 00:56:32.302859 systemd-networkd[1895]: eth0: Link UP
Mar 6 00:56:32.303560 systemd-networkd[1895]: eth0: Gained carrier
Mar 6 00:56:32.303730 systemd-networkd[1895]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 6 00:56:32.315374 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Mar 6 00:56:32.316527 systemd-networkd[1895]: eth0: DHCPv4 address 172.31.16.50/20, gateway 172.31.16.1 acquired from 172.31.16.1
Mar 6 00:56:32.325693 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Mar 6 00:56:32.352307 systemd-resolved[1897]: Positive Trust Anchors:
Mar 6 00:56:32.352367 systemd-resolved[1897]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 6 00:56:32.352470 systemd-resolved[1897]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 6 00:56:32.371500 systemd-resolved[1897]: Defaulting to hostname 'linux'.
Mar 6 00:56:32.373284 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Mar 6 00:56:32.378135 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 6 00:56:32.381858 systemd[1]: Reached target network.target - Network.
Mar 6 00:56:32.385312 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 6 00:56:32.388788 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 6 00:56:32.392517 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Mar 6 00:56:32.396230 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Mar 6 00:56:32.400406 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Mar 6 00:56:32.403769 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Mar 6 00:56:32.407445 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Mar 6 00:56:32.411047 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Mar 6 00:56:32.411131 systemd[1]: Reached target paths.target - Path Units.
Mar 6 00:56:32.413919 systemd[1]: Reached target timers.target - Timer Units.
Mar 6 00:56:32.419732 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Mar 6 00:56:32.426245 systemd[1]: Starting docker.socket - Docker Socket for the API...
Mar 6 00:56:32.433799 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Mar 6 00:56:32.438062 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Mar 6 00:56:32.441800 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Mar 6 00:56:32.452446 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Mar 6 00:56:32.456311 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Mar 6 00:56:32.460778 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Mar 6 00:56:32.464560 systemd[1]: Reached target sockets.target - Socket Units.
Mar 6 00:56:32.467610 systemd[1]: Reached target basic.target - Basic System.
Mar 6 00:56:32.470308 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Mar 6 00:56:32.470373 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Mar 6 00:56:32.475433 systemd[1]: Starting containerd.service - containerd container runtime...
Mar 6 00:56:32.484816 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Mar 6 00:56:32.494605 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Mar 6 00:56:32.509365 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Mar 6 00:56:32.520033 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Mar 6 00:56:32.532419 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Mar 6 00:56:32.536625 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Mar 6 00:56:32.548722 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Mar 6 00:56:32.556493 systemd[1]: Started ntpd.service - Network Time Service.
Mar 6 00:56:32.571569 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Mar 6 00:56:32.584608 systemd[1]: Starting setup-oem.service - Setup OEM...
Mar 6 00:56:32.592293 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Mar 6 00:56:32.614105 jq[1972]: false
Mar 6 00:56:32.612603 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Mar 6 00:56:32.628806 systemd[1]: Starting systemd-logind.service - User Login Management...
Mar 6 00:56:32.636092 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Mar 6 00:56:32.637459 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Mar 6 00:56:32.646957 systemd[1]: Starting update-engine.service - Update Engine...
Mar 6 00:56:32.665516 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Mar 6 00:56:32.676947 extend-filesystems[1973]: Found /dev/nvme0n1p6
Mar 6 00:56:32.685383 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Mar 6 00:56:32.690436 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Mar 6 00:56:32.692566 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Mar 6 00:56:32.699890 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Mar 6 00:56:32.701830 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Mar 6 00:56:32.747478 extend-filesystems[1973]: Found /dev/nvme0n1p9
Mar 6 00:56:32.783299 extend-filesystems[1973]: Checking size of /dev/nvme0n1p9
Mar 6 00:56:32.799672 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Mar 6 00:56:32.827567 update_engine[1986]: I20260306 00:56:32.826115 1986 main.cc:92] Flatcar Update Engine starting
Mar 6 00:56:32.833213 jq[1987]: true
Mar 6 00:56:32.833738 ntpd[1975]: 6 Mar 00:56:32 ntpd[1975]: ntpd 4.2.8p18@1.4062-o Thu Mar 5 21:50:04 UTC 2026 (1): Starting
Mar 6 00:56:32.833738 ntpd[1975]: 6 Mar 00:56:32 ntpd[1975]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Mar 6 00:56:32.833738 ntpd[1975]: 6 Mar 00:56:32 ntpd[1975]: ----------------------------------------------------
Mar 6 00:56:32.833738 ntpd[1975]: 6 Mar 00:56:32 ntpd[1975]: ntp-4 is maintained by Network Time Foundation,
Mar 6 00:56:32.833738 ntpd[1975]: 6 Mar 00:56:32 ntpd[1975]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Mar 6 00:56:32.833738 ntpd[1975]: 6 Mar 00:56:32 ntpd[1975]: corporation. Support and training for ntp-4 are
Mar 6 00:56:32.833738 ntpd[1975]: 6 Mar 00:56:32 ntpd[1975]: available at https://www.nwtime.org/support
Mar 6 00:56:32.833738 ntpd[1975]: 6 Mar 00:56:32 ntpd[1975]: ----------------------------------------------------
Mar 6 00:56:32.831566 ntpd[1975]: ntpd 4.2.8p18@1.4062-o Thu Mar 5 21:50:04 UTC 2026 (1): Starting
Mar 6 00:56:32.831697 ntpd[1975]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Mar 6 00:56:32.831717 ntpd[1975]: ----------------------------------------------------
Mar 6 00:56:32.831734 ntpd[1975]: ntp-4 is maintained by Network Time Foundation,
Mar 6 00:56:32.831750 ntpd[1975]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Mar 6 00:56:32.831767 ntpd[1975]: corporation. Support and training for ntp-4 are
Mar 6 00:56:32.831783 ntpd[1975]: available at https://www.nwtime.org/support
Mar 6 00:56:32.831799 ntpd[1975]: ----------------------------------------------------
Mar 6 00:56:32.858370 ntpd[1975]: proto: precision = 0.096 usec (-23)
Mar 6 00:56:32.858790 ntpd[1975]: 6 Mar 00:56:32 ntpd[1975]: proto: precision = 0.096 usec (-23)
Mar 6 00:56:32.858920 ntpd[1975]: basedate set to 2026-02-21
Mar 6 00:56:32.862447 ntpd[1975]: 6 Mar 00:56:32 ntpd[1975]: basedate set to 2026-02-21
Mar 6 00:56:32.862447 ntpd[1975]: 6 Mar 00:56:32 ntpd[1975]: gps base set to 2026-02-22 (week 2407)
Mar 6 00:56:32.858965 ntpd[1975]: gps base set to 2026-02-22 (week 2407)
Mar 6 00:56:32.868933 ntpd[1975]: Listen and drop on 0 v6wildcard [::]:123
Mar 6 00:56:32.871709 systemd[1]: motdgen.service: Deactivated successfully.
Mar 6 00:56:32.871964 ntpd[1975]: 6 Mar 00:56:32 ntpd[1975]: Listen and drop on 0 v6wildcard [::]:123
Mar 6 00:56:32.871964 ntpd[1975]: 6 Mar 00:56:32 ntpd[1975]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Mar 6 00:56:32.869047 ntpd[1975]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Mar 6 00:56:32.873286 ntpd[1975]: Listen normally on 2 lo 127.0.0.1:123
Mar 6 00:56:32.888357 extend-filesystems[1973]: Resized partition /dev/nvme0n1p9
Mar 6 00:56:32.891628 ntpd[1975]: 6 Mar 00:56:32 ntpd[1975]: Listen normally on 2 lo 127.0.0.1:123
Mar 6 00:56:32.891628 ntpd[1975]: 6 Mar 00:56:32 ntpd[1975]: Listen normally on 3 eth0 172.31.16.50:123
Mar 6 00:56:32.891628 ntpd[1975]: 6 Mar 00:56:32 ntpd[1975]: Listen normally on 4 lo [::1]:123
Mar 6 00:56:32.891628 ntpd[1975]: 6 Mar 00:56:32 ntpd[1975]: bind(21) AF_INET6 [fe80::45e:75ff:fecc:58b%2]:123 flags 0x811 failed: Cannot assign requested address
Mar 6 00:56:32.891628 ntpd[1975]: 6 Mar 00:56:32 ntpd[1975]: unable to create socket on eth0 (5) for [fe80::45e:75ff:fecc:58b%2]:123
Mar 6 00:56:32.873418 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Mar 6 00:56:32.873393 ntpd[1975]: Listen normally on 3 eth0 172.31.16.50:123
Mar 6 00:56:32.898060 systemd-coredump[2022]: Process 1975 (ntpd) of user 0 terminated abnormally with signal 11/SEGV, processing...
Mar 6 00:56:32.873450 ntpd[1975]: Listen normally on 4 lo [::1]:123
Mar 6 00:56:32.873506 ntpd[1975]: bind(21) AF_INET6 [fe80::45e:75ff:fecc:58b%2]:123 flags 0x811 failed: Cannot assign requested address
Mar 6 00:56:32.873551 ntpd[1975]: unable to create socket on eth0 (5) for [fe80::45e:75ff:fecc:58b%2]:123
Mar 6 00:56:32.916198 extend-filesystems[2023]: resize2fs 1.47.3 (8-Jul-2025)
Mar 6 00:56:32.907414 systemd[1]: Created slice system-systemd\x2dcoredump.slice - Slice /system/systemd-coredump.
Mar 6 00:56:32.905448 dbus-daemon[1970]: [system] SELinux support is enabled
Mar 6 00:56:32.911815 dbus-daemon[1970]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1895 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0")
Mar 6 00:56:32.926260 systemd[1]: Started systemd-coredump@0-2022-0.service - Process Core Dump (PID 2022/UID 0).
Mar 6 00:56:32.931350 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Mar 6 00:56:32.940919 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Mar 6 00:56:32.941001 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Mar 6 00:56:32.948802 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Mar 6 00:56:32.948905 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Mar 6 00:56:32.956098 tar[1993]: linux-arm64/LICENSE
Mar 6 00:56:32.956734 tar[1993]: linux-arm64/helm
Mar 6 00:56:32.968982 dbus-daemon[1970]: [system] Successfully activated service 'org.freedesktop.systemd1'
Mar 6 00:56:32.971348 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 3587067 blocks
Mar 6 00:56:32.973363 systemd[1]: Started update-engine.service - Update Engine.
Mar 6 00:56:32.975318 update_engine[1986]: I20260306 00:56:32.974782 1986 update_check_scheduler.cc:74] Next update check in 9m18s
Mar 6 00:56:32.988842 (ntainerd)[2014]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Mar 6 00:56:32.998073 systemd[1]: Starting systemd-hostnamed.service - Hostname Service...
Mar 6 00:56:33.031056 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Mar 6 00:56:33.038849 jq[2017]: true
Mar 6 00:56:33.143114 systemd-logind[1985]: Watching system buttons on /dev/input/event0 (Power Button)
Mar 6 00:56:33.143221 systemd-logind[1985]: Watching system buttons on /dev/input/event1 (Sleep Button)
Mar 6 00:56:33.144672 systemd-logind[1985]: New seat seat0.
Mar 6 00:56:33.147314 systemd[1]: Started systemd-logind.service - User Login Management.
Mar 6 00:56:33.174862 systemd[1]: Finished setup-oem.service - Setup OEM.
Mar 6 00:56:33.184190 coreos-metadata[1969]: Mar 06 00:56:33.180 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1
Mar 6 00:56:33.196622 coreos-metadata[1969]: Mar 06 00:56:33.196 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1
Mar 6 00:56:33.199444 coreos-metadata[1969]: Mar 06 00:56:33.199 INFO Fetch successful
Mar 6 00:56:33.199444 coreos-metadata[1969]: Mar 06 00:56:33.199 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1
Mar 6 00:56:33.200910 coreos-metadata[1969]: Mar 06 00:56:33.200 INFO Fetch successful
Mar 6 00:56:33.200910 coreos-metadata[1969]: Mar 06 00:56:33.200 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1
Mar 6 00:56:33.203644 coreos-metadata[1969]: Mar 06 00:56:33.203 INFO Fetch successful
Mar 6 00:56:33.203644 coreos-metadata[1969]: Mar 06 00:56:33.203 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1
Mar 6 00:56:33.206217 coreos-metadata[1969]: Mar 06 00:56:33.205 INFO Fetch successful
Mar 6 00:56:33.206217 coreos-metadata[1969]: Mar 06 00:56:33.205 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1
Mar 6 00:56:33.212546 coreos-metadata[1969]: Mar 06 00:56:33.211 INFO Fetch failed with 404: resource not found
Mar 6 00:56:33.212546 coreos-metadata[1969]: Mar 06 00:56:33.211 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1
Mar 6 00:56:33.215428 coreos-metadata[1969]: Mar 06 00:56:33.215 INFO Fetch successful
Mar 6 00:56:33.215428 coreos-metadata[1969]: Mar 06 00:56:33.215 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1
Mar 6 00:56:33.226786 coreos-metadata[1969]: Mar 06 00:56:33.225 INFO Fetch successful
Mar 6 00:56:33.226786 coreos-metadata[1969]: Mar 06 00:56:33.225 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1
Mar 6 00:56:33.232372 coreos-metadata[1969]: Mar 06 00:56:33.230 INFO Fetch successful
Mar 6 00:56:33.232372 coreos-metadata[1969]: Mar 06 00:56:33.230 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1
Mar 6 00:56:33.232372 coreos-metadata[1969]: Mar 06 00:56:33.231 INFO Fetch successful
Mar 6 00:56:33.232372 coreos-metadata[1969]: Mar 06 00:56:33.231 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1
Mar 6 00:56:33.242038 coreos-metadata[1969]: Mar 06 00:56:33.241 INFO Fetch successful
Mar 6 00:56:33.322424 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 3587067
Mar 6 00:56:33.347545 extend-filesystems[2023]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required
Mar 6 00:56:33.347545 extend-filesystems[2023]: old_desc_blocks = 1, new_desc_blocks = 2
Mar 6 00:56:33.347545 extend-filesystems[2023]: The filesystem on /dev/nvme0n1p9 is now 3587067 (4k) blocks long.
Mar 6 00:56:33.365418 extend-filesystems[1973]: Resized filesystem in /dev/nvme0n1p9
Mar 6 00:56:33.362829 systemd[1]: extend-filesystems.service: Deactivated successfully.
Mar 6 00:56:33.394839 bash[2055]: Updated "/home/core/.ssh/authorized_keys"
Mar 6 00:56:33.363574 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Mar 6 00:56:33.380471 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Mar 6 00:56:33.435817 systemd[1]: Starting sshkeys.service...
Mar 6 00:56:33.507616 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Mar 6 00:56:33.511934 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Mar 6 00:56:33.560824 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Mar 6 00:56:33.571509 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Mar 6 00:56:33.916245 systemd[1]: Started systemd-hostnamed.service - Hostname Service.
Mar 6 00:56:33.954954 dbus-daemon[1970]: [system] Successfully activated service 'org.freedesktop.hostname1'
Mar 6 00:56:33.962693 containerd[2014]: time="2026-03-06T00:56:33Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Mar 6 00:56:33.968083 containerd[2014]: time="2026-03-06T00:56:33.966576374Z" level=info msg="starting containerd" revision=4ac6c20c7bbf8177f29e46bbdc658fec02ffb8ad version=v2.0.7
Mar 6 00:56:33.991552 dbus-daemon[1970]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.5' (uid=0 pid=2027 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0")
Mar 6 00:56:34.012968 systemd[1]: Starting polkit.service - Authorization Manager...
Mar 6 00:56:34.076336 coreos-metadata[2091]: Mar 06 00:56:34.073 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1
Mar 6 00:56:34.080531 coreos-metadata[2091]: Mar 06 00:56:34.080 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1
Mar 6 00:56:34.082053 coreos-metadata[2091]: Mar 06 00:56:34.081 INFO Fetch successful
Mar 6 00:56:34.082053 coreos-metadata[2091]: Mar 06 00:56:34.082 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1
Mar 6 00:56:34.093115 coreos-metadata[2091]: Mar 06 00:56:34.089 INFO Fetch successful
Mar 6 00:56:34.099412 unknown[2091]: wrote ssh authorized keys file for user: core
Mar 6 00:56:34.099995 containerd[2014]: time="2026-03-06T00:56:34.099426395Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="16.932µs"
Mar 6 00:56:34.099995 containerd[2014]: time="2026-03-06T00:56:34.099476327Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Mar 6 00:56:34.099995 containerd[2014]: time="2026-03-06T00:56:34.099515951Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Mar 6 00:56:34.104244 containerd[2014]: time="2026-03-06T00:56:34.101515187Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Mar 6 00:56:34.104244 containerd[2014]: time="2026-03-06T00:56:34.101666579Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Mar 6 00:56:34.104244 containerd[2014]: time="2026-03-06T00:56:34.101833931Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Mar 6 00:56:34.104503 containerd[2014]: time="2026-03-06T00:56:34.104388263Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Mar 6 00:56:34.104503 containerd[2014]: time="2026-03-06T00:56:34.104441903Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Mar 6 00:56:34.107459 containerd[2014]: time="2026-03-06T00:56:34.105135911Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Mar 6 00:56:34.107459 containerd[2014]: time="2026-03-06T00:56:34.105235751Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Mar 6 00:56:34.107459 containerd[2014]: time="2026-03-06T00:56:34.105271799Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Mar 6 00:56:34.107459 containerd[2014]: time="2026-03-06T00:56:34.105294287Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Mar 6 00:56:34.107459 containerd[2014]: time="2026-03-06T00:56:34.105529607Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Mar 6 00:56:34.107459 containerd[2014]: time="2026-03-06T00:56:34.105959447Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Mar 6 00:56:34.107459 containerd[2014]: time="2026-03-06T00:56:34.106029287Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Mar 6 00:56:34.107459 containerd[2014]: time="2026-03-06T00:56:34.106073411Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Mar 6 00:56:34.116490 containerd[2014]: time="2026-03-06T00:56:34.112234391Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Mar 6 00:56:34.116490 containerd[2014]: time="2026-03-06T00:56:34.113336939Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Mar 6 00:56:34.116490 containerd[2014]: time="2026-03-06T00:56:34.113669939Z" level=info msg="metadata content store policy set" policy=shared
Mar 6 00:56:34.130611 containerd[2014]: time="2026-03-06T00:56:34.130533503Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Mar 6 00:56:34.132389 containerd[2014]: time="2026-03-06T00:56:34.132235703Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Mar 6 00:56:34.132389 containerd[2014]: time="2026-03-06T00:56:34.132383135Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Mar 6 00:56:34.132630 containerd[2014]: time="2026-03-06T00:56:34.132426431Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Mar 6 00:56:34.132630 containerd[2014]: time="2026-03-06T00:56:34.132460775Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Mar 6 00:56:34.132630 containerd[2014]: time="2026-03-06T00:56:34.132492263Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Mar 6 00:56:34.132630 containerd[2014]: time="2026-03-06T00:56:34.132527711Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Mar 6 00:56:34.132630 containerd[2014]: time="2026-03-06T00:56:34.132560711Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Mar 6 00:56:34.132630 containerd[2014]: time="2026-03-06T00:56:34.132596603Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Mar 6 00:56:34.132630 containerd[2014]: time="2026-03-06T00:56:34.132626183Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Mar 6 00:56:34.132960 containerd[2014]: time="2026-03-06T00:56:34.132653615Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Mar 6 00:56:34.132960 containerd[2014]: time="2026-03-06T00:56:34.132688727Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Mar 6 00:56:34.133072 containerd[2014]: time="2026-03-06T00:56:34.132965687Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Mar 6 00:56:34.133072 containerd[2014]: time="2026-03-06T00:56:34.133016855Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Mar 6 00:56:34.133072 containerd[2014]: time="2026-03-06T00:56:34.133052999Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Mar 6 00:56:34.133272 containerd[2014]: time="2026-03-06T00:56:34.133098755Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Mar 6 00:56:34.147233 containerd[2014]: time="2026-03-06T00:56:34.133132763Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Mar 6 00:56:34.147233 containerd[2014]: time="2026-03-06T00:56:34.143309807Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Mar 6 00:56:34.147233 containerd[2014]: time="2026-03-06T00:56:34.143368031Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Mar 6 00:56:34.147233 containerd[2014]: time="2026-03-06T00:56:34.143404727Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Mar 6 00:56:34.147233 containerd[2014]: time="2026-03-06T00:56:34.143438819Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
Mar 6 00:56:34.147233 containerd[2014]: time="2026-03-06T00:56:34.143471387Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
Mar 6 00:56:34.147233 containerd[2014]: time="2026-03-06T00:56:34.143503115Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
Mar 6 00:56:34.159666 containerd[2014]: time="2026-03-06T00:56:34.157268015Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
Mar 6 00:56:34.159666 containerd[2014]: time="2026-03-06T00:56:34.157390115Z" level=info msg="Start snapshots syncer"
Mar 6 00:56:34.159666 containerd[2014]: time="2026-03-06T00:56:34.157479839Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
Mar 6 00:56:34.159951 containerd[2014]: time="2026-03-06T00:56:34.158079443Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
Mar 6 00:56:34.159951 containerd[2014]: time="2026-03-06T00:56:34.158252651Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
Mar 6 00:56:34.159951 containerd[2014]: time="2026-03-06T00:56:34.158395403Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
Mar 6 00:56:34.159951 containerd[2014]: time="2026-03-06T00:56:34.158758631Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
Mar 6 00:56:34.159951 containerd[2014]: time="2026-03-06T00:56:34.158859263Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
Mar 6 00:56:34.159951 containerd[2014]: time="2026-03-06T00:56:34.158913011Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
Mar 6 00:56:34.159951 containerd[2014]: time="2026-03-06T00:56:34.158960291Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
Mar 6 00:56:34.159951 containerd[2014]: time="2026-03-06T00:56:34.159005291Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
Mar 6 00:56:34.159951 containerd[2014]: time="2026-03-06T00:56:34.159047087Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
Mar 6 00:56:34.159951 containerd[2014]: time="2026-03-06T00:56:34.159080423Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
Mar 6 00:56:34.183211 containerd[2014]: time="2026-03-06T00:56:34.180983915Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
Mar 6 00:56:34.183211 containerd[2014]: time="2026-03-06T00:56:34.181098683Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
Mar 6 00:56:34.187087 containerd[2014]: time="2026-03-06T00:56:34.186440891Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
Mar 6 00:56:34.187087 containerd[2014]: time="2026-03-06T00:56:34.186564599Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Mar 6 00:56:34.187087 containerd[2014]: time="2026-03-06T00:56:34.186605735Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Mar 6 00:56:34.187087 containerd[2014]: time="2026-03-06T00:56:34.186630995Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Mar 6 00:56:34.187087 containerd[2014]: time="2026-03-06T00:56:34.186657047Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Mar 6 00:56:34.187087 containerd[2014]: time="2026-03-06T00:56:34.186680855Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
Mar 6 00:56:34.187087 containerd[2014]: time="2026-03-06T00:56:34.186705983Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
Mar 6 00:56:34.187087 containerd[2014]: time="2026-03-06T00:56:34.186737111Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
Mar 6 00:56:34.187087 containerd[2014]: time="2026-03-06T00:56:34.186948095Z" level=info msg="runtime interface created"
Mar 6 00:56:34.187087 containerd[2014]: time="2026-03-06T00:56:34.186970163Z" level=info msg="created NRI interface"
Mar 6 00:56:34.187087 containerd[2014]: time="2026-03-06T00:56:34.187002083Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
Mar 6 00:56:34.187087 containerd[2014]: time="2026-03-06T00:56:34.187047983Z" level=info msg="Connect containerd service"
Mar 6 00:56:34.187809 containerd[2014]: time="2026-03-06T00:56:34.187120511Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Mar 6 00:56:34.201535 containerd[2014]: time="2026-03-06T00:56:34.197844155Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Mar 6 00:56:34.319712 systemd-networkd[1895]: eth0: Gained IPv6LL
Mar 6 00:56:34.334757 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Mar 6 00:56:34.340889 systemd[1]: Reached target network-online.target - Network is Online.
Mar 6 00:56:34.350197 update-ssh-keys[2151]: Updated "/home/core/.ssh/authorized_keys"
Mar 6 00:56:34.356439 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent.
Mar 6 00:56:34.366023 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 6 00:56:34.376721 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Mar 6 00:56:34.381454 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Mar 6 00:56:34.424346 systemd[1]: Finished sshkeys.service.
Mar 6 00:56:34.475871 locksmithd[2028]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Mar 6 00:56:34.525341 systemd-coredump[2025]: Process 1975 (ntpd) of user 0 dumped core. Module libnss_usrfiles.so.2 without build-id. Module libgcc_s.so.1 without build-id. Module libc.so.6 without build-id. Module libcrypto.so.3 without build-id. Module libm.so.6 without build-id. Module libcap.so.2 without build-id. Module ntpd without build-id.
Stack trace of thread 1975: #0 0x0000aaaadbb90b5c n/a (ntpd + 0x60b5c) #1 0x0000aaaadbb3fe60 n/a (ntpd + 0xfe60) #2 0x0000aaaadbb40240 n/a (ntpd + 0x10240) #3 0x0000aaaadbb3be14 n/a (ntpd + 0xbe14) #4 0x0000aaaadbb3d3ec n/a (ntpd + 0xd3ec) #5 0x0000aaaadbb45a38 n/a (ntpd + 0x15a38) #6 0x0000aaaadbb3738c n/a (ntpd + 0x738c) #7 0x0000ffff81d92034 n/a (libc.so.6 + 0x22034) #8 0x0000ffff81d92118 __libc_start_main (libc.so.6 + 0x22118) #9 0x0000aaaadbb373f0 n/a (ntpd + 0x73f0) ELF object binary architecture: AARCH64 Mar 6 00:56:34.547685 systemd[1]: systemd-coredump@0-2022-0.service: Deactivated successfully. Mar 6 00:56:34.560658 systemd[1]: ntpd.service: Main process exited, code=dumped, status=11/SEGV Mar 6 00:56:34.561051 systemd[1]: ntpd.service: Failed with result 'core-dump'. Mar 6 00:56:34.574092 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Mar 6 00:56:34.781108 containerd[2014]: time="2026-03-06T00:56:34.780931130Z" level=info msg="Start subscribing containerd event" Mar 6 00:56:34.782948 containerd[2014]: time="2026-03-06T00:56:34.782725814Z" level=info msg="Start recovering state" Mar 6 00:56:34.783112 containerd[2014]: time="2026-03-06T00:56:34.782956586Z" level=info msg="Start event monitor" Mar 6 00:56:34.783112 containerd[2014]: time="2026-03-06T00:56:34.782995022Z" level=info msg="Start cni network conf syncer for default" Mar 6 00:56:34.783112 containerd[2014]: time="2026-03-06T00:56:34.783016982Z" level=info msg="Start streaming server" Mar 6 00:56:34.783112 containerd[2014]: time="2026-03-06T00:56:34.783077258Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Mar 6 00:56:34.783112 containerd[2014]: time="2026-03-06T00:56:34.783099170Z" level=info msg="runtime interface starting up..." Mar 6 00:56:34.783437 containerd[2014]: time="2026-03-06T00:56:34.783116438Z" level=info msg="starting plugins..." 
Mar 6 00:56:34.783437 containerd[2014]: time="2026-03-06T00:56:34.783204410Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Mar 6 00:56:34.783437 containerd[2014]: time="2026-03-06T00:56:34.782667518Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Mar 6 00:56:34.783583 containerd[2014]: time="2026-03-06T00:56:34.783530870Z" level=info msg=serving... address=/run/containerd/containerd.sock Mar 6 00:56:34.783779 systemd[1]: Started containerd.service - containerd container runtime. Mar 6 00:56:34.790625 systemd[1]: ntpd.service: Scheduled restart job, restart counter is at 1. Mar 6 00:56:34.794499 containerd[2014]: time="2026-03-06T00:56:34.793468262Z" level=info msg="containerd successfully booted in 0.835940s" Mar 6 00:56:34.804331 systemd[1]: Started ntpd.service - Network Time Service. Mar 6 00:56:34.836027 amazon-ssm-agent[2169]: Initializing new seelog logger Mar 6 00:56:34.837145 amazon-ssm-agent[2169]: New Seelog Logger Creation Complete Mar 6 00:56:34.837145 amazon-ssm-agent[2169]: 2026/03/06 00:56:34 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Mar 6 00:56:34.837145 amazon-ssm-agent[2169]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Mar 6 00:56:34.837145 amazon-ssm-agent[2169]: 2026/03/06 00:56:34 processing appconfig overrides Mar 6 00:56:34.845322 amazon-ssm-agent[2169]: 2026/03/06 00:56:34 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Mar 6 00:56:34.845322 amazon-ssm-agent[2169]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Mar 6 00:56:34.845322 amazon-ssm-agent[2169]: 2026/03/06 00:56:34 processing appconfig overrides Mar 6 00:56:34.845322 amazon-ssm-agent[2169]: 2026/03/06 00:56:34 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Mar 6 00:56:34.845322 amazon-ssm-agent[2169]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. 
Mar 6 00:56:34.845322 amazon-ssm-agent[2169]: 2026/03/06 00:56:34 processing appconfig overrides Mar 6 00:56:34.848538 amazon-ssm-agent[2169]: 2026-03-06 00:56:34.8442 INFO Proxy environment variables: Mar 6 00:56:34.856519 amazon-ssm-agent[2169]: 2026/03/06 00:56:34 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Mar 6 00:56:34.856665 amazon-ssm-agent[2169]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Mar 6 00:56:34.856938 amazon-ssm-agent[2169]: 2026/03/06 00:56:34 processing appconfig overrides Mar 6 00:56:34.888599 ntpd[2211]: ntpd 4.2.8p18@1.4062-o Thu Mar 5 21:50:04 UTC 2026 (1): Starting Mar 6 00:56:34.889696 ntpd[2211]: 6 Mar 00:56:34 ntpd[2211]: ntpd 4.2.8p18@1.4062-o Thu Mar 5 21:50:04 UTC 2026 (1): Starting Mar 6 00:56:34.889696 ntpd[2211]: 6 Mar 00:56:34 ntpd[2211]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Mar 6 00:56:34.889696 ntpd[2211]: 6 Mar 00:56:34 ntpd[2211]: ---------------------------------------------------- Mar 6 00:56:34.889696 ntpd[2211]: 6 Mar 00:56:34 ntpd[2211]: ntp-4 is maintained by Network Time Foundation, Mar 6 00:56:34.889696 ntpd[2211]: 6 Mar 00:56:34 ntpd[2211]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Mar 6 00:56:34.889696 ntpd[2211]: 6 Mar 00:56:34 ntpd[2211]: corporation. Support and training for ntp-4 are Mar 6 00:56:34.889696 ntpd[2211]: 6 Mar 00:56:34 ntpd[2211]: available at https://www.nwtime.org/support Mar 6 00:56:34.889696 ntpd[2211]: 6 Mar 00:56:34 ntpd[2211]: ---------------------------------------------------- Mar 6 00:56:34.888718 ntpd[2211]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Mar 6 00:56:34.888738 ntpd[2211]: ---------------------------------------------------- Mar 6 00:56:34.888755 ntpd[2211]: ntp-4 is maintained by Network Time Foundation, Mar 6 00:56:34.888771 ntpd[2211]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Mar 6 00:56:34.888789 ntpd[2211]: corporation. 
Support and training for ntp-4 are Mar 6 00:56:34.888805 ntpd[2211]: available at https://www.nwtime.org/support Mar 6 00:56:34.894381 ntpd[2211]: 6 Mar 00:56:34 ntpd[2211]: proto: precision = 0.096 usec (-23) Mar 6 00:56:34.894381 ntpd[2211]: 6 Mar 00:56:34 ntpd[2211]: basedate set to 2026-02-21 Mar 6 00:56:34.894381 ntpd[2211]: 6 Mar 00:56:34 ntpd[2211]: gps base set to 2026-02-22 (week 2407) Mar 6 00:56:34.894381 ntpd[2211]: 6 Mar 00:56:34 ntpd[2211]: Listen and drop on 0 v6wildcard [::]:123 Mar 6 00:56:34.894381 ntpd[2211]: 6 Mar 00:56:34 ntpd[2211]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Mar 6 00:56:34.888822 ntpd[2211]: ---------------------------------------------------- Mar 6 00:56:34.893463 ntpd[2211]: proto: precision = 0.096 usec (-23) Mar 6 00:56:34.893795 ntpd[2211]: basedate set to 2026-02-21 Mar 6 00:56:34.893815 ntpd[2211]: gps base set to 2026-02-22 (week 2407) Mar 6 00:56:34.893959 ntpd[2211]: Listen and drop on 0 v6wildcard [::]:123 Mar 6 00:56:34.894001 ntpd[2211]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Mar 6 00:56:34.895171 ntpd[2211]: Listen normally on 2 lo 127.0.0.1:123 Mar 6 00:56:34.895715 ntpd[2211]: 6 Mar 00:56:34 ntpd[2211]: Listen normally on 2 lo 127.0.0.1:123 Mar 6 00:56:34.895715 ntpd[2211]: 6 Mar 00:56:34 ntpd[2211]: Listen normally on 3 eth0 172.31.16.50:123 Mar 6 00:56:34.895715 ntpd[2211]: 6 Mar 00:56:34 ntpd[2211]: Listen normally on 4 lo [::1]:123 Mar 6 00:56:34.895715 ntpd[2211]: 6 Mar 00:56:34 ntpd[2211]: Listen normally on 5 eth0 [fe80::45e:75ff:fecc:58b%2]:123 Mar 6 00:56:34.895715 ntpd[2211]: 6 Mar 00:56:34 ntpd[2211]: Listening on routing socket on fd #22 for interface updates Mar 6 00:56:34.895235 ntpd[2211]: Listen normally on 3 eth0 172.31.16.50:123 Mar 6 00:56:34.895286 ntpd[2211]: Listen normally on 4 lo [::1]:123 Mar 6 00:56:34.895330 ntpd[2211]: Listen normally on 5 eth0 [fe80::45e:75ff:fecc:58b%2]:123 Mar 6 00:56:34.895374 ntpd[2211]: Listening on routing socket on fd #22 for interface updates Mar 6 
00:56:34.910207 ntpd[2211]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Mar 6 00:56:34.910444 ntpd[2211]: 6 Mar 00:56:34 ntpd[2211]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Mar 6 00:56:34.910540 ntpd[2211]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Mar 6 00:56:34.910649 ntpd[2211]: 6 Mar 00:56:34 ntpd[2211]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Mar 6 00:56:34.949633 amazon-ssm-agent[2169]: 2026-03-06 00:56:34.8442 INFO https_proxy: Mar 6 00:56:34.973966 polkitd[2137]: Started polkitd version 126 Mar 6 00:56:35.000130 polkitd[2137]: Loading rules from directory /etc/polkit-1/rules.d Mar 6 00:56:35.002937 polkitd[2137]: Loading rules from directory /run/polkit-1/rules.d Mar 6 00:56:35.004029 polkitd[2137]: Error opening rules directory: Error opening directory “/run/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Mar 6 00:56:35.007082 polkitd[2137]: Loading rules from directory /usr/local/share/polkit-1/rules.d Mar 6 00:56:35.007389 polkitd[2137]: Error opening rules directory: Error opening directory “/usr/local/share/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Mar 6 00:56:35.007736 polkitd[2137]: Loading rules from directory /usr/share/polkit-1/rules.d Mar 6 00:56:35.012973 polkitd[2137]: Finished loading, compiling and executing 2 rules Mar 6 00:56:35.015561 systemd[1]: Started polkit.service - Authorization Manager. Mar 6 00:56:35.022943 dbus-daemon[1970]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Mar 6 00:56:35.024240 polkitd[2137]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Mar 6 00:56:35.051134 amazon-ssm-agent[2169]: 2026-03-06 00:56:34.8442 INFO http_proxy: Mar 6 00:56:35.083960 systemd-hostnamed[2027]: Hostname set to (transient) Mar 6 00:56:35.084140 systemd-resolved[1897]: System hostname changed to 'ip-172-31-16-50'. 
Mar 6 00:56:35.151052 amazon-ssm-agent[2169]: 2026-03-06 00:56:34.8442 INFO no_proxy: Mar 6 00:56:35.253259 amazon-ssm-agent[2169]: 2026-03-06 00:56:34.8445 INFO Checking if agent identity type OnPrem can be assumed Mar 6 00:56:35.351990 amazon-ssm-agent[2169]: 2026-03-06 00:56:34.8446 INFO Checking if agent identity type EC2 can be assumed Mar 6 00:56:35.432314 tar[1993]: linux-arm64/README.md Mar 6 00:56:35.452334 amazon-ssm-agent[2169]: 2026-03-06 00:56:35.1126 INFO Agent will take identity from EC2 Mar 6 00:56:35.473103 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Mar 6 00:56:35.551650 amazon-ssm-agent[2169]: 2026-03-06 00:56:35.1164 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.3.0.0 Mar 6 00:56:35.651013 amazon-ssm-agent[2169]: 2026-03-06 00:56:35.1164 INFO [amazon-ssm-agent] OS: linux, Arch: arm64 Mar 6 00:56:35.750260 amazon-ssm-agent[2169]: 2026-03-06 00:56:35.1164 INFO [amazon-ssm-agent] Starting Core Agent Mar 6 00:56:35.849704 amazon-ssm-agent[2169]: 2026-03-06 00:56:35.1164 INFO [amazon-ssm-agent] Registrar detected. Attempting registration Mar 6 00:56:35.949875 amazon-ssm-agent[2169]: 2026-03-06 00:56:35.1165 INFO [Registrar] Starting registrar module Mar 6 00:56:36.050178 amazon-ssm-agent[2169]: 2026-03-06 00:56:35.1212 INFO [EC2Identity] Checking disk for registration info Mar 6 00:56:36.151174 amazon-ssm-agent[2169]: 2026-03-06 00:56:35.1213 INFO [EC2Identity] No registration info found for ec2 instance, attempting registration Mar 6 00:56:36.252336 amazon-ssm-agent[2169]: 2026-03-06 00:56:35.1213 INFO [EC2Identity] Generating registration keypair Mar 6 00:56:36.672333 amazon-ssm-agent[2169]: 2026-03-06 00:56:36.6720 INFO [EC2Identity] Checking write access before registering Mar 6 00:56:36.717800 amazon-ssm-agent[2169]: 2026/03/06 00:56:36 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Mar 6 00:56:36.717800 amazon-ssm-agent[2169]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. 
Mar 6 00:56:36.717956 amazon-ssm-agent[2169]: 2026/03/06 00:56:36 processing appconfig overrides Mar 6 00:56:36.749221 amazon-ssm-agent[2169]: 2026-03-06 00:56:36.6728 INFO [EC2Identity] Registering EC2 instance with Systems Manager Mar 6 00:56:36.749221 amazon-ssm-agent[2169]: 2026-03-06 00:56:36.7175 INFO [EC2Identity] EC2 registration was successful. Mar 6 00:56:36.749221 amazon-ssm-agent[2169]: 2026-03-06 00:56:36.7175 INFO [amazon-ssm-agent] Registration attempted. Resuming core agent startup. Mar 6 00:56:36.749221 amazon-ssm-agent[2169]: 2026-03-06 00:56:36.7176 INFO [CredentialRefresher] credentialRefresher has started Mar 6 00:56:36.749221 amazon-ssm-agent[2169]: 2026-03-06 00:56:36.7176 INFO [CredentialRefresher] Starting credentials refresher loop Mar 6 00:56:36.749221 amazon-ssm-agent[2169]: 2026-03-06 00:56:36.7480 INFO EC2RoleProvider Successfully connected with instance profile role credentials Mar 6 00:56:36.749221 amazon-ssm-agent[2169]: 2026-03-06 00:56:36.7483 INFO [CredentialRefresher] Credentials ready Mar 6 00:56:36.759224 sshd_keygen[2010]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Mar 6 00:56:36.774195 amazon-ssm-agent[2169]: 2026-03-06 00:56:36.7485 INFO [CredentialRefresher] Next credential rotation will be in 29.9999909441 minutes Mar 6 00:56:36.806455 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Mar 6 00:56:36.817636 systemd[1]: Starting issuegen.service - Generate /run/issue... Mar 6 00:56:36.826958 systemd[1]: Started sshd@0-172.31.16.50:22-68.220.241.50:51308.service - OpenSSH per-connection server daemon (68.220.241.50:51308). Mar 6 00:56:36.853337 systemd[1]: issuegen.service: Deactivated successfully. Mar 6 00:56:36.854880 systemd[1]: Finished issuegen.service - Generate /run/issue. Mar 6 00:56:36.870237 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Mar 6 00:56:36.923283 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. 
Mar 6 00:56:36.935126 systemd[1]: Started getty@tty1.service - Getty on tty1. Mar 6 00:56:36.944897 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Mar 6 00:56:36.955444 systemd[1]: Reached target getty.target - Login Prompts. Mar 6 00:56:37.238457 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 6 00:56:37.243256 systemd[1]: Reached target multi-user.target - Multi-User System. Mar 6 00:56:37.252310 systemd[1]: Startup finished in 3.773s (kernel) + 9.680s (initrd) + 10.951s (userspace) = 24.405s. Mar 6 00:56:37.255814 (kubelet)[2254]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 6 00:56:37.381937 sshd[2239]: Accepted publickey for core from 68.220.241.50 port 51308 ssh2: RSA SHA256:JA893NYNzIQjt7fMSNMP1D6ZXPb/xbJKtqqTrt+R/vM Mar 6 00:56:37.386413 sshd-session[2239]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 6 00:56:37.403581 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Mar 6 00:56:37.406105 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Mar 6 00:56:37.431357 systemd-logind[1985]: New session 1 of user core. Mar 6 00:56:37.450977 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Mar 6 00:56:37.456978 systemd[1]: Starting user@500.service - User Manager for UID 500... Mar 6 00:56:37.482474 (systemd)[2261]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Mar 6 00:56:37.489517 systemd-logind[1985]: New session c1 of user core. Mar 6 00:56:37.799517 systemd[2261]: Queued start job for default target default.target. 
Mar 6 00:56:37.810083 amazon-ssm-agent[2169]: 2026-03-06 00:56:37.8096 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Mar 6 00:56:37.813924 systemd[2261]: Created slice app.slice - User Application Slice. Mar 6 00:56:37.814241 systemd[2261]: Reached target paths.target - Paths. Mar 6 00:56:37.814342 systemd[2261]: Reached target timers.target - Timers. Mar 6 00:56:37.818493 systemd[2261]: Starting dbus.socket - D-Bus User Message Bus Socket... Mar 6 00:56:37.853975 systemd[2261]: Listening on dbus.socket - D-Bus User Message Bus Socket. Mar 6 00:56:37.854134 systemd[2261]: Reached target sockets.target - Sockets. Mar 6 00:56:37.854735 systemd[2261]: Reached target basic.target - Basic System. Mar 6 00:56:37.855519 systemd[2261]: Reached target default.target - Main User Target. Mar 6 00:56:37.855590 systemd[2261]: Startup finished in 347ms. Mar 6 00:56:37.856053 systemd[1]: Started user@500.service - User Manager for UID 500. Mar 6 00:56:37.863570 systemd[1]: Started session-1.scope - Session 1 of User core. Mar 6 00:56:37.911410 amazon-ssm-agent[2169]: 2026-03-06 00:56:37.8540 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2273) started Mar 6 00:56:38.068250 amazon-ssm-agent[2169]: 2026-03-06 00:56:37.8540 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Mar 6 00:56:38.130582 systemd[1]: Started sshd@1-172.31.16.50:22-68.220.241.50:51312.service - OpenSSH per-connection server daemon (68.220.241.50:51312). 
Mar 6 00:56:38.465234 kubelet[2254]: E0306 00:56:38.464891 2254 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 6 00:56:38.468645 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 6 00:56:38.469144 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 6 00:56:38.470461 systemd[1]: kubelet.service: Consumed 1.421s CPU time, 246.9M memory peak. Mar 6 00:56:38.644684 sshd[2285]: Accepted publickey for core from 68.220.241.50 port 51312 ssh2: RSA SHA256:JA893NYNzIQjt7fMSNMP1D6ZXPb/xbJKtqqTrt+R/vM Mar 6 00:56:38.647125 sshd-session[2285]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 6 00:56:38.657239 systemd-logind[1985]: New session 2 of user core. Mar 6 00:56:38.665438 systemd[1]: Started session-2.scope - Session 2 of User core. Mar 6 00:56:38.893411 sshd[2294]: Connection closed by 68.220.241.50 port 51312 Mar 6 00:56:38.894390 sshd-session[2285]: pam_unix(sshd:session): session closed for user core Mar 6 00:56:38.902874 systemd-logind[1985]: Session 2 logged out. Waiting for processes to exit. Mar 6 00:56:38.904698 systemd[1]: sshd@1-172.31.16.50:22-68.220.241.50:51312.service: Deactivated successfully. Mar 6 00:56:38.909479 systemd[1]: session-2.scope: Deactivated successfully. Mar 6 00:56:38.913470 systemd-logind[1985]: Removed session 2. Mar 6 00:56:38.988258 systemd[1]: Started sshd@2-172.31.16.50:22-68.220.241.50:51318.service - OpenSSH per-connection server daemon (68.220.241.50:51318). 
Mar 6 00:56:39.449834 sshd[2300]: Accepted publickey for core from 68.220.241.50 port 51318 ssh2: RSA SHA256:JA893NYNzIQjt7fMSNMP1D6ZXPb/xbJKtqqTrt+R/vM Mar 6 00:56:39.452119 sshd-session[2300]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 6 00:56:39.462261 systemd-logind[1985]: New session 3 of user core. Mar 6 00:56:39.471462 systemd[1]: Started session-3.scope - Session 3 of User core. Mar 6 00:56:39.685688 sshd[2303]: Connection closed by 68.220.241.50 port 51318 Mar 6 00:56:39.686265 sshd-session[2300]: pam_unix(sshd:session): session closed for user core Mar 6 00:56:39.693214 systemd-logind[1985]: Session 3 logged out. Waiting for processes to exit. Mar 6 00:56:39.693356 systemd[1]: sshd@2-172.31.16.50:22-68.220.241.50:51318.service: Deactivated successfully. Mar 6 00:56:39.697995 systemd[1]: session-3.scope: Deactivated successfully. Mar 6 00:56:39.704613 systemd-logind[1985]: Removed session 3. Mar 6 00:56:39.783202 systemd[1]: Started sshd@3-172.31.16.50:22-68.220.241.50:51326.service - OpenSSH per-connection server daemon (68.220.241.50:51326). Mar 6 00:56:40.257696 sshd[2309]: Accepted publickey for core from 68.220.241.50 port 51326 ssh2: RSA SHA256:JA893NYNzIQjt7fMSNMP1D6ZXPb/xbJKtqqTrt+R/vM Mar 6 00:56:40.260066 sshd-session[2309]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 6 00:56:40.269205 systemd-logind[1985]: New session 4 of user core. Mar 6 00:56:40.275386 systemd[1]: Started session-4.scope - Session 4 of User core. Mar 6 00:56:40.504090 sshd[2312]: Connection closed by 68.220.241.50 port 51326 Mar 6 00:56:40.505118 sshd-session[2309]: pam_unix(sshd:session): session closed for user core Mar 6 00:56:40.512605 systemd-logind[1985]: Session 4 logged out. Waiting for processes to exit. Mar 6 00:56:40.514080 systemd[1]: sshd@3-172.31.16.50:22-68.220.241.50:51326.service: Deactivated successfully. Mar 6 00:56:40.518920 systemd[1]: session-4.scope: Deactivated successfully. 
Mar 6 00:56:40.523245 systemd-logind[1985]: Removed session 4. Mar 6 00:56:40.610659 systemd[1]: Started sshd@4-172.31.16.50:22-68.220.241.50:51334.service - OpenSSH per-connection server daemon (68.220.241.50:51334). Mar 6 00:56:41.079194 sshd[2318]: Accepted publickey for core from 68.220.241.50 port 51334 ssh2: RSA SHA256:JA893NYNzIQjt7fMSNMP1D6ZXPb/xbJKtqqTrt+R/vM Mar 6 00:56:41.081235 sshd-session[2318]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 6 00:56:41.091758 systemd-logind[1985]: New session 5 of user core. Mar 6 00:56:41.098447 systemd[1]: Started session-5.scope - Session 5 of User core. Mar 6 00:56:41.262306 sudo[2322]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Mar 6 00:56:41.262956 sudo[2322]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 6 00:56:41.281074 sudo[2322]: pam_unix(sudo:session): session closed for user root Mar 6 00:56:41.362032 sshd[2321]: Connection closed by 68.220.241.50 port 51334 Mar 6 00:56:41.363478 sshd-session[2318]: pam_unix(sshd:session): session closed for user core Mar 6 00:56:41.371546 systemd[1]: sshd@4-172.31.16.50:22-68.220.241.50:51334.service: Deactivated successfully. Mar 6 00:56:41.376568 systemd[1]: session-5.scope: Deactivated successfully. Mar 6 00:56:41.379309 systemd-logind[1985]: Session 5 logged out. Waiting for processes to exit. Mar 6 00:56:41.382537 systemd-logind[1985]: Removed session 5. Mar 6 00:56:41.461220 systemd[1]: Started sshd@5-172.31.16.50:22-68.220.241.50:51350.service - OpenSSH per-connection server daemon (68.220.241.50:51350). Mar 6 00:56:41.635604 systemd-resolved[1897]: Clock change detected. Flushing caches. 
Mar 6 00:56:41.676337 sshd[2328]: Accepted publickey for core from 68.220.241.50 port 51350 ssh2: RSA SHA256:JA893NYNzIQjt7fMSNMP1D6ZXPb/xbJKtqqTrt+R/vM Mar 6 00:56:41.679110 sshd-session[2328]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 6 00:56:41.687141 systemd-logind[1985]: New session 6 of user core. Mar 6 00:56:41.701101 systemd[1]: Started session-6.scope - Session 6 of User core. Mar 6 00:56:41.847745 sudo[2333]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Mar 6 00:56:41.848506 sudo[2333]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 6 00:56:41.858439 sudo[2333]: pam_unix(sudo:session): session closed for user root Mar 6 00:56:41.869245 sudo[2332]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Mar 6 00:56:41.870416 sudo[2332]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 6 00:56:41.888596 systemd[1]: Starting audit-rules.service - Load Audit Rules... Mar 6 00:56:41.957015 augenrules[2355]: No rules Mar 6 00:56:41.959689 systemd[1]: audit-rules.service: Deactivated successfully. Mar 6 00:56:41.960418 systemd[1]: Finished audit-rules.service - Load Audit Rules. Mar 6 00:56:41.964221 sudo[2332]: pam_unix(sudo:session): session closed for user root Mar 6 00:56:42.044788 sshd[2331]: Connection closed by 68.220.241.50 port 51350 Mar 6 00:56:42.045624 sshd-session[2328]: pam_unix(sshd:session): session closed for user core Mar 6 00:56:42.052880 systemd-logind[1985]: Session 6 logged out. Waiting for processes to exit. Mar 6 00:56:42.054807 systemd[1]: sshd@5-172.31.16.50:22-68.220.241.50:51350.service: Deactivated successfully. Mar 6 00:56:42.058603 systemd[1]: session-6.scope: Deactivated successfully. Mar 6 00:56:42.063278 systemd-logind[1985]: Removed session 6. 
Mar 6 00:56:42.142084 systemd[1]: Started sshd@6-172.31.16.50:22-68.220.241.50:51358.service - OpenSSH per-connection server daemon (68.220.241.50:51358). Mar 6 00:56:42.620232 sshd[2364]: Accepted publickey for core from 68.220.241.50 port 51358 ssh2: RSA SHA256:JA893NYNzIQjt7fMSNMP1D6ZXPb/xbJKtqqTrt+R/vM Mar 6 00:56:42.623772 sshd-session[2364]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 6 00:56:42.637066 systemd-logind[1985]: New session 7 of user core. Mar 6 00:56:42.646142 systemd[1]: Started session-7.scope - Session 7 of User core. Mar 6 00:56:42.787285 sudo[2368]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Mar 6 00:56:42.788070 sudo[2368]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 6 00:56:43.344132 systemd[1]: Starting docker.service - Docker Application Container Engine... Mar 6 00:56:43.367491 (dockerd)[2385]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Mar 6 00:56:43.758935 dockerd[2385]: time="2026-03-06T00:56:43.756920630Z" level=info msg="Starting up" Mar 6 00:56:43.762250 dockerd[2385]: time="2026-03-06T00:56:43.761052686Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Mar 6 00:56:43.783500 dockerd[2385]: time="2026-03-06T00:56:43.783427418Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Mar 6 00:56:43.828779 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport62630907-merged.mount: Deactivated successfully. Mar 6 00:56:43.864986 dockerd[2385]: time="2026-03-06T00:56:43.864925335Z" level=info msg="Loading containers: start." Mar 6 00:56:43.881056 kernel: Initializing XFRM netlink socket Mar 6 00:56:44.247716 (udev-worker)[2408]: Network interface NamePolicy= disabled on kernel command line. 
Mar 6 00:56:44.330032 systemd-networkd[1895]: docker0: Link UP Mar 6 00:56:44.343884 dockerd[2385]: time="2026-03-06T00:56:44.342956653Z" level=info msg="Loading containers: done." Mar 6 00:56:44.375520 dockerd[2385]: time="2026-03-06T00:56:44.375401317Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Mar 6 00:56:44.375738 dockerd[2385]: time="2026-03-06T00:56:44.375572617Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Mar 6 00:56:44.375738 dockerd[2385]: time="2026-03-06T00:56:44.375724993Z" level=info msg="Initializing buildkit" Mar 6 00:56:44.438741 dockerd[2385]: time="2026-03-06T00:56:44.438680569Z" level=info msg="Completed buildkit initialization" Mar 6 00:56:44.455230 dockerd[2385]: time="2026-03-06T00:56:44.455151650Z" level=info msg="Daemon has completed initialization" Mar 6 00:56:44.455521 dockerd[2385]: time="2026-03-06T00:56:44.455244842Z" level=info msg="API listen on /run/docker.sock" Mar 6 00:56:44.455874 systemd[1]: Started docker.service - Docker Application Container Engine. Mar 6 00:56:44.821338 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck181603402-merged.mount: Deactivated successfully. Mar 6 00:56:45.483206 containerd[2014]: time="2026-03-06T00:56:45.482983371Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.5\"" Mar 6 00:56:46.118640 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1870018131.mount: Deactivated successfully. 
Mar 6 00:56:47.561398 containerd[2014]: time="2026-03-06T00:56:47.561290441Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.34.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 00:56:47.564096 containerd[2014]: time="2026-03-06T00:56:47.564014021Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.34.5: active requests=0, bytes read=24583252" Mar 6 00:56:47.567879 containerd[2014]: time="2026-03-06T00:56:47.566969705Z" level=info msg="ImageCreate event name:\"sha256:3299c3f36446e899e7d38f97cdbd93a12ace0457ebca8f6d94ab33d86f9740bd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 00:56:47.577082 containerd[2014]: time="2026-03-06T00:56:47.576978029Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:c548633fcd3b4aad59b70815be4c8be54a0fddaddc3fcffa9371eedb0e96417a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 00:56:47.581483 containerd[2014]: time="2026-03-06T00:56:47.581391557Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.34.5\" with image id \"sha256:3299c3f36446e899e7d38f97cdbd93a12ace0457ebca8f6d94ab33d86f9740bd\", repo tag \"registry.k8s.io/kube-apiserver:v1.34.5\", repo digest \"registry.k8s.io/kube-apiserver@sha256:c548633fcd3b4aad59b70815be4c8be54a0fddaddc3fcffa9371eedb0e96417a\", size \"24579851\" in 2.098340566s" Mar 6 00:56:47.581483 containerd[2014]: time="2026-03-06T00:56:47.581471177Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.5\" returns image reference \"sha256:3299c3f36446e899e7d38f97cdbd93a12ace0457ebca8f6d94ab33d86f9740bd\"" Mar 6 00:56:47.582782 containerd[2014]: time="2026-03-06T00:56:47.582660917Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.5\"" Mar 6 00:56:48.466596 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. 
Mar 6 00:56:48.474176 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 6 00:56:48.975449 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 6 00:56:48.991494 (kubelet)[2668]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 6 00:56:49.118121 kubelet[2668]: E0306 00:56:49.117967 2668 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 6 00:56:49.127727 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 6 00:56:49.128279 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 6 00:56:49.130149 systemd[1]: kubelet.service: Consumed 425ms CPU time, 107.3M memory peak.
Mar 6 00:56:49.280179 containerd[2014]: time="2026-03-06T00:56:49.279726977Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.34.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 6 00:56:49.282790 containerd[2014]: time="2026-03-06T00:56:49.282152801Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.34.5: active requests=0, bytes read=19139641"
Mar 6 00:56:49.285199 containerd[2014]: time="2026-03-06T00:56:49.285132041Z" level=info msg="ImageCreate event name:\"sha256:be20fbe989d9e759458cc8dbbc6e6c4a17e5d6f9db86b2a6cf4e3dfba0fe86e5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 6 00:56:49.292206 containerd[2014]: time="2026-03-06T00:56:49.292134258Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:f0426100c873816560c520d542fa28999a98dad909edd04365f3b0eead790da3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 6 00:56:49.294284 containerd[2014]: time="2026-03-06T00:56:49.294214218Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.34.5\" with image id \"sha256:be20fbe989d9e759458cc8dbbc6e6c4a17e5d6f9db86b2a6cf4e3dfba0fe86e5\", repo tag \"registry.k8s.io/kube-controller-manager:v1.34.5\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:f0426100c873816560c520d542fa28999a98dad909edd04365f3b0eead790da3\", size \"20724045\" in 1.711061373s"
Mar 6 00:56:49.294284 containerd[2014]: time="2026-03-06T00:56:49.294281178Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.5\" returns image reference \"sha256:be20fbe989d9e759458cc8dbbc6e6c4a17e5d6f9db86b2a6cf4e3dfba0fe86e5\""
Mar 6 00:56:49.295842 containerd[2014]: time="2026-03-06T00:56:49.295758798Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.5\""
Mar 6 00:56:50.413399 containerd[2014]: time="2026-03-06T00:56:50.413324095Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.34.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 6 00:56:50.415241 containerd[2014]: time="2026-03-06T00:56:50.415163683Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.34.5: active requests=0, bytes read=14195544"
Mar 6 00:56:50.417888 containerd[2014]: time="2026-03-06T00:56:50.416169871Z" level=info msg="ImageCreate event name:\"sha256:4addcfb720a81f20ddfad093c4a397bb9f3d99b798f610f0ecc83cafd7f0a3bd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 6 00:56:50.422157 containerd[2014]: time="2026-03-06T00:56:50.422088751Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:b67b0d627c8e99ffa362bd4d9a60ca9a6c449e363a5f88d2aa8c224bd84ca51d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 6 00:56:50.424620 containerd[2014]: time="2026-03-06T00:56:50.424535323Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.34.5\" with image id \"sha256:4addcfb720a81f20ddfad093c4a397bb9f3d99b798f610f0ecc83cafd7f0a3bd\", repo tag \"registry.k8s.io/kube-scheduler:v1.34.5\", repo digest \"registry.k8s.io/kube-scheduler@sha256:b67b0d627c8e99ffa362bd4d9a60ca9a6c449e363a5f88d2aa8c224bd84ca51d\", size \"15779966\" in 1.128707129s"
Mar 6 00:56:50.424922 containerd[2014]: time="2026-03-06T00:56:50.424875175Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.5\" returns image reference \"sha256:4addcfb720a81f20ddfad093c4a397bb9f3d99b798f610f0ecc83cafd7f0a3bd\""
Mar 6 00:56:50.425766 containerd[2014]: time="2026-03-06T00:56:50.425685823Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.5\""
Mar 6 00:56:51.991910 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2228310138.mount: Deactivated successfully.
Mar 6 00:56:52.477196 containerd[2014]: time="2026-03-06T00:56:52.476891109Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.34.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 6 00:56:52.480220 containerd[2014]: time="2026-03-06T00:56:52.479377389Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.34.5: active requests=0, bytes read=22697088"
Mar 6 00:56:52.481925 containerd[2014]: time="2026-03-06T00:56:52.481784253Z" level=info msg="ImageCreate event name:\"sha256:8167398c8957d56adceac5bd6436d6ac238c546a5f5c92e450a1c380c0aa7d5d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 6 00:56:52.488390 containerd[2014]: time="2026-03-06T00:56:52.488276109Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:8a22a3bf452d07af3b5a3064b089d2ad6579d5dd3b850386e05cc0f36dc3f4cf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 6 00:56:52.490558 containerd[2014]: time="2026-03-06T00:56:52.490333833Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.34.5\" with image id \"sha256:8167398c8957d56adceac5bd6436d6ac238c546a5f5c92e450a1c380c0aa7d5d\", repo tag \"registry.k8s.io/kube-proxy:v1.34.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:8a22a3bf452d07af3b5a3064b089d2ad6579d5dd3b850386e05cc0f36dc3f4cf\", size \"22696107\" in 2.064290698s"
Mar 6 00:56:52.490558 containerd[2014]: time="2026-03-06T00:56:52.490407381Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.5\" returns image reference \"sha256:8167398c8957d56adceac5bd6436d6ac238c546a5f5c92e450a1c380c0aa7d5d\""
Mar 6 00:56:52.491795 containerd[2014]: time="2026-03-06T00:56:52.491746485Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\""
Mar 6 00:56:53.012131 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1575657652.mount: Deactivated successfully.
Mar 6 00:56:54.272780 containerd[2014]: time="2026-03-06T00:56:54.272670442Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 6 00:56:54.275528 containerd[2014]: time="2026-03-06T00:56:54.274743526Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.1: active requests=0, bytes read=20395406"
Mar 6 00:56:54.276656 containerd[2014]: time="2026-03-06T00:56:54.276560602Z" level=info msg="ImageCreate event name:\"sha256:138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 6 00:56:54.284531 containerd[2014]: time="2026-03-06T00:56:54.284442418Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 6 00:56:54.287264 containerd[2014]: time="2026-03-06T00:56:54.287163802Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.1\" with image id \"sha256:138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\", size \"20392204\" in 1.795184157s"
Mar 6 00:56:54.287264 containerd[2014]: time="2026-03-06T00:56:54.287250922Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\" returns image reference \"sha256:138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc\""
Mar 6 00:56:54.288262 containerd[2014]: time="2026-03-06T00:56:54.288181006Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\""
Mar 6 00:56:54.768913 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1072987320.mount: Deactivated successfully.
Mar 6 00:56:54.777763 containerd[2014]: time="2026-03-06T00:56:54.777686905Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 6 00:56:54.779245 containerd[2014]: time="2026-03-06T00:56:54.779165041Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=268709"
Mar 6 00:56:54.781876 containerd[2014]: time="2026-03-06T00:56:54.780170761Z" level=info msg="ImageCreate event name:\"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 6 00:56:54.784296 containerd[2014]: time="2026-03-06T00:56:54.784233445Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 6 00:56:54.786712 containerd[2014]: time="2026-03-06T00:56:54.786628333Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"267939\" in 498.354675ms"
Mar 6 00:56:54.786712 containerd[2014]: time="2026-03-06T00:56:54.786698977Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\""
Mar 6 00:56:54.787589 containerd[2014]: time="2026-03-06T00:56:54.787501933Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.5-0\""
Mar 6 00:56:55.319968 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3989969101.mount: Deactivated successfully.
Mar 6 00:56:56.726453 containerd[2014]: time="2026-03-06T00:56:56.726380714Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.5-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 6 00:56:56.729500 containerd[2014]: time="2026-03-06T00:56:56.729449930Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.5-0: active requests=0, bytes read=21125515"
Mar 6 00:56:56.730652 containerd[2014]: time="2026-03-06T00:56:56.730580738Z" level=info msg="ImageCreate event name:\"sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 6 00:56:56.737369 containerd[2014]: time="2026-03-06T00:56:56.737277951Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 6 00:56:56.742017 containerd[2014]: time="2026-03-06T00:56:56.741938775Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.5-0\" with image id \"sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42\", repo tag \"registry.k8s.io/etcd:3.6.5-0\", repo digest \"registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534\", size \"21136588\" in 1.95437245s"
Mar 6 00:56:56.742017 containerd[2014]: time="2026-03-06T00:56:56.742004907Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.5-0\" returns image reference \"sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42\""
Mar 6 00:56:59.379005 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Mar 6 00:56:59.384230 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 6 00:56:59.752158 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 6 00:56:59.765738 (kubelet)[2835]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 6 00:56:59.839025 kubelet[2835]: E0306 00:56:59.838951 2835 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 6 00:56:59.843525 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 6 00:56:59.844053 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 6 00:56:59.844882 systemd[1]: kubelet.service: Consumed 314ms CPU time, 107M memory peak.
Mar 6 00:57:03.994251 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 6 00:57:03.995304 systemd[1]: kubelet.service: Consumed 314ms CPU time, 107M memory peak.
Mar 6 00:57:03.999803 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 6 00:57:04.070110 systemd[1]: Reload requested from client PID 2849 ('systemctl') (unit session-7.scope)...
Mar 6 00:57:04.070171 systemd[1]: Reloading...
Mar 6 00:57:04.378870 zram_generator::config[2904]: No configuration found.
Mar 6 00:57:04.896819 systemd[1]: Reloading finished in 825 ms.
Mar 6 00:57:04.929716 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Mar 6 00:57:05.006060 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Mar 6 00:57:05.006257 systemd[1]: kubelet.service: Failed with result 'signal'.
Mar 6 00:57:05.007986 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 6 00:57:05.008082 systemd[1]: kubelet.service: Consumed 267ms CPU time, 94.9M memory peak.
Mar 6 00:57:05.013150 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 6 00:57:05.553670 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 6 00:57:05.569935 (kubelet)[2961]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Mar 6 00:57:05.661914 kubelet[2961]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Mar 6 00:57:05.661914 kubelet[2961]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 6 00:57:05.663239 kubelet[2961]: I0306 00:57:05.663111 2961 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Mar 6 00:57:06.674655 kubelet[2961]: I0306 00:57:06.674595 2961 server.go:529] "Kubelet version" kubeletVersion="v1.34.4"
Mar 6 00:57:06.675357 kubelet[2961]: I0306 00:57:06.675091 2961 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Mar 6 00:57:06.675357 kubelet[2961]: I0306 00:57:06.675154 2961 watchdog_linux.go:95] "Systemd watchdog is not enabled"
Mar 6 00:57:06.675357 kubelet[2961]: I0306 00:57:06.675169 2961 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Mar 6 00:57:06.676820 kubelet[2961]: I0306 00:57:06.676298 2961 server.go:956] "Client rotation is on, will bootstrap in background"
Mar 6 00:57:06.688313 kubelet[2961]: E0306 00:57:06.688257 2961 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://172.31.16.50:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.16.50:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Mar 6 00:57:06.690279 kubelet[2961]: I0306 00:57:06.690228 2961 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Mar 6 00:57:06.698341 kubelet[2961]: I0306 00:57:06.698286 2961 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Mar 6 00:57:06.705182 kubelet[2961]: I0306 00:57:06.705102 2961 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /"
Mar 6 00:57:06.705918 kubelet[2961]: I0306 00:57:06.705810 2961 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Mar 6 00:57:06.706214 kubelet[2961]: I0306 00:57:06.705899 2961 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-16-50","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Mar 6 00:57:06.706448 kubelet[2961]: I0306 00:57:06.706218 2961 topology_manager.go:138] "Creating topology manager with none policy"
Mar 6 00:57:06.706448 kubelet[2961]: I0306 00:57:06.706242 2961 container_manager_linux.go:306] "Creating device plugin manager"
Mar 6 00:57:06.706448 kubelet[2961]: I0306 00:57:06.706436 2961 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager"
Mar 6 00:57:06.709326 kubelet[2961]: I0306 00:57:06.709264 2961 state_mem.go:36] "Initialized new in-memory state store"
Mar 6 00:57:06.714696 kubelet[2961]: I0306 00:57:06.712476 2961 kubelet.go:475] "Attempting to sync node with API server"
Mar 6 00:57:06.714696 kubelet[2961]: I0306 00:57:06.712527 2961 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests"
Mar 6 00:57:06.714696 kubelet[2961]: I0306 00:57:06.712585 2961 kubelet.go:387] "Adding apiserver pod source"
Mar 6 00:57:06.714696 kubelet[2961]: I0306 00:57:06.712615 2961 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Mar 6 00:57:06.714696 kubelet[2961]: E0306 00:57:06.712711 2961 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.31.16.50:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-16-50&limit=500&resourceVersion=0\": dial tcp 172.31.16.50:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Mar 6 00:57:06.715056 kubelet[2961]: E0306 00:57:06.714919 2961 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.31.16.50:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.16.50:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Mar 6 00:57:06.715703 kubelet[2961]: I0306 00:57:06.715652 2961 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1"
Mar 6 00:57:06.716730 kubelet[2961]: I0306 00:57:06.716662 2961 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Mar 6 00:57:06.716730 kubelet[2961]: I0306 00:57:06.716736 2961 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled"
Mar 6 00:57:06.716968 kubelet[2961]: W0306 00:57:06.716898 2961 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Mar 6 00:57:06.722763 kubelet[2961]: I0306 00:57:06.722709 2961 server.go:1262] "Started kubelet"
Mar 6 00:57:06.728679 kubelet[2961]: I0306 00:57:06.728616 2961 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Mar 6 00:57:06.729310 kubelet[2961]: I0306 00:57:06.729212 2961 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Mar 6 00:57:06.729467 kubelet[2961]: I0306 00:57:06.729321 2961 server_v1.go:49] "podresources" method="list" useActivePods=true
Mar 6 00:57:06.730017 kubelet[2961]: I0306 00:57:06.729956 2961 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Mar 6 00:57:06.731196 kubelet[2961]: I0306 00:57:06.731151 2961 server.go:310] "Adding debug handlers to kubelet server"
Mar 6 00:57:06.739123 kubelet[2961]: E0306 00:57:06.736617 2961 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.16.50:6443/api/v1/namespaces/default/events\": dial tcp 172.31.16.50:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-16-50.189a1a941a504ea0 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-16-50,UID:ip-172-31-16-50,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-16-50,},FirstTimestamp:2026-03-06 00:57:06.722664096 +0000 UTC m=+1.144538755,LastTimestamp:2026-03-06 00:57:06.722664096 +0000 UTC m=+1.144538755,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-16-50,}"
Mar 6 00:57:06.742545 kubelet[2961]: I0306 00:57:06.742503 2961 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Mar 6 00:57:06.748788 kubelet[2961]: I0306 00:57:06.748692 2961 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Mar 6 00:57:06.754996 kubelet[2961]: I0306 00:57:06.754950 2961 volume_manager.go:313] "Starting Kubelet Volume Manager"
Mar 6 00:57:06.756116 kubelet[2961]: E0306 00:57:06.756073 2961 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ip-172-31-16-50\" not found"
Mar 6 00:57:06.756455 kubelet[2961]: I0306 00:57:06.756421 2961 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Mar 6 00:57:06.756688 kubelet[2961]: I0306 00:57:06.756663 2961 reconciler.go:29] "Reconciler: start to sync state"
Mar 6 00:57:06.757551 kubelet[2961]: E0306 00:57:06.757485 2961 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.16.50:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-16-50?timeout=10s\": dial tcp 172.31.16.50:6443: connect: connection refused" interval="200ms"
Mar 6 00:57:06.759723 kubelet[2961]: I0306 00:57:06.759664 2961 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Mar 6 00:57:06.760954 kubelet[2961]: E0306 00:57:06.760896 2961 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.31.16.50:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.16.50:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Mar 6 00:57:06.763945 kubelet[2961]: I0306 00:57:06.763458 2961 factory.go:223] Registration of the containerd container factory successfully
Mar 6 00:57:06.763945 kubelet[2961]: I0306 00:57:06.763501 2961 factory.go:223] Registration of the systemd container factory successfully
Mar 6 00:57:06.767271 kubelet[2961]: E0306 00:57:06.767194 2961 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Mar 6 00:57:06.800035 kubelet[2961]: I0306 00:57:06.799964 2961 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4"
Mar 6 00:57:06.803251 kubelet[2961]: I0306 00:57:06.803173 2961 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6"
Mar 6 00:57:06.803251 kubelet[2961]: I0306 00:57:06.803244 2961 status_manager.go:244] "Starting to sync pod status with apiserver"
Mar 6 00:57:06.803442 kubelet[2961]: I0306 00:57:06.803310 2961 kubelet.go:2428] "Starting kubelet main sync loop"
Mar 6 00:57:06.803442 kubelet[2961]: E0306 00:57:06.803409 2961 kubelet.go:2452] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Mar 6 00:57:06.806435 kubelet[2961]: E0306 00:57:06.806251 2961 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.31.16.50:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.16.50:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Mar 6 00:57:06.818758 kubelet[2961]: I0306 00:57:06.818676 2961 cpu_manager.go:221] "Starting CPU manager" policy="none"
Mar 6 00:57:06.818758 kubelet[2961]: I0306 00:57:06.818720 2961 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Mar 6 00:57:06.818758 kubelet[2961]: I0306 00:57:06.818757 2961 state_mem.go:36] "Initialized new in-memory state store"
Mar 6 00:57:06.822623 kubelet[2961]: I0306 00:57:06.822576 2961 policy_none.go:49] "None policy: Start"
Mar 6 00:57:06.822749 kubelet[2961]: I0306 00:57:06.822646 2961 memory_manager.go:187] "Starting memorymanager" policy="None"
Mar 6 00:57:06.822749 kubelet[2961]: I0306 00:57:06.822676 2961 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint"
Mar 6 00:57:06.824543 kubelet[2961]: I0306 00:57:06.824496 2961 policy_none.go:47] "Start"
Mar 6 00:57:06.834448 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Mar 6 00:57:06.858519 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Mar 6 00:57:06.858933 kubelet[2961]: E0306 00:57:06.858871 2961 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ip-172-31-16-50\" not found"
Mar 6 00:57:06.867407 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Mar 6 00:57:06.880126 kubelet[2961]: E0306 00:57:06.880071 2961 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Mar 6 00:57:06.880472 kubelet[2961]: I0306 00:57:06.880385 2961 eviction_manager.go:189] "Eviction manager: starting control loop"
Mar 6 00:57:06.880472 kubelet[2961]: I0306 00:57:06.880416 2961 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Mar 6 00:57:06.882790 kubelet[2961]: I0306 00:57:06.882737 2961 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Mar 6 00:57:06.885862 kubelet[2961]: E0306 00:57:06.885779 2961 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Mar 6 00:57:06.886383 kubelet[2961]: E0306 00:57:06.886339 2961 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-16-50\" not found"
Mar 6 00:57:06.927404 systemd[1]: Created slice kubepods-burstable-poda5981216b6b03901f6948e0b722ffec4.slice - libcontainer container kubepods-burstable-poda5981216b6b03901f6948e0b722ffec4.slice.
Mar 6 00:57:06.940918 kubelet[2961]: E0306 00:57:06.940542 2961 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-16-50\" not found" node="ip-172-31-16-50"
Mar 6 00:57:06.945457 systemd[1]: Created slice kubepods-burstable-pod428ca519ecd4f3ccc351bb3c188e8dc1.slice - libcontainer container kubepods-burstable-pod428ca519ecd4f3ccc351bb3c188e8dc1.slice.
Mar 6 00:57:06.951776 kubelet[2961]: E0306 00:57:06.951420 2961 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-16-50\" not found" node="ip-172-31-16-50"
Mar 6 00:57:06.957771 systemd[1]: Created slice kubepods-burstable-podfa4bbe22e36e5f14bbc1dd1e8657f1d5.slice - libcontainer container kubepods-burstable-podfa4bbe22e36e5f14bbc1dd1e8657f1d5.slice.
Mar 6 00:57:06.959356 kubelet[2961]: E0306 00:57:06.958983 2961 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.16.50:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-16-50?timeout=10s\": dial tcp 172.31.16.50:6443: connect: connection refused" interval="400ms"
Mar 6 00:57:06.962106 kubelet[2961]: E0306 00:57:06.962045 2961 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-16-50\" not found" node="ip-172-31-16-50"
Mar 6 00:57:06.988103 kubelet[2961]: I0306 00:57:06.988036 2961 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-16-50"
Mar 6 00:57:06.988886 kubelet[2961]: E0306 00:57:06.988805 2961 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.16.50:6443/api/v1/nodes\": dial tcp 172.31.16.50:6443: connect: connection refused" node="ip-172-31-16-50"
Mar 6 00:57:07.058274 kubelet[2961]: I0306 00:57:07.058219 2961 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a5981216b6b03901f6948e0b722ffec4-k8s-certs\") pod \"kube-apiserver-ip-172-31-16-50\" (UID: \"a5981216b6b03901f6948e0b722ffec4\") " pod="kube-system/kube-apiserver-ip-172-31-16-50"
Mar 6 00:57:07.058402 kubelet[2961]: I0306 00:57:07.058282 2961 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/428ca519ecd4f3ccc351bb3c188e8dc1-ca-certs\") pod \"kube-controller-manager-ip-172-31-16-50\" (UID: \"428ca519ecd4f3ccc351bb3c188e8dc1\") " pod="kube-system/kube-controller-manager-ip-172-31-16-50"
Mar 6 00:57:07.058402 kubelet[2961]: I0306 00:57:07.058321 2961 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/428ca519ecd4f3ccc351bb3c188e8dc1-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-16-50\" (UID: \"428ca519ecd4f3ccc351bb3c188e8dc1\") " pod="kube-system/kube-controller-manager-ip-172-31-16-50"
Mar 6 00:57:07.058402 kubelet[2961]: I0306 00:57:07.058357 2961 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/428ca519ecd4f3ccc351bb3c188e8dc1-k8s-certs\") pod \"kube-controller-manager-ip-172-31-16-50\" (UID: \"428ca519ecd4f3ccc351bb3c188e8dc1\") " pod="kube-system/kube-controller-manager-ip-172-31-16-50"
Mar 6 00:57:07.058402 kubelet[2961]: I0306 00:57:07.058391 2961 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/428ca519ecd4f3ccc351bb3c188e8dc1-kubeconfig\") pod \"kube-controller-manager-ip-172-31-16-50\" (UID: \"428ca519ecd4f3ccc351bb3c188e8dc1\") " pod="kube-system/kube-controller-manager-ip-172-31-16-50"
Mar 6 00:57:07.058629 kubelet[2961]: I0306 00:57:07.058426 2961 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/428ca519ecd4f3ccc351bb3c188e8dc1-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-16-50\" (UID: \"428ca519ecd4f3ccc351bb3c188e8dc1\") " pod="kube-system/kube-controller-manager-ip-172-31-16-50"
Mar 6 00:57:07.058629 kubelet[2961]: I0306 00:57:07.058462 2961 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fa4bbe22e36e5f14bbc1dd1e8657f1d5-kubeconfig\") pod \"kube-scheduler-ip-172-31-16-50\" (UID: \"fa4bbe22e36e5f14bbc1dd1e8657f1d5\") " pod="kube-system/kube-scheduler-ip-172-31-16-50"
Mar 6 00:57:07.058629 kubelet[2961]: I0306 00:57:07.058495 2961 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a5981216b6b03901f6948e0b722ffec4-ca-certs\") pod \"kube-apiserver-ip-172-31-16-50\" (UID: \"a5981216b6b03901f6948e0b722ffec4\") " pod="kube-system/kube-apiserver-ip-172-31-16-50"
Mar 6 00:57:07.058629 kubelet[2961]: I0306 00:57:07.058550 2961 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a5981216b6b03901f6948e0b722ffec4-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-16-50\" (UID: \"a5981216b6b03901f6948e0b722ffec4\") " pod="kube-system/kube-apiserver-ip-172-31-16-50"
Mar 6 00:57:07.193624 kubelet[2961]: I0306 00:57:07.193515 2961 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-16-50"
Mar 6 00:57:07.194328 kubelet[2961]: E0306 00:57:07.194248 2961 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.16.50:6443/api/v1/nodes\": dial tcp 172.31.16.50:6443: connect: connection refused" node="ip-172-31-16-50"
Mar 6 00:57:07.245786 containerd[2014]: time="2026-03-06T00:57:07.245628671Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-16-50,Uid:a5981216b6b03901f6948e0b722ffec4,Namespace:kube-system,Attempt:0,}"
Mar 6 00:57:07.255382 containerd[2014]: time="2026-03-06T00:57:07.255302603Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-16-50,Uid:428ca519ecd4f3ccc351bb3c188e8dc1,Namespace:kube-system,Attempt:0,}"
Mar 6 00:57:07.265697 containerd[2014]: time="2026-03-06T00:57:07.265631507Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-16-50,Uid:fa4bbe22e36e5f14bbc1dd1e8657f1d5,Namespace:kube-system,Attempt:0,}"
Mar 6 00:57:07.360386 kubelet[2961]: E0306 00:57:07.360310 2961 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.16.50:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-16-50?timeout=10s\": dial tcp 172.31.16.50:6443: connect: connection refused" interval="800ms"
Mar 6 00:57:07.590206 kubelet[2961]: E0306 00:57:07.590006 2961 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.31.16.50:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.16.50:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Mar 6 00:57:07.598643 kubelet[2961]: I0306 00:57:07.597820 2961 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-16-50"
Mar 6 00:57:07.599112 kubelet[2961]: E0306 00:57:07.599054 2961 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.16.50:6443/api/v1/nodes\": dial tcp 172.31.16.50:6443: connect: connection refused" node="ip-172-31-16-50"
Mar 6 00:57:07.750400 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3344883890.mount: Deactivated successfully.
Mar 6 00:57:07.757579 kubelet[2961]: E0306 00:57:07.757520 2961 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.31.16.50:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.16.50:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Mar 6 00:57:07.765126 containerd[2014]: time="2026-03-06T00:57:07.765025669Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 6 00:57:07.766532 containerd[2014]: time="2026-03-06T00:57:07.766450729Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268703"
Mar 6 00:57:07.768583 containerd[2014]: time="2026-03-06T00:57:07.768409345Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 6 00:57:07.772896 containerd[2014]: time="2026-03-06T00:57:07.772212325Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 6 00:57:07.776340 containerd[2014]: time="2026-03-06T00:57:07.775135165Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 6 00:57:07.776340 containerd[2014]: time="2026-03-06T00:57:07.775653049Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0"
Mar 6 00:57:07.776340 containerd[2014]: time="2026-03-06T00:57:07.776113681Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0"
Mar 6 00:57:07.777488 containerd[2014]: time="2026-03-06T00:57:07.777416269Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 6 00:57:07.779780 kubelet[2961]: E0306 00:57:07.779717 2961 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.31.16.50:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-16-50&limit=500&resourceVersion=0\": dial tcp 172.31.16.50:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Mar 6 00:57:07.782763 containerd[2014]: time="2026-03-06T00:57:07.782694649Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 534.806366ms"
Mar 6 00:57:07.787465 containerd[2014]: time="2026-03-06T00:57:07.787385725Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 530.296274ms"
Mar 6 00:57:07.789086 containerd[2014]: time="2026-03-06T00:57:07.789026653Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 521.868794ms"
Mar 6 00:57:07.870503 containerd[2014]: time="2026-03-06T00:57:07.870231110Z" level=info msg="connecting to shim 3f1100b2f10123e8bebfb6a469fe83ac3d0ad704f3a56b6972aa578a30cdfc1c" address="unix:///run/containerd/s/75af760a16a0af2047c4fc3dabc72bc65ac55c152ed66f28c3c55ca8693fca9e" namespace=k8s.io protocol=ttrpc version=3
Mar 6 00:57:07.874977 kubelet[2961]: E0306 00:57:07.874897 2961 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.31.16.50:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.16.50:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Mar 6 00:57:07.891614 containerd[2014]: time="2026-03-06T00:57:07.891531254Z" level=info msg="connecting to shim 922d0cf12a1d363beb14d8c3d602df7ac5029879af0f5cb8de4220b05bf17f07" address="unix:///run/containerd/s/9a29b465940e3df1447a6ba3c9e437be9cb113a62df129012728e0e57eb4e1e6" namespace=k8s.io protocol=ttrpc version=3
Mar 6 00:57:07.893022 containerd[2014]: time="2026-03-06T00:57:07.892934330Z" level=info msg="connecting to shim 9fecff5dbf36e072e78dcac29c7828bf6cb58de3726db3531d66cd916c9a6736" address="unix:///run/containerd/s/23b8c25dcfb20e572b0e0185e8c584e909b26d7395fef53f00d20a0435c78e21" namespace=k8s.io protocol=ttrpc version=3
Mar 6 00:57:07.955342 systemd[1]: Started cri-containerd-3f1100b2f10123e8bebfb6a469fe83ac3d0ad704f3a56b6972aa578a30cdfc1c.scope - libcontainer container 3f1100b2f10123e8bebfb6a469fe83ac3d0ad704f3a56b6972aa578a30cdfc1c.
Mar 6 00:57:07.981389 systemd[1]: Started cri-containerd-922d0cf12a1d363beb14d8c3d602df7ac5029879af0f5cb8de4220b05bf17f07.scope - libcontainer container 922d0cf12a1d363beb14d8c3d602df7ac5029879af0f5cb8de4220b05bf17f07.
Mar 6 00:57:07.994463 systemd[1]: Started cri-containerd-9fecff5dbf36e072e78dcac29c7828bf6cb58de3726db3531d66cd916c9a6736.scope - libcontainer container 9fecff5dbf36e072e78dcac29c7828bf6cb58de3726db3531d66cd916c9a6736.
Mar 6 00:57:08.121944 containerd[2014]: time="2026-03-06T00:57:08.121730351Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-16-50,Uid:fa4bbe22e36e5f14bbc1dd1e8657f1d5,Namespace:kube-system,Attempt:0,} returns sandbox id \"9fecff5dbf36e072e78dcac29c7828bf6cb58de3726db3531d66cd916c9a6736\""
Mar 6 00:57:08.138884 containerd[2014]: time="2026-03-06T00:57:08.138137339Z" level=info msg="CreateContainer within sandbox \"9fecff5dbf36e072e78dcac29c7828bf6cb58de3726db3531d66cd916c9a6736\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Mar 6 00:57:08.148433 containerd[2014]: time="2026-03-06T00:57:08.148344311Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-16-50,Uid:a5981216b6b03901f6948e0b722ffec4,Namespace:kube-system,Attempt:0,} returns sandbox id \"922d0cf12a1d363beb14d8c3d602df7ac5029879af0f5cb8de4220b05bf17f07\""
Mar 6 00:57:08.162059 containerd[2014]: time="2026-03-06T00:57:08.161989199Z" level=info msg="CreateContainer within sandbox \"922d0cf12a1d363beb14d8c3d602df7ac5029879af0f5cb8de4220b05bf17f07\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Mar 6 00:57:08.163812 kubelet[2961]: E0306 00:57:08.163701 2961 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.16.50:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-16-50?timeout=10s\": dial tcp 172.31.16.50:6443: connect: connection refused" interval="1.6s"
Mar 6 00:57:08.165558 containerd[2014]: time="2026-03-06T00:57:08.165476255Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-16-50,Uid:428ca519ecd4f3ccc351bb3c188e8dc1,Namespace:kube-system,Attempt:0,} returns sandbox id \"3f1100b2f10123e8bebfb6a469fe83ac3d0ad704f3a56b6972aa578a30cdfc1c\""
Mar 6 00:57:08.166163 containerd[2014]: time="2026-03-06T00:57:08.165951239Z" level=info msg="Container 4b850dc77bafa8a55f724ec3ad2af51a0cc0461e835c67a2691af16798cc7ea7: CDI devices from CRI Config.CDIDevices: []"
Mar 6 00:57:08.175912 containerd[2014]: time="2026-03-06T00:57:08.175625363Z" level=info msg="CreateContainer within sandbox \"3f1100b2f10123e8bebfb6a469fe83ac3d0ad704f3a56b6972aa578a30cdfc1c\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Mar 6 00:57:08.181869 containerd[2014]: time="2026-03-06T00:57:08.181765067Z" level=info msg="Container bffcb9745d3430afe85fb794aed7f421b8b514028fbc26023aa9eec3c522b0ac: CDI devices from CRI Config.CDIDevices: []"
Mar 6 00:57:08.184027 containerd[2014]: time="2026-03-06T00:57:08.183953747Z" level=info msg="CreateContainer within sandbox \"9fecff5dbf36e072e78dcac29c7828bf6cb58de3726db3531d66cd916c9a6736\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"4b850dc77bafa8a55f724ec3ad2af51a0cc0461e835c67a2691af16798cc7ea7\""
Mar 6 00:57:08.187872 containerd[2014]: time="2026-03-06T00:57:08.186562391Z" level=info msg="StartContainer for \"4b850dc77bafa8a55f724ec3ad2af51a0cc0461e835c67a2691af16798cc7ea7\""
Mar 6 00:57:08.190494 containerd[2014]: time="2026-03-06T00:57:08.190403111Z" level=info msg="connecting to shim 4b850dc77bafa8a55f724ec3ad2af51a0cc0461e835c67a2691af16798cc7ea7" address="unix:///run/containerd/s/23b8c25dcfb20e572b0e0185e8c584e909b26d7395fef53f00d20a0435c78e21" protocol=ttrpc version=3
Mar 6 00:57:08.197330 containerd[2014]: time="2026-03-06T00:57:08.197249867Z" level=info msg="CreateContainer within sandbox \"922d0cf12a1d363beb14d8c3d602df7ac5029879af0f5cb8de4220b05bf17f07\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"bffcb9745d3430afe85fb794aed7f421b8b514028fbc26023aa9eec3c522b0ac\""
Mar 6 00:57:08.198815 containerd[2014]: time="2026-03-06T00:57:08.198764207Z" level=info msg="StartContainer for \"bffcb9745d3430afe85fb794aed7f421b8b514028fbc26023aa9eec3c522b0ac\""
Mar 6 00:57:08.200143 containerd[2014]: time="2026-03-06T00:57:08.200090063Z" level=info msg="Container 0224c2d35352c72362e10297d8c1f70905ee68c7d40d3590bc8ed6dfc136e78f: CDI devices from CRI Config.CDIDevices: []"
Mar 6 00:57:08.205587 containerd[2014]: time="2026-03-06T00:57:08.205527023Z" level=info msg="connecting to shim bffcb9745d3430afe85fb794aed7f421b8b514028fbc26023aa9eec3c522b0ac" address="unix:///run/containerd/s/9a29b465940e3df1447a6ba3c9e437be9cb113a62df129012728e0e57eb4e1e6" protocol=ttrpc version=3
Mar 6 00:57:08.214245 containerd[2014]: time="2026-03-06T00:57:08.214184304Z" level=info msg="CreateContainer within sandbox \"3f1100b2f10123e8bebfb6a469fe83ac3d0ad704f3a56b6972aa578a30cdfc1c\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"0224c2d35352c72362e10297d8c1f70905ee68c7d40d3590bc8ed6dfc136e78f\""
Mar 6 00:57:08.215489 containerd[2014]: time="2026-03-06T00:57:08.215439768Z" level=info msg="StartContainer for \"0224c2d35352c72362e10297d8c1f70905ee68c7d40d3590bc8ed6dfc136e78f\""
Mar 6 00:57:08.218557 containerd[2014]: time="2026-03-06T00:57:08.218500764Z" level=info msg="connecting to shim 0224c2d35352c72362e10297d8c1f70905ee68c7d40d3590bc8ed6dfc136e78f" address="unix:///run/containerd/s/75af760a16a0af2047c4fc3dabc72bc65ac55c152ed66f28c3c55ca8693fca9e" protocol=ttrpc version=3
Mar 6 00:57:08.248216 systemd[1]: Started cri-containerd-4b850dc77bafa8a55f724ec3ad2af51a0cc0461e835c67a2691af16798cc7ea7.scope - libcontainer container 4b850dc77bafa8a55f724ec3ad2af51a0cc0461e835c67a2691af16798cc7ea7.
Mar 6 00:57:08.276148 systemd[1]: Started cri-containerd-bffcb9745d3430afe85fb794aed7f421b8b514028fbc26023aa9eec3c522b0ac.scope - libcontainer container bffcb9745d3430afe85fb794aed7f421b8b514028fbc26023aa9eec3c522b0ac.
Mar 6 00:57:08.289498 systemd[1]: Started cri-containerd-0224c2d35352c72362e10297d8c1f70905ee68c7d40d3590bc8ed6dfc136e78f.scope - libcontainer container 0224c2d35352c72362e10297d8c1f70905ee68c7d40d3590bc8ed6dfc136e78f.
Mar 6 00:57:08.404641 kubelet[2961]: I0306 00:57:08.403145 2961 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-16-50"
Mar 6 00:57:08.404641 kubelet[2961]: E0306 00:57:08.403704 2961 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.16.50:6443/api/v1/nodes\": dial tcp 172.31.16.50:6443: connect: connection refused" node="ip-172-31-16-50"
Mar 6 00:57:08.446756 containerd[2014]: time="2026-03-06T00:57:08.445465513Z" level=info msg="StartContainer for \"4b850dc77bafa8a55f724ec3ad2af51a0cc0461e835c67a2691af16798cc7ea7\" returns successfully"
Mar 6 00:57:08.451934 containerd[2014]: time="2026-03-06T00:57:08.450789985Z" level=info msg="StartContainer for \"bffcb9745d3430afe85fb794aed7f421b8b514028fbc26023aa9eec3c522b0ac\" returns successfully"
Mar 6 00:57:08.473037 containerd[2014]: time="2026-03-06T00:57:08.472954681Z" level=info msg="StartContainer for \"0224c2d35352c72362e10297d8c1f70905ee68c7d40d3590bc8ed6dfc136e78f\" returns successfully"
Mar 6 00:57:08.693473 kubelet[2961]: E0306 00:57:08.693400 2961 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://172.31.16.50:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.16.50:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Mar 6 00:57:08.834407 kubelet[2961]: E0306 00:57:08.834343 2961 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-16-50\" not found" node="ip-172-31-16-50"
Mar 6 00:57:08.839462 kubelet[2961]: E0306 00:57:08.839394 2961 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-16-50\" not found" node="ip-172-31-16-50"
Mar 6 00:57:08.850616 kubelet[2961]: E0306 00:57:08.850325 2961 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-16-50\" not found" node="ip-172-31-16-50"
Mar 6 00:57:09.854202 kubelet[2961]: E0306 00:57:09.853782 2961 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-16-50\" not found" node="ip-172-31-16-50"
Mar 6 00:57:09.854801 kubelet[2961]: E0306 00:57:09.854722 2961 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-16-50\" not found" node="ip-172-31-16-50"
Mar 6 00:57:10.009896 kubelet[2961]: I0306 00:57:10.009329 2961 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-16-50"
Mar 6 00:57:10.855607 kubelet[2961]: E0306 00:57:10.855560 2961 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-16-50\" not found" node="ip-172-31-16-50"
Mar 6 00:57:11.607891 kubelet[2961]: E0306 00:57:11.607844 2961 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-16-50\" not found" node="ip-172-31-16-50"
Mar 6 00:57:11.967442 kubelet[2961]: E0306 00:57:11.967390 2961 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-16-50\" not found" node="ip-172-31-16-50"
Mar 6 00:57:14.524875 kubelet[2961]: E0306 00:57:14.524540 2961 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-16-50\" not found" node="ip-172-31-16-50"
Mar 6 00:57:14.656867 kubelet[2961]: E0306 00:57:14.656704 2961 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ip-172-31-16-50.189a1a941a504ea0 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-16-50,UID:ip-172-31-16-50,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-16-50,},FirstTimestamp:2026-03-06 00:57:06.722664096 +0000 UTC m=+1.144538755,LastTimestamp:2026-03-06 00:57:06.722664096 +0000 UTC m=+1.144538755,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-16-50,}"
Mar 6 00:57:14.696852 kubelet[2961]: I0306 00:57:14.696777 2961 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-16-50"
Mar 6 00:57:14.719164 kubelet[2961]: I0306 00:57:14.718785 2961 apiserver.go:52] "Watching apiserver"
Mar 6 00:57:14.741958 kubelet[2961]: E0306 00:57:14.741767 2961 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ip-172-31-16-50.189a1a941cf748fc default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-16-50,UID:ip-172-31-16-50,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:ip-172-31-16-50,},FirstTimestamp:2026-03-06 00:57:06.767161596 +0000 UTC m=+1.189036231,LastTimestamp:2026-03-06 00:57:06.767161596 +0000 UTC m=+1.189036231,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-16-50,}"
Mar 6 00:57:14.761499 kubelet[2961]: I0306 00:57:14.761413 2961 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Mar 6 00:57:14.764864 kubelet[2961]: I0306 00:57:14.763389 2961 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-16-50"
Mar 6 00:57:14.795916 kubelet[2961]: E0306 00:57:14.795756 2961 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-16-50\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ip-172-31-16-50"
Mar 6 00:57:14.796142 kubelet[2961]: I0306 00:57:14.796115 2961 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-16-50"
Mar 6 00:57:14.808532 kubelet[2961]: E0306 00:57:14.807406 2961 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ip-172-31-16-50\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ip-172-31-16-50"
Mar 6 00:57:14.808532 kubelet[2961]: I0306 00:57:14.807465 2961 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-16-50"
Mar 6 00:57:14.819519 kubelet[2961]: E0306 00:57:14.819470 2961 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-16-50\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ip-172-31-16-50"
Mar 6 00:57:17.058939 systemd[1]: Reload requested from client PID 3249 ('systemctl') (unit session-7.scope)...
Mar 6 00:57:17.058971 systemd[1]: Reloading...
Mar 6 00:57:17.266903 zram_generator::config[3293]: No configuration found.
Mar 6 00:57:17.595399 update_engine[1986]: I20260306 00:57:17.594447 1986 update_attempter.cc:509] Updating boot flags...
Mar 6 00:57:17.924878 systemd[1]: Reloading finished in 865 ms.
Mar 6 00:57:18.054223 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 6 00:57:18.104393 systemd[1]: kubelet.service: Deactivated successfully.
Mar 6 00:57:18.105153 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 6 00:57:18.105254 systemd[1]: kubelet.service: Consumed 2.078s CPU time, 121.7M memory peak.
Mar 6 00:57:18.114457 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 6 00:57:18.573819 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 6 00:57:18.593133 (kubelet)[3452]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Mar 6 00:57:18.721906 kubelet[3452]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Mar 6 00:57:18.721906 kubelet[3452]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 6 00:57:18.721906 kubelet[3452]: I0306 00:57:18.720002 3452 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Mar 6 00:57:18.742267 kubelet[3452]: I0306 00:57:18.742211 3452 server.go:529] "Kubelet version" kubeletVersion="v1.34.4"
Mar 6 00:57:18.742488 kubelet[3452]: I0306 00:57:18.742460 3452 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Mar 6 00:57:18.742684 kubelet[3452]: I0306 00:57:18.742655 3452 watchdog_linux.go:95] "Systemd watchdog is not enabled"
Mar 6 00:57:18.742822 kubelet[3452]: I0306 00:57:18.742794 3452 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Mar 6 00:57:18.743868 kubelet[3452]: I0306 00:57:18.743493 3452 server.go:956] "Client rotation is on, will bootstrap in background"
Mar 6 00:57:18.762880 kubelet[3452]: I0306 00:57:18.760751 3452 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem"
Mar 6 00:57:18.771238 kubelet[3452]: I0306 00:57:18.771184 3452 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Mar 6 00:57:18.792139 kubelet[3452]: I0306 00:57:18.792096 3452 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Mar 6 00:57:18.801588 kubelet[3452]: I0306 00:57:18.801488 3452 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /"
Mar 6 00:57:18.803389 kubelet[3452]: I0306 00:57:18.803315 3452 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Mar 6 00:57:18.803925 kubelet[3452]: I0306 00:57:18.803586 3452 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-16-50","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Mar 6 00:57:18.805592 kubelet[3452]: I0306 00:57:18.804926 3452 topology_manager.go:138] "Creating topology manager with none policy"
Mar 6 00:57:18.805592 kubelet[3452]: I0306 00:57:18.804979 3452 container_manager_linux.go:306] "Creating device plugin manager"
Mar 6 00:57:18.805592 kubelet[3452]: I0306 00:57:18.805052 3452 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager"
Mar 6 00:57:18.805592 kubelet[3452]: I0306 00:57:18.805503 3452 state_mem.go:36] "Initialized new in-memory state store"
Mar 6 00:57:18.807221 kubelet[3452]: I0306 00:57:18.807179 3452 kubelet.go:475] "Attempting to sync node with API server"
Mar 6 00:57:18.808037 kubelet[3452]: I0306 00:57:18.807993 3452 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests"
Mar 6 00:57:18.809899 kubelet[3452]: I0306 00:57:18.808272 3452 kubelet.go:387] "Adding apiserver pod source"
Mar 6 00:57:18.809899 kubelet[3452]: I0306 00:57:18.808337 3452 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Mar 6 00:57:18.814948 kubelet[3452]: I0306 00:57:18.814745 3452 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1"
Mar 6 00:57:18.816914 kubelet[3452]: I0306 00:57:18.816294 3452 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Mar 6 00:57:18.816914 kubelet[3452]: I0306 00:57:18.816392 3452 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled"
Mar 6 00:57:18.826231 kubelet[3452]: I0306 00:57:18.826103 3452 server.go:1262] "Started kubelet"
Mar 6 00:57:18.838819 kubelet[3452]: I0306 00:57:18.838749 3452 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Mar 6 00:57:18.847928 kubelet[3452]: I0306 00:57:18.846320 3452 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Mar 6 00:57:18.865179 sudo[3466]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin
Mar 6 00:57:18.866696 sudo[3466]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0)
Mar 6 00:57:18.885526 kubelet[3452]: I0306 00:57:18.882757 3452 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Mar 6 00:57:18.885918 kubelet[3452]: I0306 00:57:18.885809 3452 server_v1.go:49] "podresources" method="list" useActivePods=true
Mar 6 00:57:18.889908 kubelet[3452]: I0306 00:57:18.889318 3452 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Mar 6 00:57:18.916120 kubelet[3452]: I0306 00:57:18.915247 3452 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Mar 6 00:57:18.929970 kubelet[3452]: I0306 00:57:18.929673 3452 volume_manager.go:313] "Starting Kubelet Volume Manager"
Mar 6 00:57:18.931497 kubelet[3452]: E0306 00:57:18.930473 3452 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ip-172-31-16-50\" not found"
Mar 6 00:57:18.957724 kubelet[3452]: I0306 00:57:18.954288 3452 server.go:310] "Adding debug handlers to kubelet server"
Mar 6 00:57:18.971260 kubelet[3452]: I0306 00:57:18.970617 3452 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Mar 6 00:57:18.972792 kubelet[3452]: I0306 00:57:18.972180 3452 reconciler.go:29] "Reconciler: start to sync state"
Mar 6 00:57:18.981462 kubelet[3452]: I0306 00:57:18.978807 3452 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Mar 6 00:57:19.001426 kubelet[3452]: I0306 00:57:19.001360 3452 factory.go:223] Registration of the containerd container factory successfully
Mar 6 00:57:19.001588 kubelet[3452]: I0306 00:57:19.001455 3452 factory.go:223] Registration of the systemd container factory successfully
Mar 6 00:57:19.031257 kubelet[3452]: E0306 00:57:19.029230 3452 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Mar 6 00:57:19.035097 kubelet[3452]: I0306 00:57:19.034646 3452 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4"
Mar 6 00:57:19.057271 kubelet[3452]: I0306 00:57:19.056567 3452 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6"
Mar 6 00:57:19.057271 kubelet[3452]: I0306 00:57:19.056620 3452 status_manager.go:244] "Starting to sync pod status with apiserver"
Mar 6 00:57:19.057271 kubelet[3452]: I0306 00:57:19.056666 3452 kubelet.go:2428] "Starting kubelet main sync loop"
Mar 6 00:57:19.057271 kubelet[3452]: E0306 00:57:19.056741 3452 kubelet.go:2452] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Mar 6 00:57:19.158821 kubelet[3452]: E0306 00:57:19.158404 3452 kubelet.go:2452] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Mar 6 00:57:19.257522 kubelet[3452]: I0306 00:57:19.257464 3452 cpu_manager.go:221] "Starting CPU manager" policy="none"
Mar 6 00:57:19.257522 kubelet[3452]: I0306 00:57:19.257511 3452 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Mar 6 00:57:19.257738 kubelet[3452]: I0306 00:57:19.257559 3452 state_mem.go:36] "Initialized new in-memory state store"
Mar 6 00:57:19.257908 kubelet[3452]: I0306 00:57:19.257863 3452 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Mar 6 00:57:19.257987 kubelet[3452]: I0306 00:57:19.257905 3452 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Mar 6 00:57:19.257987 kubelet[3452]: I0306 00:57:19.257946 3452 policy_none.go:49] "None policy: Start"
Mar 6 00:57:19.257987 kubelet[3452]: I0306 00:57:19.257966 3452 memory_manager.go:187] "Starting memorymanager" policy="None"
Mar 6 00:57:19.258160 kubelet[3452]: I0306 00:57:19.257992 3452 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint"
Mar 6 00:57:19.258221 kubelet[3452]: I0306 00:57:19.258204 3452 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint"
Mar 6 00:57:19.258271 kubelet[3452]: I0306 00:57:19.258227 3452 policy_none.go:47] "Start"
Mar 6 00:57:19.282679 kubelet[3452]: E0306 00:57:19.282424 3452 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Mar 6 00:57:19.284105 kubelet[3452]: I0306 00:57:19.284030 3452 eviction_manager.go:189] "Eviction manager: starting control loop"
Mar 6 00:57:19.284105 kubelet[3452]: I0306 00:57:19.284075 3452 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Mar 6 00:57:19.287980 kubelet[3452]: I0306 00:57:19.287921 3452 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Mar 6 00:57:19.295021 kubelet[3452]: E0306 00:57:19.294453 3452 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring."
err="no imagefs label for configured runtime" Mar 6 00:57:19.364438 kubelet[3452]: I0306 00:57:19.364372 3452 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-16-50" Mar 6 00:57:19.368458 kubelet[3452]: I0306 00:57:19.368392 3452 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-16-50" Mar 6 00:57:19.370233 kubelet[3452]: I0306 00:57:19.369658 3452 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-16-50" Mar 6 00:57:19.384868 kubelet[3452]: I0306 00:57:19.384358 3452 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a5981216b6b03901f6948e0b722ffec4-k8s-certs\") pod \"kube-apiserver-ip-172-31-16-50\" (UID: \"a5981216b6b03901f6948e0b722ffec4\") " pod="kube-system/kube-apiserver-ip-172-31-16-50" Mar 6 00:57:19.384868 kubelet[3452]: I0306 00:57:19.384451 3452 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a5981216b6b03901f6948e0b722ffec4-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-16-50\" (UID: \"a5981216b6b03901f6948e0b722ffec4\") " pod="kube-system/kube-apiserver-ip-172-31-16-50" Mar 6 00:57:19.384868 kubelet[3452]: I0306 00:57:19.384506 3452 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/428ca519ecd4f3ccc351bb3c188e8dc1-ca-certs\") pod \"kube-controller-manager-ip-172-31-16-50\" (UID: \"428ca519ecd4f3ccc351bb3c188e8dc1\") " pod="kube-system/kube-controller-manager-ip-172-31-16-50" Mar 6 00:57:19.384868 kubelet[3452]: I0306 00:57:19.384546 3452 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: 
\"kubernetes.io/host-path/428ca519ecd4f3ccc351bb3c188e8dc1-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-16-50\" (UID: \"428ca519ecd4f3ccc351bb3c188e8dc1\") " pod="kube-system/kube-controller-manager-ip-172-31-16-50" Mar 6 00:57:19.384868 kubelet[3452]: I0306 00:57:19.384602 3452 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/428ca519ecd4f3ccc351bb3c188e8dc1-kubeconfig\") pod \"kube-controller-manager-ip-172-31-16-50\" (UID: \"428ca519ecd4f3ccc351bb3c188e8dc1\") " pod="kube-system/kube-controller-manager-ip-172-31-16-50" Mar 6 00:57:19.385262 kubelet[3452]: I0306 00:57:19.384638 3452 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/428ca519ecd4f3ccc351bb3c188e8dc1-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-16-50\" (UID: \"428ca519ecd4f3ccc351bb3c188e8dc1\") " pod="kube-system/kube-controller-manager-ip-172-31-16-50" Mar 6 00:57:19.385262 kubelet[3452]: I0306 00:57:19.384675 3452 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a5981216b6b03901f6948e0b722ffec4-ca-certs\") pod \"kube-apiserver-ip-172-31-16-50\" (UID: \"a5981216b6b03901f6948e0b722ffec4\") " pod="kube-system/kube-apiserver-ip-172-31-16-50" Mar 6 00:57:19.385262 kubelet[3452]: I0306 00:57:19.384711 3452 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/428ca519ecd4f3ccc351bb3c188e8dc1-k8s-certs\") pod \"kube-controller-manager-ip-172-31-16-50\" (UID: \"428ca519ecd4f3ccc351bb3c188e8dc1\") " pod="kube-system/kube-controller-manager-ip-172-31-16-50" Mar 6 00:57:19.385262 kubelet[3452]: I0306 00:57:19.384765 3452 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fa4bbe22e36e5f14bbc1dd1e8657f1d5-kubeconfig\") pod \"kube-scheduler-ip-172-31-16-50\" (UID: \"fa4bbe22e36e5f14bbc1dd1e8657f1d5\") " pod="kube-system/kube-scheduler-ip-172-31-16-50" Mar 6 00:57:19.418511 kubelet[3452]: I0306 00:57:19.418323 3452 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-16-50" Mar 6 00:57:19.445512 kubelet[3452]: I0306 00:57:19.445327 3452 kubelet_node_status.go:124] "Node was previously registered" node="ip-172-31-16-50" Mar 6 00:57:19.446344 kubelet[3452]: I0306 00:57:19.446223 3452 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-16-50" Mar 6 00:57:19.789681 sudo[3466]: pam_unix(sudo:session): session closed for user root Mar 6 00:57:19.812484 kubelet[3452]: I0306 00:57:19.810261 3452 apiserver.go:52] "Watching apiserver" Mar 6 00:57:19.872088 kubelet[3452]: I0306 00:57:19.872004 3452 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Mar 6 00:57:20.072106 kubelet[3452]: I0306 00:57:20.071214 3452 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-16-50" podStartSLOduration=1.07118689 podStartE2EDuration="1.07118689s" podCreationTimestamp="2026-03-06 00:57:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-06 00:57:20.043923658 +0000 UTC m=+1.438601900" watchObservedRunningTime="2026-03-06 00:57:20.07118689 +0000 UTC m=+1.465865204" Mar 6 00:57:20.098362 kubelet[3452]: I0306 00:57:20.098255 3452 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-16-50" podStartSLOduration=1.098230487 podStartE2EDuration="1.098230487s" podCreationTimestamp="2026-03-06 00:57:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 
+0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-06 00:57:20.072459994 +0000 UTC m=+1.467138200" watchObservedRunningTime="2026-03-06 00:57:20.098230487 +0000 UTC m=+1.492908705" Mar 6 00:57:20.123440 kubelet[3452]: I0306 00:57:20.123353 3452 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-16-50" podStartSLOduration=1.123325151 podStartE2EDuration="1.123325151s" podCreationTimestamp="2026-03-06 00:57:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-06 00:57:20.099373583 +0000 UTC m=+1.494051837" watchObservedRunningTime="2026-03-06 00:57:20.123325151 +0000 UTC m=+1.518003381" Mar 6 00:57:22.584182 kubelet[3452]: I0306 00:57:22.584112 3452 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Mar 6 00:57:22.584997 containerd[2014]: time="2026-03-06T00:57:22.584697423Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Mar 6 00:57:22.586676 kubelet[3452]: I0306 00:57:22.585602 3452 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Mar 6 00:57:23.173563 sudo[2368]: pam_unix(sudo:session): session closed for user root Mar 6 00:57:23.256536 sshd[2367]: Connection closed by 68.220.241.50 port 51358 Mar 6 00:57:23.255517 sshd-session[2364]: pam_unix(sshd:session): session closed for user core Mar 6 00:57:23.263126 systemd[1]: sshd@6-172.31.16.50:22-68.220.241.50:51358.service: Deactivated successfully. Mar 6 00:57:23.270153 systemd[1]: session-7.scope: Deactivated successfully. Mar 6 00:57:23.271761 systemd[1]: session-7.scope: Consumed 11.775s CPU time, 263.6M memory peak. Mar 6 00:57:23.279056 systemd-logind[1985]: Session 7 logged out. Waiting for processes to exit. Mar 6 00:57:23.282767 systemd-logind[1985]: Removed session 7. 
Mar 6 00:57:23.444111 kubelet[3452]: E0306 00:57:23.443916 3452 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-proxy-cznn9\" is forbidden: User \"system:node:ip-172-31-16-50\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ip-172-31-16-50' and this object" podUID="a787d14d-8a53-4b1d-becc-2a8fd5c355c7" pod="kube-system/kube-proxy-cznn9" Mar 6 00:57:23.449906 kubelet[3452]: E0306 00:57:23.449423 3452 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"kube-proxy\" is forbidden: User \"system:node:ip-172-31-16-50\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ip-172-31-16-50' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-proxy\"" type="*v1.ConfigMap" Mar 6 00:57:23.462499 systemd[1]: Created slice kubepods-besteffort-poda787d14d_8a53_4b1d_becc_2a8fd5c355c7.slice - libcontainer container kubepods-besteffort-poda787d14d_8a53_4b1d_becc_2a8fd5c355c7.slice. Mar 6 00:57:23.506173 systemd[1]: Created slice kubepods-burstable-pod3c7e02fc_17fa_467d_90bb_b07ba1446476.slice - libcontainer container kubepods-burstable-pod3c7e02fc_17fa_467d_90bb_b07ba1446476.slice. 
Mar 6 00:57:23.517681 kubelet[3452]: I0306 00:57:23.515004 3452 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3c7e02fc-17fa-467d-90bb-b07ba1446476-cilium-config-path\") pod \"cilium-p58kv\" (UID: \"3c7e02fc-17fa-467d-90bb-b07ba1446476\") " pod="kube-system/cilium-p58kv" Mar 6 00:57:23.517681 kubelet[3452]: I0306 00:57:23.515068 3452 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a787d14d-8a53-4b1d-becc-2a8fd5c355c7-xtables-lock\") pod \"kube-proxy-cznn9\" (UID: \"a787d14d-8a53-4b1d-becc-2a8fd5c355c7\") " pod="kube-system/kube-proxy-cznn9" Mar 6 00:57:23.517681 kubelet[3452]: I0306 00:57:23.515132 3452 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/3c7e02fc-17fa-467d-90bb-b07ba1446476-bpf-maps\") pod \"cilium-p58kv\" (UID: \"3c7e02fc-17fa-467d-90bb-b07ba1446476\") " pod="kube-system/cilium-p58kv" Mar 6 00:57:23.517681 kubelet[3452]: I0306 00:57:23.515165 3452 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3c7e02fc-17fa-467d-90bb-b07ba1446476-lib-modules\") pod \"cilium-p58kv\" (UID: \"3c7e02fc-17fa-467d-90bb-b07ba1446476\") " pod="kube-system/cilium-p58kv" Mar 6 00:57:23.517681 kubelet[3452]: I0306 00:57:23.515210 3452 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/3c7e02fc-17fa-467d-90bb-b07ba1446476-host-proc-sys-kernel\") pod \"cilium-p58kv\" (UID: \"3c7e02fc-17fa-467d-90bb-b07ba1446476\") " pod="kube-system/cilium-p58kv" Mar 6 00:57:23.517681 kubelet[3452]: I0306 00:57:23.515252 3452 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/3c7e02fc-17fa-467d-90bb-b07ba1446476-hubble-tls\") pod \"cilium-p58kv\" (UID: \"3c7e02fc-17fa-467d-90bb-b07ba1446476\") " pod="kube-system/cilium-p58kv" Mar 6 00:57:23.518143 kubelet[3452]: I0306 00:57:23.515292 3452 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a787d14d-8a53-4b1d-becc-2a8fd5c355c7-lib-modules\") pod \"kube-proxy-cznn9\" (UID: \"a787d14d-8a53-4b1d-becc-2a8fd5c355c7\") " pod="kube-system/kube-proxy-cznn9" Mar 6 00:57:23.518143 kubelet[3452]: I0306 00:57:23.515326 3452 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/3c7e02fc-17fa-467d-90bb-b07ba1446476-clustermesh-secrets\") pod \"cilium-p58kv\" (UID: \"3c7e02fc-17fa-467d-90bb-b07ba1446476\") " pod="kube-system/cilium-p58kv" Mar 6 00:57:23.518143 kubelet[3452]: I0306 00:57:23.515362 3452 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/3c7e02fc-17fa-467d-90bb-b07ba1446476-host-proc-sys-net\") pod \"cilium-p58kv\" (UID: \"3c7e02fc-17fa-467d-90bb-b07ba1446476\") " pod="kube-system/cilium-p58kv" Mar 6 00:57:23.518143 kubelet[3452]: I0306 00:57:23.515394 3452 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fzmpr\" (UniqueName: \"kubernetes.io/projected/3c7e02fc-17fa-467d-90bb-b07ba1446476-kube-api-access-fzmpr\") pod \"cilium-p58kv\" (UID: \"3c7e02fc-17fa-467d-90bb-b07ba1446476\") " pod="kube-system/cilium-p58kv" Mar 6 00:57:23.518143 kubelet[3452]: I0306 00:57:23.515432 3452 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: 
\"kubernetes.io/configmap/a787d14d-8a53-4b1d-becc-2a8fd5c355c7-kube-proxy\") pod \"kube-proxy-cznn9\" (UID: \"a787d14d-8a53-4b1d-becc-2a8fd5c355c7\") " pod="kube-system/kube-proxy-cznn9" Mar 6 00:57:23.518395 kubelet[3452]: I0306 00:57:23.515465 3452 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/3c7e02fc-17fa-467d-90bb-b07ba1446476-hostproc\") pod \"cilium-p58kv\" (UID: \"3c7e02fc-17fa-467d-90bb-b07ba1446476\") " pod="kube-system/cilium-p58kv" Mar 6 00:57:23.518395 kubelet[3452]: I0306 00:57:23.515510 3452 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/3c7e02fc-17fa-467d-90bb-b07ba1446476-cilium-cgroup\") pod \"cilium-p58kv\" (UID: \"3c7e02fc-17fa-467d-90bb-b07ba1446476\") " pod="kube-system/cilium-p58kv" Mar 6 00:57:23.518395 kubelet[3452]: I0306 00:57:23.515547 3452 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3c7e02fc-17fa-467d-90bb-b07ba1446476-etc-cni-netd\") pod \"cilium-p58kv\" (UID: \"3c7e02fc-17fa-467d-90bb-b07ba1446476\") " pod="kube-system/cilium-p58kv" Mar 6 00:57:23.518395 kubelet[3452]: I0306 00:57:23.515588 3452 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nq6gx\" (UniqueName: \"kubernetes.io/projected/a787d14d-8a53-4b1d-becc-2a8fd5c355c7-kube-api-access-nq6gx\") pod \"kube-proxy-cznn9\" (UID: \"a787d14d-8a53-4b1d-becc-2a8fd5c355c7\") " pod="kube-system/kube-proxy-cznn9" Mar 6 00:57:23.518395 kubelet[3452]: I0306 00:57:23.515625 3452 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/3c7e02fc-17fa-467d-90bb-b07ba1446476-cilium-run\") pod \"cilium-p58kv\" (UID: 
\"3c7e02fc-17fa-467d-90bb-b07ba1446476\") " pod="kube-system/cilium-p58kv" Mar 6 00:57:23.518395 kubelet[3452]: I0306 00:57:23.515660 3452 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/3c7e02fc-17fa-467d-90bb-b07ba1446476-cni-path\") pod \"cilium-p58kv\" (UID: \"3c7e02fc-17fa-467d-90bb-b07ba1446476\") " pod="kube-system/cilium-p58kv" Mar 6 00:57:23.518677 kubelet[3452]: I0306 00:57:23.515699 3452 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3c7e02fc-17fa-467d-90bb-b07ba1446476-xtables-lock\") pod \"cilium-p58kv\" (UID: \"3c7e02fc-17fa-467d-90bb-b07ba1446476\") " pod="kube-system/cilium-p58kv" Mar 6 00:57:23.818215 kubelet[3452]: I0306 00:57:23.818036 3452 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7daf92f4-fee5-4a78-9965-479ba34a0e1c-cilium-config-path\") pod \"cilium-operator-6f9c7c5859-m4zs8\" (UID: \"7daf92f4-fee5-4a78-9965-479ba34a0e1c\") " pod="kube-system/cilium-operator-6f9c7c5859-m4zs8" Mar 6 00:57:23.819188 kubelet[3452]: I0306 00:57:23.818873 3452 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wnrlx\" (UniqueName: \"kubernetes.io/projected/7daf92f4-fee5-4a78-9965-479ba34a0e1c-kube-api-access-wnrlx\") pod \"cilium-operator-6f9c7c5859-m4zs8\" (UID: \"7daf92f4-fee5-4a78-9965-479ba34a0e1c\") " pod="kube-system/cilium-operator-6f9c7c5859-m4zs8" Mar 6 00:57:23.822861 systemd[1]: Created slice kubepods-besteffort-pod7daf92f4_fee5_4a78_9965_479ba34a0e1c.slice - libcontainer container kubepods-besteffort-pod7daf92f4_fee5_4a78_9965_479ba34a0e1c.slice. 
Mar 6 00:57:23.828578 containerd[2014]: time="2026-03-06T00:57:23.828235373Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-p58kv,Uid:3c7e02fc-17fa-467d-90bb-b07ba1446476,Namespace:kube-system,Attempt:0,}" Mar 6 00:57:23.901293 containerd[2014]: time="2026-03-06T00:57:23.901161485Z" level=info msg="connecting to shim 220f9f15df5ea6d68bc46e62548d612b7f12451d9bf762feda04985c328a123a" address="unix:///run/containerd/s/948c637c61c40d32c34848fc2a13e52f10fcfc7d1b858963ea2ccfeedd90eb56" namespace=k8s.io protocol=ttrpc version=3 Mar 6 00:57:23.975346 systemd[1]: Started cri-containerd-220f9f15df5ea6d68bc46e62548d612b7f12451d9bf762feda04985c328a123a.scope - libcontainer container 220f9f15df5ea6d68bc46e62548d612b7f12451d9bf762feda04985c328a123a. Mar 6 00:57:24.080797 containerd[2014]: time="2026-03-06T00:57:24.080505050Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-p58kv,Uid:3c7e02fc-17fa-467d-90bb-b07ba1446476,Namespace:kube-system,Attempt:0,} returns sandbox id \"220f9f15df5ea6d68bc46e62548d612b7f12451d9bf762feda04985c328a123a\"" Mar 6 00:57:24.086864 containerd[2014]: time="2026-03-06T00:57:24.086746010Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Mar 6 00:57:24.148539 containerd[2014]: time="2026-03-06T00:57:24.148448115Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6f9c7c5859-m4zs8,Uid:7daf92f4-fee5-4a78-9965-479ba34a0e1c,Namespace:kube-system,Attempt:0,}" Mar 6 00:57:24.181272 containerd[2014]: time="2026-03-06T00:57:24.181141191Z" level=info msg="connecting to shim c286795b920e5251887614376d158a200a6fb01b51554c590c7ca8bfb63a8015" address="unix:///run/containerd/s/608f05ecac6695d604dce2f255d05175d654d16a2a3f7a4c47cddb8699ad2255" namespace=k8s.io protocol=ttrpc version=3 Mar 6 00:57:24.245253 systemd[1]: Started cri-containerd-c286795b920e5251887614376d158a200a6fb01b51554c590c7ca8bfb63a8015.scope - 
libcontainer container c286795b920e5251887614376d158a200a6fb01b51554c590c7ca8bfb63a8015. Mar 6 00:57:24.381258 containerd[2014]: time="2026-03-06T00:57:24.381080620Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-cznn9,Uid:a787d14d-8a53-4b1d-becc-2a8fd5c355c7,Namespace:kube-system,Attempt:0,}" Mar 6 00:57:24.384137 containerd[2014]: time="2026-03-06T00:57:24.384002152Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6f9c7c5859-m4zs8,Uid:7daf92f4-fee5-4a78-9965-479ba34a0e1c,Namespace:kube-system,Attempt:0,} returns sandbox id \"c286795b920e5251887614376d158a200a6fb01b51554c590c7ca8bfb63a8015\"" Mar 6 00:57:24.428517 containerd[2014]: time="2026-03-06T00:57:24.428380108Z" level=info msg="connecting to shim e37d77217379a09391ac09c2199ab03771ffeb8adc1850b2096e59b89db88950" address="unix:///run/containerd/s/270f5ede26d57ef7c5041af97b3293320feb5f1144a0a1821053fadd82ee5308" namespace=k8s.io protocol=ttrpc version=3 Mar 6 00:57:24.477212 systemd[1]: Started cri-containerd-e37d77217379a09391ac09c2199ab03771ffeb8adc1850b2096e59b89db88950.scope - libcontainer container e37d77217379a09391ac09c2199ab03771ffeb8adc1850b2096e59b89db88950. 
Mar 6 00:57:24.538519 containerd[2014]: time="2026-03-06T00:57:24.537563249Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-cznn9,Uid:a787d14d-8a53-4b1d-becc-2a8fd5c355c7,Namespace:kube-system,Attempt:0,} returns sandbox id \"e37d77217379a09391ac09c2199ab03771ffeb8adc1850b2096e59b89db88950\"" Mar 6 00:57:24.554089 containerd[2014]: time="2026-03-06T00:57:24.554008589Z" level=info msg="CreateContainer within sandbox \"e37d77217379a09391ac09c2199ab03771ffeb8adc1850b2096e59b89db88950\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Mar 6 00:57:24.587686 containerd[2014]: time="2026-03-06T00:57:24.587628605Z" level=info msg="Container 76d280e007770cf9b9edad28a8b421b96cb5cfeb2c36feb1911e0c4f8a650efe: CDI devices from CRI Config.CDIDevices: []" Mar 6 00:57:24.605428 containerd[2014]: time="2026-03-06T00:57:24.605327633Z" level=info msg="CreateContainer within sandbox \"e37d77217379a09391ac09c2199ab03771ffeb8adc1850b2096e59b89db88950\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"76d280e007770cf9b9edad28a8b421b96cb5cfeb2c36feb1911e0c4f8a650efe\"" Mar 6 00:57:24.607252 containerd[2014]: time="2026-03-06T00:57:24.607198217Z" level=info msg="StartContainer for \"76d280e007770cf9b9edad28a8b421b96cb5cfeb2c36feb1911e0c4f8a650efe\"" Mar 6 00:57:24.611698 containerd[2014]: time="2026-03-06T00:57:24.611634497Z" level=info msg="connecting to shim 76d280e007770cf9b9edad28a8b421b96cb5cfeb2c36feb1911e0c4f8a650efe" address="unix:///run/containerd/s/270f5ede26d57ef7c5041af97b3293320feb5f1144a0a1821053fadd82ee5308" protocol=ttrpc version=3 Mar 6 00:57:24.678175 systemd[1]: Started cri-containerd-76d280e007770cf9b9edad28a8b421b96cb5cfeb2c36feb1911e0c4f8a650efe.scope - libcontainer container 76d280e007770cf9b9edad28a8b421b96cb5cfeb2c36feb1911e0c4f8a650efe. 
Mar 6 00:57:24.816236 containerd[2014]: time="2026-03-06T00:57:24.816166410Z" level=info msg="StartContainer for \"76d280e007770cf9b9edad28a8b421b96cb5cfeb2c36feb1911e0c4f8a650efe\" returns successfully" Mar 6 00:57:26.220955 kubelet[3452]: I0306 00:57:26.220748 3452 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-cznn9" podStartSLOduration=3.220680401 podStartE2EDuration="3.220680401s" podCreationTimestamp="2026-03-06 00:57:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-06 00:57:25.239329612 +0000 UTC m=+6.634007842" watchObservedRunningTime="2026-03-06 00:57:26.220680401 +0000 UTC m=+7.615358631" Mar 6 00:57:29.717078 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3725763743.mount: Deactivated successfully. Mar 6 00:57:32.362152 containerd[2014]: time="2026-03-06T00:57:32.362061059Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 00:57:32.364416 containerd[2014]: time="2026-03-06T00:57:32.364257503Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710" Mar 6 00:57:32.367881 containerd[2014]: time="2026-03-06T00:57:32.367161443Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 00:57:32.372744 containerd[2014]: time="2026-03-06T00:57:32.372654132Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest 
\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 8.285817954s" Mar 6 00:57:32.372744 containerd[2014]: time="2026-03-06T00:57:32.372726648Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Mar 6 00:57:32.374840 containerd[2014]: time="2026-03-06T00:57:32.374761116Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Mar 6 00:57:32.383942 containerd[2014]: time="2026-03-06T00:57:32.383638908Z" level=info msg="CreateContainer within sandbox \"220f9f15df5ea6d68bc46e62548d612b7f12451d9bf762feda04985c328a123a\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Mar 6 00:57:32.406908 containerd[2014]: time="2026-03-06T00:57:32.405529032Z" level=info msg="Container 1bdade11c868c25af4bd27c7ce049f16f136a873392d7d5e9a4fc79ef0f76fea: CDI devices from CRI Config.CDIDevices: []" Mar 6 00:57:32.423782 containerd[2014]: time="2026-03-06T00:57:32.423707052Z" level=info msg="CreateContainer within sandbox \"220f9f15df5ea6d68bc46e62548d612b7f12451d9bf762feda04985c328a123a\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"1bdade11c868c25af4bd27c7ce049f16f136a873392d7d5e9a4fc79ef0f76fea\"" Mar 6 00:57:32.425554 containerd[2014]: time="2026-03-06T00:57:32.425491200Z" level=info msg="StartContainer for \"1bdade11c868c25af4bd27c7ce049f16f136a873392d7d5e9a4fc79ef0f76fea\"" Mar 6 00:57:32.428112 containerd[2014]: time="2026-03-06T00:57:32.427978452Z" level=info msg="connecting to shim 1bdade11c868c25af4bd27c7ce049f16f136a873392d7d5e9a4fc79ef0f76fea" address="unix:///run/containerd/s/948c637c61c40d32c34848fc2a13e52f10fcfc7d1b858963ea2ccfeedd90eb56" protocol=ttrpc version=3 Mar 6 00:57:32.475125 systemd[1]: 
Started cri-containerd-1bdade11c868c25af4bd27c7ce049f16f136a873392d7d5e9a4fc79ef0f76fea.scope - libcontainer container 1bdade11c868c25af4bd27c7ce049f16f136a873392d7d5e9a4fc79ef0f76fea. Mar 6 00:57:32.543671 containerd[2014]: time="2026-03-06T00:57:32.543570264Z" level=info msg="StartContainer for \"1bdade11c868c25af4bd27c7ce049f16f136a873392d7d5e9a4fc79ef0f76fea\" returns successfully" Mar 6 00:57:32.571193 systemd[1]: cri-containerd-1bdade11c868c25af4bd27c7ce049f16f136a873392d7d5e9a4fc79ef0f76fea.scope: Deactivated successfully. Mar 6 00:57:32.571797 systemd[1]: cri-containerd-1bdade11c868c25af4bd27c7ce049f16f136a873392d7d5e9a4fc79ef0f76fea.scope: Consumed 52ms CPU time, 6.3M memory peak, 3.1M written to disk. Mar 6 00:57:32.577093 containerd[2014]: time="2026-03-06T00:57:32.576922477Z" level=info msg="received container exit event container_id:\"1bdade11c868c25af4bd27c7ce049f16f136a873392d7d5e9a4fc79ef0f76fea\" id:\"1bdade11c868c25af4bd27c7ce049f16f136a873392d7d5e9a4fc79ef0f76fea\" pid:3869 exited_at:{seconds:1772758652 nanos:576438661}" Mar 6 00:57:33.405226 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1bdade11c868c25af4bd27c7ce049f16f136a873392d7d5e9a4fc79ef0f76fea-rootfs.mount: Deactivated successfully. Mar 6 00:57:34.218497 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3785198244.mount: Deactivated successfully. Mar 6 00:57:34.296146 containerd[2014]: time="2026-03-06T00:57:34.296077549Z" level=info msg="CreateContainer within sandbox \"220f9f15df5ea6d68bc46e62548d612b7f12451d9bf762feda04985c328a123a\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Mar 6 00:57:34.317343 containerd[2014]: time="2026-03-06T00:57:34.317281153Z" level=info msg="Container f3e18408ff5a8c5fc5c38a1604b5c9dbc14980108521ca89f96eb8565138241c: CDI devices from CRI Config.CDIDevices: []" Mar 6 00:57:34.329294 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3034494026.mount: Deactivated successfully. 
Mar 6 00:57:34.338161 containerd[2014]: time="2026-03-06T00:57:34.338076793Z" level=info msg="CreateContainer within sandbox \"220f9f15df5ea6d68bc46e62548d612b7f12451d9bf762feda04985c328a123a\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"f3e18408ff5a8c5fc5c38a1604b5c9dbc14980108521ca89f96eb8565138241c\"" Mar 6 00:57:34.339711 containerd[2014]: time="2026-03-06T00:57:34.339483061Z" level=info msg="StartContainer for \"f3e18408ff5a8c5fc5c38a1604b5c9dbc14980108521ca89f96eb8565138241c\"" Mar 6 00:57:34.342370 containerd[2014]: time="2026-03-06T00:57:34.342158749Z" level=info msg="connecting to shim f3e18408ff5a8c5fc5c38a1604b5c9dbc14980108521ca89f96eb8565138241c" address="unix:///run/containerd/s/948c637c61c40d32c34848fc2a13e52f10fcfc7d1b858963ea2ccfeedd90eb56" protocol=ttrpc version=3 Mar 6 00:57:34.376207 systemd[1]: Started cri-containerd-f3e18408ff5a8c5fc5c38a1604b5c9dbc14980108521ca89f96eb8565138241c.scope - libcontainer container f3e18408ff5a8c5fc5c38a1604b5c9dbc14980108521ca89f96eb8565138241c. Mar 6 00:57:34.455992 containerd[2014]: time="2026-03-06T00:57:34.455932418Z" level=info msg="StartContainer for \"f3e18408ff5a8c5fc5c38a1604b5c9dbc14980108521ca89f96eb8565138241c\" returns successfully" Mar 6 00:57:34.481487 systemd[1]: systemd-sysctl.service: Deactivated successfully. Mar 6 00:57:34.483615 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Mar 6 00:57:34.485163 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Mar 6 00:57:34.490304 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Mar 6 00:57:34.495670 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Mar 6 00:57:34.498099 systemd[1]: cri-containerd-f3e18408ff5a8c5fc5c38a1604b5c9dbc14980108521ca89f96eb8565138241c.scope: Deactivated successfully. 
Mar 6 00:57:34.505270 containerd[2014]: time="2026-03-06T00:57:34.504223622Z" level=info msg="received container exit event container_id:\"f3e18408ff5a8c5fc5c38a1604b5c9dbc14980108521ca89f96eb8565138241c\" id:\"f3e18408ff5a8c5fc5c38a1604b5c9dbc14980108521ca89f96eb8565138241c\" pid:3925 exited_at:{seconds:1772758654 nanos:502287206}" Mar 6 00:57:34.548975 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Mar 6 00:57:34.568487 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f3e18408ff5a8c5fc5c38a1604b5c9dbc14980108521ca89f96eb8565138241c-rootfs.mount: Deactivated successfully. Mar 6 00:57:35.317313 containerd[2014]: time="2026-03-06T00:57:35.317206298Z" level=info msg="CreateContainer within sandbox \"220f9f15df5ea6d68bc46e62548d612b7f12451d9bf762feda04985c328a123a\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Mar 6 00:57:35.386589 containerd[2014]: time="2026-03-06T00:57:35.386526782Z" level=info msg="Container 8b64ebc9f2e36e3a9fd07f8ad8b53cc08cc9d27cb150e1fe5e63853426146d59: CDI devices from CRI Config.CDIDevices: []" Mar 6 00:57:35.406528 containerd[2014]: time="2026-03-06T00:57:35.406469055Z" level=info msg="CreateContainer within sandbox \"220f9f15df5ea6d68bc46e62548d612b7f12451d9bf762feda04985c328a123a\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"8b64ebc9f2e36e3a9fd07f8ad8b53cc08cc9d27cb150e1fe5e63853426146d59\"" Mar 6 00:57:35.409659 containerd[2014]: time="2026-03-06T00:57:35.408649455Z" level=info msg="StartContainer for \"8b64ebc9f2e36e3a9fd07f8ad8b53cc08cc9d27cb150e1fe5e63853426146d59\"" Mar 6 00:57:35.413978 containerd[2014]: time="2026-03-06T00:57:35.413920323Z" level=info msg="connecting to shim 8b64ebc9f2e36e3a9fd07f8ad8b53cc08cc9d27cb150e1fe5e63853426146d59" address="unix:///run/containerd/s/948c637c61c40d32c34848fc2a13e52f10fcfc7d1b858963ea2ccfeedd90eb56" protocol=ttrpc version=3 Mar 6 00:57:35.477212 systemd[1]: Started 
cri-containerd-8b64ebc9f2e36e3a9fd07f8ad8b53cc08cc9d27cb150e1fe5e63853426146d59.scope - libcontainer container 8b64ebc9f2e36e3a9fd07f8ad8b53cc08cc9d27cb150e1fe5e63853426146d59. Mar 6 00:57:35.646913 systemd[1]: cri-containerd-8b64ebc9f2e36e3a9fd07f8ad8b53cc08cc9d27cb150e1fe5e63853426146d59.scope: Deactivated successfully. Mar 6 00:57:35.657050 containerd[2014]: time="2026-03-06T00:57:35.656421964Z" level=info msg="received container exit event container_id:\"8b64ebc9f2e36e3a9fd07f8ad8b53cc08cc9d27cb150e1fe5e63853426146d59\" id:\"8b64ebc9f2e36e3a9fd07f8ad8b53cc08cc9d27cb150e1fe5e63853426146d59\" pid:3976 exited_at:{seconds:1772758655 nanos:655753456}" Mar 6 00:57:35.659568 containerd[2014]: time="2026-03-06T00:57:35.659484868Z" level=info msg="StartContainer for \"8b64ebc9f2e36e3a9fd07f8ad8b53cc08cc9d27cb150e1fe5e63853426146d59\" returns successfully" Mar 6 00:57:35.735197 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8b64ebc9f2e36e3a9fd07f8ad8b53cc08cc9d27cb150e1fe5e63853426146d59-rootfs.mount: Deactivated successfully. 
Mar 6 00:57:36.027340 containerd[2014]: time="2026-03-06T00:57:36.027228734Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 00:57:36.028690 containerd[2014]: time="2026-03-06T00:57:36.028613798Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306" Mar 6 00:57:36.031145 containerd[2014]: time="2026-03-06T00:57:36.031036526Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 00:57:36.034358 containerd[2014]: time="2026-03-06T00:57:36.034274882Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 3.659452698s" Mar 6 00:57:36.034358 containerd[2014]: time="2026-03-06T00:57:36.034344818Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Mar 6 00:57:36.041527 containerd[2014]: time="2026-03-06T00:57:36.041457266Z" level=info msg="CreateContainer within sandbox \"c286795b920e5251887614376d158a200a6fb01b51554c590c7ca8bfb63a8015\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Mar 6 00:57:36.054341 containerd[2014]: time="2026-03-06T00:57:36.053938874Z" level=info msg="Container 
eeaa98d388d4cf836424370439fcd6e785aa706f7efea553c91429ee24ab1fc4: CDI devices from CRI Config.CDIDevices: []" Mar 6 00:57:36.061641 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1530591976.mount: Deactivated successfully. Mar 6 00:57:36.071866 containerd[2014]: time="2026-03-06T00:57:36.071744162Z" level=info msg="CreateContainer within sandbox \"c286795b920e5251887614376d158a200a6fb01b51554c590c7ca8bfb63a8015\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"eeaa98d388d4cf836424370439fcd6e785aa706f7efea553c91429ee24ab1fc4\"" Mar 6 00:57:36.075189 containerd[2014]: time="2026-03-06T00:57:36.074988518Z" level=info msg="StartContainer for \"eeaa98d388d4cf836424370439fcd6e785aa706f7efea553c91429ee24ab1fc4\"" Mar 6 00:57:36.078069 containerd[2014]: time="2026-03-06T00:57:36.077660174Z" level=info msg="connecting to shim eeaa98d388d4cf836424370439fcd6e785aa706f7efea553c91429ee24ab1fc4" address="unix:///run/containerd/s/608f05ecac6695d604dce2f255d05175d654d16a2a3f7a4c47cddb8699ad2255" protocol=ttrpc version=3 Mar 6 00:57:36.111172 systemd[1]: Started cri-containerd-eeaa98d388d4cf836424370439fcd6e785aa706f7efea553c91429ee24ab1fc4.scope - libcontainer container eeaa98d388d4cf836424370439fcd6e785aa706f7efea553c91429ee24ab1fc4. 
Mar 6 00:57:36.176170 containerd[2014]: time="2026-03-06T00:57:36.176094890Z" level=info msg="StartContainer for \"eeaa98d388d4cf836424370439fcd6e785aa706f7efea553c91429ee24ab1fc4\" returns successfully" Mar 6 00:57:36.329327 containerd[2014]: time="2026-03-06T00:57:36.329169675Z" level=info msg="CreateContainer within sandbox \"220f9f15df5ea6d68bc46e62548d612b7f12451d9bf762feda04985c328a123a\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Mar 6 00:57:36.353955 containerd[2014]: time="2026-03-06T00:57:36.352503927Z" level=info msg="Container 368aa911634720d9b48da70755e269ff7abb485df092c3863c823d74f85355ec: CDI devices from CRI Config.CDIDevices: []" Mar 6 00:57:36.371991 containerd[2014]: time="2026-03-06T00:57:36.371929743Z" level=info msg="CreateContainer within sandbox \"220f9f15df5ea6d68bc46e62548d612b7f12451d9bf762feda04985c328a123a\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"368aa911634720d9b48da70755e269ff7abb485df092c3863c823d74f85355ec\"" Mar 6 00:57:36.376189 containerd[2014]: time="2026-03-06T00:57:36.376119339Z" level=info msg="StartContainer for \"368aa911634720d9b48da70755e269ff7abb485df092c3863c823d74f85355ec\"" Mar 6 00:57:36.380793 containerd[2014]: time="2026-03-06T00:57:36.380707035Z" level=info msg="connecting to shim 368aa911634720d9b48da70755e269ff7abb485df092c3863c823d74f85355ec" address="unix:///run/containerd/s/948c637c61c40d32c34848fc2a13e52f10fcfc7d1b858963ea2ccfeedd90eb56" protocol=ttrpc version=3 Mar 6 00:57:36.461175 systemd[1]: Started cri-containerd-368aa911634720d9b48da70755e269ff7abb485df092c3863c823d74f85355ec.scope - libcontainer container 368aa911634720d9b48da70755e269ff7abb485df092c3863c823d74f85355ec. Mar 6 00:57:36.572666 systemd[1]: cri-containerd-368aa911634720d9b48da70755e269ff7abb485df092c3863c823d74f85355ec.scope: Deactivated successfully. 
Mar 6 00:57:36.585795 containerd[2014]: time="2026-03-06T00:57:36.585593740Z" level=info msg="received container exit event container_id:\"368aa911634720d9b48da70755e269ff7abb485df092c3863c823d74f85355ec\" id:\"368aa911634720d9b48da70755e269ff7abb485df092c3863c823d74f85355ec\" pid:4053 exited_at:{seconds:1772758656 nanos:578254648}" Mar 6 00:57:36.618156 containerd[2014]: time="2026-03-06T00:57:36.618078533Z" level=info msg="StartContainer for \"368aa911634720d9b48da70755e269ff7abb485df092c3863c823d74f85355ec\" returns successfully" Mar 6 00:57:36.673218 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-368aa911634720d9b48da70755e269ff7abb485df092c3863c823d74f85355ec-rootfs.mount: Deactivated successfully. Mar 6 00:57:37.354551 containerd[2014]: time="2026-03-06T00:57:37.354460048Z" level=info msg="CreateContainer within sandbox \"220f9f15df5ea6d68bc46e62548d612b7f12451d9bf762feda04985c328a123a\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Mar 6 00:57:37.380295 containerd[2014]: time="2026-03-06T00:57:37.378060532Z" level=info msg="Container 181d27f274f0a563052efddd3013699c48129ed4712748046f40f7e98212ff43: CDI devices from CRI Config.CDIDevices: []" Mar 6 00:57:37.404456 containerd[2014]: time="2026-03-06T00:57:37.404362553Z" level=info msg="CreateContainer within sandbox \"220f9f15df5ea6d68bc46e62548d612b7f12451d9bf762feda04985c328a123a\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"181d27f274f0a563052efddd3013699c48129ed4712748046f40f7e98212ff43\"" Mar 6 00:57:37.405914 containerd[2014]: time="2026-03-06T00:57:37.405810917Z" level=info msg="StartContainer for \"181d27f274f0a563052efddd3013699c48129ed4712748046f40f7e98212ff43\"" Mar 6 00:57:37.409439 containerd[2014]: time="2026-03-06T00:57:37.409185137Z" level=info msg="connecting to shim 181d27f274f0a563052efddd3013699c48129ed4712748046f40f7e98212ff43" 
address="unix:///run/containerd/s/948c637c61c40d32c34848fc2a13e52f10fcfc7d1b858963ea2ccfeedd90eb56" protocol=ttrpc version=3 Mar 6 00:57:37.490150 systemd[1]: Started cri-containerd-181d27f274f0a563052efddd3013699c48129ed4712748046f40f7e98212ff43.scope - libcontainer container 181d27f274f0a563052efddd3013699c48129ed4712748046f40f7e98212ff43. Mar 6 00:57:37.521084 kubelet[3452]: I0306 00:57:37.520963 3452 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6f9c7c5859-m4zs8" podStartSLOduration=2.873079243 podStartE2EDuration="14.520939733s" podCreationTimestamp="2026-03-06 00:57:23 +0000 UTC" firstStartedPulling="2026-03-06 00:57:24.38776258 +0000 UTC m=+5.782440786" lastFinishedPulling="2026-03-06 00:57:36.03562307 +0000 UTC m=+17.430301276" observedRunningTime="2026-03-06 00:57:36.438333028 +0000 UTC m=+17.833011258" watchObservedRunningTime="2026-03-06 00:57:37.520939733 +0000 UTC m=+18.915617951" Mar 6 00:57:37.673876 containerd[2014]: time="2026-03-06T00:57:37.672852306Z" level=info msg="StartContainer for \"181d27f274f0a563052efddd3013699c48129ed4712748046f40f7e98212ff43\" returns successfully" Mar 6 00:57:38.194513 kubelet[3452]: I0306 00:57:38.194449 3452 kubelet_node_status.go:439] "Fast updating node status as it just became ready" Mar 6 00:57:38.328697 systemd[1]: Created slice kubepods-burstable-podc43975f8_6a2e_40d5_9dbf_3dee054604d4.slice - libcontainer container kubepods-burstable-podc43975f8_6a2e_40d5_9dbf_3dee054604d4.slice. 
Mar 6 00:57:38.341711 kubelet[3452]: I0306 00:57:38.341340 3452 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c43975f8-6a2e-40d5-9dbf-3dee054604d4-config-volume\") pod \"coredns-66bc5c9577-wvhgz\" (UID: \"c43975f8-6a2e-40d5-9dbf-3dee054604d4\") " pod="kube-system/coredns-66bc5c9577-wvhgz" Mar 6 00:57:38.341711 kubelet[3452]: I0306 00:57:38.341425 3452 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gcr55\" (UniqueName: \"kubernetes.io/projected/c43975f8-6a2e-40d5-9dbf-3dee054604d4-kube-api-access-gcr55\") pod \"coredns-66bc5c9577-wvhgz\" (UID: \"c43975f8-6a2e-40d5-9dbf-3dee054604d4\") " pod="kube-system/coredns-66bc5c9577-wvhgz" Mar 6 00:57:38.341711 kubelet[3452]: I0306 00:57:38.341467 3452 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87f10096-8b7d-42fe-9f2a-d390d62a8d1a-config-volume\") pod \"coredns-66bc5c9577-75j4h\" (UID: \"87f10096-8b7d-42fe-9f2a-d390d62a8d1a\") " pod="kube-system/coredns-66bc5c9577-75j4h" Mar 6 00:57:38.341711 kubelet[3452]: I0306 00:57:38.341506 3452 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jrjws\" (UniqueName: \"kubernetes.io/projected/87f10096-8b7d-42fe-9f2a-d390d62a8d1a-kube-api-access-jrjws\") pod \"coredns-66bc5c9577-75j4h\" (UID: \"87f10096-8b7d-42fe-9f2a-d390d62a8d1a\") " pod="kube-system/coredns-66bc5c9577-75j4h" Mar 6 00:57:38.350914 systemd[1]: Created slice kubepods-burstable-pod87f10096_8b7d_42fe_9f2a_d390d62a8d1a.slice - libcontainer container kubepods-burstable-pod87f10096_8b7d_42fe_9f2a_d390d62a8d1a.slice. 
Mar 6 00:57:38.646676 containerd[2014]: time="2026-03-06T00:57:38.646096867Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-wvhgz,Uid:c43975f8-6a2e-40d5-9dbf-3dee054604d4,Namespace:kube-system,Attempt:0,}" Mar 6 00:57:38.677925 containerd[2014]: time="2026-03-06T00:57:38.676762927Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-75j4h,Uid:87f10096-8b7d-42fe-9f2a-d390d62a8d1a,Namespace:kube-system,Attempt:0,}" Mar 6 00:57:41.595565 (udev-worker)[4182]: Network interface NamePolicy= disabled on kernel command line. Mar 6 00:57:41.598175 systemd-networkd[1895]: cilium_host: Link UP Mar 6 00:57:41.599978 systemd-networkd[1895]: cilium_net: Link UP Mar 6 00:57:41.601262 (udev-worker)[4217]: Network interface NamePolicy= disabled on kernel command line. Mar 6 00:57:41.602322 systemd-networkd[1895]: cilium_net: Gained carrier Mar 6 00:57:41.602800 systemd-networkd[1895]: cilium_host: Gained carrier Mar 6 00:57:41.609539 systemd-networkd[1895]: cilium_net: Gained IPv6LL Mar 6 00:57:41.680106 systemd-networkd[1895]: cilium_host: Gained IPv6LL Mar 6 00:57:41.792030 (udev-worker)[4229]: Network interface NamePolicy= disabled on kernel command line. Mar 6 00:57:41.802252 systemd-networkd[1895]: cilium_vxlan: Link UP Mar 6 00:57:41.802273 systemd-networkd[1895]: cilium_vxlan: Gained carrier Mar 6 00:57:42.432869 kernel: NET: Registered PF_ALG protocol family Mar 6 00:57:42.929010 systemd-networkd[1895]: cilium_vxlan: Gained IPv6LL Mar 6 00:57:44.098691 (udev-worker)[4227]: Network interface NamePolicy= disabled on kernel command line. Mar 6 00:57:44.101546 systemd-networkd[1895]: lxc_health: Link UP Mar 6 00:57:44.118081 systemd-networkd[1895]: lxc_health: Gained carrier Mar 6 00:57:44.753439 (udev-worker)[4546]: Network interface NamePolicy= disabled on kernel command line. 
Mar 6 00:57:44.771882 kernel: eth0: renamed from tmp27f0a Mar 6 00:57:44.773073 kernel: eth0: renamed from tmp5a144 Mar 6 00:57:44.776912 systemd-networkd[1895]: lxc934b572be6e8: Link UP Mar 6 00:57:44.782172 systemd-networkd[1895]: lxccfa3aaabaa6a: Link UP Mar 6 00:57:44.785819 systemd-networkd[1895]: lxc934b572be6e8: Gained carrier Mar 6 00:57:44.786203 systemd-networkd[1895]: lxccfa3aaabaa6a: Gained carrier Mar 6 00:57:45.744058 systemd-networkd[1895]: lxc_health: Gained IPv6LL Mar 6 00:57:45.858648 kubelet[3452]: I0306 00:57:45.858550 3452 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-p58kv" podStartSLOduration=14.570243893 podStartE2EDuration="22.858530019s" podCreationTimestamp="2026-03-06 00:57:23 +0000 UTC" firstStartedPulling="2026-03-06 00:57:24.08563871 +0000 UTC m=+5.480316928" lastFinishedPulling="2026-03-06 00:57:32.373924848 +0000 UTC m=+13.768603054" observedRunningTime="2026-03-06 00:57:38.429195306 +0000 UTC m=+19.823873536" watchObservedRunningTime="2026-03-06 00:57:45.858530019 +0000 UTC m=+27.253208237" Mar 6 00:57:46.576252 systemd-networkd[1895]: lxc934b572be6e8: Gained IPv6LL Mar 6 00:57:46.704216 systemd-networkd[1895]: lxccfa3aaabaa6a: Gained IPv6LL Mar 6 00:57:49.635202 ntpd[2211]: Listen normally on 6 cilium_host 192.168.0.44:123 Mar 6 00:57:49.635287 ntpd[2211]: Listen normally on 7 cilium_net [fe80::c0b6:43ff:fe90:887a%4]:123 Mar 6 00:57:49.635334 ntpd[2211]: Listen normally on 8 cilium_host [fe80::d4bf:77ff:fe2d:b565%5]:123 Mar 6 00:57:49.635384 ntpd[2211]: Listen normally on 9 cilium_vxlan [fe80::5c83:edff:feef:4ea2%6]:123 Mar 6 00:57:49.635427 ntpd[2211]: Listen normally on 10 lxc_health [fe80::b415:b7ff:fef1:4925%8]:123 Mar 6 00:57:49.635470 ntpd[2211]: Listen normally on 11 lxc934b572be6e8 [fe80::3400:c4ff:fe8b:1ab1%10]:123 Mar 6 00:57:49.635513 ntpd[2211]: Listen normally on 12 lxccfa3aaabaa6a [fe80::186b:bdff:feba:b8a5%12]:123
Mar 6 00:57:54.270181 containerd[2014]: time="2026-03-06T00:57:54.270088220Z" level=info msg="connecting to shim 5a1444203c5519c01bc7dd0fae270b257cd20d101b316b77975a1ae15f827aa7" address="unix:///run/containerd/s/d5b6191df8fe199d7fb9cac1b1fd62e94acd3462f88b0f677b3d29371be3d0b8" namespace=k8s.io protocol=ttrpc version=3 Mar 6 00:57:54.272462 containerd[2014]: time="2026-03-06T00:57:54.272308916Z" level=info msg="connecting to shim 27f0a22651637024ce8341d58c052c0d4a31a72bcf3b37d3388e7039ee7f30b3" address="unix:///run/containerd/s/63c07907fdf0c9f2ea5d8ffd20439aacf87b9c6c7090f646e492fd0ec0b36c5a" namespace=k8s.io protocol=ttrpc version=3 Mar 6 00:57:54.384273 systemd[1]: Started cri-containerd-27f0a22651637024ce8341d58c052c0d4a31a72bcf3b37d3388e7039ee7f30b3.scope - libcontainer container 27f0a22651637024ce8341d58c052c0d4a31a72bcf3b37d3388e7039ee7f30b3. Mar 6 00:57:54.398292 systemd[1]: Started cri-containerd-5a1444203c5519c01bc7dd0fae270b257cd20d101b316b77975a1ae15f827aa7.scope - libcontainer container 5a1444203c5519c01bc7dd0fae270b257cd20d101b316b77975a1ae15f827aa7.
Mar 6 00:57:54.517008 containerd[2014]: time="2026-03-06T00:57:54.516539782Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-wvhgz,Uid:c43975f8-6a2e-40d5-9dbf-3dee054604d4,Namespace:kube-system,Attempt:0,} returns sandbox id \"27f0a22651637024ce8341d58c052c0d4a31a72bcf3b37d3388e7039ee7f30b3\"" Mar 6 00:57:54.547257 containerd[2014]: time="2026-03-06T00:57:54.546922366Z" level=info msg="CreateContainer within sandbox \"27f0a22651637024ce8341d58c052c0d4a31a72bcf3b37d3388e7039ee7f30b3\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 6 00:57:54.582920 containerd[2014]: time="2026-03-06T00:57:54.579235642Z" level=info msg="Container 68828bacac1db5ee172d5326fe861822e04b38f9103fc2e1f4db45b75cc8e2cc: CDI devices from CRI Config.CDIDevices: []" Mar 6 00:57:54.604276 containerd[2014]: time="2026-03-06T00:57:54.604182862Z" level=info msg="CreateContainer within sandbox \"27f0a22651637024ce8341d58c052c0d4a31a72bcf3b37d3388e7039ee7f30b3\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"68828bacac1db5ee172d5326fe861822e04b38f9103fc2e1f4db45b75cc8e2cc\"" Mar 6 00:57:54.607374 containerd[2014]: time="2026-03-06T00:57:54.607142206Z" level=info msg="StartContainer for \"68828bacac1db5ee172d5326fe861822e04b38f9103fc2e1f4db45b75cc8e2cc\"" Mar 6 00:57:54.616894 containerd[2014]: time="2026-03-06T00:57:54.616720258Z" level=info msg="connecting to shim 68828bacac1db5ee172d5326fe861822e04b38f9103fc2e1f4db45b75cc8e2cc" address="unix:///run/containerd/s/63c07907fdf0c9f2ea5d8ffd20439aacf87b9c6c7090f646e492fd0ec0b36c5a" protocol=ttrpc version=3 Mar 6 00:57:54.644420 containerd[2014]: time="2026-03-06T00:57:54.644278426Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-75j4h,Uid:87f10096-8b7d-42fe-9f2a-d390d62a8d1a,Namespace:kube-system,Attempt:0,} returns sandbox id \"5a1444203c5519c01bc7dd0fae270b257cd20d101b316b77975a1ae15f827aa7\"" Mar 6 00:57:54.660437 containerd[2014]: 
time="2026-03-06T00:57:54.660373870Z" level=info msg="CreateContainer within sandbox \"5a1444203c5519c01bc7dd0fae270b257cd20d101b316b77975a1ae15f827aa7\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 6 00:57:54.680303 systemd[1]: Started cri-containerd-68828bacac1db5ee172d5326fe861822e04b38f9103fc2e1f4db45b75cc8e2cc.scope - libcontainer container 68828bacac1db5ee172d5326fe861822e04b38f9103fc2e1f4db45b75cc8e2cc. Mar 6 00:57:54.687102 containerd[2014]: time="2026-03-06T00:57:54.687027214Z" level=info msg="Container 83224f4265ad158123838ce8922d924e8fb70fb6c20e474545c91a3fd70813b5: CDI devices from CRI Config.CDIDevices: []" Mar 6 00:57:54.696754 containerd[2014]: time="2026-03-06T00:57:54.696651238Z" level=info msg="CreateContainer within sandbox \"5a1444203c5519c01bc7dd0fae270b257cd20d101b316b77975a1ae15f827aa7\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"83224f4265ad158123838ce8922d924e8fb70fb6c20e474545c91a3fd70813b5\"" Mar 6 00:57:54.698603 containerd[2014]: time="2026-03-06T00:57:54.698546398Z" level=info msg="StartContainer for \"83224f4265ad158123838ce8922d924e8fb70fb6c20e474545c91a3fd70813b5\"" Mar 6 00:57:54.700597 containerd[2014]: time="2026-03-06T00:57:54.700481782Z" level=info msg="connecting to shim 83224f4265ad158123838ce8922d924e8fb70fb6c20e474545c91a3fd70813b5" address="unix:///run/containerd/s/d5b6191df8fe199d7fb9cac1b1fd62e94acd3462f88b0f677b3d29371be3d0b8" protocol=ttrpc version=3 Mar 6 00:57:54.745191 systemd[1]: Started cri-containerd-83224f4265ad158123838ce8922d924e8fb70fb6c20e474545c91a3fd70813b5.scope - libcontainer container 83224f4265ad158123838ce8922d924e8fb70fb6c20e474545c91a3fd70813b5. 
Mar 6 00:57:54.788056 containerd[2014]: time="2026-03-06T00:57:54.787976531Z" level=info msg="StartContainer for \"68828bacac1db5ee172d5326fe861822e04b38f9103fc2e1f4db45b75cc8e2cc\" returns successfully" Mar 6 00:57:54.853119 containerd[2014]: time="2026-03-06T00:57:54.852923687Z" level=info msg="StartContainer for \"83224f4265ad158123838ce8922d924e8fb70fb6c20e474545c91a3fd70813b5\" returns successfully" Mar 6 00:57:55.213722 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3039429942.mount: Deactivated successfully. Mar 6 00:57:55.487985 kubelet[3452]: I0306 00:57:55.487273 3452 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-wvhgz" podStartSLOduration=32.487250482 podStartE2EDuration="32.487250482s" podCreationTimestamp="2026-03-06 00:57:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-06 00:57:55.482993026 +0000 UTC m=+36.877671256" watchObservedRunningTime="2026-03-06 00:57:55.487250482 +0000 UTC m=+36.881928880" Mar 6 00:57:55.549647 kubelet[3452]: I0306 00:57:55.549522 3452 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-75j4h" podStartSLOduration=32.549497951 podStartE2EDuration="32.549497951s" podCreationTimestamp="2026-03-06 00:57:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-06 00:57:55.545767379 +0000 UTC m=+36.940445597" watchObservedRunningTime="2026-03-06 00:57:55.549497951 +0000 UTC m=+36.944176157" Mar 6 00:58:06.439384 systemd[1]: Started sshd@7-172.31.16.50:22-68.220.241.50:34608.service - OpenSSH per-connection server daemon (68.220.241.50:34608). 
Mar 6 00:58:06.929901 sshd[4755]: Accepted publickey for core from 68.220.241.50 port 34608 ssh2: RSA SHA256:JA893NYNzIQjt7fMSNMP1D6ZXPb/xbJKtqqTrt+R/vM Mar 6 00:58:06.932022 sshd-session[4755]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 6 00:58:06.943014 systemd-logind[1985]: New session 8 of user core. Mar 6 00:58:06.955204 systemd[1]: Started session-8.scope - Session 8 of User core. Mar 6 00:58:07.351880 sshd[4758]: Connection closed by 68.220.241.50 port 34608 Mar 6 00:58:07.352914 sshd-session[4755]: pam_unix(sshd:session): session closed for user core Mar 6 00:58:07.365393 systemd[1]: sshd@7-172.31.16.50:22-68.220.241.50:34608.service: Deactivated successfully. Mar 6 00:58:07.372679 systemd[1]: session-8.scope: Deactivated successfully. Mar 6 00:58:07.379677 systemd-logind[1985]: Session 8 logged out. Waiting for processes to exit. Mar 6 00:58:07.384528 systemd-logind[1985]: Removed session 8. Mar 6 00:58:12.452421 systemd[1]: Started sshd@8-172.31.16.50:22-68.220.241.50:36228.service - OpenSSH per-connection server daemon (68.220.241.50:36228). Mar 6 00:58:12.927036 sshd[4773]: Accepted publickey for core from 68.220.241.50 port 36228 ssh2: RSA SHA256:JA893NYNzIQjt7fMSNMP1D6ZXPb/xbJKtqqTrt+R/vM Mar 6 00:58:12.930368 sshd-session[4773]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 6 00:58:12.942267 systemd-logind[1985]: New session 9 of user core. Mar 6 00:58:12.952247 systemd[1]: Started session-9.scope - Session 9 of User core. Mar 6 00:58:13.310860 sshd[4776]: Connection closed by 68.220.241.50 port 36228 Mar 6 00:58:13.311881 sshd-session[4773]: pam_unix(sshd:session): session closed for user core Mar 6 00:58:13.319894 systemd[1]: sshd@8-172.31.16.50:22-68.220.241.50:36228.service: Deactivated successfully. Mar 6 00:58:13.323961 systemd[1]: session-9.scope: Deactivated successfully. Mar 6 00:58:13.328677 systemd-logind[1985]: Session 9 logged out. Waiting for processes to exit. 
Mar 6 00:58:13.334067 systemd-logind[1985]: Removed session 9. Mar 6 00:58:18.413942 systemd[1]: Started sshd@9-172.31.16.50:22-68.220.241.50:36242.service - OpenSSH per-connection server daemon (68.220.241.50:36242). Mar 6 00:58:18.897703 sshd[4789]: Accepted publickey for core from 68.220.241.50 port 36242 ssh2: RSA SHA256:JA893NYNzIQjt7fMSNMP1D6ZXPb/xbJKtqqTrt+R/vM Mar 6 00:58:18.901255 sshd-session[4789]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 6 00:58:18.911019 systemd-logind[1985]: New session 10 of user core. Mar 6 00:58:18.920326 systemd[1]: Started session-10.scope - Session 10 of User core. Mar 6 00:58:19.283344 sshd[4792]: Connection closed by 68.220.241.50 port 36242 Mar 6 00:58:19.284377 sshd-session[4789]: pam_unix(sshd:session): session closed for user core Mar 6 00:58:19.293442 systemd[1]: sshd@9-172.31.16.50:22-68.220.241.50:36242.service: Deactivated successfully. Mar 6 00:58:19.298435 systemd[1]: session-10.scope: Deactivated successfully. Mar 6 00:58:19.301136 systemd-logind[1985]: Session 10 logged out. Waiting for processes to exit. Mar 6 00:58:19.304762 systemd-logind[1985]: Removed session 10. Mar 6 00:58:24.380086 systemd[1]: Started sshd@10-172.31.16.50:22-68.220.241.50:37924.service - OpenSSH per-connection server daemon (68.220.241.50:37924). Mar 6 00:58:24.853903 sshd[4807]: Accepted publickey for core from 68.220.241.50 port 37924 ssh2: RSA SHA256:JA893NYNzIQjt7fMSNMP1D6ZXPb/xbJKtqqTrt+R/vM Mar 6 00:58:24.856064 sshd-session[4807]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 6 00:58:24.866167 systemd-logind[1985]: New session 11 of user core. Mar 6 00:58:24.874314 systemd[1]: Started session-11.scope - Session 11 of User core. 
Mar 6 00:58:25.250520 sshd[4811]: Connection closed by 68.220.241.50 port 37924 Mar 6 00:58:25.250359 sshd-session[4807]: pam_unix(sshd:session): session closed for user core Mar 6 00:58:25.260040 systemd[1]: sshd@10-172.31.16.50:22-68.220.241.50:37924.service: Deactivated successfully. Mar 6 00:58:25.266591 systemd[1]: session-11.scope: Deactivated successfully. Mar 6 00:58:25.268940 systemd-logind[1985]: Session 11 logged out. Waiting for processes to exit. Mar 6 00:58:25.274783 systemd-logind[1985]: Removed session 11. Mar 6 00:58:30.345271 systemd[1]: Started sshd@11-172.31.16.50:22-68.220.241.50:37928.service - OpenSSH per-connection server daemon (68.220.241.50:37928). Mar 6 00:58:30.811217 sshd[4826]: Accepted publickey for core from 68.220.241.50 port 37928 ssh2: RSA SHA256:JA893NYNzIQjt7fMSNMP1D6ZXPb/xbJKtqqTrt+R/vM Mar 6 00:58:30.814503 sshd-session[4826]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 6 00:58:30.824654 systemd-logind[1985]: New session 12 of user core. Mar 6 00:58:30.834146 systemd[1]: Started session-12.scope - Session 12 of User core. Mar 6 00:58:31.183085 sshd[4829]: Connection closed by 68.220.241.50 port 37928 Mar 6 00:58:31.184211 sshd-session[4826]: pam_unix(sshd:session): session closed for user core Mar 6 00:58:31.193155 systemd[1]: sshd@11-172.31.16.50:22-68.220.241.50:37928.service: Deactivated successfully. Mar 6 00:58:31.199788 systemd[1]: session-12.scope: Deactivated successfully. Mar 6 00:58:31.203952 systemd-logind[1985]: Session 12 logged out. Waiting for processes to exit. Mar 6 00:58:31.207789 systemd-logind[1985]: Removed session 12. Mar 6 00:58:31.283168 systemd[1]: Started sshd@12-172.31.16.50:22-68.220.241.50:37932.service - OpenSSH per-connection server daemon (68.220.241.50:37932). 
Mar 6 00:58:31.746523 sshd[4841]: Accepted publickey for core from 68.220.241.50 port 37932 ssh2: RSA SHA256:JA893NYNzIQjt7fMSNMP1D6ZXPb/xbJKtqqTrt+R/vM
Mar 6 00:58:31.749163 sshd-session[4841]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 6 00:58:31.762220 systemd-logind[1985]: New session 13 of user core.
Mar 6 00:58:31.768115 systemd[1]: Started session-13.scope - Session 13 of User core.
Mar 6 00:58:32.199682 sshd[4844]: Connection closed by 68.220.241.50 port 37932
Mar 6 00:58:32.200781 sshd-session[4841]: pam_unix(sshd:session): session closed for user core
Mar 6 00:58:32.212049 systemd[1]: sshd@12-172.31.16.50:22-68.220.241.50:37932.service: Deactivated successfully.
Mar 6 00:58:32.217458 systemd[1]: session-13.scope: Deactivated successfully.
Mar 6 00:58:32.221992 systemd-logind[1985]: Session 13 logged out. Waiting for processes to exit.
Mar 6 00:58:32.226734 systemd-logind[1985]: Removed session 13.
Mar 6 00:58:32.301438 systemd[1]: Started sshd@13-172.31.16.50:22-68.220.241.50:38722.service - OpenSSH per-connection server daemon (68.220.241.50:38722).
Mar 6 00:58:32.796559 sshd[4854]: Accepted publickey for core from 68.220.241.50 port 38722 ssh2: RSA SHA256:JA893NYNzIQjt7fMSNMP1D6ZXPb/xbJKtqqTrt+R/vM
Mar 6 00:58:32.798961 sshd-session[4854]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 6 00:58:32.806940 systemd-logind[1985]: New session 14 of user core.
Mar 6 00:58:32.820239 systemd[1]: Started session-14.scope - Session 14 of User core.
Mar 6 00:58:33.181708 sshd[4857]: Connection closed by 68.220.241.50 port 38722
Mar 6 00:58:33.183006 sshd-session[4854]: pam_unix(sshd:session): session closed for user core
Mar 6 00:58:33.190285 systemd[1]: sshd@13-172.31.16.50:22-68.220.241.50:38722.service: Deactivated successfully.
Mar 6 00:58:33.194281 systemd[1]: session-14.scope: Deactivated successfully.
Mar 6 00:58:33.196754 systemd-logind[1985]: Session 14 logged out. Waiting for processes to exit.
Mar 6 00:58:33.199892 systemd-logind[1985]: Removed session 14.
Mar 6 00:58:38.276579 systemd[1]: Started sshd@14-172.31.16.50:22-68.220.241.50:38732.service - OpenSSH per-connection server daemon (68.220.241.50:38732).
Mar 6 00:58:38.763095 sshd[4870]: Accepted publickey for core from 68.220.241.50 port 38732 ssh2: RSA SHA256:JA893NYNzIQjt7fMSNMP1D6ZXPb/xbJKtqqTrt+R/vM
Mar 6 00:58:38.765732 sshd-session[4870]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 6 00:58:38.773662 systemd-logind[1985]: New session 15 of user core.
Mar 6 00:58:38.783214 systemd[1]: Started session-15.scope - Session 15 of User core.
Mar 6 00:58:39.153535 sshd[4873]: Connection closed by 68.220.241.50 port 38732
Mar 6 00:58:39.153256 sshd-session[4870]: pam_unix(sshd:session): session closed for user core
Mar 6 00:58:39.164322 systemd[1]: sshd@14-172.31.16.50:22-68.220.241.50:38732.service: Deactivated successfully.
Mar 6 00:58:39.169729 systemd[1]: session-15.scope: Deactivated successfully.
Mar 6 00:58:39.173401 systemd-logind[1985]: Session 15 logged out. Waiting for processes to exit.
Mar 6 00:58:39.177950 systemd-logind[1985]: Removed session 15.
Mar 6 00:58:44.253387 systemd[1]: Started sshd@15-172.31.16.50:22-68.220.241.50:52374.service - OpenSSH per-connection server daemon (68.220.241.50:52374).
Mar 6 00:58:44.724630 sshd[4886]: Accepted publickey for core from 68.220.241.50 port 52374 ssh2: RSA SHA256:JA893NYNzIQjt7fMSNMP1D6ZXPb/xbJKtqqTrt+R/vM
Mar 6 00:58:44.727129 sshd-session[4886]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 6 00:58:44.735799 systemd-logind[1985]: New session 16 of user core.
Mar 6 00:58:44.748199 systemd[1]: Started session-16.scope - Session 16 of User core.
Mar 6 00:58:45.109994 sshd[4889]: Connection closed by 68.220.241.50 port 52374
Mar 6 00:58:45.110933 sshd-session[4886]: pam_unix(sshd:session): session closed for user core
Mar 6 00:58:45.119525 systemd[1]: sshd@15-172.31.16.50:22-68.220.241.50:52374.service: Deactivated successfully.
Mar 6 00:58:45.125231 systemd[1]: session-16.scope: Deactivated successfully.
Mar 6 00:58:45.127561 systemd-logind[1985]: Session 16 logged out. Waiting for processes to exit.
Mar 6 00:58:45.131282 systemd-logind[1985]: Removed session 16.
Mar 6 00:58:50.204458 systemd[1]: Started sshd@16-172.31.16.50:22-68.220.241.50:52382.service - OpenSSH per-connection server daemon (68.220.241.50:52382).
Mar 6 00:58:50.682968 sshd[4901]: Accepted publickey for core from 68.220.241.50 port 52382 ssh2: RSA SHA256:JA893NYNzIQjt7fMSNMP1D6ZXPb/xbJKtqqTrt+R/vM
Mar 6 00:58:50.685002 sshd-session[4901]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 6 00:58:50.694286 systemd-logind[1985]: New session 17 of user core.
Mar 6 00:58:50.702194 systemd[1]: Started session-17.scope - Session 17 of User core.
Mar 6 00:58:51.046902 sshd[4904]: Connection closed by 68.220.241.50 port 52382
Mar 6 00:58:51.049053 sshd-session[4901]: pam_unix(sshd:session): session closed for user core
Mar 6 00:58:51.057101 systemd[1]: sshd@16-172.31.16.50:22-68.220.241.50:52382.service: Deactivated successfully.
Mar 6 00:58:51.064645 systemd[1]: session-17.scope: Deactivated successfully.
Mar 6 00:58:51.067179 systemd-logind[1985]: Session 17 logged out. Waiting for processes to exit.
Mar 6 00:58:51.070634 systemd-logind[1985]: Removed session 17.
Mar 6 00:58:51.143563 systemd[1]: Started sshd@17-172.31.16.50:22-68.220.241.50:52384.service - OpenSSH per-connection server daemon (68.220.241.50:52384).
Mar 6 00:58:51.623596 sshd[4916]: Accepted publickey for core from 68.220.241.50 port 52384 ssh2: RSA SHA256:JA893NYNzIQjt7fMSNMP1D6ZXPb/xbJKtqqTrt+R/vM
Mar 6 00:58:51.625946 sshd-session[4916]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 6 00:58:51.633935 systemd-logind[1985]: New session 18 of user core.
Mar 6 00:58:51.642134 systemd[1]: Started session-18.scope - Session 18 of User core.
Mar 6 00:58:52.070664 sshd[4919]: Connection closed by 68.220.241.50 port 52384
Mar 6 00:58:52.070536 sshd-session[4916]: pam_unix(sshd:session): session closed for user core
Mar 6 00:58:52.079424 systemd[1]: sshd@17-172.31.16.50:22-68.220.241.50:52384.service: Deactivated successfully.
Mar 6 00:58:52.080034 systemd-logind[1985]: Session 18 logged out. Waiting for processes to exit.
Mar 6 00:58:52.085130 systemd[1]: session-18.scope: Deactivated successfully.
Mar 6 00:58:52.091648 systemd-logind[1985]: Removed session 18.
Mar 6 00:58:52.164759 systemd[1]: Started sshd@18-172.31.16.50:22-68.220.241.50:38068.service - OpenSSH per-connection server daemon (68.220.241.50:38068).
Mar 6 00:58:52.630885 sshd[4929]: Accepted publickey for core from 68.220.241.50 port 38068 ssh2: RSA SHA256:JA893NYNzIQjt7fMSNMP1D6ZXPb/xbJKtqqTrt+R/vM
Mar 6 00:58:52.632566 sshd-session[4929]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 6 00:58:52.640871 systemd-logind[1985]: New session 19 of user core.
Mar 6 00:58:52.657389 systemd[1]: Started session-19.scope - Session 19 of User core.
Mar 6 00:58:53.691946 sshd[4932]: Connection closed by 68.220.241.50 port 38068
Mar 6 00:58:53.693166 sshd-session[4929]: pam_unix(sshd:session): session closed for user core
Mar 6 00:58:53.701364 systemd[1]: sshd@18-172.31.16.50:22-68.220.241.50:38068.service: Deactivated successfully.
Mar 6 00:58:53.706977 systemd[1]: session-19.scope: Deactivated successfully.
Mar 6 00:58:53.708991 systemd-logind[1985]: Session 19 logged out. Waiting for processes to exit.
Mar 6 00:58:53.712373 systemd-logind[1985]: Removed session 19.
Mar 6 00:58:53.787966 systemd[1]: Started sshd@19-172.31.16.50:22-68.220.241.50:38074.service - OpenSSH per-connection server daemon (68.220.241.50:38074).
Mar 6 00:58:54.262993 sshd[4948]: Accepted publickey for core from 68.220.241.50 port 38074 ssh2: RSA SHA256:JA893NYNzIQjt7fMSNMP1D6ZXPb/xbJKtqqTrt+R/vM
Mar 6 00:58:54.265462 sshd-session[4948]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 6 00:58:54.273625 systemd-logind[1985]: New session 20 of user core.
Mar 6 00:58:54.282143 systemd[1]: Started session-20.scope - Session 20 of User core.
Mar 6 00:58:54.880107 sshd[4953]: Connection closed by 68.220.241.50 port 38074
Mar 6 00:58:54.880613 sshd-session[4948]: pam_unix(sshd:session): session closed for user core
Mar 6 00:58:54.889224 systemd[1]: sshd@19-172.31.16.50:22-68.220.241.50:38074.service: Deactivated successfully.
Mar 6 00:58:54.894190 systemd[1]: session-20.scope: Deactivated successfully.
Mar 6 00:58:54.897006 systemd-logind[1985]: Session 20 logged out. Waiting for processes to exit.
Mar 6 00:58:54.900601 systemd-logind[1985]: Removed session 20.
Mar 6 00:58:54.978438 systemd[1]: Started sshd@20-172.31.16.50:22-68.220.241.50:38090.service - OpenSSH per-connection server daemon (68.220.241.50:38090).
Mar 6 00:58:55.441914 sshd[4965]: Accepted publickey for core from 68.220.241.50 port 38090 ssh2: RSA SHA256:JA893NYNzIQjt7fMSNMP1D6ZXPb/xbJKtqqTrt+R/vM
Mar 6 00:58:55.443552 sshd-session[4965]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 6 00:58:55.452946 systemd-logind[1985]: New session 21 of user core.
Mar 6 00:58:55.465278 systemd[1]: Started session-21.scope - Session 21 of User core.
Mar 6 00:58:55.805093 sshd[4970]: Connection closed by 68.220.241.50 port 38090
Mar 6 00:58:55.804979 sshd-session[4965]: pam_unix(sshd:session): session closed for user core
Mar 6 00:58:55.813467 systemd[1]: sshd@20-172.31.16.50:22-68.220.241.50:38090.service: Deactivated successfully.
Mar 6 00:58:55.819007 systemd[1]: session-21.scope: Deactivated successfully.
Mar 6 00:58:55.822895 systemd-logind[1985]: Session 21 logged out. Waiting for processes to exit.
Mar 6 00:58:55.825985 systemd-logind[1985]: Removed session 21.
Mar 6 00:59:00.907626 systemd[1]: Started sshd@21-172.31.16.50:22-68.220.241.50:38094.service - OpenSSH per-connection server daemon (68.220.241.50:38094).
Mar 6 00:59:01.393180 sshd[4985]: Accepted publickey for core from 68.220.241.50 port 38094 ssh2: RSA SHA256:JA893NYNzIQjt7fMSNMP1D6ZXPb/xbJKtqqTrt+R/vM
Mar 6 00:59:01.396588 sshd-session[4985]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 6 00:59:01.406351 systemd-logind[1985]: New session 22 of user core.
Mar 6 00:59:01.415203 systemd[1]: Started session-22.scope - Session 22 of User core.
Mar 6 00:59:01.774618 sshd[4988]: Connection closed by 68.220.241.50 port 38094
Mar 6 00:59:01.775891 sshd-session[4985]: pam_unix(sshd:session): session closed for user core
Mar 6 00:59:01.784680 systemd[1]: sshd@21-172.31.16.50:22-68.220.241.50:38094.service: Deactivated successfully.
Mar 6 00:59:01.791501 systemd[1]: session-22.scope: Deactivated successfully.
Mar 6 00:59:01.794220 systemd-logind[1985]: Session 22 logged out. Waiting for processes to exit.
Mar 6 00:59:01.798152 systemd-logind[1985]: Removed session 22.
Mar 6 00:59:06.875461 systemd[1]: Started sshd@22-172.31.16.50:22-68.220.241.50:46266.service - OpenSSH per-connection server daemon (68.220.241.50:46266).
Mar 6 00:59:07.345103 sshd[5000]: Accepted publickey for core from 68.220.241.50 port 46266 ssh2: RSA SHA256:JA893NYNzIQjt7fMSNMP1D6ZXPb/xbJKtqqTrt+R/vM
Mar 6 00:59:07.347504 sshd-session[5000]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 6 00:59:07.359240 systemd-logind[1985]: New session 23 of user core.
Mar 6 00:59:07.363297 systemd[1]: Started session-23.scope - Session 23 of User core.
Mar 6 00:59:07.714520 sshd[5003]: Connection closed by 68.220.241.50 port 46266
Mar 6 00:59:07.715374 sshd-session[5000]: pam_unix(sshd:session): session closed for user core
Mar 6 00:59:07.724087 systemd-logind[1985]: Session 23 logged out. Waiting for processes to exit.
Mar 6 00:59:07.725713 systemd[1]: sshd@22-172.31.16.50:22-68.220.241.50:46266.service: Deactivated successfully.
Mar 6 00:59:07.729254 systemd[1]: session-23.scope: Deactivated successfully.
Mar 6 00:59:07.735762 systemd-logind[1985]: Removed session 23.
Mar 6 00:59:12.808697 systemd[1]: Started sshd@23-172.31.16.50:22-68.220.241.50:44872.service - OpenSSH per-connection server daemon (68.220.241.50:44872).
Mar 6 00:59:13.276510 sshd[5015]: Accepted publickey for core from 68.220.241.50 port 44872 ssh2: RSA SHA256:JA893NYNzIQjt7fMSNMP1D6ZXPb/xbJKtqqTrt+R/vM
Mar 6 00:59:13.278693 sshd-session[5015]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 6 00:59:13.286646 systemd-logind[1985]: New session 24 of user core.
Mar 6 00:59:13.292099 systemd[1]: Started session-24.scope - Session 24 of User core.
Mar 6 00:59:13.632022 sshd[5018]: Connection closed by 68.220.241.50 port 44872
Mar 6 00:59:13.632882 sshd-session[5015]: pam_unix(sshd:session): session closed for user core
Mar 6 00:59:13.640812 systemd[1]: sshd@23-172.31.16.50:22-68.220.241.50:44872.service: Deactivated successfully.
Mar 6 00:59:13.645308 systemd[1]: session-24.scope: Deactivated successfully.
Mar 6 00:59:13.647367 systemd-logind[1985]: Session 24 logged out. Waiting for processes to exit.
Mar 6 00:59:13.650593 systemd-logind[1985]: Removed session 24.
Mar 6 00:59:13.726676 systemd[1]: Started sshd@24-172.31.16.50:22-68.220.241.50:44884.service - OpenSSH per-connection server daemon (68.220.241.50:44884).
Mar 6 00:59:14.199598 sshd[5030]: Accepted publickey for core from 68.220.241.50 port 44884 ssh2: RSA SHA256:JA893NYNzIQjt7fMSNMP1D6ZXPb/xbJKtqqTrt+R/vM
Mar 6 00:59:14.202661 sshd-session[5030]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 6 00:59:14.212548 systemd-logind[1985]: New session 25 of user core.
Mar 6 00:59:14.217167 systemd[1]: Started session-25.scope - Session 25 of User core.
Mar 6 00:59:17.902896 containerd[2014]: time="2026-03-06T00:59:17.902139200Z" level=info msg="StopContainer for \"eeaa98d388d4cf836424370439fcd6e785aa706f7efea553c91429ee24ab1fc4\" with timeout 30 (s)"
Mar 6 00:59:17.907819 containerd[2014]: time="2026-03-06T00:59:17.907738484Z" level=info msg="Stop container \"eeaa98d388d4cf836424370439fcd6e785aa706f7efea553c91429ee24ab1fc4\" with signal terminated"
Mar 6 00:59:17.976132 containerd[2014]: time="2026-03-06T00:59:17.975581420Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Mar 6 00:59:17.991515 systemd[1]: cri-containerd-eeaa98d388d4cf836424370439fcd6e785aa706f7efea553c91429ee24ab1fc4.scope: Deactivated successfully.
Mar 6 00:59:18.003730 containerd[2014]: time="2026-03-06T00:59:18.003653968Z" level=info msg="StopContainer for \"181d27f274f0a563052efddd3013699c48129ed4712748046f40f7e98212ff43\" with timeout 2 (s)"
Mar 6 00:59:18.007962 containerd[2014]: time="2026-03-06T00:59:18.007587904Z" level=info msg="Stop container \"181d27f274f0a563052efddd3013699c48129ed4712748046f40f7e98212ff43\" with signal terminated"
Mar 6 00:59:18.011008 containerd[2014]: time="2026-03-06T00:59:18.010937776Z" level=info msg="received container exit event container_id:\"eeaa98d388d4cf836424370439fcd6e785aa706f7efea553c91429ee24ab1fc4\" id:\"eeaa98d388d4cf836424370439fcd6e785aa706f7efea553c91429ee24ab1fc4\" pid:4020 exited_at:{seconds:1772758758 nanos:8502652}"
Mar 6 00:59:18.054317 systemd-networkd[1895]: lxc_health: Link DOWN
Mar 6 00:59:18.054330 systemd-networkd[1895]: lxc_health: Lost carrier
Mar 6 00:59:18.079566 systemd[1]: cri-containerd-181d27f274f0a563052efddd3013699c48129ed4712748046f40f7e98212ff43.scope: Deactivated successfully.
Mar 6 00:59:18.082048 systemd[1]: cri-containerd-181d27f274f0a563052efddd3013699c48129ed4712748046f40f7e98212ff43.scope: Consumed 16.056s CPU time, 125M memory peak, 128K read from disk, 12.9M written to disk.
Mar 6 00:59:18.094305 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-eeaa98d388d4cf836424370439fcd6e785aa706f7efea553c91429ee24ab1fc4-rootfs.mount: Deactivated successfully.
Mar 6 00:59:18.095506 containerd[2014]: time="2026-03-06T00:59:18.095413277Z" level=info msg="received container exit event container_id:\"181d27f274f0a563052efddd3013699c48129ed4712748046f40f7e98212ff43\" id:\"181d27f274f0a563052efddd3013699c48129ed4712748046f40f7e98212ff43\" pid:4090 exited_at:{seconds:1772758758 nanos:94921313}"
Mar 6 00:59:18.128219 containerd[2014]: time="2026-03-06T00:59:18.128114873Z" level=info msg="StopContainer for \"eeaa98d388d4cf836424370439fcd6e785aa706f7efea553c91429ee24ab1fc4\" returns successfully"
Mar 6 00:59:18.131177 containerd[2014]: time="2026-03-06T00:59:18.131115269Z" level=info msg="StopPodSandbox for \"c286795b920e5251887614376d158a200a6fb01b51554c590c7ca8bfb63a8015\""
Mar 6 00:59:18.131324 containerd[2014]: time="2026-03-06T00:59:18.131222057Z" level=info msg="Container to stop \"eeaa98d388d4cf836424370439fcd6e785aa706f7efea553c91429ee24ab1fc4\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 6 00:59:18.145892 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-181d27f274f0a563052efddd3013699c48129ed4712748046f40f7e98212ff43-rootfs.mount: Deactivated successfully.
Mar 6 00:59:18.162983 containerd[2014]: time="2026-03-06T00:59:18.161560901Z" level=info msg="StopContainer for \"181d27f274f0a563052efddd3013699c48129ed4712748046f40f7e98212ff43\" returns successfully"
Mar 6 00:59:18.162231 systemd[1]: cri-containerd-c286795b920e5251887614376d158a200a6fb01b51554c590c7ca8bfb63a8015.scope: Deactivated successfully.
Mar 6 00:59:18.166058 containerd[2014]: time="2026-03-06T00:59:18.164708537Z" level=info msg="StopPodSandbox for \"220f9f15df5ea6d68bc46e62548d612b7f12451d9bf762feda04985c328a123a\""
Mar 6 00:59:18.166058 containerd[2014]: time="2026-03-06T00:59:18.164854133Z" level=info msg="Container to stop \"f3e18408ff5a8c5fc5c38a1604b5c9dbc14980108521ca89f96eb8565138241c\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 6 00:59:18.166058 containerd[2014]: time="2026-03-06T00:59:18.165135209Z" level=info msg="Container to stop \"368aa911634720d9b48da70755e269ff7abb485df092c3863c823d74f85355ec\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 6 00:59:18.166058 containerd[2014]: time="2026-03-06T00:59:18.165278573Z" level=info msg="Container to stop \"1bdade11c868c25af4bd27c7ce049f16f136a873392d7d5e9a4fc79ef0f76fea\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 6 00:59:18.166058 containerd[2014]: time="2026-03-06T00:59:18.165318257Z" level=info msg="Container to stop \"8b64ebc9f2e36e3a9fd07f8ad8b53cc08cc9d27cb150e1fe5e63853426146d59\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 6 00:59:18.166058 containerd[2014]: time="2026-03-06T00:59:18.165342785Z" level=info msg="Container to stop \"181d27f274f0a563052efddd3013699c48129ed4712748046f40f7e98212ff43\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 6 00:59:18.168528 containerd[2014]: time="2026-03-06T00:59:18.168444065Z" level=info msg="received sandbox exit event container_id:\"c286795b920e5251887614376d158a200a6fb01b51554c590c7ca8bfb63a8015\" id:\"c286795b920e5251887614376d158a200a6fb01b51554c590c7ca8bfb63a8015\" exit_status:137 exited_at:{seconds:1772758758 nanos:167974769}" monitor_name=podsandbox
Mar 6 00:59:18.187390 systemd[1]: cri-containerd-220f9f15df5ea6d68bc46e62548d612b7f12451d9bf762feda04985c328a123a.scope: Deactivated successfully.
Mar 6 00:59:18.196130 containerd[2014]: time="2026-03-06T00:59:18.196023449Z" level=info msg="received sandbox exit event container_id:\"220f9f15df5ea6d68bc46e62548d612b7f12451d9bf762feda04985c328a123a\" id:\"220f9f15df5ea6d68bc46e62548d612b7f12451d9bf762feda04985c328a123a\" exit_status:137 exited_at:{seconds:1772758758 nanos:194882525}" monitor_name=podsandbox
Mar 6 00:59:18.230724 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c286795b920e5251887614376d158a200a6fb01b51554c590c7ca8bfb63a8015-rootfs.mount: Deactivated successfully.
Mar 6 00:59:18.247949 containerd[2014]: time="2026-03-06T00:59:18.245087285Z" level=info msg="shim disconnected" id=c286795b920e5251887614376d158a200a6fb01b51554c590c7ca8bfb63a8015 namespace=k8s.io
Mar 6 00:59:18.247949 containerd[2014]: time="2026-03-06T00:59:18.245147093Z" level=warning msg="cleaning up after shim disconnected" id=c286795b920e5251887614376d158a200a6fb01b51554c590c7ca8bfb63a8015 namespace=k8s.io
Mar 6 00:59:18.247949 containerd[2014]: time="2026-03-06T00:59:18.245199101Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 6 00:59:18.266688 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-220f9f15df5ea6d68bc46e62548d612b7f12451d9bf762feda04985c328a123a-rootfs.mount: Deactivated successfully.
Mar 6 00:59:18.273866 containerd[2014]: time="2026-03-06T00:59:18.273771762Z" level=info msg="shim disconnected" id=220f9f15df5ea6d68bc46e62548d612b7f12451d9bf762feda04985c328a123a namespace=k8s.io
Mar 6 00:59:18.274118 containerd[2014]: time="2026-03-06T00:59:18.273851634Z" level=warning msg="cleaning up after shim disconnected" id=220f9f15df5ea6d68bc46e62548d612b7f12451d9bf762feda04985c328a123a namespace=k8s.io
Mar 6 00:59:18.274118 containerd[2014]: time="2026-03-06T00:59:18.273933246Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 6 00:59:18.286801 containerd[2014]: time="2026-03-06T00:59:18.286742286Z" level=info msg="received sandbox container exit event sandbox_id:\"c286795b920e5251887614376d158a200a6fb01b51554c590c7ca8bfb63a8015\" exit_status:137 exited_at:{seconds:1772758758 nanos:167974769}" monitor_name=criService
Mar 6 00:59:18.289857 containerd[2014]: time="2026-03-06T00:59:18.288593430Z" level=info msg="TearDown network for sandbox \"c286795b920e5251887614376d158a200a6fb01b51554c590c7ca8bfb63a8015\" successfully"
Mar 6 00:59:18.289857 containerd[2014]: time="2026-03-06T00:59:18.288647154Z" level=info msg="StopPodSandbox for \"c286795b920e5251887614376d158a200a6fb01b51554c590c7ca8bfb63a8015\" returns successfully"
Mar 6 00:59:18.291741 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c286795b920e5251887614376d158a200a6fb01b51554c590c7ca8bfb63a8015-shm.mount: Deactivated successfully.
Mar 6 00:59:18.313381 containerd[2014]: time="2026-03-06T00:59:18.313234758Z" level=info msg="received sandbox container exit event sandbox_id:\"220f9f15df5ea6d68bc46e62548d612b7f12451d9bf762feda04985c328a123a\" exit_status:137 exited_at:{seconds:1772758758 nanos:194882525}" monitor_name=criService
Mar 6 00:59:18.314330 containerd[2014]: time="2026-03-06T00:59:18.314217858Z" level=info msg="TearDown network for sandbox \"220f9f15df5ea6d68bc46e62548d612b7f12451d9bf762feda04985c328a123a\" successfully"
Mar 6 00:59:18.314330 containerd[2014]: time="2026-03-06T00:59:18.314270838Z" level=info msg="StopPodSandbox for \"220f9f15df5ea6d68bc46e62548d612b7f12451d9bf762feda04985c328a123a\" returns successfully"
Mar 6 00:59:18.470088 kubelet[3452]: I0306 00:59:18.470018 3452 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/3c7e02fc-17fa-467d-90bb-b07ba1446476-hubble-tls\") pod \"3c7e02fc-17fa-467d-90bb-b07ba1446476\" (UID: \"3c7e02fc-17fa-467d-90bb-b07ba1446476\") "
Mar 6 00:59:18.470088 kubelet[3452]: I0306 00:59:18.470089 3452 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/3c7e02fc-17fa-467d-90bb-b07ba1446476-hostproc\") pod \"3c7e02fc-17fa-467d-90bb-b07ba1446476\" (UID: \"3c7e02fc-17fa-467d-90bb-b07ba1446476\") "
Mar 6 00:59:18.470738 kubelet[3452]: I0306 00:59:18.470130 3452 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/3c7e02fc-17fa-467d-90bb-b07ba1446476-cilium-cgroup\") pod \"3c7e02fc-17fa-467d-90bb-b07ba1446476\" (UID: \"3c7e02fc-17fa-467d-90bb-b07ba1446476\") "
Mar 6 00:59:18.470738 kubelet[3452]: I0306 00:59:18.470172 3452 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3c7e02fc-17fa-467d-90bb-b07ba1446476-cilium-config-path\") pod \"3c7e02fc-17fa-467d-90bb-b07ba1446476\" (UID: \"3c7e02fc-17fa-467d-90bb-b07ba1446476\") "
Mar 6 00:59:18.470738 kubelet[3452]: I0306 00:59:18.470212 3452 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/3c7e02fc-17fa-467d-90bb-b07ba1446476-host-proc-sys-kernel\") pod \"3c7e02fc-17fa-467d-90bb-b07ba1446476\" (UID: \"3c7e02fc-17fa-467d-90bb-b07ba1446476\") "
Mar 6 00:59:18.470738 kubelet[3452]: I0306 00:59:18.470253 3452 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wnrlx\" (UniqueName: \"kubernetes.io/projected/7daf92f4-fee5-4a78-9965-479ba34a0e1c-kube-api-access-wnrlx\") pod \"7daf92f4-fee5-4a78-9965-479ba34a0e1c\" (UID: \"7daf92f4-fee5-4a78-9965-479ba34a0e1c\") "
Mar 6 00:59:18.470738 kubelet[3452]: I0306 00:59:18.470286 3452 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3c7e02fc-17fa-467d-90bb-b07ba1446476-lib-modules\") pod \"3c7e02fc-17fa-467d-90bb-b07ba1446476\" (UID: \"3c7e02fc-17fa-467d-90bb-b07ba1446476\") "
Mar 6 00:59:18.470738 kubelet[3452]: I0306 00:59:18.470328 3452 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fzmpr\" (UniqueName: \"kubernetes.io/projected/3c7e02fc-17fa-467d-90bb-b07ba1446476-kube-api-access-fzmpr\") pod \"3c7e02fc-17fa-467d-90bb-b07ba1446476\" (UID: \"3c7e02fc-17fa-467d-90bb-b07ba1446476\") "
Mar 6 00:59:18.471120 kubelet[3452]: I0306 00:59:18.470365 3452 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7daf92f4-fee5-4a78-9965-479ba34a0e1c-cilium-config-path\") pod \"7daf92f4-fee5-4a78-9965-479ba34a0e1c\" (UID: \"7daf92f4-fee5-4a78-9965-479ba34a0e1c\") "
Mar 6 00:59:18.471120 kubelet[3452]: I0306 00:59:18.470406 3452 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/3c7e02fc-17fa-467d-90bb-b07ba1446476-clustermesh-secrets\") pod \"3c7e02fc-17fa-467d-90bb-b07ba1446476\" (UID: \"3c7e02fc-17fa-467d-90bb-b07ba1446476\") "
Mar 6 00:59:18.471120 kubelet[3452]: I0306 00:59:18.470439 3452 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/3c7e02fc-17fa-467d-90bb-b07ba1446476-host-proc-sys-net\") pod \"3c7e02fc-17fa-467d-90bb-b07ba1446476\" (UID: \"3c7e02fc-17fa-467d-90bb-b07ba1446476\") "
Mar 6 00:59:18.471120 kubelet[3452]: I0306 00:59:18.470472 3452 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/3c7e02fc-17fa-467d-90bb-b07ba1446476-cni-path\") pod \"3c7e02fc-17fa-467d-90bb-b07ba1446476\" (UID: \"3c7e02fc-17fa-467d-90bb-b07ba1446476\") "
Mar 6 00:59:18.471120 kubelet[3452]: I0306 00:59:18.470505 3452 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3c7e02fc-17fa-467d-90bb-b07ba1446476-etc-cni-netd\") pod \"3c7e02fc-17fa-467d-90bb-b07ba1446476\" (UID: \"3c7e02fc-17fa-467d-90bb-b07ba1446476\") "
Mar 6 00:59:18.471120 kubelet[3452]: I0306 00:59:18.470541 3452 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/3c7e02fc-17fa-467d-90bb-b07ba1446476-bpf-maps\") pod \"3c7e02fc-17fa-467d-90bb-b07ba1446476\" (UID: \"3c7e02fc-17fa-467d-90bb-b07ba1446476\") "
Mar 6 00:59:18.471422 kubelet[3452]: I0306 00:59:18.470573 3452 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/3c7e02fc-17fa-467d-90bb-b07ba1446476-cilium-run\") pod \"3c7e02fc-17fa-467d-90bb-b07ba1446476\" (UID: \"3c7e02fc-17fa-467d-90bb-b07ba1446476\") "
Mar 6 00:59:18.471422 kubelet[3452]: I0306 00:59:18.470607 3452 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3c7e02fc-17fa-467d-90bb-b07ba1446476-xtables-lock\") pod \"3c7e02fc-17fa-467d-90bb-b07ba1446476\" (UID: \"3c7e02fc-17fa-467d-90bb-b07ba1446476\") "
Mar 6 00:59:18.471422 kubelet[3452]: I0306 00:59:18.470713 3452 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3c7e02fc-17fa-467d-90bb-b07ba1446476-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "3c7e02fc-17fa-467d-90bb-b07ba1446476" (UID: "3c7e02fc-17fa-467d-90bb-b07ba1446476"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 6 00:59:18.474290 kubelet[3452]: I0306 00:59:18.474119 3452 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3c7e02fc-17fa-467d-90bb-b07ba1446476-hostproc" (OuterVolumeSpecName: "hostproc") pod "3c7e02fc-17fa-467d-90bb-b07ba1446476" (UID: "3c7e02fc-17fa-467d-90bb-b07ba1446476"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 6 00:59:18.474290 kubelet[3452]: I0306 00:59:18.474213 3452 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3c7e02fc-17fa-467d-90bb-b07ba1446476-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "3c7e02fc-17fa-467d-90bb-b07ba1446476" (UID: "3c7e02fc-17fa-467d-90bb-b07ba1446476"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 6 00:59:18.477610 kubelet[3452]: I0306 00:59:18.477554 3452 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3c7e02fc-17fa-467d-90bb-b07ba1446476-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "3c7e02fc-17fa-467d-90bb-b07ba1446476" (UID: "3c7e02fc-17fa-467d-90bb-b07ba1446476"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 6 00:59:18.479862 kubelet[3452]: I0306 00:59:18.479500 3452 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3c7e02fc-17fa-467d-90bb-b07ba1446476-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "3c7e02fc-17fa-467d-90bb-b07ba1446476" (UID: "3c7e02fc-17fa-467d-90bb-b07ba1446476"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 6 00:59:18.479862 kubelet[3452]: I0306 00:59:18.479685 3452 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3c7e02fc-17fa-467d-90bb-b07ba1446476-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "3c7e02fc-17fa-467d-90bb-b07ba1446476" (UID: "3c7e02fc-17fa-467d-90bb-b07ba1446476"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Mar 6 00:59:18.480417 kubelet[3452]: I0306 00:59:18.480368 3452 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3c7e02fc-17fa-467d-90bb-b07ba1446476-kube-api-access-fzmpr" (OuterVolumeSpecName: "kube-api-access-fzmpr") pod "3c7e02fc-17fa-467d-90bb-b07ba1446476" (UID: "3c7e02fc-17fa-467d-90bb-b07ba1446476"). InnerVolumeSpecName "kube-api-access-fzmpr". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Mar 6 00:59:18.480658 kubelet[3452]: I0306 00:59:18.480629 3452 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3c7e02fc-17fa-467d-90bb-b07ba1446476-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "3c7e02fc-17fa-467d-90bb-b07ba1446476" (UID: "3c7e02fc-17fa-467d-90bb-b07ba1446476"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 6 00:59:18.481029 kubelet[3452]: I0306 00:59:18.480813 3452 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3c7e02fc-17fa-467d-90bb-b07ba1446476-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "3c7e02fc-17fa-467d-90bb-b07ba1446476" (UID: "3c7e02fc-17fa-467d-90bb-b07ba1446476"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 6 00:59:18.481029 kubelet[3452]: I0306 00:59:18.480885 3452 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3c7e02fc-17fa-467d-90bb-b07ba1446476-cni-path" (OuterVolumeSpecName: "cni-path") pod "3c7e02fc-17fa-467d-90bb-b07ba1446476" (UID: "3c7e02fc-17fa-467d-90bb-b07ba1446476"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 6 00:59:18.481029 kubelet[3452]: I0306 00:59:18.480925 3452 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3c7e02fc-17fa-467d-90bb-b07ba1446476-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "3c7e02fc-17fa-467d-90bb-b07ba1446476" (UID: "3c7e02fc-17fa-467d-90bb-b07ba1446476"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 6 00:59:18.481029 kubelet[3452]: I0306 00:59:18.480964 3452 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3c7e02fc-17fa-467d-90bb-b07ba1446476-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "3c7e02fc-17fa-467d-90bb-b07ba1446476" (UID: "3c7e02fc-17fa-467d-90bb-b07ba1446476"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 6 00:59:18.483159 kubelet[3452]: I0306 00:59:18.483038 3452 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3c7e02fc-17fa-467d-90bb-b07ba1446476-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "3c7e02fc-17fa-467d-90bb-b07ba1446476" (UID: "3c7e02fc-17fa-467d-90bb-b07ba1446476"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Mar 6 00:59:18.488366 kubelet[3452]: I0306 00:59:18.488242 3452 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3c7e02fc-17fa-467d-90bb-b07ba1446476-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "3c7e02fc-17fa-467d-90bb-b07ba1446476" (UID: "3c7e02fc-17fa-467d-90bb-b07ba1446476"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Mar 6 00:59:18.488366 kubelet[3452]: I0306 00:59:18.488314 3452 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7daf92f4-fee5-4a78-9965-479ba34a0e1c-kube-api-access-wnrlx" (OuterVolumeSpecName: "kube-api-access-wnrlx") pod "7daf92f4-fee5-4a78-9965-479ba34a0e1c" (UID: "7daf92f4-fee5-4a78-9965-479ba34a0e1c"). InnerVolumeSpecName "kube-api-access-wnrlx". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Mar 6 00:59:18.489676 kubelet[3452]: I0306 00:59:18.489625 3452 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7daf92f4-fee5-4a78-9965-479ba34a0e1c-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "7daf92f4-fee5-4a78-9965-479ba34a0e1c" (UID: "7daf92f4-fee5-4a78-9965-479ba34a0e1c"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Mar 6 00:59:18.571766 kubelet[3452]: I0306 00:59:18.571377 3452 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/3c7e02fc-17fa-467d-90bb-b07ba1446476-bpf-maps\") on node \"ip-172-31-16-50\" DevicePath \"\""
Mar 6 00:59:18.571766 kubelet[3452]: I0306 00:59:18.571428 3452 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/3c7e02fc-17fa-467d-90bb-b07ba1446476-cilium-run\") on node \"ip-172-31-16-50\" DevicePath \"\""
Mar 6 00:59:18.571766 kubelet[3452]: I0306 00:59:18.571449 3452 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3c7e02fc-17fa-467d-90bb-b07ba1446476-xtables-lock\") on node \"ip-172-31-16-50\" DevicePath \"\""
Mar 6 00:59:18.571766 kubelet[3452]: I0306 00:59:18.571469 3452 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/3c7e02fc-17fa-467d-90bb-b07ba1446476-hubble-tls\") on node \"ip-172-31-16-50\" DevicePath \"\""
Mar 6 00:59:18.571766 kubelet[3452]: I0306 00:59:18.571509 3452 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/3c7e02fc-17fa-467d-90bb-b07ba1446476-hostproc\") on node \"ip-172-31-16-50\" DevicePath \"\""
Mar 6 00:59:18.571766 kubelet[3452]: I0306 00:59:18.571531 3452 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/3c7e02fc-17fa-467d-90bb-b07ba1446476-cilium-cgroup\") on node \"ip-172-31-16-50\" DevicePath \"\""
Mar 6 00:59:18.571766 kubelet[3452]: I0306 00:59:18.571552 3452 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3c7e02fc-17fa-467d-90bb-b07ba1446476-cilium-config-path\") on node \"ip-172-31-16-50\" DevicePath \"\""
Mar 6 00:59:18.571766 kubelet[3452]: I0306 00:59:18.571575 3452 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/3c7e02fc-17fa-467d-90bb-b07ba1446476-host-proc-sys-kernel\") on node \"ip-172-31-16-50\" DevicePath \"\""
Mar 6 00:59:18.572413 kubelet[3452]: I0306 00:59:18.571597 3452 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-wnrlx\" (UniqueName: \"kubernetes.io/projected/7daf92f4-fee5-4a78-9965-479ba34a0e1c-kube-api-access-wnrlx\") on node \"ip-172-31-16-50\" DevicePath \"\""
Mar 6 00:59:18.572413 kubelet[3452]: I0306 00:59:18.571620 3452 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3c7e02fc-17fa-467d-90bb-b07ba1446476-lib-modules\") on node \"ip-172-31-16-50\" DevicePath \"\""
Mar 6 00:59:18.572413 kubelet[3452]: I0306 00:59:18.571645 3452 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-fzmpr\" (UniqueName: \"kubernetes.io/projected/3c7e02fc-17fa-467d-90bb-b07ba1446476-kube-api-access-fzmpr\") on node \"ip-172-31-16-50\" DevicePath \"\""
Mar 6 00:59:18.572413 kubelet[3452]: I0306 00:59:18.571666 3452 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7daf92f4-fee5-4a78-9965-479ba34a0e1c-cilium-config-path\") on node \"ip-172-31-16-50\" DevicePath \"\""
Mar 6 00:59:18.572413 kubelet[3452]: I0306 00:59:18.571687 3452 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName:
\"kubernetes.io/secret/3c7e02fc-17fa-467d-90bb-b07ba1446476-clustermesh-secrets\") on node \"ip-172-31-16-50\" DevicePath \"\"" Mar 6 00:59:18.572413 kubelet[3452]: I0306 00:59:18.571707 3452 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/3c7e02fc-17fa-467d-90bb-b07ba1446476-host-proc-sys-net\") on node \"ip-172-31-16-50\" DevicePath \"\"" Mar 6 00:59:18.572413 kubelet[3452]: I0306 00:59:18.571726 3452 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/3c7e02fc-17fa-467d-90bb-b07ba1446476-cni-path\") on node \"ip-172-31-16-50\" DevicePath \"\"" Mar 6 00:59:18.572760 kubelet[3452]: I0306 00:59:18.572515 3452 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3c7e02fc-17fa-467d-90bb-b07ba1446476-etc-cni-netd\") on node \"ip-172-31-16-50\" DevicePath \"\"" Mar 6 00:59:18.758414 kubelet[3452]: I0306 00:59:18.756678 3452 scope.go:117] "RemoveContainer" containerID="eeaa98d388d4cf836424370439fcd6e785aa706f7efea553c91429ee24ab1fc4" Mar 6 00:59:18.762853 containerd[2014]: time="2026-03-06T00:59:18.762781004Z" level=info msg="RemoveContainer for \"eeaa98d388d4cf836424370439fcd6e785aa706f7efea553c91429ee24ab1fc4\"" Mar 6 00:59:18.772924 containerd[2014]: time="2026-03-06T00:59:18.772802960Z" level=info msg="RemoveContainer for \"eeaa98d388d4cf836424370439fcd6e785aa706f7efea553c91429ee24ab1fc4\" returns successfully" Mar 6 00:59:18.773855 kubelet[3452]: I0306 00:59:18.773441 3452 scope.go:117] "RemoveContainer" containerID="eeaa98d388d4cf836424370439fcd6e785aa706f7efea553c91429ee24ab1fc4" Mar 6 00:59:18.774006 containerd[2014]: time="2026-03-06T00:59:18.773924312Z" level=error msg="ContainerStatus for \"eeaa98d388d4cf836424370439fcd6e785aa706f7efea553c91429ee24ab1fc4\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container 
\"eeaa98d388d4cf836424370439fcd6e785aa706f7efea553c91429ee24ab1fc4\": not found" Mar 6 00:59:18.774554 kubelet[3452]: E0306 00:59:18.774487 3452 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"eeaa98d388d4cf836424370439fcd6e785aa706f7efea553c91429ee24ab1fc4\": not found" containerID="eeaa98d388d4cf836424370439fcd6e785aa706f7efea553c91429ee24ab1fc4" Mar 6 00:59:18.774663 kubelet[3452]: I0306 00:59:18.774551 3452 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"eeaa98d388d4cf836424370439fcd6e785aa706f7efea553c91429ee24ab1fc4"} err="failed to get container status \"eeaa98d388d4cf836424370439fcd6e785aa706f7efea553c91429ee24ab1fc4\": rpc error: code = NotFound desc = an error occurred when try to find container \"eeaa98d388d4cf836424370439fcd6e785aa706f7efea553c91429ee24ab1fc4\": not found" Mar 6 00:59:18.778964 systemd[1]: Removed slice kubepods-besteffort-pod7daf92f4_fee5_4a78_9965_479ba34a0e1c.slice - libcontainer container kubepods-besteffort-pod7daf92f4_fee5_4a78_9965_479ba34a0e1c.slice. Mar 6 00:59:18.781352 kubelet[3452]: I0306 00:59:18.781069 3452 scope.go:117] "RemoveContainer" containerID="181d27f274f0a563052efddd3013699c48129ed4712748046f40f7e98212ff43" Mar 6 00:59:18.796547 containerd[2014]: time="2026-03-06T00:59:18.796469552Z" level=info msg="RemoveContainer for \"181d27f274f0a563052efddd3013699c48129ed4712748046f40f7e98212ff43\"" Mar 6 00:59:18.810444 systemd[1]: Removed slice kubepods-burstable-pod3c7e02fc_17fa_467d_90bb_b07ba1446476.slice - libcontainer container kubepods-burstable-pod3c7e02fc_17fa_467d_90bb_b07ba1446476.slice. 
Mar 6 00:59:18.812922 containerd[2014]: time="2026-03-06T00:59:18.811424840Z" level=info msg="RemoveContainer for \"181d27f274f0a563052efddd3013699c48129ed4712748046f40f7e98212ff43\" returns successfully" Mar 6 00:59:18.810699 systemd[1]: kubepods-burstable-pod3c7e02fc_17fa_467d_90bb_b07ba1446476.slice: Consumed 16.294s CPU time, 125.5M memory peak, 128K read from disk, 16.1M written to disk. Mar 6 00:59:18.818611 kubelet[3452]: I0306 00:59:18.818552 3452 scope.go:117] "RemoveContainer" containerID="368aa911634720d9b48da70755e269ff7abb485df092c3863c823d74f85355ec" Mar 6 00:59:18.838206 containerd[2014]: time="2026-03-06T00:59:18.838142876Z" level=info msg="RemoveContainer for \"368aa911634720d9b48da70755e269ff7abb485df092c3863c823d74f85355ec\"" Mar 6 00:59:18.850368 containerd[2014]: time="2026-03-06T00:59:18.850295000Z" level=info msg="RemoveContainer for \"368aa911634720d9b48da70755e269ff7abb485df092c3863c823d74f85355ec\" returns successfully" Mar 6 00:59:18.850680 kubelet[3452]: I0306 00:59:18.850635 3452 scope.go:117] "RemoveContainer" containerID="8b64ebc9f2e36e3a9fd07f8ad8b53cc08cc9d27cb150e1fe5e63853426146d59" Mar 6 00:59:18.859206 containerd[2014]: time="2026-03-06T00:59:18.858637208Z" level=info msg="RemoveContainer for \"8b64ebc9f2e36e3a9fd07f8ad8b53cc08cc9d27cb150e1fe5e63853426146d59\"" Mar 6 00:59:18.869542 containerd[2014]: time="2026-03-06T00:59:18.869426384Z" level=info msg="RemoveContainer for \"8b64ebc9f2e36e3a9fd07f8ad8b53cc08cc9d27cb150e1fe5e63853426146d59\" returns successfully" Mar 6 00:59:18.870897 kubelet[3452]: I0306 00:59:18.870031 3452 scope.go:117] "RemoveContainer" containerID="f3e18408ff5a8c5fc5c38a1604b5c9dbc14980108521ca89f96eb8565138241c" Mar 6 00:59:18.873497 containerd[2014]: time="2026-03-06T00:59:18.873433365Z" level=info msg="RemoveContainer for \"f3e18408ff5a8c5fc5c38a1604b5c9dbc14980108521ca89f96eb8565138241c\"" Mar 6 00:59:18.881012 containerd[2014]: time="2026-03-06T00:59:18.880931517Z" level=info msg="RemoveContainer for 
\"f3e18408ff5a8c5fc5c38a1604b5c9dbc14980108521ca89f96eb8565138241c\" returns successfully" Mar 6 00:59:18.881478 kubelet[3452]: I0306 00:59:18.881412 3452 scope.go:117] "RemoveContainer" containerID="1bdade11c868c25af4bd27c7ce049f16f136a873392d7d5e9a4fc79ef0f76fea" Mar 6 00:59:18.885455 containerd[2014]: time="2026-03-06T00:59:18.885215205Z" level=info msg="RemoveContainer for \"1bdade11c868c25af4bd27c7ce049f16f136a873392d7d5e9a4fc79ef0f76fea\"" Mar 6 00:59:18.892225 containerd[2014]: time="2026-03-06T00:59:18.892141149Z" level=info msg="RemoveContainer for \"1bdade11c868c25af4bd27c7ce049f16f136a873392d7d5e9a4fc79ef0f76fea\" returns successfully" Mar 6 00:59:18.892690 kubelet[3452]: I0306 00:59:18.892648 3452 scope.go:117] "RemoveContainer" containerID="181d27f274f0a563052efddd3013699c48129ed4712748046f40f7e98212ff43" Mar 6 00:59:18.893511 containerd[2014]: time="2026-03-06T00:59:18.893456745Z" level=error msg="ContainerStatus for \"181d27f274f0a563052efddd3013699c48129ed4712748046f40f7e98212ff43\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"181d27f274f0a563052efddd3013699c48129ed4712748046f40f7e98212ff43\": not found" Mar 6 00:59:18.894141 kubelet[3452]: E0306 00:59:18.894075 3452 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"181d27f274f0a563052efddd3013699c48129ed4712748046f40f7e98212ff43\": not found" containerID="181d27f274f0a563052efddd3013699c48129ed4712748046f40f7e98212ff43" Mar 6 00:59:18.894448 kubelet[3452]: I0306 00:59:18.894312 3452 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"181d27f274f0a563052efddd3013699c48129ed4712748046f40f7e98212ff43"} err="failed to get container status \"181d27f274f0a563052efddd3013699c48129ed4712748046f40f7e98212ff43\": rpc error: code = NotFound desc = an error occurred when try to find container 
\"181d27f274f0a563052efddd3013699c48129ed4712748046f40f7e98212ff43\": not found" Mar 6 00:59:18.894448 kubelet[3452]: I0306 00:59:18.894384 3452 scope.go:117] "RemoveContainer" containerID="368aa911634720d9b48da70755e269ff7abb485df092c3863c823d74f85355ec" Mar 6 00:59:18.895085 containerd[2014]: time="2026-03-06T00:59:18.895003041Z" level=error msg="ContainerStatus for \"368aa911634720d9b48da70755e269ff7abb485df092c3863c823d74f85355ec\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"368aa911634720d9b48da70755e269ff7abb485df092c3863c823d74f85355ec\": not found" Mar 6 00:59:18.895943 kubelet[3452]: E0306 00:59:18.895785 3452 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"368aa911634720d9b48da70755e269ff7abb485df092c3863c823d74f85355ec\": not found" containerID="368aa911634720d9b48da70755e269ff7abb485df092c3863c823d74f85355ec" Mar 6 00:59:18.896396 kubelet[3452]: I0306 00:59:18.896094 3452 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"368aa911634720d9b48da70755e269ff7abb485df092c3863c823d74f85355ec"} err="failed to get container status \"368aa911634720d9b48da70755e269ff7abb485df092c3863c823d74f85355ec\": rpc error: code = NotFound desc = an error occurred when try to find container \"368aa911634720d9b48da70755e269ff7abb485df092c3863c823d74f85355ec\": not found" Mar 6 00:59:18.896396 kubelet[3452]: I0306 00:59:18.896325 3452 scope.go:117] "RemoveContainer" containerID="8b64ebc9f2e36e3a9fd07f8ad8b53cc08cc9d27cb150e1fe5e63853426146d59" Mar 6 00:59:18.897173 containerd[2014]: time="2026-03-06T00:59:18.896990097Z" level=error msg="ContainerStatus for \"8b64ebc9f2e36e3a9fd07f8ad8b53cc08cc9d27cb150e1fe5e63853426146d59\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container 
\"8b64ebc9f2e36e3a9fd07f8ad8b53cc08cc9d27cb150e1fe5e63853426146d59\": not found" Mar 6 00:59:18.897698 kubelet[3452]: E0306 00:59:18.897644 3452 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"8b64ebc9f2e36e3a9fd07f8ad8b53cc08cc9d27cb150e1fe5e63853426146d59\": not found" containerID="8b64ebc9f2e36e3a9fd07f8ad8b53cc08cc9d27cb150e1fe5e63853426146d59" Mar 6 00:59:18.897804 kubelet[3452]: I0306 00:59:18.897700 3452 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"8b64ebc9f2e36e3a9fd07f8ad8b53cc08cc9d27cb150e1fe5e63853426146d59"} err="failed to get container status \"8b64ebc9f2e36e3a9fd07f8ad8b53cc08cc9d27cb150e1fe5e63853426146d59\": rpc error: code = NotFound desc = an error occurred when try to find container \"8b64ebc9f2e36e3a9fd07f8ad8b53cc08cc9d27cb150e1fe5e63853426146d59\": not found" Mar 6 00:59:18.897804 kubelet[3452]: I0306 00:59:18.897737 3452 scope.go:117] "RemoveContainer" containerID="f3e18408ff5a8c5fc5c38a1604b5c9dbc14980108521ca89f96eb8565138241c" Mar 6 00:59:18.898716 containerd[2014]: time="2026-03-06T00:59:18.898655841Z" level=error msg="ContainerStatus for \"f3e18408ff5a8c5fc5c38a1604b5c9dbc14980108521ca89f96eb8565138241c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f3e18408ff5a8c5fc5c38a1604b5c9dbc14980108521ca89f96eb8565138241c\": not found" Mar 6 00:59:18.899392 kubelet[3452]: E0306 00:59:18.899146 3452 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f3e18408ff5a8c5fc5c38a1604b5c9dbc14980108521ca89f96eb8565138241c\": not found" containerID="f3e18408ff5a8c5fc5c38a1604b5c9dbc14980108521ca89f96eb8565138241c" Mar 6 00:59:18.899392 kubelet[3452]: I0306 00:59:18.899213 3452 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"containerd","ID":"f3e18408ff5a8c5fc5c38a1604b5c9dbc14980108521ca89f96eb8565138241c"} err="failed to get container status \"f3e18408ff5a8c5fc5c38a1604b5c9dbc14980108521ca89f96eb8565138241c\": rpc error: code = NotFound desc = an error occurred when try to find container \"f3e18408ff5a8c5fc5c38a1604b5c9dbc14980108521ca89f96eb8565138241c\": not found" Mar 6 00:59:18.899392 kubelet[3452]: I0306 00:59:18.899249 3452 scope.go:117] "RemoveContainer" containerID="1bdade11c868c25af4bd27c7ce049f16f136a873392d7d5e9a4fc79ef0f76fea" Mar 6 00:59:18.899856 containerd[2014]: time="2026-03-06T00:59:18.899617809Z" level=error msg="ContainerStatus for \"1bdade11c868c25af4bd27c7ce049f16f136a873392d7d5e9a4fc79ef0f76fea\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"1bdade11c868c25af4bd27c7ce049f16f136a873392d7d5e9a4fc79ef0f76fea\": not found" Mar 6 00:59:18.900318 kubelet[3452]: E0306 00:59:18.900207 3452 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"1bdade11c868c25af4bd27c7ce049f16f136a873392d7d5e9a4fc79ef0f76fea\": not found" containerID="1bdade11c868c25af4bd27c7ce049f16f136a873392d7d5e9a4fc79ef0f76fea" Mar 6 00:59:18.900318 kubelet[3452]: I0306 00:59:18.900282 3452 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"1bdade11c868c25af4bd27c7ce049f16f136a873392d7d5e9a4fc79ef0f76fea"} err="failed to get container status \"1bdade11c868c25af4bd27c7ce049f16f136a873392d7d5e9a4fc79ef0f76fea\": rpc error: code = NotFound desc = an error occurred when try to find container \"1bdade11c868c25af4bd27c7ce049f16f136a873392d7d5e9a4fc79ef0f76fea\": not found" Mar 6 00:59:19.009946 containerd[2014]: time="2026-03-06T00:59:19.009788777Z" level=info msg="StopPodSandbox for \"220f9f15df5ea6d68bc46e62548d612b7f12451d9bf762feda04985c328a123a\"" Mar 6 00:59:19.010775 containerd[2014]: 
time="2026-03-06T00:59:19.010018481Z" level=info msg="TearDown network for sandbox \"220f9f15df5ea6d68bc46e62548d612b7f12451d9bf762feda04985c328a123a\" successfully" Mar 6 00:59:19.010775 containerd[2014]: time="2026-03-06T00:59:19.010046105Z" level=info msg="StopPodSandbox for \"220f9f15df5ea6d68bc46e62548d612b7f12451d9bf762feda04985c328a123a\" returns successfully" Mar 6 00:59:19.013714 containerd[2014]: time="2026-03-06T00:59:19.013091465Z" level=info msg="RemovePodSandbox for \"220f9f15df5ea6d68bc46e62548d612b7f12451d9bf762feda04985c328a123a\"" Mar 6 00:59:19.013714 containerd[2014]: time="2026-03-06T00:59:19.013181993Z" level=info msg="Forcibly stopping sandbox \"220f9f15df5ea6d68bc46e62548d612b7f12451d9bf762feda04985c328a123a\"" Mar 6 00:59:19.013714 containerd[2014]: time="2026-03-06T00:59:19.013381121Z" level=info msg="TearDown network for sandbox \"220f9f15df5ea6d68bc46e62548d612b7f12451d9bf762feda04985c328a123a\" successfully" Mar 6 00:59:19.017859 containerd[2014]: time="2026-03-06T00:59:19.017744837Z" level=info msg="Ensure that sandbox 220f9f15df5ea6d68bc46e62548d612b7f12451d9bf762feda04985c328a123a in task-service has been cleanup successfully" Mar 6 00:59:19.026624 containerd[2014]: time="2026-03-06T00:59:19.026547029Z" level=info msg="RemovePodSandbox \"220f9f15df5ea6d68bc46e62548d612b7f12451d9bf762feda04985c328a123a\" returns successfully" Mar 6 00:59:19.027870 containerd[2014]: time="2026-03-06T00:59:19.027544241Z" level=info msg="StopPodSandbox for \"c286795b920e5251887614376d158a200a6fb01b51554c590c7ca8bfb63a8015\"" Mar 6 00:59:19.027870 containerd[2014]: time="2026-03-06T00:59:19.027724145Z" level=info msg="TearDown network for sandbox \"c286795b920e5251887614376d158a200a6fb01b51554c590c7ca8bfb63a8015\" successfully" Mar 6 00:59:19.027870 containerd[2014]: time="2026-03-06T00:59:19.027748997Z" level=info msg="StopPodSandbox for \"c286795b920e5251887614376d158a200a6fb01b51554c590c7ca8bfb63a8015\" returns successfully" Mar 6 00:59:19.029200 
containerd[2014]: time="2026-03-06T00:59:19.029112473Z" level=info msg="RemovePodSandbox for \"c286795b920e5251887614376d158a200a6fb01b51554c590c7ca8bfb63a8015\"" Mar 6 00:59:19.029338 containerd[2014]: time="2026-03-06T00:59:19.029233277Z" level=info msg="Forcibly stopping sandbox \"c286795b920e5251887614376d158a200a6fb01b51554c590c7ca8bfb63a8015\"" Mar 6 00:59:19.029461 containerd[2014]: time="2026-03-06T00:59:19.029425361Z" level=info msg="TearDown network for sandbox \"c286795b920e5251887614376d158a200a6fb01b51554c590c7ca8bfb63a8015\" successfully" Mar 6 00:59:19.031915 containerd[2014]: time="2026-03-06T00:59:19.031797665Z" level=info msg="Ensure that sandbox c286795b920e5251887614376d158a200a6fb01b51554c590c7ca8bfb63a8015 in task-service has been cleanup successfully" Mar 6 00:59:19.041731 containerd[2014]: time="2026-03-06T00:59:19.041570489Z" level=info msg="RemovePodSandbox \"c286795b920e5251887614376d158a200a6fb01b51554c590c7ca8bfb63a8015\" returns successfully" Mar 6 00:59:19.080851 kubelet[3452]: I0306 00:59:19.080736 3452 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3c7e02fc-17fa-467d-90bb-b07ba1446476" path="/var/lib/kubelet/pods/3c7e02fc-17fa-467d-90bb-b07ba1446476/volumes" Mar 6 00:59:19.082432 kubelet[3452]: I0306 00:59:19.082394 3452 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7daf92f4-fee5-4a78-9965-479ba34a0e1c" path="/var/lib/kubelet/pods/7daf92f4-fee5-4a78-9965-479ba34a0e1c/volumes" Mar 6 00:59:19.090331 systemd[1]: var-lib-kubelet-pods-7daf92f4\x2dfee5\x2d4a78\x2d9965\x2d479ba34a0e1c-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dwnrlx.mount: Deactivated successfully. Mar 6 00:59:19.090533 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-220f9f15df5ea6d68bc46e62548d612b7f12451d9bf762feda04985c328a123a-shm.mount: Deactivated successfully. 
Mar 6 00:59:19.090668 systemd[1]: var-lib-kubelet-pods-3c7e02fc\x2d17fa\x2d467d\x2d90bb\x2db07ba1446476-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dfzmpr.mount: Deactivated successfully. Mar 6 00:59:19.090808 systemd[1]: var-lib-kubelet-pods-3c7e02fc\x2d17fa\x2d467d\x2d90bb\x2db07ba1446476-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Mar 6 00:59:19.090965 systemd[1]: var-lib-kubelet-pods-3c7e02fc\x2d17fa\x2d467d\x2d90bb\x2db07ba1446476-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Mar 6 00:59:19.341379 kubelet[3452]: E0306 00:59:19.341119 3452 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Mar 6 00:59:19.873795 sshd[5033]: Connection closed by 68.220.241.50 port 44884 Mar 6 00:59:19.874750 sshd-session[5030]: pam_unix(sshd:session): session closed for user core Mar 6 00:59:19.884109 systemd[1]: sshd@24-172.31.16.50:22-68.220.241.50:44884.service: Deactivated successfully. Mar 6 00:59:19.890741 systemd[1]: session-25.scope: Deactivated successfully. Mar 6 00:59:19.891777 systemd[1]: session-25.scope: Consumed 2.818s CPU time, 25.6M memory peak. Mar 6 00:59:19.893435 systemd-logind[1985]: Session 25 logged out. Waiting for processes to exit. Mar 6 00:59:19.896944 systemd-logind[1985]: Removed session 25. Mar 6 00:59:19.979509 systemd[1]: Started sshd@25-172.31.16.50:22-68.220.241.50:44896.service - OpenSSH per-connection server daemon (68.220.241.50:44896). Mar 6 00:59:20.515988 sshd[5181]: Accepted publickey for core from 68.220.241.50 port 44896 ssh2: RSA SHA256:JA893NYNzIQjt7fMSNMP1D6ZXPb/xbJKtqqTrt+R/vM Mar 6 00:59:20.518431 sshd-session[5181]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 6 00:59:20.526979 systemd-logind[1985]: New session 26 of user core. 
Mar 6 00:59:20.536100 systemd[1]: Started session-26.scope - Session 26 of User core. Mar 6 00:59:20.635082 ntpd[2211]: Deleting 10 lxc_health, [fe80::b415:b7ff:fef1:4925%8]:123, stats: received=0, sent=0, dropped=0, active_time=91 secs Mar 6 00:59:20.635583 ntpd[2211]: 6 Mar 00:59:20 ntpd[2211]: Deleting 10 lxc_health, [fe80::b415:b7ff:fef1:4925%8]:123, stats: received=0, sent=0, dropped=0, active_time=91 secs Mar 6 00:59:21.058536 kubelet[3452]: E0306 00:59:21.058239 3452 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-66bc5c9577-wvhgz" podUID="c43975f8-6a2e-40d5-9dbf-3dee054604d4" Mar 6 00:59:22.217590 systemd[1]: Created slice kubepods-burstable-pod91b42e3b_f34a_405c_95e6_b775951e0571.slice - libcontainer container kubepods-burstable-pod91b42e3b_f34a_405c_95e6_b775951e0571.slice. Mar 6 00:59:22.228894 sshd[5184]: Connection closed by 68.220.241.50 port 44896 Mar 6 00:59:22.230399 sshd-session[5181]: pam_unix(sshd:session): session closed for user core Mar 6 00:59:22.241576 systemd[1]: sshd@25-172.31.16.50:22-68.220.241.50:44896.service: Deactivated successfully. Mar 6 00:59:22.248739 systemd[1]: session-26.scope: Deactivated successfully. Mar 6 00:59:22.249520 systemd[1]: session-26.scope: Consumed 1.355s CPU time, 23.6M memory peak. Mar 6 00:59:22.254298 systemd-logind[1985]: Session 26 logged out. Waiting for processes to exit. Mar 6 00:59:22.261958 systemd-logind[1985]: Removed session 26. 
Mar 6 00:59:22.299696 kubelet[3452]: I0306 00:59:22.299629 3452 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/91b42e3b-f34a-405c-95e6-b775951e0571-cilium-config-path\") pod \"cilium-564nb\" (UID: \"91b42e3b-f34a-405c-95e6-b775951e0571\") " pod="kube-system/cilium-564nb" Mar 6 00:59:22.301678 kubelet[3452]: I0306 00:59:22.299709 3452 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/91b42e3b-f34a-405c-95e6-b775951e0571-cilium-cgroup\") pod \"cilium-564nb\" (UID: \"91b42e3b-f34a-405c-95e6-b775951e0571\") " pod="kube-system/cilium-564nb" Mar 6 00:59:22.301678 kubelet[3452]: I0306 00:59:22.299746 3452 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/91b42e3b-f34a-405c-95e6-b775951e0571-hubble-tls\") pod \"cilium-564nb\" (UID: \"91b42e3b-f34a-405c-95e6-b775951e0571\") " pod="kube-system/cilium-564nb" Mar 6 00:59:22.301678 kubelet[3452]: I0306 00:59:22.299785 3452 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/91b42e3b-f34a-405c-95e6-b775951e0571-host-proc-sys-net\") pod \"cilium-564nb\" (UID: \"91b42e3b-f34a-405c-95e6-b775951e0571\") " pod="kube-system/cilium-564nb" Mar 6 00:59:22.301678 kubelet[3452]: I0306 00:59:22.300004 3452 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/91b42e3b-f34a-405c-95e6-b775951e0571-cni-path\") pod \"cilium-564nb\" (UID: \"91b42e3b-f34a-405c-95e6-b775951e0571\") " pod="kube-system/cilium-564nb" Mar 6 00:59:22.301678 kubelet[3452]: I0306 00:59:22.300098 3452 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dxsxn\" (UniqueName: \"kubernetes.io/projected/91b42e3b-f34a-405c-95e6-b775951e0571-kube-api-access-dxsxn\") pod \"cilium-564nb\" (UID: \"91b42e3b-f34a-405c-95e6-b775951e0571\") " pod="kube-system/cilium-564nb" Mar 6 00:59:22.301678 kubelet[3452]: I0306 00:59:22.300194 3452 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/91b42e3b-f34a-405c-95e6-b775951e0571-bpf-maps\") pod \"cilium-564nb\" (UID: \"91b42e3b-f34a-405c-95e6-b775951e0571\") " pod="kube-system/cilium-564nb" Mar 6 00:59:22.302236 kubelet[3452]: I0306 00:59:22.300279 3452 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/91b42e3b-f34a-405c-95e6-b775951e0571-etc-cni-netd\") pod \"cilium-564nb\" (UID: \"91b42e3b-f34a-405c-95e6-b775951e0571\") " pod="kube-system/cilium-564nb" Mar 6 00:59:22.302236 kubelet[3452]: I0306 00:59:22.300374 3452 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/91b42e3b-f34a-405c-95e6-b775951e0571-cilium-run\") pod \"cilium-564nb\" (UID: \"91b42e3b-f34a-405c-95e6-b775951e0571\") " pod="kube-system/cilium-564nb" Mar 6 00:59:22.302236 kubelet[3452]: I0306 00:59:22.300480 3452 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/91b42e3b-f34a-405c-95e6-b775951e0571-xtables-lock\") pod \"cilium-564nb\" (UID: \"91b42e3b-f34a-405c-95e6-b775951e0571\") " pod="kube-system/cilium-564nb" Mar 6 00:59:22.302236 kubelet[3452]: I0306 00:59:22.300568 3452 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: 
\"kubernetes.io/secret/91b42e3b-f34a-405c-95e6-b775951e0571-cilium-ipsec-secrets\") pod \"cilium-564nb\" (UID: \"91b42e3b-f34a-405c-95e6-b775951e0571\") " pod="kube-system/cilium-564nb" Mar 6 00:59:22.302236 kubelet[3452]: I0306 00:59:22.300607 3452 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/91b42e3b-f34a-405c-95e6-b775951e0571-host-proc-sys-kernel\") pod \"cilium-564nb\" (UID: \"91b42e3b-f34a-405c-95e6-b775951e0571\") " pod="kube-system/cilium-564nb" Mar 6 00:59:22.302236 kubelet[3452]: I0306 00:59:22.300692 3452 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/91b42e3b-f34a-405c-95e6-b775951e0571-lib-modules\") pod \"cilium-564nb\" (UID: \"91b42e3b-f34a-405c-95e6-b775951e0571\") " pod="kube-system/cilium-564nb" Mar 6 00:59:22.303061 kubelet[3452]: I0306 00:59:22.302597 3452 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/91b42e3b-f34a-405c-95e6-b775951e0571-hostproc\") pod \"cilium-564nb\" (UID: \"91b42e3b-f34a-405c-95e6-b775951e0571\") " pod="kube-system/cilium-564nb" Mar 6 00:59:22.303061 kubelet[3452]: I0306 00:59:22.302677 3452 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/91b42e3b-f34a-405c-95e6-b775951e0571-clustermesh-secrets\") pod \"cilium-564nb\" (UID: \"91b42e3b-f34a-405c-95e6-b775951e0571\") " pod="kube-system/cilium-564nb" Mar 6 00:59:22.330674 systemd[1]: Started sshd@26-172.31.16.50:22-68.220.241.50:34664.service - OpenSSH per-connection server daemon (68.220.241.50:34664). 
Mar 6 00:59:22.462258 kubelet[3452]: I0306 00:59:22.462114 3452 setters.go:543] "Node became not ready" node="ip-172-31-16-50" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-06T00:59:22Z","lastTransitionTime":"2026-03-06T00:59:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Mar 6 00:59:22.534128 containerd[2014]: time="2026-03-06T00:59:22.533043935Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-564nb,Uid:91b42e3b-f34a-405c-95e6-b775951e0571,Namespace:kube-system,Attempt:0,}" Mar 6 00:59:22.575573 containerd[2014]: time="2026-03-06T00:59:22.575514431Z" level=info msg="connecting to shim deee87ae567c83f6e2d82c12ed7b07c2f5166c70fefaee77fd46355dede08a23" address="unix:///run/containerd/s/e80952f644d6edcdb75c7b84038ee077f73bdaea93e237dc5e8ed8048da092eb" namespace=k8s.io protocol=ttrpc version=3 Mar 6 00:59:22.626173 systemd[1]: Started cri-containerd-deee87ae567c83f6e2d82c12ed7b07c2f5166c70fefaee77fd46355dede08a23.scope - libcontainer container deee87ae567c83f6e2d82c12ed7b07c2f5166c70fefaee77fd46355dede08a23. 
Mar 6 00:59:22.682663 containerd[2014]: time="2026-03-06T00:59:22.682561115Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-564nb,Uid:91b42e3b-f34a-405c-95e6-b775951e0571,Namespace:kube-system,Attempt:0,} returns sandbox id \"deee87ae567c83f6e2d82c12ed7b07c2f5166c70fefaee77fd46355dede08a23\"" Mar 6 00:59:22.695238 containerd[2014]: time="2026-03-06T00:59:22.695127599Z" level=info msg="CreateContainer within sandbox \"deee87ae567c83f6e2d82c12ed7b07c2f5166c70fefaee77fd46355dede08a23\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Mar 6 00:59:22.709139 containerd[2014]: time="2026-03-06T00:59:22.709065900Z" level=info msg="Container a1cae07e50d7764a6a8b937cd9d8b1f6819a7dd6305df80c2105594e76665002: CDI devices from CRI Config.CDIDevices: []" Mar 6 00:59:22.720283 containerd[2014]: time="2026-03-06T00:59:22.720211812Z" level=info msg="CreateContainer within sandbox \"deee87ae567c83f6e2d82c12ed7b07c2f5166c70fefaee77fd46355dede08a23\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"a1cae07e50d7764a6a8b937cd9d8b1f6819a7dd6305df80c2105594e76665002\"" Mar 6 00:59:22.722389 containerd[2014]: time="2026-03-06T00:59:22.721500048Z" level=info msg="StartContainer for \"a1cae07e50d7764a6a8b937cd9d8b1f6819a7dd6305df80c2105594e76665002\"" Mar 6 00:59:22.725597 containerd[2014]: time="2026-03-06T00:59:22.725512164Z" level=info msg="connecting to shim a1cae07e50d7764a6a8b937cd9d8b1f6819a7dd6305df80c2105594e76665002" address="unix:///run/containerd/s/e80952f644d6edcdb75c7b84038ee077f73bdaea93e237dc5e8ed8048da092eb" protocol=ttrpc version=3 Mar 6 00:59:22.760203 systemd[1]: Started cri-containerd-a1cae07e50d7764a6a8b937cd9d8b1f6819a7dd6305df80c2105594e76665002.scope - libcontainer container a1cae07e50d7764a6a8b937cd9d8b1f6819a7dd6305df80c2105594e76665002. 
Mar 6 00:59:22.826232 sshd[5194]: Accepted publickey for core from 68.220.241.50 port 34664 ssh2: RSA SHA256:JA893NYNzIQjt7fMSNMP1D6ZXPb/xbJKtqqTrt+R/vM
Mar 6 00:59:22.830821 sshd-session[5194]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 6 00:59:22.832088 containerd[2014]: time="2026-03-06T00:59:22.831377892Z" level=info msg="StartContainer for \"a1cae07e50d7764a6a8b937cd9d8b1f6819a7dd6305df80c2105594e76665002\" returns successfully"
Mar 6 00:59:22.846857 systemd-logind[1985]: New session 27 of user core.
Mar 6 00:59:22.853226 systemd[1]: Started session-27.scope - Session 27 of User core.
Mar 6 00:59:22.862778 systemd[1]: cri-containerd-a1cae07e50d7764a6a8b937cd9d8b1f6819a7dd6305df80c2105594e76665002.scope: Deactivated successfully.
Mar 6 00:59:22.871949 containerd[2014]: time="2026-03-06T00:59:22.871796616Z" level=info msg="received container exit event container_id:\"a1cae07e50d7764a6a8b937cd9d8b1f6819a7dd6305df80c2105594e76665002\" id:\"a1cae07e50d7764a6a8b937cd9d8b1f6819a7dd6305df80c2105594e76665002\" pid:5259 exited_at:{seconds:1772758762 nanos:868788804}"
Mar 6 00:59:23.058867 kubelet[3452]: E0306 00:59:23.058221 3452 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-66bc5c9577-wvhgz" podUID="c43975f8-6a2e-40d5-9dbf-3dee054604d4"
Mar 6 00:59:23.068522 sshd[5278]: Connection closed by 68.220.241.50 port 34664
Mar 6 00:59:23.069590 sshd-session[5194]: pam_unix(sshd:session): session closed for user core
Mar 6 00:59:23.079279 systemd[1]: sshd@26-172.31.16.50:22-68.220.241.50:34664.service: Deactivated successfully.
Mar 6 00:59:23.083500 systemd[1]: session-27.scope: Deactivated successfully.
Mar 6 00:59:23.086960 systemd-logind[1985]: Session 27 logged out. Waiting for processes to exit.
Mar 6 00:59:23.089855 systemd-logind[1985]: Removed session 27.
Mar 6 00:59:23.162725 systemd[1]: Started sshd@27-172.31.16.50:22-68.220.241.50:34670.service - OpenSSH per-connection server daemon (68.220.241.50:34670).
Mar 6 00:59:23.631045 sshd[5298]: Accepted publickey for core from 68.220.241.50 port 34670 ssh2: RSA SHA256:JA893NYNzIQjt7fMSNMP1D6ZXPb/xbJKtqqTrt+R/vM
Mar 6 00:59:23.634379 sshd-session[5298]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 6 00:59:23.643807 systemd-logind[1985]: New session 28 of user core.
Mar 6 00:59:23.649110 systemd[1]: Started session-28.scope - Session 28 of User core.
Mar 6 00:59:23.831131 containerd[2014]: time="2026-03-06T00:59:23.830877769Z" level=info msg="CreateContainer within sandbox \"deee87ae567c83f6e2d82c12ed7b07c2f5166c70fefaee77fd46355dede08a23\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Mar 6 00:59:23.857658 containerd[2014]: time="2026-03-06T00:59:23.856029889Z" level=info msg="Container 16c210a7338fcbf4874033374821179940fe593e64ad1f8c6a249271e975ed3d: CDI devices from CRI Config.CDIDevices: []"
Mar 6 00:59:23.869959 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1884311710.mount: Deactivated successfully.
Mar 6 00:59:23.883024 containerd[2014]: time="2026-03-06T00:59:23.881667169Z" level=info msg="CreateContainer within sandbox \"deee87ae567c83f6e2d82c12ed7b07c2f5166c70fefaee77fd46355dede08a23\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"16c210a7338fcbf4874033374821179940fe593e64ad1f8c6a249271e975ed3d\""
Mar 6 00:59:23.886616 containerd[2014]: time="2026-03-06T00:59:23.886317541Z" level=info msg="StartContainer for \"16c210a7338fcbf4874033374821179940fe593e64ad1f8c6a249271e975ed3d\""
Mar 6 00:59:23.893013 containerd[2014]: time="2026-03-06T00:59:23.892741993Z" level=info msg="connecting to shim 16c210a7338fcbf4874033374821179940fe593e64ad1f8c6a249271e975ed3d" address="unix:///run/containerd/s/e80952f644d6edcdb75c7b84038ee077f73bdaea93e237dc5e8ed8048da092eb" protocol=ttrpc version=3
Mar 6 00:59:23.959595 systemd[1]: Started cri-containerd-16c210a7338fcbf4874033374821179940fe593e64ad1f8c6a249271e975ed3d.scope - libcontainer container 16c210a7338fcbf4874033374821179940fe593e64ad1f8c6a249271e975ed3d.
Mar 6 00:59:24.077866 containerd[2014]: time="2026-03-06T00:59:24.075216082Z" level=info msg="StartContainer for \"16c210a7338fcbf4874033374821179940fe593e64ad1f8c6a249271e975ed3d\" returns successfully"
Mar 6 00:59:24.096067 systemd[1]: cri-containerd-16c210a7338fcbf4874033374821179940fe593e64ad1f8c6a249271e975ed3d.scope: Deactivated successfully.
Mar 6 00:59:24.100518 containerd[2014]: time="2026-03-06T00:59:24.099900334Z" level=info msg="received container exit event container_id:\"16c210a7338fcbf4874033374821179940fe593e64ad1f8c6a249271e975ed3d\" id:\"16c210a7338fcbf4874033374821179940fe593e64ad1f8c6a249271e975ed3d\" pid:5321 exited_at:{seconds:1772758764 nanos:99321778}"
Mar 6 00:59:24.138523 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-16c210a7338fcbf4874033374821179940fe593e64ad1f8c6a249271e975ed3d-rootfs.mount: Deactivated successfully.
Mar 6 00:59:24.343155 kubelet[3452]: E0306 00:59:24.343074 3452 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Mar 6 00:59:24.840915 containerd[2014]: time="2026-03-06T00:59:24.840660590Z" level=info msg="CreateContainer within sandbox \"deee87ae567c83f6e2d82c12ed7b07c2f5166c70fefaee77fd46355dede08a23\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Mar 6 00:59:24.865568 containerd[2014]: time="2026-03-06T00:59:24.863108630Z" level=info msg="Container 4af60c93046b21b9a03f662b6f9f8842de2a162c3d2ef265c75b52925cd859e3: CDI devices from CRI Config.CDIDevices: []"
Mar 6 00:59:24.888677 containerd[2014]: time="2026-03-06T00:59:24.888587798Z" level=info msg="CreateContainer within sandbox \"deee87ae567c83f6e2d82c12ed7b07c2f5166c70fefaee77fd46355dede08a23\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"4af60c93046b21b9a03f662b6f9f8842de2a162c3d2ef265c75b52925cd859e3\""
Mar 6 00:59:24.891864 containerd[2014]: time="2026-03-06T00:59:24.891790514Z" level=info msg="StartContainer for \"4af60c93046b21b9a03f662b6f9f8842de2a162c3d2ef265c75b52925cd859e3\""
Mar 6 00:59:24.897227 containerd[2014]: time="2026-03-06T00:59:24.897149570Z" level=info msg="connecting to shim 4af60c93046b21b9a03f662b6f9f8842de2a162c3d2ef265c75b52925cd859e3" address="unix:///run/containerd/s/e80952f644d6edcdb75c7b84038ee077f73bdaea93e237dc5e8ed8048da092eb" protocol=ttrpc version=3
Mar 6 00:59:24.968592 systemd[1]: Started cri-containerd-4af60c93046b21b9a03f662b6f9f8842de2a162c3d2ef265c75b52925cd859e3.scope - libcontainer container 4af60c93046b21b9a03f662b6f9f8842de2a162c3d2ef265c75b52925cd859e3.
Mar 6 00:59:25.058767 kubelet[3452]: E0306 00:59:25.058380 3452 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-66bc5c9577-wvhgz" podUID="c43975f8-6a2e-40d5-9dbf-3dee054604d4"
Mar 6 00:59:25.102374 systemd[1]: cri-containerd-4af60c93046b21b9a03f662b6f9f8842de2a162c3d2ef265c75b52925cd859e3.scope: Deactivated successfully.
Mar 6 00:59:25.102964 containerd[2014]: time="2026-03-06T00:59:25.102350339Z" level=info msg="StartContainer for \"4af60c93046b21b9a03f662b6f9f8842de2a162c3d2ef265c75b52925cd859e3\" returns successfully"
Mar 6 00:59:25.110223 containerd[2014]: time="2026-03-06T00:59:25.110161019Z" level=info msg="received container exit event container_id:\"4af60c93046b21b9a03f662b6f9f8842de2a162c3d2ef265c75b52925cd859e3\" id:\"4af60c93046b21b9a03f662b6f9f8842de2a162c3d2ef265c75b52925cd859e3\" pid:5364 exited_at:{seconds:1772758765 nanos:109789895}"
Mar 6 00:59:25.155538 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4af60c93046b21b9a03f662b6f9f8842de2a162c3d2ef265c75b52925cd859e3-rootfs.mount: Deactivated successfully.
Mar 6 00:59:25.852255 containerd[2014]: time="2026-03-06T00:59:25.852159447Z" level=info msg="CreateContainer within sandbox \"deee87ae567c83f6e2d82c12ed7b07c2f5166c70fefaee77fd46355dede08a23\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Mar 6 00:59:25.872347 containerd[2014]: time="2026-03-06T00:59:25.871116807Z" level=info msg="Container ccdf1235f7bdb718454ff5beb1672b8532c01d8ab6170d2fe4817acadc2149be: CDI devices from CRI Config.CDIDevices: []"
Mar 6 00:59:25.893480 containerd[2014]: time="2026-03-06T00:59:25.893388639Z" level=info msg="CreateContainer within sandbox \"deee87ae567c83f6e2d82c12ed7b07c2f5166c70fefaee77fd46355dede08a23\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"ccdf1235f7bdb718454ff5beb1672b8532c01d8ab6170d2fe4817acadc2149be\""
Mar 6 00:59:25.895540 containerd[2014]: time="2026-03-06T00:59:25.895439679Z" level=info msg="StartContainer for \"ccdf1235f7bdb718454ff5beb1672b8532c01d8ab6170d2fe4817acadc2149be\""
Mar 6 00:59:25.898975 containerd[2014]: time="2026-03-06T00:59:25.898902363Z" level=info msg="connecting to shim ccdf1235f7bdb718454ff5beb1672b8532c01d8ab6170d2fe4817acadc2149be" address="unix:///run/containerd/s/e80952f644d6edcdb75c7b84038ee077f73bdaea93e237dc5e8ed8048da092eb" protocol=ttrpc version=3
Mar 6 00:59:25.941156 systemd[1]: Started cri-containerd-ccdf1235f7bdb718454ff5beb1672b8532c01d8ab6170d2fe4817acadc2149be.scope - libcontainer container ccdf1235f7bdb718454ff5beb1672b8532c01d8ab6170d2fe4817acadc2149be.
Mar 6 00:59:26.012325 systemd[1]: cri-containerd-ccdf1235f7bdb718454ff5beb1672b8532c01d8ab6170d2fe4817acadc2149be.scope: Deactivated successfully.
Mar 6 00:59:26.017520 containerd[2014]: time="2026-03-06T00:59:26.017156544Z" level=info msg="received container exit event container_id:\"ccdf1235f7bdb718454ff5beb1672b8532c01d8ab6170d2fe4817acadc2149be\" id:\"ccdf1235f7bdb718454ff5beb1672b8532c01d8ab6170d2fe4817acadc2149be\" pid:5407 exited_at:{seconds:1772758766 nanos:15509940}"
Mar 6 00:59:26.033073 containerd[2014]: time="2026-03-06T00:59:26.032994780Z" level=info msg="StartContainer for \"ccdf1235f7bdb718454ff5beb1672b8532c01d8ab6170d2fe4817acadc2149be\" returns successfully"
Mar 6 00:59:26.064645 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ccdf1235f7bdb718454ff5beb1672b8532c01d8ab6170d2fe4817acadc2149be-rootfs.mount: Deactivated successfully.
Mar 6 00:59:26.865267 containerd[2014]: time="2026-03-06T00:59:26.864770188Z" level=info msg="CreateContainer within sandbox \"deee87ae567c83f6e2d82c12ed7b07c2f5166c70fefaee77fd46355dede08a23\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Mar 6 00:59:26.903948 containerd[2014]: time="2026-03-06T00:59:26.903810940Z" level=info msg="Container 7a7c15e9261cd83b09991eaa795d50b9feae385b7f1d4c581be15c2e46dc449a: CDI devices from CRI Config.CDIDevices: []"
Mar 6 00:59:26.924855 containerd[2014]: time="2026-03-06T00:59:26.924775337Z" level=info msg="CreateContainer within sandbox \"deee87ae567c83f6e2d82c12ed7b07c2f5166c70fefaee77fd46355dede08a23\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"7a7c15e9261cd83b09991eaa795d50b9feae385b7f1d4c581be15c2e46dc449a\""
Mar 6 00:59:26.926178 containerd[2014]: time="2026-03-06T00:59:26.926115641Z" level=info msg="StartContainer for \"7a7c15e9261cd83b09991eaa795d50b9feae385b7f1d4c581be15c2e46dc449a\""
Mar 6 00:59:26.929537 containerd[2014]: time="2026-03-06T00:59:26.929420753Z" level=info msg="connecting to shim 7a7c15e9261cd83b09991eaa795d50b9feae385b7f1d4c581be15c2e46dc449a" address="unix:///run/containerd/s/e80952f644d6edcdb75c7b84038ee077f73bdaea93e237dc5e8ed8048da092eb" protocol=ttrpc version=3
Mar 6 00:59:26.977166 systemd[1]: Started cri-containerd-7a7c15e9261cd83b09991eaa795d50b9feae385b7f1d4c581be15c2e46dc449a.scope - libcontainer container 7a7c15e9261cd83b09991eaa795d50b9feae385b7f1d4c581be15c2e46dc449a.
Mar 6 00:59:27.058209 kubelet[3452]: E0306 00:59:27.057808 3452 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-66bc5c9577-wvhgz" podUID="c43975f8-6a2e-40d5-9dbf-3dee054604d4"
Mar 6 00:59:27.084568 containerd[2014]: time="2026-03-06T00:59:27.084401125Z" level=info msg="StartContainer for \"7a7c15e9261cd83b09991eaa795d50b9feae385b7f1d4c581be15c2e46dc449a\" returns successfully"
Mar 6 00:59:28.072985 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce))
Mar 6 00:59:29.058204 kubelet[3452]: E0306 00:59:29.058100 3452 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-66bc5c9577-wvhgz" podUID="c43975f8-6a2e-40d5-9dbf-3dee054604d4"
Mar 6 00:59:32.593101 systemd-networkd[1895]: lxc_health: Link UP
Mar 6 00:59:32.599540 (udev-worker)[5987]: Network interface NamePolicy= disabled on kernel command line.
Mar 6 00:59:32.613637 systemd-networkd[1895]: lxc_health: Gained carrier
Mar 6 00:59:34.544139 systemd-networkd[1895]: lxc_health: Gained IPv6LL
Mar 6 00:59:34.572162 kubelet[3452]: I0306 00:59:34.572066 3452 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-564nb" podStartSLOduration=12.572043742 podStartE2EDuration="12.572043742s" podCreationTimestamp="2026-03-06 00:59:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-06 00:59:27.947320266 +0000 UTC m=+129.341998484" watchObservedRunningTime="2026-03-06 00:59:34.572043742 +0000 UTC m=+135.966721960"
Mar 6 00:59:36.635139 ntpd[2211]: Listen normally on 13 lxc_health [fe80::883a:a7ff:fe13:24a5%14]:123
Mar 6 00:59:36.635789 ntpd[2211]: 6 Mar 00:59:36 ntpd[2211]: Listen normally on 13 lxc_health [fe80::883a:a7ff:fe13:24a5%14]:123
Mar 6 00:59:37.759593 sshd[5301]: Connection closed by 68.220.241.50 port 34670
Mar 6 00:59:37.760646 sshd-session[5298]: pam_unix(sshd:session): session closed for user core
Mar 6 00:59:37.772240 systemd-logind[1985]: Session 28 logged out. Waiting for processes to exit.
Mar 6 00:59:37.774567 systemd[1]: sshd@27-172.31.16.50:22-68.220.241.50:34670.service: Deactivated successfully.
Mar 6 00:59:37.782214 systemd[1]: session-28.scope: Deactivated successfully.
Mar 6 00:59:37.788616 systemd-logind[1985]: Removed session 28.
Mar 6 00:59:52.049972 systemd[1]: cri-containerd-0224c2d35352c72362e10297d8c1f70905ee68c7d40d3590bc8ed6dfc136e78f.scope: Deactivated successfully.
Mar 6 00:59:52.050551 systemd[1]: cri-containerd-0224c2d35352c72362e10297d8c1f70905ee68c7d40d3590bc8ed6dfc136e78f.scope: Consumed 5.654s CPU time, 56.1M memory peak.
Mar 6 00:59:52.053998 containerd[2014]: time="2026-03-06T00:59:52.052814245Z" level=info msg="received container exit event container_id:\"0224c2d35352c72362e10297d8c1f70905ee68c7d40d3590bc8ed6dfc136e78f\" id:\"0224c2d35352c72362e10297d8c1f70905ee68c7d40d3590bc8ed6dfc136e78f\" pid:3197 exit_status:1 exited_at:{seconds:1772758792 nanos:52349545}"
Mar 6 00:59:52.104042 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0224c2d35352c72362e10297d8c1f70905ee68c7d40d3590bc8ed6dfc136e78f-rootfs.mount: Deactivated successfully.
Mar 6 00:59:52.110055 kubelet[3452]: E0306 00:59:52.109973 3452 controller.go:195] "Failed to update lease" err="Put \"https://172.31.16.50:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-16-50?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Mar 6 00:59:52.973579 kubelet[3452]: I0306 00:59:52.973102 3452 scope.go:117] "RemoveContainer" containerID="0224c2d35352c72362e10297d8c1f70905ee68c7d40d3590bc8ed6dfc136e78f"
Mar 6 00:59:52.977016 containerd[2014]: time="2026-03-06T00:59:52.976969902Z" level=info msg="CreateContainer within sandbox \"3f1100b2f10123e8bebfb6a469fe83ac3d0ad704f3a56b6972aa578a30cdfc1c\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Mar 6 00:59:53.004992 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3077480682.mount: Deactivated successfully.
Mar 6 00:59:53.009884 containerd[2014]: time="2026-03-06T00:59:53.008367758Z" level=info msg="Container 938144324c1ad63830cfaa9f64b5981e5b51ba856936db2fa6eddfd7f3576d62: CDI devices from CRI Config.CDIDevices: []"
Mar 6 00:59:53.027117 containerd[2014]: time="2026-03-06T00:59:53.027062246Z" level=info msg="CreateContainer within sandbox \"3f1100b2f10123e8bebfb6a469fe83ac3d0ad704f3a56b6972aa578a30cdfc1c\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"938144324c1ad63830cfaa9f64b5981e5b51ba856936db2fa6eddfd7f3576d62\""
Mar 6 00:59:53.028061 containerd[2014]: time="2026-03-06T00:59:53.028009310Z" level=info msg="StartContainer for \"938144324c1ad63830cfaa9f64b5981e5b51ba856936db2fa6eddfd7f3576d62\""
Mar 6 00:59:53.030804 containerd[2014]: time="2026-03-06T00:59:53.030710738Z" level=info msg="connecting to shim 938144324c1ad63830cfaa9f64b5981e5b51ba856936db2fa6eddfd7f3576d62" address="unix:///run/containerd/s/75af760a16a0af2047c4fc3dabc72bc65ac55c152ed66f28c3c55ca8693fca9e" protocol=ttrpc version=3
Mar 6 00:59:53.072185 systemd[1]: Started cri-containerd-938144324c1ad63830cfaa9f64b5981e5b51ba856936db2fa6eddfd7f3576d62.scope - libcontainer container 938144324c1ad63830cfaa9f64b5981e5b51ba856936db2fa6eddfd7f3576d62.
Mar 6 00:59:53.155916 containerd[2014]: time="2026-03-06T00:59:53.155850471Z" level=info msg="StartContainer for \"938144324c1ad63830cfaa9f64b5981e5b51ba856936db2fa6eddfd7f3576d62\" returns successfully"
Mar 6 00:59:56.677169 systemd[1]: cri-containerd-4b850dc77bafa8a55f724ec3ad2af51a0cc0461e835c67a2691af16798cc7ea7.scope: Deactivated successfully.
Mar 6 00:59:56.679085 systemd[1]: cri-containerd-4b850dc77bafa8a55f724ec3ad2af51a0cc0461e835c67a2691af16798cc7ea7.scope: Consumed 5.534s CPU time, 21.3M memory peak.
Mar 6 00:59:56.684976 containerd[2014]: time="2026-03-06T00:59:56.684885680Z" level=info msg="received container exit event container_id:\"4b850dc77bafa8a55f724ec3ad2af51a0cc0461e835c67a2691af16798cc7ea7\" id:\"4b850dc77bafa8a55f724ec3ad2af51a0cc0461e835c67a2691af16798cc7ea7\" pid:3182 exit_status:1 exited_at:{seconds:1772758796 nanos:682070312}"
Mar 6 00:59:56.743068 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4b850dc77bafa8a55f724ec3ad2af51a0cc0461e835c67a2691af16798cc7ea7-rootfs.mount: Deactivated successfully.
Mar 6 00:59:57.000513 kubelet[3452]: I0306 00:59:57.000149 3452 scope.go:117] "RemoveContainer" containerID="4b850dc77bafa8a55f724ec3ad2af51a0cc0461e835c67a2691af16798cc7ea7"
Mar 6 00:59:57.004551 containerd[2014]: time="2026-03-06T00:59:57.004115034Z" level=info msg="CreateContainer within sandbox \"9fecff5dbf36e072e78dcac29c7828bf6cb58de3726db3531d66cd916c9a6736\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Mar 6 00:59:57.018063 containerd[2014]: time="2026-03-06T00:59:57.018006738Z" level=info msg="Container 248ea27adcc54de0b8af7f4799cfeeaebab74e41b670d7466b531f7b8ecf2142: CDI devices from CRI Config.CDIDevices: []"
Mar 6 00:59:57.031039 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1452074433.mount: Deactivated successfully.
Mar 6 00:59:57.037000 containerd[2014]: time="2026-03-06T00:59:57.036910938Z" level=info msg="CreateContainer within sandbox \"9fecff5dbf36e072e78dcac29c7828bf6cb58de3726db3531d66cd916c9a6736\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"248ea27adcc54de0b8af7f4799cfeeaebab74e41b670d7466b531f7b8ecf2142\""
Mar 6 00:59:57.038107 containerd[2014]: time="2026-03-06T00:59:57.037823262Z" level=info msg="StartContainer for \"248ea27adcc54de0b8af7f4799cfeeaebab74e41b670d7466b531f7b8ecf2142\""
Mar 6 00:59:57.040541 containerd[2014]: time="2026-03-06T00:59:57.040425618Z" level=info msg="connecting to shim 248ea27adcc54de0b8af7f4799cfeeaebab74e41b670d7466b531f7b8ecf2142" address="unix:///run/containerd/s/23b8c25dcfb20e572b0e0185e8c584e909b26d7395fef53f00d20a0435c78e21" protocol=ttrpc version=3
Mar 6 00:59:57.117445 systemd[1]: Started cri-containerd-248ea27adcc54de0b8af7f4799cfeeaebab74e41b670d7466b531f7b8ecf2142.scope - libcontainer container 248ea27adcc54de0b8af7f4799cfeeaebab74e41b670d7466b531f7b8ecf2142.
Mar 6 00:59:57.201819 containerd[2014]: time="2026-03-06T00:59:57.201745375Z" level=info msg="StartContainer for \"248ea27adcc54de0b8af7f4799cfeeaebab74e41b670d7466b531f7b8ecf2142\" returns successfully"
Mar 6 01:00:02.111776 kubelet[3452]: E0306 01:00:02.111283 3452 controller.go:195] "Failed to update lease" err="Put \"https://172.31.16.50:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-16-50?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Mar 6 01:00:12.112506 kubelet[3452]: E0306 01:00:12.112364 3452 controller.go:195] "Failed to update lease" err="Put \"https://172.31.16.50:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-16-50?timeout=10s\": context deadline exceeded"